Healthchecks.io Status

Welcome to the Healthchecks.io status page. If there are interruptions to service, we will post a report here, and on our Mastodon account. 

Previous incidents

  1. Jan 2026

    1. Network instability (affected: Notification Sender, Ping API, Dashboard)
      Started:
      Duration:
      We are investigating network instability between our servers.

      Investigating:

      The network instability manifested as UDP packet loss between specific server pairs. Currently all servers can contact each other again, and pings are being processed normally. We do not yet know what was causing the packet loss.

      Verifying:

      Hetzner has posted an incident report about a core router fault. The incident start time (23:05 UTC) lines up precisely with the moment we started seeing packet loss, so this is most likely the cause.


      Resolved:

      The issue is resolved.
    2. Duration: -
      We will be upgrading HAProxy on our load balancer servers on Tuesday, January 6, 2026. Planned changes:

      * Upgrade HAProxy to version 3.2
      * Update the list of accepted SSL/TLS ciphers from a hand-picked list to HAProxy defaults
      * Later in the week, gradually enable HTTP/3. 

      These changes should not affect the vast majority of clients, but could in theory cause connection issues for very old legacy clients. If you notice persistent ping request failures starting from January 6, please let us know.
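
      For illustration, here is a minimal HAProxy sketch of the kind of configuration involved. The frontend name, certificate path, and old cipher values are hypothetical, not our actual settings:

        global
            # Old approach: accept only a hand-picked cipher list
            # (the values below are illustrative, not our actual list)
            ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
            ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384
            # New approach: remove the two lines above so HAProxy falls back
            # to its built-in default cipher lists

        frontend https-in
            # Existing TCP listener for HTTP/1.1 and HTTP/2
            bind :443 ssl crt /etc/haproxy/certs/example.pem alpn h2,http/1.1
            # HTTP/3 runs over QUIC (UDP), so it gets its own listener;
            # this requires an HAProxy build with QUIC support
            bind quic4@:443 ssl crt /etc/haproxy/certs/example.pem alpn h3
            # Advertise HTTP/3 to clients that connected over TCP
            http-response set-header alt-svc "h3=\":443\"; ma=3600"
            default_backend app

      Relying on the defaults means the accepted ciphers track upstream recommendations instead of a manually curated list, at the cost of dropping some very old TLS clients.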
  2. Dec 2025

    All systems operational.

  3. Nov 2025

    1. Started:
      Duration:
      We are currently investigating intermittent connection timeouts on our ping endpoints (hc-ping.com) and the main website (healthchecks.io).
      Post-mortem
      Between November 1, 23:30 UTC and November 2, 1:10 UTC, Healthchecks.io (both the main website, healthchecks.io, and the ping endpoints, hc-ping.com) had intermittent connectivity issues: some requests took multiple seconds to finish, or timed out entirely.

      Usually, such symptoms are caused by infrastructure problems outside our control, but this time the problem was on our side: our load balancers were running out of available connection slots. They were dealing with a flood of ping requests from a misconfigured client and were using the “tarpit” rate-limiting action to slow the client’s requests down. Unfortunately, this did not work as intended: the client did not slow down, and the load balancers hit their connection count limits and started rejecting legitimate requests.

      Read the full post-mortem here.
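
      To illustrate the failure mode, here is a minimal, hypothetical HAProxy rate-limiting sketch; the frontend name, thresholds, and table sizes are made up and do not reflect our production settings:

        frontend ping-in
            bind :443 ssl crt /etc/haproxy/certs/example.pem
            # Track the per-source request rate over a 10-second window
            stick-table type ip size 100k expire 10m store http_req_rate(10s)
            http-request track-sc0 src

            # "tarpit" holds each offending connection open for `timeout tarpit`
            # before answering, so a client that keeps sending requests ties up
            # more and more of the load balancer's connection slots
            timeout tarpit 10s
            http-request tarpit deny_status 429 if { sc_http_req_rate(0) gt 100 }

            # Alternatives that reject the request and free the slot immediately:
            # http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
            # http-request silent-drop if { sc_http_req_rate(0) gt 100 }

            default_backend ping-app

      With deny or silent-drop, the offending requests are refused without being held open, so a flood from one client does not exhaust the connection limit for everyone else.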

      Identified:

      We have identified and blocked a network flood from a set of specific IP addresses.

      Resolved:

      The issue is resolved.