Healthchecks.io Status

Welcome to the Healthchecks.io status page. If there are interruptions to service, we will post a report here, and on our Mastodon account. 

Past incidents

  1. Apr, 2022

    1. Major stability problems (Notification Sender, Ping API, Dashboard)
      Started:
      Duration:
      Major stability problems – investigating
      Post-mortem
      Here's a quick recap of this outage. 

      Yesterday, Hetzner datacenters in Falkenstein were hit by a large DDoS attack. As a mitigation, Hetzner throttled UDP traffic on ports 9000 and above.

      Healthchecks.io uses Wireguard for private communication between servers (load balancers to web servers, web servers to database servers). Wireguard works over UDP, and, after the throttling started, the available bandwidth between servers dropped to below 1Mbit/s. 

      After figuring out what had happened, I updated the Wireguard configuration to use a port number below 9000. After deploying the change, Healthchecks resumed normal operation.
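
      As an illustration, a change of this kind amounts to editing ListenPort in each server's Wireguard interface configuration, plus the matching Endpoint port on its peers. The addresses, keys, and port number below are made-up example values, not our actual setup:

        [Interface]
        Address = 10.0.0.2/24
        PrivateKey = <redacted>
        # Previously a port in the throttled range (9000 and above);
        # moved below 9000 to get out of the UDP throttling.
        ListenPort = 8443

        [Peer]
        PublicKey = <peer public key>
        AllowedIPs = 10.0.0.1/32
        # The peer's Endpoint must be updated to the new port as well.
        Endpoint = 203.0.113.10:8443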

      The outage lasted almost 2 hours. During the outage, the ping API was accepting and processing some but not all pings. The web UI and the notification sender were completely non-operational. When normal operation resumed, Healthchecks sent out a wave of false alerts due to pings that were not received on time.

      This was an unfortunate event, and I apologize for the trouble caused by the failing pings, the non-operational management API, and the eventual false alerts. Still, there are several positive aspects, in the "it could have been worse" sense, that I would like to acknowledge:

      • TCP was still working. I could access the servers over SSH the whole time, so I had at least some control over the situation.
      • The Wireguard port change worked as a workaround. Without it, the outage would have continued for several more hours.
      • The primary database server got a long overdue reboot, and is now running a newer kernel. 
      • When the problem hit, I was at home, awake, and able to respond immediately. 

      PS. If you notice any lingering issues, have any suggestions or questions, please let me know at contact@healthchecks.io. Thank you!

      –Pēteris

      Investigating:

      We're still experiencing major issues. The ping handler is partially working; the web dashboard is down.
      The root issue is a slowdown of UDP traffic between servers in the datacenter.

      Status update from Hetzner: https://status.hetzner.com/incident/129728ce-ba25-49b6-96cc-aafcd39ab0b7

      Verifying:

      Updated the Wireguard configuration to use a port number below 9000. The service is back online, and we're hopefully back on track.

      Resolved:

      The issue is resolved.
    2. Started:
      Duration:
      Our object storage provider is experiencing degraded performance, and there is currently a backlog of ping bodies not yet uploaded to object storage. No ping bodies have been lost; they will be available for viewing and download eventually.

      Resolved:

      The issue is resolved.
    3. Issues with outgoing emails (Notification Sender)
      Started:
      Duration:
      Unfortunately our SMTP provider is having another outage. Message from them: "Our server has a temporary delivery issue. Our developers are aware of them and they are working on a solution."

      Verifying:

      The SMTP service seems to be back up and more or less caught up with the backlog. The sending delay is still higher than usual.

      Resolved:

      The issue is resolved.
    4. Started:
      Duration:
      We're currently unable to send Signal messages to some Signal users (first-time messages to users we have never messaged before). The error messages from Signal indicate a rate-limiting issue. Previously, we could temporarily work around the rate-limiting by manually solving a CAPTCHA, but currently the issue persists even after solving the CAPTCHA.

      Identified:

      The Signal servers apply rate-limits per sender account and, as is obvious in retrospect, also per sender IP. If server A hits a rate-limit, and we submit a CAPTCHA solution from server B, it will not work. We're making changes on our side to take that into account.
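
      In rough terms, the change is to remember which of our sending hosts hit the rate-limit and to submit the CAPTCHA solution from that same host. The sketch below is illustrative only; the names are hypothetical and this is not actual Healthchecks or signal-cli code:

        # Illustrative sketch only; hypothetical names, not real Healthchecks code.
        # Signal rate-limits apply per sender IP, so the CAPTCHA solution must be
        # submitted from the same host (same IP) that received the challenge.
        pending_challenges = {}  # sending host -> rate-limit challenge it received

        def record_rate_limit(host, challenge):
            pending_challenges[host] = challenge

        def submit_captcha_solution(host, solution):
            challenge = pending_challenges.pop(host, None)
            if challenge is None:
                raise RuntimeError(f"{host} has no pending rate-limit challenge")
            # Run the actual submission on `host` itself (for example via a job
            # queue pinned to that host), not from whichever machine the operator
            # happens to be using at the time.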

      Resolved:

      The issue is resolved.
    5. Issues with outgoing emails (Notification Sender)
      Started:
      Duration:
      Our SMTP relay provider is experiencing issues, looking into it.

      Resolved:

      The issue is resolved.
    6. Issues with outgoing email (Notification Sender)
      Started:
      Duration:
      We're seeing issues with outgoing email, looking into it.

      Identified:

      Received an update from SMTP relay provider's support: "Our developers are currently performing a demanding system deploy which may be influencing these SSL errors."

      Resolved:

      Received an update from the SMTP relay provider (Elastic Email) – the email delivery issue has been resolved.

      During the incident, email delivery was failing intermittently: some connections to the SMTP relay went through, and some failed. In total, 363 send attempts failed. Although Healthchecks retries failed email deliveries, the retry window is short, so some email messages were unfortunately lost during the outage.
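
      For illustration, a retry loop of this kind typically looks something like the sketch below. The SMTP host, credentials, and retry schedule are placeholder values, not our actual sender:

        # Illustration only: placeholder SMTP settings and retry schedule,
        # not the actual Healthchecks sending code.
        import smtplib
        import time
        from email.message import EmailMessage

        def send_with_retries(msg: EmailMessage, attempts: int = 3, delay: float = 30.0) -> None:
            for attempt in range(1, attempts + 1):
                try:
                    with smtplib.SMTP("smtp.example.com", 587, timeout=15) as server:
                        server.starttls()
                        server.login("username", "password")
                        server.send_message(msg)
                    return  # delivered
                except (smtplib.SMTPException, OSError):
                    if attempt == attempts:
                        raise  # retry window exhausted; the message is lost
                    time.sleep(delay)
                    delay *= 2  # back off before trying again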

      I will try to get more details about the outage from Elastic Email, and will investigate fallback options.
  2. Mar, 2022

    All systems operational.

  3. Feb, 2022

    1. Started:
      Duration:
      We've found a problem with Signal notification delivery: some messages don't get delivered, and trigger a "signal-cli call timed out" error message in the web UI. We are working on a fix.

      Verifying:

      • Upgraded to the just-released signal-cli version (0.10.4), which fixes the reliability problem.
      • Also added additional logging and alerting on our side, to catch similar issues sooner in the future.

      Resolved:

      The issue is resolved.