We build on Hetzner. We like Hetzner. But their object storage has had open incidents for nearly three months and there's almost nothing about it on the internet. Here's what we know.
It Started With 504s
The first sign was subtle: occasional origin errors surfacing through our CDN. Not enough to trigger an alert, but enough to notice. A 504 here and there — the kind of thing you initially chalk up to transient network hiccups.
Then the hiccups got less transient.
The problem with 504s from an object storage backend isn't just the error itself. It's what happens downstream: if your CDN layer caches that error response, you have effectively poisoned your own cache. The origin recovers, but the CDN keeps serving the stale 504 to users until the TTL expires. A slow storage backend is manageable. A CDN that amplifies its errors is a different problem entirely.
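The failure mode is easy to reproduce in miniature. Here's a hypothetical sketch (all names are illustrative, not from any real CDN) of a naive TTL cache that stores whatever the origin returns, errors included:

```python
import time

class NaiveTTLCache:
    """Caches every origin response, including errors, for `ttl` seconds."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self.store = {}  # key -> (status_code, expires_at)

    def get(self, key, fetch_origin):
        entry = self.store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]  # cache hit, even if the cached status is a 504
        status = fetch_origin()
        self.store[key] = (status, time.monotonic() + self.ttl)
        return status

# Simulated origin: times out once, then recovers.
responses = iter([504, 200, 200])
origin = lambda: next(responses)

cache = NaiveTTLCache(ttl=60)
print(cache.get("/asset.js", origin))  # 504, and it gets cached
print(cache.get("/asset.js", origin))  # still 504: the origin recovered, the cache didn't
```

The fix is equally small: only cache success statuses, or give error responses a much shorter TTL. nginx, for instance, lets you set per-status cache lifetimes with `proxy_cache_valid`.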
After the third or fourth round of this, we stopped waiting for it to resolve on its own.
The Incidents Hetzner Filed — And Nobody Found
Hetzner does publish status updates. They're just not easy to stumble upon if you're not subscribed.
Since January 2026, Hetzner has filed two incidents affecting Object Storage in NBG1, both still unresolved at the time of writing.
Incident 1: High Utilization → Timeouts
Started 2026-01-15. Still open.
"We are currently experiencing exceptionally high utilization which may result in occasional request timeouts. Our team is already working intensively on expanding our hardware capacity."
The update from March:
"If possible, creating new buckets in the HEL location will yield the best performance."
Read between the lines: NBG1 is under sustained capacity pressure. Hetzner's own recommendation is to route around it.
Incident 2: Write Limitations on Existing Buckets
Started 2026-03-05. Still "In progress".
"Due to ongoing capacity constraints, we will be reducing the available write limitations for some existing buckets on NBG1. This measure affects existing buckets only — newly created buckets are not impacted. This change is necessary to ensure continued stability and reliable service for all users."
So: if your buckets were created before March 5th, your write throughput may have been silently throttled. No notification, no email — just a status page entry.
What "Capacity Constraints" Actually Means
Hetzner's language is carefully neutral, as it should be. But the pattern is readable: this is a hardware capacity problem that has outpaced their ability to resolve it. "Expanding hardware capacity" and "long-term solution to restore full capacity" are not phrases you use for a software bug or a transient network issue.
The most likely explanation — which aligns with what we know about the broader server hardware market in early 2026 — is a RAM shortage. Object storage systems are memory-intensive at scale. When you run out of capacity to add nodes, per-request latency climbs, timeouts increase, and the easiest short-term relief valve is to throttle write operations.
That also explains the latency numbers we later measured in our benchmark (see Part 2): Hetzner's median small-file TTFB of 100ms, compared to IONOS's 12ms, may not just be an architectural difference. A backend under memory pressure is a slower backend.
The Timeline
| Date | Event |
|---|---|
| 2026-01-15 | Hetzner opens incident: high utilization, timeouts begin |
| 2026-03-05 | Hetzner throttles write limits on existing NBG1 buckets |
| 2026-03-06 | We begin benchmarking alternatives (see Part 2) |
| 2026-03-24 | Hetzner updates incident: recommends HEL over NBG1 |
| 2026-04-03 | Both incidents still open — 79 days and counting for incident 1 |
Why There's Nothing on the Internet
We searched. There are no blog posts, no Hacker News threads, no Reddit discussions about this. The Hetzner community forum has a handful of support questions about timeouts, but nothing connecting the dots.
A few reasons this stayed quiet:
1. Hetzner's status page requires you to look. There's no push mechanism unless you're subscribed per-product in their admin panel. Most developers aren't.
2. 504s are easy to misattribute. It looks like a network blip, a client bug, a CDN misconfiguration. You fix the symptom and move on.
3. Hetzner is trusted. People don't go looking for problems with infrastructure they've relied on for years.
What To Do If You're On Hetzner Object Storage
Short term:
- Check your monitoring for elevated 504 / 503 error rates from your storage backend
- If you're on NBG1, consider whether migrating new buckets to HEL is practical
- If you use a CDN, make sure you're not caching 5xx responses from your origin
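In the meantime, transient 504s can be absorbed at the client. A minimal retry sketch with exponential backoff and full jitter, stdlib only; the retryable status set and the limits here are illustrative defaults, not values tuned for Hetzner:

```python
import random
import time

RETRYABLE = {500, 502, 503, 504}

def with_retries(fetch, max_attempts=5, base_delay=0.25):
    """Call `fetch()` until it returns a non-retryable status or attempts run out.

    `fetch` is a stand-in that returns an HTTP status code; swap in your
    real S3 request.
    """
    for attempt in range(max_attempts):
        status = fetch()
        if status not in RETRYABLE:
            return status
        if attempt < max_attempts - 1:
            # Full jitter: sleep a random amount up to the exponential cap,
            # so retrying clients don't hammer the backend in lockstep.
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
    return status

# Simulated backend: two timeouts, then success.
responses = iter([504, 504, 200])
print(with_retries(lambda: next(responses)))  # 200
```

If you're on boto3, you can get comparable behavior without a wrapper via `botocore.config.Config(retries={"max_attempts": 5, "mode": "adaptive"})` on the client.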
Longer term:
- This is what triggered our own provider evaluation. We benchmarked four German S3-compatible alternatives. The results are in Part 2.
A Note on Fairness
Hetzner is a good company with generally excellent infrastructure. Their dedicated servers, cloud VMs, and networking have been reliable for us across many years. This is a specific product — Object Storage NBG1 — under capacity stress, at a specific point in time. The fact that they've published and maintained these status entries (rather than quietly hoping nobody notices) is to their credit.
That said: if your production system depends on it, you should know.
Sources: Incident 1 · Incident 2 · Status checked 2026-04-03
