On 7 May 2026 a fire broke out at the NorthC datacenter in Almere. The power and cooling rooms — three floors of them — were fully ablaze. No data was lost, fortunately, but the impact on customers was significant.
A sample of the affected organisations: KVK (Chamber of Commerce), CBS (Statistics Netherlands), RDW (Vehicle Authority), Utrecht University, Univé, Transdev, Rederij Doeksen (the Wadden ferry service had to check passengers in by hand) and many GP practices. Outages lasted for days, and in some places are still ongoing.
For the record: no Virtual Computing customers were affected. Our services run entirely in our own twin-datacenters in Den Bosch and Eindhoven, not in Almere. But this incident is exactly why we're built the way we are — and the lesson reaches further than one location.
Having a datacenter ≠ being prepared for an outage
It's easy to point fingers at the datacenter when something like this happens. A datacenter primarily delivers five things: power, cooling, security, connectivity and space. That was the case here too: the power and cooling rooms were even housed in a separate building, specifically to prevent fire from spreading to the servers. That design worked: no data was lost.
But we still see the impact: websites, telephony, applications, APIs, workspaces, cloud data — entire organisations grind to a halt. Even critical processes. And that's where the real lesson sits.
Redundancy isn't just a second location on paper or compliance with an ISO standard. Redundancy means your services actually keep running when something goes wrong. Ideally automatically.
Architecture matters more than hardware or a fancy datacenter building.
Where does the responsibility lie?
Ultimately with the organisation itself. With the owner of the data and services. Legally that's how it's structured too — a datacenter or cloud provider can handle the execution, but the duty of care for continuity stays with you.
If your telephony, DNS, authentication, storage, backups, virtual machines and network all depend on a single physical location, you technically still have one single point of failure. You don't even need to lose data to come to a complete standstill.
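As a thought exercise, the sketch below (with a purely hypothetical service inventory) flags every service whose dependencies all sit in one location:

```python
# Minimal sketch: flag services whose every dependency lives in a single
# physical location. The inventory is hypothetical; in practice it would
# come from your CMDB or monitoring system.
service_locations = {
    "telephony":      {"location-a"},
    "dns":            {"location-a"},
    "authentication": {"location-a"},
    "backups":        {"location-a"},  # in the same building as the primary data
    "workspaces":     {"location-a", "location-b"},
}

single_points_of_failure = {
    service: next(iter(locations))
    for service, locations in service_locations.items()
    if len(locations) == 1
}

for service, location in sorted(single_points_of_failure.items()):
    print(f"{service}: only present in {location}; an outage there takes it down")
```

Anything this flags goes down together with that one building, however well the datacenter itself is run.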
That's exactly why you work with:
- Distribution — services spread across multiple locations
- Replication — synchronous or near-synchronous data copies between locations
- Failover — automatic switchover, tested, not manual improvisation in a crisis (a simplified sketch follows this list)
- Offsite backups — geographically separated, not in the same building as the primary data
- Multiple networks — no single uplink, no single DNS resolver, no single transit provider
- Disaster recovery scenarios — that are actually tested periodically
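To illustrate the failover item above, here is a deliberately simplified sketch of an automated switchover: probe the primary site and promote the secondary after a few consecutive failures. The URL and the switch_traffic_to() helper are hypothetical placeholders; in production this logic lives in a load balancer, DNS failover or routing layer rather than a script.

```python
# Simplified automatic-failover sketch. Hypothetical endpoint and helper;
# real setups use load-balancer health checks, DNS failover or BGP/anycast.
import time
import urllib.request

PRIMARY_HEALTH_URL = "https://primary.example.invalid/health"  # hypothetical
FAILURES_BEFORE_FAILOVER = 3
CHECK_INTERVAL_SECONDS = 10

def primary_is_healthy() -> bool:
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=5) as response:
            return response.status == 200
    except Exception:
        return False

def switch_traffic_to(site: str) -> None:
    # Placeholder: update DNS, a load-balancer pool or route announcements here.
    print(f"FAILOVER: routing traffic to {site}")

failures = 0
while True:
    failures = 0 if primary_is_healthy() else failures + 1
    if failures >= FAILURES_BEFORE_FAILOVER:
        switch_traffic_to("secondary-site")
        break
    time.sleep(CHECK_INTERVAL_SECONDS)
```

The point is not the script itself but the principle: the decision to switch is automated and rehearsed, not improvised by whoever happens to be on call.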
Not because you expect things to go wrong. Because you know they eventually will, somewhere.
What if systems are still down hours later?
If systems are still down many hours — let alone days — later, then somewhere in the chain there's no working failover, insufficient distribution, or an untested recovery process. That isn't an assumption — it's what the facts tell you.
Continuity > uptime
At Virtual Computing we therefore don't just talk about uptime. We talk about continuity. About availability. Always.
Our twin-datacenter setup runs synchronously in Den Bosch and Eindhoven — physically separated, with synchronous replication of workspaces, storage and network. ISO 27001 and NEN 7510 certified. If one location goes down completely — fire, power outage, fibre cut or any other reason — the other takes over. In practice customers barely notice.
That isn't marketing. That's how we built it, because we don't want a datacenter incident to mean a GP practice can't check in patients or a ferry has to count holiday-makers by hand.
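To illustrate what synchronous replication means in practice, here is a conceptual sketch (not our actual storage stack): a write only counts as committed once both locations have confirmed it, so a write that was acknowledged to the client is never lost when one site disappears.

```python
# Conceptual sketch of synchronous replication: acknowledge a write only
# after both sites confirm it. An illustration, not a real storage stack.
class Site:
    def __init__(self, name: str):
        self.name = name
        self.blocks = {}

    def store(self, block_id: int, data: bytes) -> bool:
        self.blocks[block_id] = data
        return True  # in reality: only after the data hits durable storage

def synchronous_write(block_id: int, data: bytes, site_a: Site, site_b: Site) -> bool:
    # The client only gets an acknowledgement if BOTH locations stored the
    # write, which keeps data loss on acknowledged writes at (near) zero.
    return site_a.store(block_id, data) and site_b.store(block_id, data)

site_a, site_b = Site("den-bosch"), Site("eindhoven")
assert synchronous_write(1, b"patient-check-in-record", site_a, site_b)
```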
Four questions to ask your IT provider today
Before it happens to you — and it can happen to you — four questions you should be asking now:
- Where is my data physically stored? One location, or several — and how far apart?
- What is my RTO (Recovery Time Objective)? How long am I down in a total location failure? Hours? Days?
- What is my RPO (Recovery Point Objective)? How much data do I lose in the worst case? Minutes? A day?
- When was the last failover test? Not "we *can* do a failover", but when did it last demonstrably happen? (A small measurement sketch follows this list.)
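To make the RTO and RPO questions measurable rather than theoretical, here is a minimal sketch of how a failover drill turns into hard numbers. The timestamps are purely illustrative; in a real test they come from your monitoring and replication logs.

```python
# Illustrative drill measurement: compute achieved RTO and RPO from three
# timestamps. The values below are made up for the example.
from datetime import datetime

primary_failed_at = datetime.fromisoformat("2026-05-07 10:00:00")
last_replicated_write_at = datetime.fromisoformat("2026-05-07 09:59:58")
service_restored_at = datetime.fromisoformat("2026-05-07 10:04:30")

measured_rto = service_restored_at - primary_failed_at       # how long you were down
measured_rpo = primary_failed_at - last_replicated_write_at  # how much data you lost

print(f"Measured RTO: {measured_rto}")
print(f"Measured RPO: {measured_rpo}")
```

If your provider cannot produce numbers like these from an actual test, the honest answer to question four is "never".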
If you don't get a clear answer, you know where you stand.
In closing: respect for the people behind the incident
Behind an incident like this are engineers and operators under enormous pressure. Nobody wants to be hit by a scenario like this. Respect for the emergency services who worked on it for hours. And respect for the teams at NorthC and the affected organisations now in rebuild mode.
Real IT resilience proves itself not when everything goes well, but in the moments when something goes wrong. This week, that is becoming painfully visible in Almere.
---
Want to know whether your IT can survive a single-point-of-failure scenario? Book a no-obligation advice call or learn how our twin-datacenter cloud works. Call 085-013 4500 for a quick first check.
- NOS — No data lost in Almere fire (Dutch)
- Tweakers — NorthC recovery to take at least three days (Dutch)