Future of Data Management

Major Cloud Outages Push Enterprises Toward Hybrid Resilience

The Data Wire - News Team | November 12, 2025

Oluwaseun Odeyemi, Senior Enterprise Data Architect at the UK Health Security Agency, explains how to build a hybrid model that stays resilient through major cloud outages.

Key Points
  • In response to recent cloud outages, some leaders are building hybrid systems to avoid vendor lock-in and ensure future resilience.

  • Oluwaseun Odeyemi, a Senior Enterprise Data Architect at the UK Health Security Agency, explains how his model uses on-premise servers for critical transactions and multiple cloud vendors for other tasks.

  • The hybrid approach creates layers of redundancy to keep essential services online and prioritizes customer-facing services above all else.

Many organizations have perfectly good on-prem servers sitting in storage, doing nothing, because they've moved all critical applications to the cloud.

Oluwaseun Odeyemi

Senior Enterprise Data Architect
UK Health Security Agency

Widespread outages at major cloud providers are exposing a vulnerability in enterprise strategy: over-dependence on a single vendor creates systemic business risk. For many organizations, going "all-in" on a single cloud provider is now considered a liability. A single provider failure can disrupt customer-facing applications across borders and erode customer trust. The fallout is prompting some leaders to reassess their enterprise architecture. Now, the focus is shifting from pure cost-cutting to building systems that can withstand such an outage in the future.

One data leader with a new blueprint for resilience is Oluwaseun Odeyemi, Senior Enterprise Data Architect at the UK Health Security Agency. With over a decade of experience in data architecture and AI, his perspective was shaped during his tenure at firms like EY and PwC. Today, Odeyemi believes that the now-standard approach of total dependence on a single provider is a significant strategic vulnerability.

According to Odeyemi, the core issue is a flawed business calculation. In the rush to the cloud, many organizations abandoned perfectly functional on-premise infrastructure. "Many organizations have perfectly good on-prem servers sitting in storage, doing nothing, because they've moved all critical applications to the cloud," Odeyemi says.

  • Trust gamble: Meanwhile, the widespread dependence on cloud providers has created a single point of failure. "You have to decide: do you want to drive 100% trust with your client, or are you willing to gamble with their emotions by assuming one day of downtime won't affect their trust in you as a business?"

For Odeyemi, the solution is a pragmatic hybrid model that prioritizes business continuity. "I propose a hybrid architecture where you host your bank app in a standby on-premise environment. You can then move historical datasets to the cloud, while keeping recent transactions available on-premises. This way, while the cloud provider is resolving an issue, your customers can still transact using your on-prem database, and everything works fine." To escape the "data gravity trap," where massive datasets become too expensive to move, his blueprint uses a disciplined data partitioning strategy.
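The partitioning logic behind that model can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not Odeyemi's implementation: the store classes, the 90-day "recent" cut-off, and the record layout are hypothetical stand-ins for a real on-prem database and a cloud archive.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical stand-ins for the two environments in the hybrid model:
# a standby on-prem store for recent transactions and a cloud archive
# for historical records.

class OnPremTransactionStore:
    """Recent, customer-facing transactions kept on-premises."""
    def __init__(self):
        self._rows = {}

    def put(self, txn_id, record):
        self._rows[txn_id] = record

    def get(self, txn_id):
        return self._rows.get(txn_id)

    def items(self):
        return list(self._rows.items())

    def delete(self, txn_id):
        self._rows.pop(txn_id, None)


class CloudArchive:
    """Historical records moved to the cloud provider."""
    def __init__(self, available=True):
        self.available = available
        self._rows = {}

    def _check(self):
        if not self.available:
            raise ConnectionError("cloud provider unreachable")

    def put(self, txn_id, record):
        self._check()
        self._rows[txn_id] = record

    def get(self, txn_id):
        self._check()
        return self._rows.get(txn_id)


RETENTION = timedelta(days=90)  # assumed boundary between "recent" and "historical"


def read_transaction(txn_id, on_prem, cloud):
    """Serve recent data locally; degrade gracefully if the cloud archive is down."""
    record = on_prem.get(txn_id)
    if record is not None:
        return record  # the customer-facing path never depends on the cloud
    try:
        return cloud.get(txn_id)  # older records live in the archive
    except ConnectionError:
        return None  # historical lookups may fail; live transactions do not


def archive_old_transactions(on_prem, cloud, now=None):
    """Move records older than the retention window into the cloud archive."""
    now = now or datetime.now(timezone.utc)
    for txn_id, record in on_prem.items():
        if now - record["timestamp"] > RETENTION:  # records assumed to carry a tz-aware timestamp
            cloud.put(txn_id, record)
            on_prem.delete(txn_id)
```

The key design choice is that customer reads never cross the cloud boundary: only the archival job and historical lookups touch the provider, so a provider outage degrades analytics rather than transactions.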

  • Customers over code: It’s a simple but powerful heuristic that creates a clear hierarchy of what must remain operational, Odeyemi explains. "It is better for internal reports or analytics models using historical records to go offline than for customer-facing applications to fail. You must always protect what your client depends on to perform their transactions."
  • Redundancy in action: Already, Odeyemi is putting his multi-cloud vision into practice. "In a proof-of-concept I built, the source data is in an AWS S3 bucket, but I used a Microsoft Fabric 'Shortcut' to access and process those files within the Fabric environment," he continues. "This means if the AWS storage goes down, I'm not stranded. I can still leverage the mirrored data I have in Fabric for my reporting."
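A minimal sketch of that fallback pattern follows, under stated assumptions: the bucket, object key, and mirrored file path are hypothetical, and the mirrored copy is read here from a plain file path rather than through a Fabric Shortcut, which is configured in the Fabric workspace rather than in application code.

```python
import io

import boto3
import pandas as pd
from botocore.exceptions import BotoCoreError, ClientError

# Hypothetical locations: the primary dataset in AWS S3 and a mirrored copy
# reachable from the secondary environment (e.g. a lakehouse file path).
PRIMARY_BUCKET = "example-source-bucket"
PRIMARY_KEY = "exports/transactions.parquet"
MIRROR_PATH = "/lakehouse/default/Files/transactions.parquet"  # assumed mount


def load_report_data() -> pd.DataFrame:
    """Prefer the primary S3 copy; fall back to the mirror if S3 is unreachable."""
    s3 = boto3.client("s3")
    try:
        obj = s3.get_object(Bucket=PRIMARY_BUCKET, Key=PRIMARY_KEY)
        return pd.read_parquet(io.BytesIO(obj["Body"].read()))
    except (BotoCoreError, ClientError):
        # Primary storage is down: reporting continues from the mirrored copy.
        return pd.read_parquet(MIRROR_PATH)
```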

Today, the need for resilience is compounded by the very AI most providers are racing to implement, Odeyemi says. As AI generates more of the code running in production, it introduces a new class of risk: automated errors that can propagate quickly across cloud environments.

  • AI accelerant: For Odeyemi, this only raises the stakes. "An AI might use a stale or non-functioning library that, while functional in a development environment, fails upon deployment to the live cloud. The recent back-to-back outages from AWS and Azure are a signal. While AI is optimizing how we work, it is also creating a new, faster path to systemic failure."

For cloud-native organizations, where returning to on-premise infrastructure isn't a viable option, Odeyemi's principle also applies. The strategy translates into building redundancy across vendors. "If an on-premise server can't handle your transaction volume, then limit your risk by using two cloud providers at most," he recommends. "By using both Azure and AWS, for example, you are hedging against the likelihood that both will experience a major outage at the same time." His closing message is clear: the time has come for organizations to turn the lessons of these outages into a practical blueprint for a more resilient future.
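The two-provider hedge can be sketched as a simple failover write path. This is an illustration only, with hypothetical client classes standing in for real SDK calls (for example, boto3 for AWS and azure-storage-blob for Azure).

```python
class ProviderUnavailable(Exception):
    """Raised when a cloud provider cannot accept the write."""


class ObjectStoreClient:
    """Stand-in for a real SDK client (boto3 S3, azure-storage-blob, etc.)."""
    def __init__(self, name: str, available: bool = True):
        self.name = name
        self.available = available

    def upload(self, payload: bytes) -> None:
        if not self.available:
            raise ProviderUnavailable(f"{self.name} is unreachable")
        # A real client would PUT the object to the provider here.


def store_with_failover(payload: bytes, primary: ObjectStoreClient,
                        secondary: ObjectStoreClient) -> str:
    """Write to the primary provider; fall back to the secondary on failure."""
    for client in (primary, secondary):
        try:
            client.upload(payload)
            return client.name  # record where the object actually landed
        except ProviderUnavailable:
            continue
    raise RuntimeError("both providers unavailable; queue the write for retry")


# Example usage with two hypothetical backends:
landed = store_with_failover(b"invoice-123",
                             ObjectStoreClient("aws-s3"),
                             ObjectStoreClient("azure-blob"))
```

A production version would also need to reconcile objects written to the secondary back to the primary once it recovers, which is the operational cost of hedging across vendors.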
