Sovereign failover – Design for digital sovereignty using the AWS European Sovereign Cloud
Organizations operating across multiple jurisdictions need to consider the impact of regulatory changes or geopolitical events on their access to cloud infrastructure. This post explains how to design failover architectures that span AWS partitions, including the AWS European Sovereign Cloud, AWS GovCloud (US), and other AWS Regions in the global infrastructure, so workloads can continue operating when sovereignty requirements shift.
Although the AWS European Sovereign Cloud is designed to help customers with operational autonomy and data residency requirements, it can also be used to address broader geopolitical and sovereignty risks. This post explores the architectural patterns, challenges, and best practices for building cross-partition failover, covering network connectivity, authentication, and governance. By understanding these constraints, you can design resilient cloud-native applications that balance regulatory compliance with operational continuity.
Understanding sovereignty risks
Digital sovereignty entails managing digital dependencies: deciding how data, technologies, and infrastructure are used, and reducing the risk of losing access, control, or connectivity. As with any disaster recovery strategy, there are several ways to provide continuity for the systems being designed. Most involve some form of failover architecture, that is, a second set of infrastructure that takes over when a disaster incapacitates the original. What differs for sovereign disaster recovery are the control mechanics and structures of the target you fail over to. Incorporating the AWS European Sovereign Cloud into your workload design adds failover capabilities that help you reestablish or maintain enhanced sovereignty if the primary environment becomes unavailable.
As regulatory requirements evolve, modern failover architectures must account for sovereign environments such as the AWS European Sovereign Cloud, AWS GovCloud (US), and multi-vendor deployments. This post focuses on three core areas for incorporating sovereignty requirements into failover design: failover strategy, network connectivity across isolated partitions, and authentication and authorization in cross-partition architectures. These patterns apply to both short regional outages and long-term partition failures.
Understanding AWS partitions
As a global cloud provider, AWS operates multiple infrastructure partitions tailored to meet specific operational and regulatory requirements. In addition to the AWS global infrastructure, AWS offers specialized partitions such as AWS GovCloud (US) for US government agencies, the AWS China Regions, and the AWS European Sovereign Cloud for customers that require stringent data residency and control within the EU.
Each partition is a logically isolated group of AWS Regions with its own set of resources, including AWS Identity and Access Management (IAM). Because of this separation, partitions act as hard boundaries. Credentials don’t carry over, and services such as Amazon S3 and features like S3 Cross-Region Replication or AWS Transit Gateway inter-Region peering cannot function across partitions. These limitations are intentional, providing operational isolation. AWS GovCloud (US), launched in 2011, supports US public sector customers with compliance needs such as FedRAMP and ITAR. The AWS China Regions are operated through local partnerships to meet Chinese data sovereignty laws. Similarly, the AWS European Sovereign Cloud is a partition built entirely within the EU, launched in 2026.
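As a concrete illustration of this boundary, the following sketch (illustrative only; the profile name and ARN fields are placeholders) derives the partition prefix from a session's own credentials instead of hardcoding aws, so the same code can build valid ARNs whether it runs against the commercial partition, AWS GovCloud (US), or the AWS European Sovereign Cloud.

```python
import boto3

def partition_aware_arn(session: boto3.Session, service: str, region: str,
                        account: str, resource: str) -> str:
    """Build an ARN using the partition of the session's own credentials
    (the second ARN field, for example 'aws' or 'aws-us-gov')."""
    caller_arn = session.client("sts").get_caller_identity()["Arn"]
    partition = caller_arn.split(":")[1]
    return f"arn:{partition}:{service}:{region}:{account}:{resource}"

# Each partition needs its own credentials; the profile name is a placeholder.
commercial = boto3.Session(profile_name="commercial")
# S3 bucket ARNs leave the Region and account fields empty.
print(partition_aware_arn(commercial, "s3", "", "", "my-backup-bucket"))
```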
These partitions provide enhanced data control and physical infrastructure isolation, making them essential if you operate in regulated, sensitive sectors and need to satisfy strict compliance requirements.
Key benefits of AWS partitions
AWS introduced partitions for several reasons. They are key to helping customers meet country-specific compliance and regulatory requirements, whether in AWS GovCloud (US), AWS China, or the AWS European Sovereign Cloud. This is underpinned by multiple safeguards and controls, including physical, logical, and operational separation of the cloud infrastructure between partitions. That same separation underpins the security posture of partitions: complete isolation of resources helps you manage security, especially for architectures running sensitive workloads.
Another important point to keep in mind when talking about partitions is service availability. Not all AWS services are available in every partition. To learn more about the AWS services available by Region, refer to AWS Capabilities by Region.
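To check this programmatically, the endpoint data bundled with the AWS SDKs can be queried per partition; a minimal sketch with boto3 is shown below. The locally installed SDK data can lag behind the authoritative AWS Capabilities by Region page, and AWS Security Hub is used here only as an example service.

```python
import boto3

session = boto3.Session()

# Partitions known to the locally installed botocore endpoint data,
# for example 'aws', 'aws-us-gov', and 'aws-cn'.
for partition in session.get_available_partitions():
    regions = session.get_available_regions("securityhub", partition_name=partition)
    print(f"{partition}: Security Hub endpoints modeled in {len(regions)} Regions")
```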
Cross-partition architectures
A cross-partition architecture enables partition failover by deploying resources and infrastructure across multiple isolated AWS partitions. Because partitions are fully separated by identity, networking, and service boundaries, failover can’t simply switch between them the way it can within a single partition or Region. Instead, environments must be pre-provisioned and kept in sync through internal or external tooling. Without such an architecture, failover between partitions is impractical; cross-partition architectures make it possible, but they require duplicate infrastructure, separate identity systems, and custom data synchronization.

Figure 1: Different reasons for failover and their possible locations
When designing cross-Region or cross-partition failover strategies, the choice of Regions depends on the type of disaster you want to mitigate:
- Natural disasters – select Regions in different geographic zones or with distinct geographic features.
- Technical disasters – separate workloads across independent parts of the global technical infrastructure, such as power grids, networks, and other shared resources.
- Human-driven disasters – consider political, socioeconomic, and legal factors that might affect operations.

Figure 2: Active-active failover scenario including a sovereign failover option
Partition failover
Cross-partition workloads arise from industry needs to maintain continuity across sovereign domains while meeting regional regulations. Examples include military and defense organizations connecting specialized clouds (such as AWS GovCloud (US)) with commercial environments, and emergency response systems that require secure partition isolation combined with unified management (a single-pane-of-glass approach). Control planes managing workloads across partitions are critical for handling multi-tenant structures, enabling centralized metrics, log aggregation, onboarding, security management, and more.
However, cross-partition connections increase operational complexity, security and compliance overhead, costs, and governance challenges. These factors make it important to implement such architectures only when they are truly required. Standard cloud resilience models range from simple backups to multi-site setups and can be implemented across multiple Availability Zones as well as multiple Regions. The same concept applies across multiple partitions: you can move backups into a second partition so that you can recover there, or run an application as a pilot light in another partition. This greatly reduces the cost of the infrastructure required in the second partition, because it is only scaled up when needed. Finally, warm standby and multi-site active-active setups mainly differ in the need for more complex network synchronization across partitions.

Figure 3: Different types of disaster recovery scenarios
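The backup option described above has one practical consequence: because S3 Cross-Region Replication stops at the partition boundary, copying backups into the second partition needs your own tooling, with one set of credentials per partition. The sketch below is a minimal example; profile and bucket names are placeholders, and a production version would add error handling, checksums, and multipart handling for large objects.

```python
import boto3

# One session per partition; profile and bucket names are illustrative.
source = boto3.Session(profile_name="commercial").client("s3")
target = boto3.Session(profile_name="eu-sovereign").client("s3")

SOURCE_BUCKET = "backups-commercial"
TARGET_BUCKET = "backups-sovereign"

paginator = source.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SOURCE_BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        # Stream each object from the source partition into the target partition.
        body = source.get_object(Bucket=SOURCE_BUCKET, Key=key)["Body"]
        target.upload_fileobj(body, TARGET_BUCKET, key)
```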
You might also consider vendor independence as an additional sovereignty requirement when planning failover. One way to achieve vendor independence is to use another cloud provider. However, failing over to another AWS partition is simpler than switching cloud providers because you can reuse your infrastructure as code templates across partitions.
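As a sketch of that reuse, the same CloudFormation template can be deployed into each partition from its own credentials. The profile names, template file, and sovereign Region code below are placeholders, and the template itself must only use services available in both partitions.

```python
import boto3

with open("workload.yaml") as f:          # one template shared by both partitions
    template_body = f.read()

# Profile and Region names are placeholders; use the Region codes of your partitions.
deployments = [
    ("commercial", "eu-central-1"),
    ("eu-sovereign", "<sovereign-region-code>"),
]

for profile, region in deployments:
    cfn = boto3.Session(profile_name=profile, region_name=region).client("cloudformation")
    cfn.create_stack(
        StackName="sovereign-failover-workload",
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
```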
Reasons to connect partitions
Although partitions are designed for isolation, some workloads within a partition might need to communicate with workloads in less regulated partitions or with external systems accessible over the public internet. In such cases, several architectural strategies and the corresponding design decisions should be considered. There are also use cases where AWS services need to communicate across partitions and orchestrate actions spanning multiple partitions, such as:
- Cross-domain applications
- Feature parity and service availability
- Cost-optimization while meeting security demands
- Infrastructure consolidations
- Control plane patterns
Implementing these use cases requires a deeper look into the technical aspects of connecting partitions from both a network standpoint and a security standpoint.
Regional connections vs. connected partitions
Regional connections let you link AWS Regions within the same partition using features like S3 Cross-Region Replication and Transit Gateway peering, facilitating relatively seamless workload distribution and failover within the partition’s global infrastructure. Connected partitions, by contrast, have no such managed cross-Region features spanning the boundary; connectivity, identity, and data movement must be built explicitly. Understanding the distinction between regional connections and connected partitions is crucial for designing resilient, compliant architectures that meet both operational and regulatory demands.
Connecting partition networks
You can connect AWS partitions in three ways: over the public internet secured by TLS, through an IPsec Site-to-Site VPN over the internet, or through private connectivity that uses an AWS Direct Connect gateway to on-premises routers or partner connections between Direct Connect points of presence (PoPs). Each approach offers different trade-offs in terms of security, complexity, and recovery. For more information about connectivity patterns between AWS GovCloud (US) and the global AWS infrastructure, see Connectivity patterns between AWS GovCloud (US) and AWS commercial partition. In addition to the customer gateway approach, partners located in Direct Connect PoPs can provide cross-partition connectivity services that move traffic from one Direct Connect PoP to another. Such a setup enables dedicated lines between the AWS European Sovereign Cloud Direct Connect PoPs and the Direct Connect locations in other partitions.
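As one hedged sketch of the Site-to-Site VPN option, the snippet below assumes a software VPN appliance with a static public IP is already running in the other partition and that a Transit Gateway exists on this side; the ASN, IP address, profile name, and Transit Gateway ID are placeholders, and configuring the tunnels on the remote appliance is not shown.

```python
import boto3

ec2 = boto3.Session(profile_name="eu-sovereign").client("ec2")

# The VPN appliance in the other partition acts as the customer gateway.
cgw = ec2.create_customer_gateway(
    BgpAsn=65010,                      # ASN of the remote appliance (placeholder)
    PublicIp="203.0.113.10",           # its static public IP (placeholder)
    Type="ipsec.1",
)["CustomerGateway"]

vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    TransitGatewayId="tgw-0123456789abcdef0",   # placeholder Transit Gateway ID
    Type="ipsec.1",
    Options={"StaticRoutesOnly": False},        # use BGP over the IPsec tunnels
)["VpnConnection"]
print(vpn["VpnConnectionId"])
```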
Because IAM credentials don’t work across partitions, you need to create separate roles or use external identity providers. Common approaches include IAM roles with trust relationships and external IDs, AWS Security Token Service (AWS STS) Regional endpoints, resource-based policies, or cross-account roles managed through AWS Organizations. A modern best practice is to federate identities from a single, centralized identity provider into multiple partitions, avoiding the need for IAM users wherever possible. If IAM users are still required, their credentials can be stored in AWS Secrets Manager and rotated using AWS Lambda, and a backup user can improve availability. These patterns are often combined with standard access controls, such as Amazon API Gateway with authorizers, to secure cross-partition interactions. For a deeper dive into cross-partition authentication and authorization with AWS IAM, see IAM Identity Center for AWS environments spanning AWS GovCloud (US) and standard Regions.
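A minimal sketch of the IAM-user fallback is shown below, assuming a Secrets Manager secret (illustrative name and fields) that stores the other partition's access key, Region code, and a role ARN to assume; the keys are only materialized at call time and immediately exchanged for short-lived credentials.

```python
import json
import boto3

# Runs in the commercial partition; reads keys of an IAM user that exists
# in the other partition (secret name and field names are illustrative).
secrets = boto3.client("secretsmanager")
creds = json.loads(
    secrets.get_secret_value(SecretId="eu-sovereign/cross-partition-user")["SecretString"]
)

sovereign = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    region_name=creds["Region"],          # Region code of the sovereign partition
)

# Prefer short-lived credentials: immediately assume a scoped role in that partition.
short_lived = sovereign.client("sts").assume_role(
    RoleArn=creds["RoleArn"],             # role in the sovereign partition (placeholder)
    RoleSessionName="cross-partition-failover",
)["Credentials"]
```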
When securing communication between AWS partitions, certificate-based approaches present both opportunities and challenges. Because AWS Certificate Manager (ACM) certificates and AWS Private Certificate Authority (AWS Private CA) are bound to individual partitions, you must typically deploy and manage a separate public key infrastructure (PKI) in each environment, including dedicated root CAs and manual handling of private key transfers. To establish secure cross-partition communication, a more advanced solution involves double-signed certificates, where root CAs in each partition cross-sign each other’s certificates, creating a bidirectional chain of trust. Implementing this requires setting up root CAs with AWS Private CA, establishing cross-signing agreements, managing trust stores across partitions, and handling complex certificate validation and revocation checks. You must also comply with differing regulatory requirements and maintain detailed audit trails. Although this approach adds operational complexity, it is essential for enabling authenticated, encrypted communication across isolated partitions, particularly in regulated environments where security and compliance are paramount.
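Whatever the exact PKI topology, each side needs the other partition's CA certificate in its trust stores. The sketch below only covers that distribution step, assuming a private CA already exists in each partition; the profiles and CA ARNs are placeholders, and the cross-signing ceremony itself is out of scope.

```python
import boto3

def export_ca_certificate(profile: str, ca_arn: str) -> str:
    """Fetch a partition's CA certificate (PEM) so the other side can trust it."""
    pca = boto3.Session(profile_name=profile).client("acm-pca")
    resp = pca.get_certificate_authority_certificate(CertificateAuthorityArn=ca_arn)
    return resp["Certificate"]

# Profiles and CA ARNs are placeholders; one AWS Private CA exists in each partition.
commercial_pem = export_ca_certificate("commercial", "<commercial-ca-arn>")
sovereign_pem = export_ca_certificate("eu-sovereign", "<sovereign-ca-arn>")

# Each PEM is then distributed into the trust stores used on the other side,
# for example through configuration management or a truststore bundle baked into AMIs.
```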
Managing AWS Organizations across partitions
AWS European Sovereign Cloud accounts can’t be set up within your existing AWS organization; they must be managed in a completely separate organization. In the AWS GovCloud (US) partition, accounts can be paired with a commercial organization, as described in Inviting Accounts into an Organization for AWS GovCloud. With sovereignty as the main goal, failing over to an AWS European Sovereign Cloud-only state is simpler if the AWS Organizations setup is separate from the start. This doesn’t require starting from scratch. Instead, you can manage the same organizational units (OUs) and policies for the AWS European Sovereign Cloud by reusing your existing deployment automation.
Ideally, AWS Organizations account structures should be separated to make it straightforward to use the AWS landscape within the AWS European Sovereign Cloud without relying on the other partitions.

Figure 4: Connectivity and service distribution across AWS partitions like the AWS European Sovereign Cloud
Security controls should be tailored per partition using distinct Service Control Policies (SCPs), with AWS Control Tower managing the commercial side. Networking requires isolated Transit Gateways, separate Amazon Route 53 DNS zones, and secure cross-partition communication using AWS PrivateLink. For monitoring, AWS Config aggregators and AWS Security Hub instances must be configured separately in each partition, while consolidated billing can be managed through Organizations. It’s important to consider limitations (for example, AWS Control Tower can’t directly manage AWS GovCloud (US) or AWS European Sovereign Cloud accounts), and the limited availability of some AWS Organizations features in these partitions. Overall, this approach supports governance, security, and operational clarity across partitions.
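Because each partition runs its own organization, the same guardrail content has to be created and attached twice, once from each management account. The sketch below is illustrative: the profile names, allowed Regions, and the simplified SCP (a real Region guardrail usually exempts global services) are assumptions.

```python
import json
import boto3

def region_guardrail(allowed_regions):
    """A simplified deny-outside-allow-list SCP; real policies usually exempt global services."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"StringNotEquals": {"aws:RequestedRegion": allowed_regions}},
        }],
    })

# One management account per partition's organization; profiles and Regions are illustrative.
organizations = [
    ("commercial-mgmt", ["eu-central-1", "eu-west-1"]),
    ("eu-sovereign-mgmt", ["<sovereign-region-code>"]),
]

for profile, regions in organizations:
    org = boto3.Session(profile_name=profile).client("organizations")
    policy = org.create_policy(
        Name="region-guardrail",
        Description="Restrict activity to approved Regions",
        Type="SERVICE_CONTROL_POLICY",
        Content=region_guardrail(regions),
    )["Policy"]["PolicySummary"]
    # Attach the guardrail at the root of this partition's organization.
    org.attach_policy(PolicyId=policy["Id"], TargetId=org.list_roots()["Roots"][0]["Id"])
```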
Conclusion
Navigating sovereignty-driven cloud architectures requires a strategy that addresses partition isolation, network connectivity, and secure cross-partition authentication. Prioritizing sovereignty in failover design adds complexity, but it might be worth the trade-off if your workloads need protection against geopolitical risks or regulatory changes. Start by identifying the disaster scenarios that matter most to your business, then select the simplest architecture that addresses those risks. By designing proactively for evolving regulations, you can maintain both compliance and resilience in the cloud.