AWS Public Sector Blog

Connectivity patterns between AWS GovCloud (US) and AWS commercial partition


Amazon Web Services (AWS) GovCloud (US) supports customers who must adhere to stringent security and compliance requirements, including the FedRAMP High baseline, the Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG) Impact Levels 4 and 5, the Criminal Justice Information Services (CJIS) Security Policy, and the International Traffic in Arms Regulations (ITAR).

Workloads that are not subject to these compliance requirements can be deployed in the AWS standard partition. A partition is a group of Regions; the AWS standard partition encompasses the commercial AWS Regions, while AWS GovCloud (US) is a separate partition.

AWS GovCloud (US) was architected to be physically and logically isolated from other AWS partitions for compliance. For this reason, the AWS services used to privately interconnect virtual private cloud (VPC) hosted resources within the same partition, such as AWS PrivateLink, Amazon Virtual Private Cloud (Amazon VPC) peering, and AWS Transit Gateway peering, cannot natively span from AWS GovCloud (US) to commercial Regions by design. In conversations with customers, we've found use cases where customers must interconnect the two partitions. Some host workloads in both partitions to optimize costs while meeting security demands, yet maintain a single data store (such as Active Directory for identity) that must be replicated between partitions. Others consume a software as a service (SaaS) offering (AWS or third party) built in one partition while their workload runs in the other. Still others need connectivity between the partitions for orchestration or data transfer purposes.

In this post, we highlight four connectivity patterns customers can use to interconnect VPC hosted systems across partitions.

Pattern 1: Connectivity over the internet using native TLS encryption


Figure 1. Architectural diagram described in this blog of TLS encryption at the application layer between partitions.

If the application uses TLS encryption natively (such as API traffic over HTTPS/443), use Elastic IP addresses (EIPs) to communicate. This is the most performant and operationally sustainable pattern because it removes the requirement to maintain and configure Amazon Elastic Compute Cloud (Amazon EC2) based virtual private network (VPN) appliances or a Direct Connect gateway (DXGW). As part of the shared responsibility model, you are responsible for ensuring that your encryption-in-transit modules and algorithms meet your compliance needs; this option does not provide encryption at the network transport layer. This is a common architecture used by customers to move data between environments with AWS DataSync, an online data transfer service that automates data transfer between on-premises storage, edge devices, other cloud providers' storage offerings, and AWS managed storage offerings.
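As a minimal sketch of this pattern, the following Python (boto3) snippet allocates an Elastic IP for the TLS-terminating instance in AWS GovCloud (US) and restricts inbound HTTPS to the peer workload's Elastic IP in the commercial partition. The instance ID, security group ID, and peer address are hypothetical placeholders.

```python
import boto3

# Work in an AWS GovCloud (US) Region.
ec2 = boto3.client("ec2", region_name="us-gov-west-1")

# Allocate an Elastic IP and associate it with the instance that terminates TLS.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
)

# Permit inbound HTTPS only from the peer workload's Elastic IP
# in the commercial partition (hypothetical address).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{
            "CidrIp": "203.0.113.10/32",
            "Description": "Commercial partition workload EIP",
        }],
    }],
)
```

A matching rule on the commercial side would similarly allow HTTPS only from the GovCloud Elastic IP, keeping the exposure limited to the two known endpoints.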

Note that in patterns 1, 2, and 4, traffic stays on the AWS backbone network and does not traverse the public internet. As noted in the Amazon VPC FAQ, when using public IP addresses, all communication between instances and services hosted in AWS uses AWS's private network. Packets that originate from the AWS network with a destination on the AWS network stay on the AWS global network, except traffic to or from AWS China Regions.

Pattern 2: Connectivity using IPSec AWS Site-to-Site VPN over the internet


Figure 2. Architectural diagram described in this blog of Site-to-Site VPN between partitions using Transit Gateway and EC2 based appliance.

AWS Site-to-Site VPN is a managed service that uses Internet Protocol security (IPSec) to create encrypted tunnels. IPSec-based VPN is a common approach for enabling multi-partition private connectivity because it provides encryption in transit for systems that cannot support encryption at the application layer or be exposed through an internet gateway. Although it is isolated, AWS GovCloud (US) is not an air-gapped Region; it is connected to the internet, which allows you to set up point-to-point connections. AWS does not currently support VPN connections in which both endpoints are AWS managed gateways (virtual private gateways or transit gateways), so to implement a cross-partition tunnel you need a third-party virtual appliance at one end.

In this scenario, we launch the virtual appliance in AWS GovCloud (US), but the appliance itself can be deployed in either partition. The maximum bandwidth per VPN tunnel is 1.25 gigabits per second (Gbps). For higher performance, enable equal-cost multipath (ECMP) routing to aggregate throughput across VPN connections and scale beyond the 1.25 Gbps per-tunnel limit. This architecture does introduce Transit Gateway data processing charges, Transit Gateway attachment charges, and Amazon EC2 variable costs based on the size of the appliance and the licensing agreement you have with your specific vendor. Before implementing this architecture, it's important to ensure you're using a NIST-approved algorithm in your IPSec implementation in addition to FIPS-validated cryptographic modules. See the documentation for more information on this topic. If you wish to learn more about variations to this architecture, refer to the AWS Site-to-Site VPN documentation.
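For illustration, the following Python (boto3) sketch shows the commercial-partition side of this pattern under stated assumptions: the EC2-based appliance's Elastic IP in AWS GovCloud (US) is registered as a customer gateway, and a Site-to-Site VPN connection is created that terminates on the Transit Gateway. The ASN, public IP, and Transit Gateway ID are hypothetical; creating additional VPN connections with ECMP enabled on the Transit Gateway is how throughput is aggregated beyond a single tunnel.

```python
import boto3

# Work in a commercial Region; the appliance lives in AWS GovCloud (US).
ec2 = boto3.client("ec2", region_name="us-east-1")

# Register the GovCloud appliance's Elastic IP as a customer gateway.
cgw = ec2.create_customer_gateway(
    BgpAsn=65010,              # hypothetical ASN of the appliance
    PublicIp="198.51.100.20",  # hypothetical Elastic IP of the appliance
    Type="ipsec.1",
)

# Create a Site-to-Site VPN connection that terminates on the Transit Gateway.
# Repeat with additional connections (and ECMP enabled on the Transit Gateway)
# to aggregate throughput beyond 1.25 Gbps per tunnel.
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    TransitGatewayId="tgw-0123456789abcdef0",  # hypothetical Transit Gateway ID
    Options={"StaticRoutesOnly": False},       # dynamic routing (BGP) for ECMP
)
```

The tunnel configuration (pre-shared keys, tunnel inside CIDRs, and proposals) is then downloaded from the VPN connection and applied to the third-party appliance according to the vendor's documentation.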

Pattern 3: Connectivity through on-premises customer gateway


Figure 3. Architectural diagram described in this blog of connectivity through on-premises gateway leveraging Direct Connect Gateway.

In this scenario, we have separate AWS Direct Connect transit virtual interfaces (VIFs) or VPN tunnels to a VPC in each partition (AWS GovCloud (US) and commercial). This is a hub-and-spoke connectivity model in which filtering and control occur on premises, with traffic passing through firewall devices hosted in on-premises data centers. To establish private connectivity back to on premises, use an IPSec VPN over the internet, or Direct Connect for more performant, resilient connectivity. Keep in mind that because the traffic is hairpinned through the on-premises network, this pattern adds latency, data transfer out (DTO) costs, and Transit Gateway data transfer charges.

In the preceding Figure 3, the dashed orange line represents the traffic flows that are permitted. The Direct Connect gateway represented here is a global construct (not tied to a specific Region), allowing you to connect to any AWS Region. It allows north-south connectivity but does not propagate Border Gateway Protocol (BGP) prefixes between Transit Gateways or virtual private gateways associated with the same Direct Connect gateway on the same virtual interface. This indirectly denies east-west traffic between associations, except when a supernet is advertised across two or more VPCs whose attached virtual private gateways are associated with the same Direct Connect gateway on the same virtual interface.

In this case, the VPCs can communicate with each other through the Direct Connect connection. For example, if you advertise from your on-premises network a supernet (for example, 10.0.0.0/8 or 0.0.0.0/0) that overlaps with the VPCs attached to a Direct Connect gateway (for example, 10.0.0.0/24 and 10.0.1.0/24) on the same virtual interface, then the VPCs can communicate with each other, and your on-premises network and the VPCs can also communicate with each other.
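As a minimal sketch of the commercial-partition side of this pattern, the following Python (boto3) snippet associates a Transit Gateway with a Direct Connect gateway and controls which VPC prefixes are advertised toward on premises over the transit VIF; a corresponding association would be created separately in AWS GovCloud (US), since a Direct Connect gateway cannot span partitions. The gateway IDs and CIDR are hypothetical placeholders.

```python
import boto3

# Direct Connect gateways are managed through the Direct Connect API.
dx = boto3.client("directconnect", region_name="us-east-1")

# Associate the Transit Gateway with the Direct Connect gateway and limit the
# prefixes advertised toward the on-premises network.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId="11111111-2222-3333-4444-555555555555",  # hypothetical DXGW ID
    gatewayId="tgw-0123456789abcdef0",                              # hypothetical Transit Gateway ID
    addAllowedPrefixesToDirectConnectGateway=[
        {"cidr": "10.0.0.0/24"},  # hypothetical VPC CIDR advertised to on premises
    ],
)
```

The east-west behavior described above is then governed by what your on-premises routers advertise back over each transit VIF, not by anything configured on the Direct Connect gateway itself.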

Note: Performance is driven by the Direct Connect location, as is resiliency. If you don't have redundant Direct Connect connections, using a supernet to connect both partitions introduces a single point of failure: losing the Direct Connect connection severs connectivity between the partitions.

Note: For the purposes of this post, we represent a single Direct Connect point of presence (PoP); you should follow the Direct Connect resiliency recommendations based on your requirements. To learn more about Direct Connect resilience best practices, see Direct Connect Resiliency Recommendations.

Pattern 4: Connectivity using transit VPC and Transit Gateway Connect attachment


Figure 4. Architectural diagram described in this blog of connectivity using transit VPCs.

Customers can also use the transit VPC solution. Within the transit VPC, we deploy redundant third-party virtual appliances in different Availability Zones for high availability. We then connect the appliances to the Transit Gateway using Transit Gateway Connect attachments, which allow BGP over Generic Routing Encapsulation (GRE) tunnels. Note that while GRE provides simplified connectivity with Transit Gateway, it does not provide encryption. If you need encryption, you can use an IPSec VPN between the virtual appliances and the Transit Gateway instead of GRE (as described in pattern 2 earlier).
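As a minimal sketch, assuming the transit VPC already has a VPC attachment to the Transit Gateway, the following Python (boto3) snippet creates a Transit Gateway Connect attachment over that VPC attachment and adds a Connect peer that runs BGP over a GRE tunnel to one of the appliances. The attachment ID, addresses, and ASN are hypothetical placeholders.

```python
import boto3

# Work in the Region that hosts the transit VPC and Transit Gateway.
ec2 = boto3.client("ec2", region_name="us-gov-west-1")

# Create a Connect attachment on top of the transit VPC's existing VPC attachment.
connect = ec2.create_transit_gateway_connect(
    TransportTransitGatewayAttachmentId="tgw-attach-0123456789abcdef0",  # hypothetical VPC attachment
    Options={"Protocol": "gre"},
)

# Add a Connect peer: a GRE tunnel to the appliance with a BGP session inside it.
ec2.create_transit_gateway_connect_peer(
    TransitGatewayAttachmentId=connect["TransitGatewayConnect"]["TransitGatewayAttachmentId"],
    PeerAddress="10.1.0.10",               # hypothetical appliance GRE endpoint in the transit VPC
    InsideCidrBlocks=["169.254.100.0/29"], # link-local /29 used for the BGP session
    BgpOptions={"PeerAsn": 65020},         # hypothetical ASN of the appliance
)
```

A second Connect peer pointing at the appliance in the other Availability Zone would provide the redundancy shown in Figure 4.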

Conclusion

In this post, we discussed some common architecture patterns customers can use to connect AWS GovCloud (US) and standard AWS Regions. While these architectures allow you to communicate across partitions, as part of the shared responsibility model you're responsible for data movement and traffic encryption. Before implementing any of these architectures, ensure they meet the controls of the compliance frameworks with which your workloads must comply. Contact us to learn more.