Networking & Content Delivery

Designing hyperscale Amazon VPC networks


Amazon Web Services (AWS) customers are continuously increasing the number of applications and workloads they run on AWS, driven by accelerated cloud adoption and environment expansion. An environment can be considered “Hyperscale” once it supports thousands of application endpoints and tens or hundreds of gigabits of traffic per second. Hyperscale environments on AWS favor simple and agile network designs. This allows you to integrate new use cases, and maximize the effectiveness of network and infrastructure components—including Amazon Virtual Private Cloud (VPC), VPC peering, Elastic Load Balancers (ELBs), NAT Gateways, Elastic Compute Cloud (EC2) instances, AWS PrivateLink, AWS Transit Gateway, and more.

When operating at Hyperscale, additional planning is required for network connectivity, security, and private IPv4 address scarcity and overlap. Successful Hyperscale deployments use the right services and features for each connectivity use-case, whilst standardizing on common architecture patterns. Hyperscale network design involves leveraging cell-based infrastructure units, and understanding the benefits and limitations of each of the various AWS networking services.

In this blog post we focus on best practices and considerations you can use when designing Hyperscale Amazon VPC networks. We’ll explore different deployment and connectivity options, with a focus on how to use them to meet the needs of Hyperscale environments.

Prerequisites and assumptions

This is a 300-level post. We assume that you are familiar with fundamental networking constructs on AWS – Amazon VPC, VPC peering, VPC sharing, AWS PrivateLink, AWS Transit Gateway and AWS Cloud WAN, for example. We won’t focus on defining these services, but we do outline their capabilities and how you can use them in your large-scale designs. Here are some initial considerations regarding networks and workloads on AWS that guide our approach to this blog:

  • Managing networks on AWS often requires connecting many VPCs from many accounts to specific enterprise networks, and/or the internet. Therefore, planning and managing your network design is the foundation of how you provide resource boundaries within your environment. Network designs should also meet the security needs of your deployment, with additions such as isolation or inspection of specific types of traffic.
  • Network connectivity can be grouped into three different types: (1) connectivity between your on-premises network and your AWS environment, (2) connectivity to and from the internet, and (3) connectivity across your AWS environments. Interconnecting VPCs and on-premises networks at scale means choosing the right service or feature for the use-case. We discuss these options and their merits in this blog post.

Designing hyperscale networks with Amazon VPCs

When architecting workloads on AWS, the six pillars of the Well-Architected framework can be used as guidance. These are: operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability. You’ll notice that networking isn’t listed. However, networking is the foundation for any deployment on AWS. It’s helpful to consider networking-focused best practices when operating hyperscale environments. Whilst not an exhaustive list, we’ll define some of these below.

1. Consider application-driven network connectivity

Understanding your application flows and how they drive the network traffic patterns is critical to choosing the right connectivity solution. From a directional perspective, traffic within a network falls into two basic classifications – (1) north-south, and (2) east-west. North-south traffic refers to traffic flowing between an internal network and the internet, while east-west traffic refers to traffic flowing between endpoints in an internal network. Depending on the direction of traffic, different connectivity services may be used, for example, VPC peering or an Internet Gateway (IGW). Additionally, the right network architecture for your use case can be influenced by the following traffic attributes:

  • Traffic source: Asking whether an application is the initiator of a network session or the responder to an initiation will drastically influence your architectural decisions. For example, VPC peering allows two-way initiation of traffic between two VPCs, based on your route table configuration. AWS PrivateLink, by contrast, restricts traffic initiation to the direction from a client to its VPC endpoint only, strengthening your deployment’s security posture.
  • Throughput: The amount of data transferred raises an important decision on which network connectivity solution should be used. For example, you might consider data sent to and from Amazon S3 to be a high-throughput traffic flow. Such high-volume flows should traverse the lowest number of hops possible. This helps minimize resource consumption, latency, and cost. For example, using S3 gateway endpoints directly in the VPC where the S3 traffic is initiated, and Identity and Access Management (IAM) policies to secure access, is a recommended pattern.
  • Latency: For certain traffic flows, low latency is a critical requirement. It’s important to understand the factors that drive the need for low latency, and the impact of not meeting those needs. Understanding your latency budget also helps, and how latency adds up when traversing the network.

Traffic attributes help us understand how to take advantage of different connectivity models and achieve the right network architecture. Data pipelines, streaming analytics, or mission-critical applications all need different things from the network. Standardizing on a connectivity option or feature ensures easier management and operations. However, multiple connectivity patterns can be used as part of an overall network architecture to accommodate different use-cases.
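To make the attribute-driven decision concrete, here is a minimal sketch of a flow-classification helper. The thresholds and the service chosen for each branch are illustrative assumptions for this example, not AWS recommendations for every workload:

```python
# Hypothetical helper mapping traffic attributes (direction, throughput,
# latency sensitivity) to a connectivity option. Thresholds are assumed
# for illustration only.

def pick_connectivity(direction: str, gbps: float, latency_sensitive: bool) -> str:
    """Return an illustrative connectivity choice for a traffic flow."""
    if direction == "north-south":
        return "Internet Gateway / NAT Gateway"
    # East-west: high-throughput or latency-sensitive flows favor the
    # fewest hops (same VPC or VPC peering); the rest can use a hub.
    if gbps >= 50 or latency_sensitive:
        return "VPC peering"
    return "Transit Gateway"

print(pick_connectivity("east-west", 80, False))  # high-throughput flow
print(pick_connectivity("east-west", 5, False))   # general-purpose flow
```

In practice you would weigh more attributes (cost, segmentation, operational model), but encoding the decision this way helps teams standardize how flows are matched to connectivity patterns.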

2. Choose your operational model and data paths

Different organizations can have different operational models which can drive VPC, network connectivity, and application infrastructure ownership. For example, the network team can own connectivity to AWS Transit Gateway, Cloud WAN or Direct Connect, while application teams own their infrastructure, including the VPCs. Alternatively, the network team owns and maintains all network services, along with the VPCs. When considering your network design on AWS, resource ownership and team interaction can often have an effect on the chosen operational model. Here, we identify three operational models:

  • Centralized: You centrally configure your VPCs and network connectivity as part of one or more infrastructure accounts. Your software development teams manage application resources, such as compute, databases, and services. This offloads complexity from the application teams, but requires careful scaling of the central operational team.
  • Distributed: Your application teams manage their VPCs and network connectivity, as well as application resources like compute, databases and services. This distributes the network-related management tasks, ensuring higher agility for connectivity decisions, but requires application teams to have network skills.
  • Mixed: In a mixed operational deployment, application teams own the VPCs and resources, with the infrastructure team owning network connectivity in one or more centralized accounts. This helps balance ownership, ensuring both flexibility and standardization.

Your operational model can influence both network connectivity options and configurations, hence impacting the data path for your application flows. Here are some data path options:

  • Centralized: You can configure centralized data paths using any of the AWS network connectivity services. A centralized approach entails traffic processing at a single point in the network. For example, using Transit Gateway for intra-Region VPC connectivity, deploying a centralized internet ingress and egress VPC, or centralized east-west inspection VPC.
  • Distributed: Having a distributed data path allows multiple endpoints or services in the network to process traffic. For example, your VPCs can use their local IGW for internet access, and security can be ensured with a local set of AWS Network Firewall (NFW) endpoints, in each VPC.
  • Mixed: Using centralized traffic paths for certain flows (for example, Transit Gateway for shared services, cross Region and hybrid connectivity), and distributed paths for other flows (for example, VPC peering for high throughput and/or low latency connectivity between a limited number of VPCs), ensures your network connectivity is flexible and scalable.

3. Consider network segmentation and security needs

Security and segmentation can and should be implemented at all layers of the TCP/IP stack, as defined by the defense in depth security model. We won’t dive deep into security frameworks, but it’s important to review how connectivity options can meet your security needs.

At small scale or for specific use-cases, VPC peering allows you to grow your network footprint horizontally, and enforce resource-based segmentation, using security group referencing. As you scale horizontally with AWS Transit Gateway or Cloud WAN, you can scale your security boundaries by using multiple routing domains, for an additional layer of network segmentation.

Starting from the two basic traffic flows in the network, east-west and north-south, we can map security policies to the trust boundaries they define. We’ll highlight some of the key considerations that impact scale and flexibility when implementing security at large scale, and examples of NFW inspection architectures can be found here.

  • VPC to VPC traffic inspection: You can inspect east-west traffic between VPCs based on different criteria, depending on your organization’s structure. For example, you can inspect traffic between VPCs that belong to different routing domains (production, development, or testing), or between different lines of business and organizational units. Different compliance requirements can also determine inspection boundaries, such as PCI and non-PCI compliant workloads. East-west traffic inspection can be achieved using centralized inspection architectures, often deployed in VPCs connected to AWS Transit Gateways or Cloud WAN.

    When centralizing inspection, the scale of the inspection point is critical. Using AWS services like Gateway Load Balancer or NFW can help scale your inspection layer to accommodate the required throughput. Additionally, what you’re inspecting is important. For example, you may not want to inspect database replication traffic between two VPCs, but you may want to inspect application layer traffic.

  • North-south traffic inspection: The north-south traffic flow involves crossing a security perimeter between the internal network (which has a higher trust level), and the outside network, such as the internet or partner connectivity (with a lower trust level). The security and inspection architecture used between two zones is based on the direction of traffic, for example, outbound from the internal network, or inbound from the external network. Securing each flow symmetrically, at scale, means ensuring that the egress and ingress inspection points can process the needed throughput.
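Sizing the inspection layer for symmetric throughput can be sketched as simple capacity arithmetic. The per-endpoint capacity below is a placeholder assumption, not a published quota; check the current AWS Network Firewall or Gateway Load Balancer documentation for real figures:

```python
import math

# Sizing sketch for a centralized inspection layer: given peak ingress
# and egress throughput, estimate how many firewall endpoints are needed.
# PER_ENDPOINT_GBPS is an assumed placeholder value.

PER_ENDPOINT_GBPS = 30  # assumed capacity per inspection endpoint

def endpoints_needed(peak_ingress_gbps: float, peak_egress_gbps: float) -> int:
    # Each flow is inspected symmetrically, so size for the larger direction.
    peak = max(peak_ingress_gbps, peak_egress_gbps)
    return math.ceil(peak / PER_ENDPOINT_GBPS)

print(endpoints_needed(45, 80))  # -> 3
```

The same arithmetic applies per Availability Zone when inspection endpoints are deployed zonally, which also keeps the data path AZ-local.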

4. Understand scaling up and scaling out

Network Address Usage (NAU) is a measurement applied to resources in your virtual network to help you plan and monitor the size of your VPC. Each resource in a VPC uses one or more IP addresses to communicate with other resources, and contributes to NAU consumption. In this section we’ll focus on the NAU metric, and how you can use it to design and build flexible, scalable networks on AWS.

The NAU metric is measured for single VPCs, and intra-Region peered groups of VPCs. These two metrics can help plan your VPC deployment expansion, and facilitate Hyperscale operation. Each VPC can have up to 64,000 NAU units by default. You can request that this quota be increased to 256,000. If a VPC is peered with another VPC in the same Region, the two VPCs combined can have up to 128,000 NAU units by default. You can request that this quota be increased to 512,000. VPCs that are peered across different Regions do not contribute to this limit. Detailed information can be found in the documentation. The following figure (Figure 1) shows an example of NAU and peered NAU calculations:

Figure 1: NAU and peered NAU example
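The quota arithmetic above can be sketched in a few lines. Note that real NAU accounting assigns different unit values per resource type (see the VPC documentation); this sketch simply works with per-VPC NAU totals:

```python
# NAU bookkeeping sketch using the default quotas described above.
# Per-VPC NAU totals are assumed inputs; real per-resource unit values
# are in the Amazon VPC documentation.

DEFAULT_VPC_QUOTA = 64_000      # per VPC (raisable to 256,000)
DEFAULT_PEERED_QUOTA = 128_000  # per intra-Region peered group (raisable to 512,000)

def peered_group_nau(vpc_naus: list[int]) -> int:
    """NAU of an intra-Region peered group is the sum of the member VPCs."""
    return sum(vpc_naus)

group = [40_000, 55_000, 20_000]
total = peered_group_nau(group)
print(total, total <= DEFAULT_PEERED_QUOTA)  # 115000 True
```

Tracking this sum as VPCs join a peering mesh shows how quickly a peered group approaches its combined quota, which is one of the triggers for scaling out into a new cell.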

  • Scaling up: Scaling up to accommodate growth by increasing the size of a system’s components, such as databases, EC2 instances, etc., is a natural and straightforward way to scale, but has limitations. Each VPC or peered group of VPCs can have a maximum number of NAU units, in the same way as an EC2 instance size within a family can have a maximum amount of memory or number of CPUs. When operating at Hyperscale, linear scaling factors, for example, cost and throughput, or non-linear factors, such as fault domain and testability, become challenging to work with. A scaled-up VPC, or peered group of VPCs, is fundamentally a single network on AWS, meaning that shared fate can be a challenge too.
  • Scaling out: On the other hand, scaling out means accommodating growth by horizontally increasing the number of network components, for example, VPCs or peered groups of VPCs. This ensures that any network component continues to operate nominally, offering an overall increase in footprint. This provides domain isolation, failure containment, and narrows the impact of issues such as deployment failures, poison pills, misbehaving clients, data corruption, operational mistakes, etc. More details about the benefits of hyperscale cellular architectures can be found here.

5. Work with IPv4 addressing limitations and overlap

When considering Hyperscale deployments, the number of private IPv4 addresses limits overall infrastructure scalability. This often leads to compromises in the connectivity design and adds a layer of complexity to the network that is hard to manage. Some design choices that can help increase the lifespan of private IPv4 space, and their associated considerations are:

  • Using AWS PrivateLink for connectivity between a large number of VPCs with overlapping IP space: AWS PrivateLink allows you to expose applications, at scale, from a service provider VPC to client VPCs. The service provider VPC and all of the client VPCs can have an overlapping IP space, because PrivateLink doesn’t use a routed connection like VPC peering or Transit Gateway. When choosing AWS PrivateLink for client-service connectivity, careful consideration must be made regarding the horizontal scaling of service provider VPCs, as a PrivateLink Endpoint Service is defined at a VPC level.
  • Using public IP connectivity between VPCs: Mapping every private resource to an Elastic IPv4 address ensures its uniqueness in the network, and therefore bidirectional connectivity to every other resource. This approach leads to a high consumption of public IPv4 addresses and adds a layer of complexity to managing security for traffic flows that could otherwise be considered private. In IPv6, the VPC CIDR allocations are from the Globally Unique IPv6 space (GUA), ensuring that each IP address is unique, while connectivity for private workloads remains private and is not exposed to the internet in any form.
  • Using multiple CIDRs in VPCs, some routed in the internal network and some local to the VPC: This design takes advantage of the possibility of assigning an overlapping IPv4 CIDR to a set of VPCs, to ensure that workloads can scale without exhausting the IPv4 space. This involves additional services, like Private NAT Gateway, that must scale with the environment. Details on the architecture can be found here.
  • Using VPC Sharing to limit the IPv4 address allocation: VPC Sharing is a common architecture choice for customers who experience scarce private IPv4 addressing, as it helps with creating and managing a smaller number of VPCs. This choice comes with added attention to security guardrails between workloads that share the same VPC. Additionally, to make scaling out possible, workloads and accounts need to be distributed across multiple shared VPCs.
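A quick way to check which connectivity options remain available for a pair of VPCs is to test their CIDRs for overlap, since routed options (VPC peering, Transit Gateway without private NAT) require non-overlapping space while PrivateLink does not. Python's standard `ipaddress` module handles this directly:

```python
import ipaddress

# Overlap planning check: overlapping CIDRs rule out routed connectivity
# (VPC peering, direct Transit Gateway routing) but still work with
# AWS PrivateLink, as described above.

def cidrs_overlap(a: str, b: str) -> bool:
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

print(cidrs_overlap("10.0.0.0/16", "10.0.128.0/17"))  # True  - routed options ruled out
print(cidrs_overlap("10.0.0.0/16", "10.1.0.0/16"))    # False - peering/TGW possible
```

Running this check across an inventory of VPC CIDRs (for example, exported from IPAM) is a simple way to find which VPC pairs are candidates for each connectivity pattern.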

6. Assess cost optimization opportunities

As a key pillar of the AWS Well-Architected Framework, cost optimization plays an important role in the scale of network infrastructure, especially in a Hyperscale environment. The goal is to build scalable, resilient, highly available, and flexible network architectures that are also cost efficient, without affecting key functions of the design. The different connectivity options available on AWS have different features and scalability boundaries, and also different associated costs.

When scaling up inside a VPC, data transfer costs are a large part of what you pay for traffic between Availability Zones (AZs). Once you scale out across multiple VPCs, VPC peering ensures free data transfer inside the same AZ, and you pay the same cost for cross-AZ data transfer as you do inside the same VPC. For a limited number of VPCs, VPC peering can be a cost-optimized connectivity method. At large scale, AWS Transit Gateway provides a great way to interconnect VPCs; however, it does incur a data processing cost.

A best practice is keeping high-throughput data transfer within the same AZ, optimizing the data transfer cost, and ensuring high throughput and low latency. When data flows must cross AZs or VPC boundaries, the flexibility of the network design allows you to use either VPC peering or Transit Gateway for connectivity, depending on the flow attributes. Network cells can be optimally designed such that each network flow uses the connectivity method that’s best suited.
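The cost trade-off between keeping traffic AZ-local, crossing AZs, and traversing a Transit Gateway can be estimated with simple arithmetic. The per-GB rates below are placeholder assumptions for illustration, not current AWS pricing; substitute the rates from the EC2 and Transit Gateway pricing pages:

```python
# Illustrative data transfer cost comparison for a flow between two VPCs
# in the same Region. Rates are assumed placeholders, not AWS pricing.

CROSS_AZ_PER_GB = 0.01        # assumed, charged in each direction/AZ
TGW_PROCESSING_PER_GB = 0.02  # assumed Transit Gateway data processing

def monthly_cost(gb: float, cross_az: bool, via_tgw: bool) -> float:
    cost = 0.0
    if cross_az:
        cost += gb * CROSS_AZ_PER_GB * 2  # charged on both sides
    if via_tgw:
        cost += gb * TGW_PROCESSING_PER_GB
    return round(cost, 2)

# Same-AZ VPC peering vs. cross-AZ flow through a Transit Gateway:
print(monthly_cost(10_000, cross_az=False, via_tgw=False))  # 0.0
print(monthly_cost(10_000, cross_az=True,  via_tgw=True))   # 400.0
```

Even with placeholder rates, the model shows why high-volume flows are best kept same-AZ over peering, while lower-volume or many-to-many flows can absorb the Transit Gateway processing fee in exchange for simpler routing.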

Reference network architectures

Let’s explore the most common architectures and ways of connecting building blocks, given the outlined best practices. The architectures below can be combined, such that they meet your needs.

Amazon VPC – a single AWS network component

The Amazon VPC is the fundamental building block in your network architecture. The way you deploy and manage Amazon VPCs across your organization can vary depending on your organization’s needs, multi-account strategy, Regional footprint, etc.


  • Traffic pattern: Keeping high throughput, low latency traffic flows inside the same VPC helps you ensure high performance.
  • Operational model and data path: You can use any of the three operational models for VPC management: centralized, distributed or mixed. The data path for intra-VPC communication is local to the VPC.
  • Network segmentation and security: Per-subnet Network ACLs and security groups can be used for security and micro-segmentation inside the VPC.
  • Scalability: When scaling up inside a single VPC, you must consider the Hyperscale deployment caveats: limited testability, a larger failure domain, and increased management and operational complexity.
  • IPv4 address management: Scaling up a VPC can help you manage a limited private IPv4 space, by limiting the number of VPCs you need to create, and optimizing IP space utilization.
  • Cost optimization opportunities: Traffic between different Availability Zones in the same VPC will incur cost for data transfer.

The following figure (Figure 2) shows an example of a VPC design.

Figure 2: Example of an Amazon VPC design

VPC sharing

VPC sharing allows multiple AWS accounts to create their application resources, such as Amazon EC2 instances, Amazon Relational Database Service (RDS) databases, Amazon Redshift clusters, and AWS Lambda functions, into shared, centrally managed VPCs. In this model, the account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization using AWS Resource Access Manager (RAM). Detailed information about VPC sharing can be found here.


  • Traffic pattern: Keeping high throughput, low latency traffic flows inside the same Shared VPC can drive workload placement. You can choose the accounts that share a VPC based on their communication needs. Workloads deployed in participant accounts in the same shared VPC are only limited in throughput by the types of instances they are using.
  • Operational model and data path: Using VPC sharing maps to a centralized operational model, while the data path is local to each shared VPC. When scaling out to multiple shared VPCs, connectivity between them can be achieved using Transit Gateways or Cloud WAN.
  • Scalability: You can use VPC Sharing with applications that require a high degree of interconnectivity and are within the same trust boundaries. To achieve scalability, you can spread the accounts across multiple shared VPCs, based on connectivity needs, business logic or other criteria that allow you to satisfy the enterprise security, operational management and growth needs. NAU units are counted on a per VPC level. Additionally, tighter control over resource deployment becomes critical to make sure a participant doesn’t consume resources that must also be available for the other participants, for example, Elastic Load Balancers, Lambda functions, etc.
  • Network Segmentation and Security: Participants in a Shared VPC can reference security groups that belong to other participants or the owner using the security group ID. You can use NACLs for securing traffic between different subnets of the shared VPC.
  • IPv4 address management: Using VPC Sharing can help with better management of the private IPv4 space, by reducing the number of VPCs you need to deploy in a multi-account environment. Additionally, inside the VPC, you can control the size of subnets you share with participant accounts.
  • Cost optimization opportunities: There is no additional cost associated with using VPC sharing. Participant accounts are billed for the resources they launch in the shared subnets.
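Carving a shared VPC's CIDR into per-participant subnets is a planning exercise the owner account performs before sharing subnets via AWS RAM. The account layout and subnet sizes below are hypothetical; the carving itself uses Python's standard `ipaddress` module:

```python
import ipaddress

# Sketch of carving a shared VPC's CIDR into per-participant subnets,
# one per Availability Zone. Participant counts and sizes are
# hypothetical; in practice the owner shares each subnet via AWS RAM.

def carve_subnets(vpc_cidr: str, prefix: int, count: int) -> list[str]:
    """Return the first `count` subnets of the given prefix length."""
    nets = ipaddress.ip_network(vpc_cidr).subnets(new_prefix=prefix)
    return [str(next(nets)) for _ in range(count)]

# Two participants, two AZs each -> four /20 subnets out of a /16:
print(carve_subnets("10.8.0.0/16", 20, 4))
# ['10.8.0.0/20', '10.8.16.0/20', '10.8.32.0/20', '10.8.48.0/20']
```

Because the owner controls subnet sizes, this is also the point where you ration scarce private IPv4 space: each participant receives only the address range its workloads need, rather than a whole VPC.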

The following figure (Figure 3) shows an example of a Shared VPC design.

Figure 3: Example of a Shared VPC

VPC peering

Growing beyond the scale of a single VPC, VPC peering is a connectivity mechanism that allows you to expand your Amazon VPCs into larger-scale network blocks. Resources in either VPC can communicate with each other as if they are within the same VPC. You can create a VPC peering connection between VPCs in different accounts and Regions. Details about VPC peering can be found in the documentation. The following figure (Figure 4) shows an example of 4 VPCs peered together.

Figure 4: Amazon VPC peering example


  • Traffic pattern: Keeping high throughput, low latency traffic flows inside the same VPC peering mesh helps you ensure the lowest performance impact when scaling beyond a single VPC. VPC peering has no aggregate throughput limitations; you are only limited by the networking capabilities of the instances you’re using in the VPC.
  • Scalability: VPC peering is not a highly scalable connectivity option for a large number of VPCs, as it provides a non-transitive data path. Also, the NAU of a VPC which is part of an intra-Region peering mesh is the sum of NAU units in the local VPC and the NAU units in all directly peered VPCs.
  • Operational model and data path: The operational model can be centralized, distributed or mixed, while the data path is distributed to each peering connection.
  • Network segmentation and security: Subnet NACLs and security groups can be used for security and micro-segmentation inside a VPC, while security group referencing can be used across VPCs that are part of an intra-Region peering mesh.
  • IPv4 address management: Peering cannot be configured for VPCs with overlapping IP space.
  • Cost optimization opportunities: There is no charge for the VPC peering connection itself. You are charged only for data transfer over the peering connection: between different Availability Zones for intra-Region peering, and at cross-Region data transfer rates for cross-Region peering.
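The scalability limit of peering is easy to quantify: a full mesh of n VPCs needs n(n-1)/2 peering connections, and every route table must carry an entry per peer. A quick illustration:

```python
# Why peering alone doesn't scale to a large number of VPCs:
# a full mesh of n VPCs requires n * (n - 1) / 2 peering connections.

def full_mesh_connections(n: int) -> int:
    return n * (n - 1) // 2

for n in (4, 10, 50, 100):
    print(n, full_mesh_connections(n))
# 4 -> 6, 10 -> 45, 50 -> 1225, 100 -> 4950
```

This quadratic growth, combined with the peered-group NAU accounting described earlier, is why peering is best reserved for small, high-throughput cells, with Transit Gateway or Cloud WAN providing the any-to-any fabric between cells.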

AWS PrivateLink

AWS PrivateLink provides a client to service type of secure, private connectivity to AWS services, or to your custom services. Detail on how AWS PrivateLink works can be found in the documentation. The following figure (Figure 5) shows an example of PrivateLink deployment for a customer-managed service.

Figure 5: AWS PrivateLink architecture


  • Traffic pattern: PrivateLink provides you with a client-service type of connectivity. You cannot initiate a connection from the application to the client, through the PrivateLink endpoint in the client VPC.
  • Scalability: By default, each interface endpoint can support a bandwidth of up to 10 Gbps per Availability Zone and automatically scales up to 100 Gbps. Scaling out PrivateLink endpoints is also possible, with careful consideration of the management and operations complexity.
  • Network segmentation and security: When you create an interface endpoint, you can specify the security groups to associate with the endpoint network interface. Also, you can configure endpoint policies to secure your VPC endpoints.
  • Private IPv4 management: Both the service VPC and the client VPCs can have overlapping IP addresses.
  • Cost optimization opportunities: There is a per-hour cost associated with the VPC endpoint per Availability Zone and a data processing fee per GB of data. You can consider consolidating VPC endpoints for certain services in your environment, and providing client connectivity using Transit Gateway or Cloud WAN.
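The endpoint consolidation trade-off can be estimated with a simple model: per-VPC endpoints scale with (VPCs × AZs × services), while a centralized endpoint VPC scales with (AZs × services) plus a Transit Gateway attachment per client VPC. The hourly rates below are placeholder assumptions, not AWS pricing, and the model deliberately omits data processing fees:

```python
# Hedged sketch of the endpoint consolidation trade-off. All rates are
# assumed placeholders; use the PrivateLink and Transit Gateway pricing
# pages for real values. Data processing fees are omitted for brevity.

HOURS_PER_MONTH = 730
ENDPOINT_HOURLY = 0.01    # assumed, per interface endpoint per AZ
TGW_ATTACH_HOURLY = 0.05  # assumed, per Transit Gateway VPC attachment

def distributed(vpcs: int, azs: int, services: int) -> float:
    """Every client VPC hosts its own endpoints for every service."""
    return vpcs * azs * services * ENDPOINT_HOURLY * HOURS_PER_MONTH

def centralized(vpcs: int, azs: int, services: int) -> float:
    """One shared endpoint set, plus a TGW attachment per client VPC."""
    return (azs * services * ENDPOINT_HOURLY
            + vpcs * TGW_ATTACH_HOURLY) * HOURS_PER_MONTH

print(round(distributed(100, 3, 10)))  # 21900
print(round(centralized(100, 3, 10)))  # 3869
```

With these assumed rates, consolidation pays off once many services are consumed from many VPCs; with few services or few VPCs the attachment cost can dominate, so the comparison should always be run with your own numbers and flow volumes.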

AWS Transit Gateway and AWS Cloud WAN

An AWS Transit Gateway is a Regional network transit hub that provides connectivity to your VPCs at scale. As your cloud infrastructure expands globally, inter-Region peering connects transit gateways together using AWS Global Infrastructure. You can also use Transit Gateways with AWS Direct Connect and AWS Site-to-Site VPN, and integrate them with SD-WAN services (using Transit Gateway Connect) for hybrid global connectivity.

VPC peering and Transit Gateway or Cloud WAN connectivity are not exclusive – they can and should be used together to meet the different needs of the applications. For example, VPC-A can have a high throughput requirement to talk to VPC-B, and both VPC-A and B need access to centralized shared services. VPC-A and B can communicate through VPC peering, and both can communicate to the Shared Services VPC using Transit Gateway. The following figure (Figure 6) shows an example of using Transit Gateway together with VPC peering.

Figure 6: AWS Transit Gateway and VPC peering connectivity


  • Traffic pattern: As you horizontally scale your VPC infrastructure, Transit Gateway and Cloud WAN provide you with scalable multi-Region and hybrid connectivity.
  • Scalability: When you create VPC attachments to the Transit Gateway or Cloud WAN, the NAU units are not propagated, allowing you to scale the size of each VPC or peered group of VPCs independently.
  • Network segmentation and security: Transit Gateways and Cloud WAN allow you to create hierarchical network level segmentation at scale, using route tables or segments.
  • Private IPv4 address management: You can attach VPCs with overlapping CIDRs to Transit Gateway or Cloud WAN. To ensure bidirectional routing, you can associate non-overlapping CIDRs with your VPCs and use Private NAT Gateways and Elastic Load Balancers. Detailed information can be found here.
  • Cost optimization opportunities: There is a per-hour cost associated with each attachment and a data processing fee per GB of data sent to the Transit Gateway. Understanding your traffic flows and choosing the right connectivity service helps you optimize on data processing costs.
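The route table (or segment) based segmentation described above can be sketched as a small reachability model: an attachment can reach another attachment only if the destination's routes are propagated into the routing domain the source is associated with. The segment layout below is hypothetical:

```python
# Reachability sketch for Transit Gateway route tables / Cloud WAN
# segments. Attachment names and the segment layout are hypothetical.

ASSOCIATIONS = {          # attachment -> routing domain it's associated with
    "vpc-prod-a": "prod",
    "vpc-prod-b": "prod",
    "vpc-dev-a": "dev",
    "vpc-shared": "shared",
}
PROPAGATIONS = {          # routing domain -> domains whose routes it receives
    "prod": {"prod", "shared"},
    "dev": {"dev", "shared"},
    "shared": {"prod", "dev", "shared"},
}

def can_reach(src: str, dst: str) -> bool:
    """True if dst's routes are visible in src's routing domain."""
    return ASSOCIATIONS[dst] in PROPAGATIONS[ASSOCIATIONS[src]]

print(can_reach("vpc-prod-a", "vpc-shared"))  # True
print(can_reach("vpc-prod-a", "vpc-dev-a"))   # False
```

This mirrors the common pattern where production and development segments are isolated from each other but both reach a shared services segment; the same model extends to inspection segments that sit in the path between domains.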

Conclusion – Choosing the right tools for you

The goal of this blog post was to highlight some of the most common best practices when designing large scale Amazon VPC networks. Balancing standardization and agility, whilst using the right tool for your use case, helps you use all of the connectivity options and scaling methods we have discussed. These options create highly flexible network designs that accommodate your use cases and growth needs. Individual VPCs, Shared VPCs, VPC peering, PrivateLink, Transit Gateway, or Cloud WAN can coexist in a large-scale network environment, each serving its designated purpose, to ensure you’re using the right service or feature for your use-case. If you have questions about this post, then start a new thread on AWS re:Post, or contact AWS Support.

About the authors

Alexandra Huides

Alexandra Huides is a Principal Networking Specialist Solutions Architect within Strategic Accounts at Amazon Web Services. She focuses on helping customers with building and developing networking architectures for highly scalable and resilient AWS environments. Alex is also a public speaker for AWS, and she focuses on helping customers adopt IPv6 and design highly scalable network architectures. Outside work, she loves sailing, especially catamarans, traveling, discovering new cultures, and reading.


Matt Lehwess

Matt Lehwess is a Senior Principal Solutions Architect for AWS. Matt has spent many years working as a network engineer in the network service provider space, building large-scale WAN networks in the Asia Pacific region and North America, as well as deploying data center technologies and their related network infrastructure. As a result, he is most at home working with Amazon VPC, AWS Direct Connect, and Amazon’s other infrastructure-focused products and services. Matt is also a public speaker for AWS, and he enjoys spending time helping customers solve large-scale problems using the AWS Cloud platform. Outside of work, Matt is an avid rock climber, both indoor and outdoor, and a keen surfer.