Networking & Content Delivery

Upgrading AWS Direct Connect to 100 Gbps in 5 steps

Native 100 Gbps connections are now available at select AWS Direct Connect locations worldwide. If you are using a 1 Gbps or 10 Gbps Direct Connect Dedicated Connection today, you can move up to 100 Gbps in five steps. This post walks through those steps and what to consider while planning your migration. These steps apply to connections with or without our newly announced MACsec encryption feature.

In the past, if you needed more than 10 Gbps of capacity from Direct Connect you had two options: you could spread network traffic across multiple 10 Gbps connections using Border Gateway Protocol (BGP) equal-cost multi-path (ECMP) routing, or you could consolidate multiple 10 Gbps connections using link aggregation groups (LAG). Both of these approaches increase available bandwidth, but they are harder to create, maintain, and troubleshoot than a single high-capacity connection. The following diagram (figure 1) shows a high-level overview of the BGP ECMP, LAG, and native 100 Gbps approaches.



Figure 1: Direct Connect high-bandwidth options using BGP ECMP, LAG, and 100 Gbps connections

This blog post describes a five-step process for migrating one or more existing Direct Connect connections to a 100 Gbps connection:

  1. Evaluate your Direct Connect architecture and physical connectivity needs.
  2. Create your new Direct Connect connections and order new circuits.
  3. Configure your new virtual interfaces and network devices.
  4. Execute the migration and test your new Direct Connect architecture, failing back if needed.
  5. Decommission your old connections.

Our approach involves building a new Direct Connect connection and virtual interfaces in parallel with your existing connections. This gives you a migration path with minimal downtime and a quick way to switch back if needed. To make it easier to understand what is happening in steps 3 and 4, we use an example of a single Region and Direct Connect location that is migrating from a 4×10 Gbps LAG to a 100 Gbps connection.


Step 1 – Evaluate your Direct Connect architecture and physical connectivity needs.

Start with a review of your network resiliency needs:

We recommend that you provision sufficient network capacity to ensure that if one network connection fails, your second connection is not overwhelmed. Planning for this in advance helps prevent resiliency problems down the road.

For example, let’s say that you have two LAGs in two different Direct Connect locations, each composed of multiple 10 Gbps connections. This setup provides a degree of redundancy in each location. But if you replace each of these LAGs with a single 100 Gbps connection at each Direct Connect location, you lose that multi-connection redundancy. The AWS Direct Connect Resiliency Recommendations page is a great resource and can help you find the right approach. The following diagram (figure 2) is an example 100 Gbps Direct Connect architecture with maximum resiliency.


Figure 2: Maximum resiliency 100 Gbps architecture


Determine the future of existing connections:

Depending on the impact of any downtime, you may or may not want to retain existing connections for backup alongside your new 100 Gbps connections. A decision to maintain Direct Connect connections of varying sizes should be made based on the criticality and requirements of your workloads. We recommend that you provision sufficient network capacity to ensure that the failure of one network connection does not overwhelm and degrade redundant connections.
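
The capacity rule of thumb above can be sketched in a few lines. The link speeds and peak-traffic figures below are illustrative assumptions, not values from this post:

```python
def survives_single_failure(link_gbps, peak_traffic_gbps):
    """Return True if losing any one link still leaves enough capacity
    to carry peak traffic on the remaining links."""
    return all(
        sum(link_gbps) - link >= peak_traffic_gbps
        for link in link_gbps
    )

# Two 100 Gbps connections carrying an 80 Gbps peak: losing either
# link leaves 100 Gbps, which still covers the peak.
print(survives_single_failure([100, 100], 80))   # True

# The same pair at a 120 Gbps peak: a single failure leaves only
# 100 Gbps, so the surviving link would be overwhelmed.
print(survives_single_failure([100, 100], 120))  # False
```

Running this check against your own measured peaks is a quick way to decide whether an old connection is worth keeping as reduced-capacity backup.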

Evaluate physical connectivity needs:

Once you have decided on an architecture, engage with one or more AWS Direct Connect Delivery Partners to help you establish physical network connectivity between an AWS Direct Connect location and your data center, office, or colocation environment. This connection is made using a Dedicated Connection. If you have your own network devices in an AWS Direct Connect location, you can skip this part.

Next, look for any links along the full network path that do not support 100 Gbps. For example, if you have a network device in an AWS Direct Connect location with a 50 Gbps circuit to your primary data center, that circuit will become a bottleneck after upgrading your Direct Connect to 100 Gbps.
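
The bottleneck logic is simple enough to state as code; the speeds here mirror the 50 Gbps example above and are otherwise arbitrary:

```python
def path_bottleneck_gbps(link_speeds_gbps):
    """End-to-end throughput is capped by the slowest link on the path."""
    return min(link_speeds_gbps)

# A 100 Gbps Direct Connect connection behind a 50 Gbps circuit to the
# primary data center is effectively a 50 Gbps path.
print(path_bottleneck_gbps([100, 50]))  # 50
```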

Be aware of circuit ordering lead times:

If you need a new circuit, plan at least 90 days ahead of your Direct Connect migration to allow for circuit provider lead times. This is a good time to discuss the state of your current circuits with your providers to understand the terms and consequences of migrating from your existing lower capacity circuits to new higher capacity circuits.

Ensure that your network devices can accept 100 Gbps connections:

This is also a good time to ensure you have available ports on your network devices that support 100 Gbps fiber connections. For 100 Gbps connections, you need 100GBASE-LR4 single-mode fiber transceiver modules (for more details, see Direct Connect prerequisites).

Consider service quotas:

Increasing your Direct Connect connection to 100 Gbps does not override the AWS Transit Gateway maximum bandwidth quota. This means that your workloads may not achieve an end-to-end bandwidth of 100 Gbps if your VPC is connected to the Direct Connect connection through a transit virtual interface. You can overcome this limit if you associate Direct Connect with the Virtual Private Gateway of a VPC. When using Transit Gateway, you can attain the full 100 Gbps connection bandwidth, in aggregate, with an architecture that has multiple Transit Gateways associated with your Direct Connect as shown in the Amazon Virtual Private Cloud Connectivity Options Whitepaper. As always, regular EC2 instance bandwidth quotas also apply when calculating bandwidth for individual instances.
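
The aggregate-bandwidth reasoning above can be sketched as a min() over the limits involved. Note that the quota value below is an assumed placeholder, not a figure from this post; check the current Transit Gateway quotas for your account before planning capacity:

```python
def end_to_end_gbps(connection_gbps, tgw_quota_gbps, num_transit_gateways):
    """Effective aggregate bandwidth when a Direct Connect connection
    reaches VPCs through Transit Gateways: each Transit Gateway
    association is capped by its own quota, and the total can never
    exceed the physical connection speed."""
    return min(connection_gbps, tgw_quota_gbps * num_transit_gateways)

ASSUMED_TGW_QUOTA = 50  # Gbps; illustrative assumption only

print(end_to_end_gbps(100, ASSUMED_TGW_QUOTA, 1))  # 50
print(end_to_end_gbps(100, ASSUMED_TGW_QUOTA, 2))  # 100
```

With a single Transit Gateway the quota, not the connection, is the limit; adding a second association (under these assumed numbers) reaches line rate in aggregate.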


Step 2 – Place circuit orders and create your new 100 Gbps connections.

Ordering circuits:

Once you have settled on your network architecture and engaged your circuit partner(s), it’s time to start creating resources. Use the AWS Direct Connect Resiliency Toolkit to ensure that your connections are redundant and at the proper speeds. A great feature of the AWS Direct Connect Resiliency Toolkit is that it helps you order the number of dedicated connections needed to achieve your SLA objective.

Using the LOA-CFA:

After you have created your new connections, follow the steps related to downloading and using the Letter of Authorization and Connecting Facility Assignment (LOA-CFA). Also ensure that you have installed any necessary network device hardware that was purchased from your vendor in step 1.
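
If you prefer to script this step, the inputs map directly onto the Direct Connect CreateConnection API. The helper below only assembles the request; the connection name and location code are placeholders, and the actual boto3 calls are left commented so the sketch stays account-agnostic:

```python
def connection_request(name, location_code, bandwidth="100Gbps"):
    """Assemble parameters for a new dedicated connection. The keys
    mirror the Direct Connect CreateConnection API (and boto3 kwargs)."""
    return {
        "connectionName": name,
        "location": location_code,  # placeholder; list real codes with DescribeLocations
        "bandwidth": bandwidth,
    }

params = connection_request("dx-100g-primary", "EXAMPLE1")
print(params)

# With boto3, the sketch would continue roughly as:
#   dx = boto3.client("directconnect")
#   conn = dx.create_connection(**params)
#   loa = dx.describe_loa(connectionId=conn["connectionId"])
```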


Step 3 – Bring up virtual interfaces (VIFs) on your new Direct Connect connections.

Check your connection status and run a quick test:

After your new connections and physical infrastructure are in place and connected, you can view the details of your connection in the AWS Direct Connect console. When the state of your Direct Connect link shows as available, you can begin setting up VIFs. We recommend at this point that you create a private VIF attached to a test VPC. See the Create a virtual interface entry in the Direct Connect documentation for a detailed look at transit VIF, public VIF, and private VIF creation. Once your router configuration is complete, your VIF’s connection state should be available. Launch an EC2 instance in your test VPC as a target for your test, then perform a ping test from on-prem via this test VIF to the EC2 instance’s private IP address. This test confirms connectivity from your router, through the delivery partner circuit, all the way to the test VPC.
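
If ICMP is filtered somewhere along the path, a plain TCP connect is an easy stand-in for the ping test described above. This is a small sketch, not the post's procedure: the instance IP below is a placeholder, and it assumes a listening port (such as SSH) on the test instance:

```python
import socket

def reachable(host, port, timeout=3.0):
    """TCP reachability probe. ICMP ping needs raw sockets, so a plain
    TCP connect to a known-open port is an easy stand-in you can run
    from any on-prem host."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 10.0.1.25 is a placeholder for the test EC2 instance's private IP;
# run this from on-prem once the VIF state is available.
# print(reachable("10.0.1.25", 22))
```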

Create your production VIFs:

After connectivity is verified and tested, it’s time to ready the new 100 Gbps connection for production traffic. We recommend that you mirror the virtual interface setup of your existing Direct Connect connections onto the new 100 Gbps connection: the total number of VIFs, their type (private, public, or transit), and their associations to resources in AWS (such as a Direct Connect Gateway) stay the same. The new VIFs, while identical in design, are configured in standby mode using BGP configuration options that we explain later in this step. New VIFs are changed to an active state during a migration cutover window in step 4.

Note: If you have connections in multiple Direct Connect locations, we recommend that you migrate old connections one VIF at a time, in one Direct Connect location at a time.

Sample architecture:

For the sake of simplicity, we assume you are using a single Direct Connect location in a single Region, with a 40 Gbps LAG (four 10 Gbps connections) advertising three routes in each direction. This is a common architecture for customers that need high-bandwidth hybrid connectivity. The following diagram (figure 3) shows our example architecture before adding your new 100 Gbps connection. Next, you create an active/standby setup with your existing connection as the active link and your new 100 Gbps connection as the standby.


Figure 3: Existing Direct Connect setup before adding your new 100 Gbps connection


Integrate the new VIF as a hot standby:

After adding your new 100 Gbps connection and configuring your new VIF, you must attach the new 100 Gbps connection to the same Direct Connect Gateway. To avoid any traffic flow changes at this point, modify two BGP attributes: autonomous system (AS) path and local preference. This approach is depicted in the following diagram (figure 4). While this process minimizes production traffic impact, we still recommend performing this step in a change window; as always, any network change introduces risk.


Figure 4: Adding new 100 Gbps connection without changing traffic patterns


Tune your traffic flow from the cloud to on-prem:

We use a technique called AS path prepending to make the advertised BGP AS path look less desirable by artificially increasing the path length of the routes you advertise. On your Customer Gateway router for the new Direct Connect connection, you modify the advertised AS path to include an additional AS number (a repeat of your own ASN). This influences the traffic flowing from your AWS resources to your on-premises locations. To influence the reverse direction, we set a BGP local preference.

Tune your traffic flow from on-prem to the cloud:

Local preference is a BGP attribute that helps tune and prefer outbound paths from your AS. You set the BGP local preference for routes received through the new connection to a lower number than for those received through the existing connection. For example, if routes through your existing connection have a local preference of 100, you could set the local preference for routes received via the new connection to 50. Consult your router vendor’s BGP configuration guides for specific commands. For a deep dive on Direct Connect routing examples, our blog post Creating active/passive BGP connections over AWS Direct Connect is helpful.
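
To see why these two attributes are enough for a clean active/standby split, here is a minimal sketch of the relevant slice of BGP best-path selection (higher local preference wins first, then shorter AS path; real BGP has many more tie-breakers). The ASNs and preference values are illustrative only:

```python
def best_path(routes):
    """Pick the preferred route: highest local preference first, then
    shortest AS path. A simplified slice of BGP best-path selection."""
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

# Outbound (on-prem -> AWS): local preference decides.
existing = {"name": "existing LAG", "local_pref": 100, "as_path": [7224]}
new_100g = {"name": "new 100G",     "local_pref": 50,  "as_path": [7224]}
print(best_path([existing, new_100g])["name"])  # existing LAG

# Inbound (AWS -> on-prem): prepending your own ASN (65001 here, an
# illustrative private ASN) makes the new path look longer, so AWS
# keeps sending traffic over the existing connection.
existing = {"name": "existing LAG", "local_pref": 100, "as_path": [65001]}
new_100g = {"name": "new 100G",     "local_pref": 100, "as_path": [65001, 65001]}
print(best_path([existing, new_100g])["name"])  # existing LAG
```

Once the old BGP sessions are shut down in step 4, the de-preferred routes on the new connection become the only candidates and take over automatically.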

Confirm your route advertisements:

With this new configuration in place, confirm that you are receiving routes on both connections; you are then ready for a cutover with minimal impact. Repeat this step for all VIFs across all Direct Connect connections you are migrating.


Step 4 – Migrate traffic to your new connections and perform failover testing.

Shut down old BGP sessions:

Once your VIF is configured and the route tuning from step 3 is complete, proceed with a graceful shutdown of the old BGP sessions to allow the new connections to take over. This must be done in a change window, and may even be performed within the same window as step 3. When you shut down the BGP peering on the old 40 Gbps LAG, its advertisements cease and the previously de-preferred advertisements on the new connection take effect. This is shown in the following diagram (figure 5).


Figure 5: Cutting over to the new 100 Gbps connection

Test your workloads:

While still within the change window, you should test your workloads for both availability and performance. If problems are encountered, re-enable the BGP sessions on the old VIFs to return traffic flow to its previous state quickly and easily. You must repeat this step for all VIFs you have across all Direct Connect connections you are migrating.


An alternative migration approach:

An alternative approach to steps 3 and 4 is to migrate your virtual interfaces from the old connection to the new one. The benefit of this alternative approach is that there is less configuration work for you to perform. However, you experience downtime while the change is made. While this alternative approach is viable, we recommend the steps described previously in this post, as they both minimize downtime and let you fail back more quickly if necessary.


Step 5 – Decommission old Direct Connect connections.

Once your new Direct Connect connection is carrying traffic and you have verified that it is operating as expected, removal of the old connections can begin. In step 1, we considered which connections we might want to keep for reduced-capacity redundancy. For scenarios where old connections are kept, refer to our post, Creating active/passive BGP connections over AWS Direct Connect, to tune the traffic paths of your connections. If your architecture involves keeping subsets of connections from existing LAGs, disassociate those connections from your LAG. Once you have identified which connections should be removed, delete the VIFs associated with them. Next, delete the LAGs and connections themselves as needed. You are responsible for working with your AWS Direct Connect Delivery Partners to decommission any circuits and cross connects related to your old connections.
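
The teardown order matters: VIFs first, then LAG membership for any connections you are keeping, then the connections themselves. A hedged sketch of that sequence, with placeholder IDs and the matching boto3 `directconnect` client methods shown only as comments:

```python
# Teardown order sketch. IDs are placeholders for your own resources.
teardown_steps = [
    ("delete_virtual_interface",         {"virtualInterfaceId": "dxvif-EXAMPLE"}),
    ("disassociate_connection_from_lag", {"connectionId": "dxcon-EXAMPLE",
                                          "lagId": "dxlag-EXAMPLE"}),
    ("delete_connection",                {"connectionId": "dxcon-EXAMPLE"}),
]

for method, kwargs in teardown_steps:
    print(method, kwargs)
    # With boto3 this would be:
    #   getattr(boto3.client("directconnect"), method)(**kwargs)
```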

While the example in this blog used a single Direct Connect location and single VIF/connection migration, your environment likely has this setup replicated across multiple locations and multiple Regions. We encourage you to use the Direct Connect Resiliency Toolkit’s failover testing feature to ensure your configurations perform as expected before going live.



100 Gbps Direct Connect connections enable a high-bandwidth network architecture that is simpler and easier to manage than multiple 10 Gbps connections. In this blog, we’ve shown how to migrate to 100 Gbps Direct Connect in five steps.

  1. Evaluate your Direct Connect architecture and physical connectivity needs.
  2. Create your new Direct Connect connections and order new circuits.
  3. Configure your new virtual interfaces and network devices.
  4. Execute the migration and test your new Direct Connect architecture, failing back if needed.
  5. Decommission your old connections.

If you want to learn more about setting up Direct Connect Connections globally, see our re:Invent video about how to go global with AWS multi-Region network services.


About the Authors


Sidhartha Chauhan

Sid is a Senior Solutions Architect at AWS and works with enterprise customers to architect and build solutions on the cloud. He holds a master’s degree in Computer Networking from NC State University and has authored the AWS Certified Advanced Networking Official Study Guide.


Brian Soper

Brian is a Solutions Architect in the Boston, Massachusetts area who has been helping AWS customers transform and architect for the cloud since 2018. Brian has a 20+ year background building infrastructure both on-premises and in the cloud, with a specialty in networking.