Networking & Content Delivery

Design patterns for interconnecting a telco data center to an Amazon VPC

Traditionally, communication service providers (CSPs) in the telecom industry have used the Virtual Routing and Forwarding (VRF) technique to segregate their data center (DC) networks per network domain: for example, Operation, Administration & Management (OAM), signaling, roaming, and user traffic networks. Each VRF domain in the data center must also be connected to the equivalent VRF in other data centers to provide regional and nationwide network coverage. In addition, to deliver services to end customers, CSPs often must extend their multi-VRF network to external private networks while keeping the networks separated. Therefore, both inter-DC connectivity and telco network extension to an enterprise network usually carry the same specific requirements: maintaining network separation (VRF) and supporting interconnection of multiple Autonomous Systems (AS). As a general solution, RFC 4364 introduced two options for this requirement. Option-A (VRF-to-VRF connection in RFC 4364) provides a 1:1 alignment between the routing contexts at each side of the Network-to-Network Interface (NNI). Option-B uses Multi-Protocol Label Switching (MPLS) inter-AS connectivity based on the label switching paradigm at the NNI, with a single MPLS-aware logical interface carrying its global QoS policies, hardening scheme, and a single MP-eBGP session. When you interconnect applications running on AWS with workloads in existing multi-VRF separated networks, the same requirements and solutions apply. The first and simplest approach is to segregate applications into separate Amazon Virtual Private Clouds (Amazon VPCs), each mapped to one VRF in the manner of RFC 4364 option-A. You then provision an individual Site-to-Site (S2S) VPN or AWS Direct Connect (DX) connection between each on-premises VRF and its mapped VPC, as shown in the following Figure 1.

Fig.1 VRF to VPC interconnection (RFC4364 Option-A)

This approach is straightforward and works well, but only when you can segregate network applications into separate VPCs. In most cases, however, a telecom networking application or appliance, such as a Virtual Network Function (VNF) or Container Network Function (CNF) implementing an EPC (Evolved Packet Core) or 5GC (5G Core), must confine multiple network domains in one VPC (e.g., OAM VRF, signaling network VRF, user plane VRF, and roaming interface VRF). This adds complexity and cost to extending the network between the telco network and AWS while keeping the network separation requirement. Therefore, it makes sense to explore best practices especially for a single-VPC architecture for your telco applications on AWS.

Fig.2 VRF to a single VPC interconnection

In this post, five viable design patterns are introduced to build a hybrid network between AWS and a CSP network that is separated into multiple VRFs: 1) a route filter configuration at the customer gateway (CGW), 2) AWS Transit Gateway (Transit Gateway) route table separation, 3) Transit Gateway route table separation with Transit Gateway Connect, 4) a virtual router appliance inside the VPC, and 5) the Multi-VPC ENI Attachments feature. Patterns 1-3 and 5 use native AWS networking constructs with some fine-tuned configuration on the on-premises Provider Edge (PE) router or on the Transit Gateway. Pattern 4, on the other hand, requires a virtual router appliance inside the VPC to implement the RFC 4364 option-B configuration. Beyond these five patterns there can be other options, but these are introduced as general approaches and best practices that minimize the impact on both the VPC architecture in AWS and the existing VRF-separated networks.

Pattern1 – Connecting VRFs to a single VPC using the route-map IN filter at CGW (Provider Edge (PE) Router)

The first design pattern is the simplest AWS networking configuration method: an individual Site-to-Site (S2S) Virtual Private Network (VPN) or Direct Connect (DX) Virtual Interface (VIF) for each VRF, plus some specific configuration on the PE-router side. As addressed in the previous section, a single VPC is one routing domain in AWS networking. Therefore, when you connect an Amazon VPC through S2S VPN or DX to on-premises networks where VRF separation exists, the entire VPC CIDR range is advertised to all connected peer VRFs. In this case, a route policy map at each VRF router (which also acts as the CGW for its S2S connection to the VPC) can be implemented so that each VRF is not affected by the Amazon VPC subnets that belong to other VRFs. This inbound route policy must be applied to filter out advertised subnets that have nothing to do with the current VRF over the BGP session with the Amazon Virtual Private Gateway (VGW) in the Amazon VPC. This option therefore requires pre-configuration at the on-premises VRF router end, based on knowing which pre-defined Amazon VPC subnets map to which VRF. Along with this private-network-side configuration, you must also set the right security and isolation perimeter using security groups and network Access Control Lists (ACLs) at the instance and subnet level. This lets you implement network isolation inside the VPC.

Fig.3 design pattern1; using route filter at PE-router

An example route-map filter configuration for the VRF1 router in the diagram of Figure 3 follows, where the VRF1 subnet is pre-configured as 10.0.10.0/24 in the VPC and the VPC advertises its entire CIDR range of 10.0.0.0/16. Note that, to receive and filter multiple VPC CIDR ranges advertised over your S2S VPN or DX connection, you need to configure a new secondary VPC CIDR for each prefix you would like to advertise, and create your VPC subnets from that CIDR. DX and S2S VPN advertise one prefix per secondary VPC CIDR, in addition to one for the primary, allowing you to filter based on CIDR at your premises accordingly.

router bgp 65001
 neighbor 169.254.205.57 remote-as 64512
 neighbor 169.254.205.57 route-map VRF1-FILTER-IN in
!
! Permit only the VRF1-mapped subnet (10.0.10.0/24) and prefer it; all
! other advertised VPC prefixes are dropped by the implicit deny at the
! end of the route-map.
route-map VRF1-FILTER-IN permit 10
 match ip address 2
 set local-preference 120
!
access-list 2 permit 10.0.10.0 0.0.0.255
access-list 2 deny any
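
On the AWS side, the matching secondary CIDR setup is simple. The following is a minimal boto3 sketch (the VPC ID, CIDR values, and Availability Zone are hypothetical placeholders) that associates one secondary CIDR, which DX or S2S VPN then advertises as its own prefix, and carves a VRF-mapped subnet out of it:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Associate a secondary CIDR with the VPC; DX and S2S VPN advertise one
# prefix per secondary CIDR, which the PE-router route-map can match.
ec2.associate_vpc_cidr_block(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.1.0.0/16",
)

# Create the subnet for the corresponding VRF from the secondary CIDR.
ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.1.10.0/24",
    AvailabilityZone="us-east-1a",
)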

Pattern2 – Using Transit Gateway Route Table separation

If you have an individual S2S VPN or DX VIF per VRF, you can use them to connect each VRF to the single VPC. In this environment, you can use AWS Transit Gateway (Transit Gateway) and Transit Gateway route tables to separate the propagation of the VPC CIDR to each VRF. One caution to keep in mind for this approach (as well as for the next pattern 3) is that Transit Gateway may not be the best option for carrying user traffic in telco 4G/5G core network use cases that must handle a huge amount of traffic, because of the data processing charge for traffic over Transit Gateway. As illustrated in the following Figure 4, the key idea of this design is to disable propagation of the Amazon VPC attachment toward the VRF side and instead define a static route in each VRF-dedicated Transit Gateway route table that includes only the corresponding subnet(s) of the VPC CIDR range; a minimal sketch of this route-table configuration follows Figure 4. As a result, in the example diagram, the VRF1 side of the PE-router only receives the Private-VRF1 subnet range from the BGP advertisement of the Transit Gateway, while the VRF2 side only sees the Private-VRF2 subnet range. In the same way as the previous pattern, network separation and isolation inside the VPC must be implemented with network ACLs and security groups at the subnet and instance level.

Fig.4 design pattern2; using TGW route table separation
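
As a minimal illustration of this setup, the following boto3 sketch (all resource IDs are hypothetical placeholders) associates the VRF1 VPN attachment with its dedicated route table, keeps the VPC CIDR from being propagated there, and adds a static route for the VRF1-mapped subnet only:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Associate the VRF1 S2S VPN attachment with its dedicated route table.
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId="tgw-rtb-0vrf1placeholder",
    TransitGatewayAttachmentId="tgw-attach-0vpnvrf1place",
)

# If the VPC attachment's propagation was enabled on this route table,
# disable it so that the full VPC CIDR is not advertised to VRF1.
ec2.disable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId="tgw-rtb-0vrf1placeholder",
    TransitGatewayAttachmentId="tgw-attach-0vpcplacehold",
)

# Expose only the Private-VRF1 subnet toward the VRF1 VPN attachment.
ec2.create_transit_gateway_route(
    DestinationCidrBlock="10.0.10.0/24",
    TransitGatewayRouteTableId="tgw-rtb-0vrf1placeholder",
    TransitGatewayAttachmentId="tgw-attach-0vpcplacehold",
)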

Pattern3 – Using Transit Gateway Connect over a single DX Transit-VIF

Similar to the previous pattern, but more effective when there is just one DX connection, is the Transit Gateway Connect feature. Transit Gateway Connect provides a way to interconnect multiple AS and multiple VRF networks to AWS over a single DX Transit-VIF. This can simplify the physical network connectivity between the on-premises data center and AWS using Generic Routing Encapsulation (GRE) and Border Gateway Protocol (BGP).

Fig.5 design pattern3; using TGW Connect and TGW route table separation

The above diagram shows an implementation of multiple Transit Gateway Connect attachments with multiple VRFs. The Direct Connect Gateway (DX-GW) is configured with a Transit Virtual Interface and advertises the Transit Gateway CIDR range. The DX-GW is attached to a Transit Gateway, thus creating a DX-GW attachment, which serves as the transport for the Transit Gateway Connect attachments. After creating a Connect attachment, configure GRE and BGP with an on-premises VRF by creating Connect peers. The Transit Gateway side of each GRE tunnel uses an outer IP address from the Transit Gateway CIDR, and the GRE peer address is one of the addresses advertised from the on-premises PE-router. The BGP inside CIDR block must be a /29 CIDR from the 169.254.0.0/16 range. The first address of the /29 range is the BGP peer IP address of the VRF appliance, and two other addresses are selected for the Transit Gateway BGP peers. You can configure two BGP peers for every GRE tunnel, which provides built-in redundancy within each tunnel. Each Connect attachment has its own Transit Gateway route table to add another layer of traffic separation. The VPC CIDR is dynamically propagated to a VRF once the VPC attachment is added as a propagation on that route table, and an on-premises VRF advertises its networks into its respective route table once its corresponding Connect attachment is added as a propagation. Similar to the previous patterns, you must use network ACLs and security groups to enforce network separation inside the VPC.
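
The following boto3 sketch (resource IDs and IP addresses are hypothetical placeholders) shows one Connect attachment and one Connect peer for a VRF, following the addressing rules above:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the Connect attachment on top of the existing DX-GW attachment,
# which acts as the GRE transport.
connect = ec2.create_transit_gateway_connect(
    TransportTransitGatewayAttachmentId="tgw-attach-0dxgwplaceho",
    Options={"Protocol": "gre"},
)
connect_id = connect["TransitGatewayConnect"]["TransitGatewayAttachmentId"]

# Create a Connect peer for VRF1: the outer GRE address on the Transit
# Gateway side comes from the Transit Gateway CIDR, the peer address is
# advertised from the PE-router, and BGP runs inside a /29 of 169.254.0.0/16.
ec2.create_transit_gateway_connect_peer(
    TransitGatewayAttachmentId=connect_id,
    TransitGatewayAddress="192.0.2.10",     # from the Transit Gateway CIDR
    PeerAddress="198.51.100.10",            # PE-router GRE endpoint for VRF1
    InsideCidrBlocks=["169.254.200.0/29"],
    BgpOptions={"PeerAsn": 65001},
)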

Pattern4 – VRF separation using a virtual router appliance (RFC4364-option-B)

In a similar spirit to the previous pattern's GRE-based approach, you can consider an overlay network built by a virtual router appliance inside the VPC. Between the PE router in the on-premises network and this virtual router appliance, you can establish an option-B Network-to-Network Interface (NNI) over an MPLS-over-GRE connection, as shown in the following diagram.

Fig.6 design pattern4; using overlay network by virtual router appliance

For a simple verification, this architecture can be tested with the example setup in the following diagram, using Juniper Networks Virtual SRX (vSRX) from AWS Marketplace. The router appliance on the left-hand side, in VPC1, mimics the on-premises PE router, whereas the router appliance on the right-hand side, in VPC2, represents the virtual router appliance inside the VPC that provides the overlay network. The two VPCs are connected via S2S VPN over the internet.

Fig.7 design pattern4 verification environment example

In this demo environment, traffic to and from different MPLS L3 VPNs is multiplexed over a single option-B overlay connection that is terminated on a virtual router appliance in the AWS Transit VPC. This virtual router appliance demultiplexes flows into multiple locally configured VRFs, which can be bound to one or several subnets within the AWS Transit VPC by importing and exporting the corresponding route targets. The following is an example configuration of the vSRX working as an Autonomous System Boundary Router (ASBR).

[edit]
routing-instances {
    /* VRF bound to the LAN1 subnet of the Transit VPC */
    vSRXForWorkload-LAN1 {
        interface ge-0/0/1.0;
        instance-type vrf;
        route-distinguisher 65002:1;
        vrf-import import-vSRXOnPrem1;
        vrf-export export-vSRXForWorkload1;
        vrf-table-label;
    }
    /* VRF bound to the LAN2 subnet of the Transit VPC */
    vSRXForWorkload-LAN2 {
        interface ge-0/0/2.0;
        instance-type vrf;
        route-distinguisher 65002:2;
        vrf-import import-vSRXOnPrem1;
        vrf-export export-vSRXForWorkload2;
        vrf-table-label;
    }
}

Any eligible combination of import and export policies based on common or different route targets can be implemented as well. On the AWS side, each of these subnets can be associated with common or dedicated route tables that belong to the same VPC, so a different routing lookup can be implemented per subnet route table within the VPC. Furthermore, you can reinforce the separation among routing contexts using network ACLs and security groups, similar to the previous patterns.
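
For completeness, the following is a sketch of how the vrf-import and vrf-export policies referenced in the configuration above could be defined for the LAN1 instance (the route-target values are assumptions for this demo; export-vSRXForWorkload2 would be analogous with its own route target):

[edit]
policy-options {
    /* Accept only routes tagged with the on-premises route target */
    policy-statement import-vSRXOnPrem1 {
        term accept-onprem {
            from community target-onprem1;
            then accept;
        }
        term reject-rest {
            then reject;
        }
    }
    /* Tag exported LAN1 routes with the workload route target */
    policy-statement export-vSRXForWorkload1 {
        term tag-workload {
            then {
                community add target-workload1;
                accept;
            }
        }
    }
    community target-onprem1 members target:65001:1;
    community target-workload1 members target:65002:1;
}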

Pattern5 – VRF separation using Multi-VPC ENI Attachments

Multi-VPC ENI Attachments is a feature that lets an appliance have multiple network interfaces in separate VPCs, providing VPC-level segregation. This allows network functions, such as appliances, to have multiple ENIs while maintaining VPC separation, as shown in Figures 8 and 9.

Fig.8 design pattern5 verification environment example (using TGW)

Fig.9 design pattern5 verification environment example (using VGW)

As mentioned earlier with Figure 1, this architecture adheres to RFC 4364 option-A, which connects each on-premises VRF to its dedicated VRF VPC. The Multi-VPC ENI Attachments feature then allows a single network function appliance to connect to multiple VRF VPCs. With this architecture, you have the option to use separate Transit Gateway route tables for each VRF VPC (Figure 8), similar to Pattern2, or to use a VGW at each VPC (Figure 9), based on your network requirements.
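
As a minimal sketch of this pattern (resource IDs are hypothetical placeholders, and the ENI and the appliance instance must reside in the same Availability Zone), the feature reuses the standard EC2 network interface APIs: an ENI is created in a subnet of a VRF VPC and then attached to the appliance instance that lives in another VPC:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VRF1-facing ENI in a subnet that belongs to the VRF1 VPC.
eni = ec2.create_network_interface(
    SubnetId="subnet-0vrf1placeholder0",
    Description="VRF1-facing interface of the network function appliance",
)

# Attach it to the appliance instance, which runs in a different VPC.
ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterface"]["NetworkInterfaceId"],
    InstanceId="i-0123456789abcdef0",
    DeviceIndex=1,  # eth1; eth0 remains in the appliance's home VPC
)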

Conclusion

In the general cloud concept, since an Amazon VPC is a single flat network that provides one routing domain, having a separate VPC per matching VRF is the simplest way. However, applications like 4G/5G Core Network functions (such as UPF and PGW) must connect to multiple VRFs of the private network and usually can't be broken into multiple VPCs. Therefore, the challenge of integrating a single VPC with a VRF-separated private network often surfaces in the context of mobile network implementation on AWS. In this post, we navigated five different approaches to connecting an Amazon VPC to VRF-separated on-premises private networks. This is often required in telecom industry use cases, such as building out a 5G Core Network or a Private 4G/5G Network on AWS. Each approach has pros and cons regarding complexity of implementation, cost of operation, and constraints on scalability and performance. Therefore, we highly recommend deciding on a detailed implementation among the listed approaches according to the environment of your use case.

 

Pattern1: route-map filter at PE-router (CGW)
Pros:
  • Simplest configuration on the Amazon VPC side (S2S VPN, DX, VGW, Transit Gateway, subnet route tables)
Cons:
  • Needs multiple VIFs or S2S VPNs (one per VRF)
  • PE-router has to have a filtering configuration

Pattern2: Transit Gateway route table separation using multiple S2S VPNs or VIFs
Pros:
  • Each VRF receives the advertisement of only the corresponding subnet, without requiring a PE-router change
Cons:
  • Needs multiple VIFs or S2S VPNs (one per VRF)
  • Needs Transit Gateway route table configuration for static routes
  • Transit Gateway must be used

Pattern3: Transit Gateway route table separation and Transit Gateway Connect using a single VIF
Pros:
  • Each VRF receives the advertisement of only the corresponding subnet, without requiring a PE-router change
  • You can leverage one VIF
Cons:
  • PE-router must have GRE support, and you should be mindful of the limitations of GRE (e.g., per-flow limit)
  • Needs Transit Gateway route table configuration for static routes
  • Transit Gateway and DX must be used

Pattern4: vRouter-basis
Pros:
  • Most straightforward for network engineers (traditional telco-network friendly)
Cons:
  • Additional cost/complexity for building overlay and underlay networks (including high-availability, scalability, and performance concerns)

Pattern5: Multi-VPC ENI Attachments
Pros:
  • Simplest configuration on the Amazon VPC side (S2S VPN, DX, VGW, Transit Gateway, subnet route tables)
  • Each VRF receives the advertisement of only the corresponding subnet, without requiring a PE-router change
Cons:
  • Needs multiple VIFs or S2S VPNs (one per VRF)

About the authors


Rolando Jr Hilvano

Rolando Jr Hilvano is a Senior Telecom Solutions Architect in the Worldwide Telecom Business Unit at AWS. He specializes in the 5G space and works with telecom partners and customers in building and deploying telco workloads on AWS.


Gonzalo Gomez Herrero

Gonzalo Gomez Herrero is a Principal Solutions Architect on the AWS Telco Business Unit EMEA team. He has been supporting global telecom service providers in network design, engineering, planning, deployment, and operations activities for over 20 years, providing consultancy, professional services, and solutions architecture across diverse roles. Gonzalo holds a Master's in Telecommunications Engineering from the University of Zaragoza and completed postgraduate studies in Computer Science at the Technical University of Munich.


Ammar Latif

Ammar Latif is a Senior Telecom Solutions Architect at AWS. He enjoys helping customers use cloud technologies to address their business challenges. Throughout his career, Ammar has collaborated with a number of telecom and media customers globally. Ammar holds a Ph.D. from the New Jersey Institute of Technology.


Matt Lehwess

Matt Lehwess is a Principal Solutions Architect for AWS. Matt has spent many years working as a network engineer in the network service provider space, building large-scale WAN networks in the Asia Pacific region and North America, as well as deploying data center technologies and their related network infrastructure. As a result, he is most at home working with Amazon VPC, AWS Direct Connect, and Amazon’s other infrastructure-focused products and services. Matt is also a public speaker for AWS, and he enjoys spending time helping customers solve large-scale problems using the AWS Cloud platform. Outside of work, Matt is an avid rock climber, both indoor and outdoor, and a keen surfer.


Young Jung

Dr. Young Jung is a Principal Solutions Architect in the AWS Worldwide Telecom Business Unit. His primary focus and mission are to help telco Core/RAN partners and customers design and build cloud-native NFV solutions on AWS. He also specializes in telco-industry use cases of AWS Outposts for implementing telcos' edge services powered by AWS services and technologies.