AWS Storage Blog

Understanding Direct Network Interfaces on AWS Snow Family

To leverage the benefits of modern software development and automation, the telecommunications industry (telco) commonly employs containerized network functions (NFs). A containerized environment places different demands on the underlying infrastructure than a legacy monolithic IT workload. For example, a single network interface is reused for multiple workflows: network operations and maintenance (O&M) traffic, signaling traffic from the NF to the core network or radio access network, user plane traffic, and more, depending on the use case. Additionally, NFs have different requirements for Quality of Service (QoS), bandwidth, and segmentation. Some NFs use Kubernetes networking while others need layer-2 access. Without the flexibility to support these network requirements, telco operators are hampered in their ability to run NFs that serve their customers. They would have to resort to bare metal servers for compute, which negates the efficiency and flexibility gained from a virtualized environment.

To meet this need, AWS Snow Family offers a Direct Network Interface (DNI) feature that customers can use to facilitate the deployment of NFs. In this blog, we cover the advanced networking capabilities supported by DNI. We start by discussing the difference between the two ways AWS Snow devices virtualize network interfaces (Virtual Network Interface and Direct Network Interface). We then discuss the attributes and benefits of using DNIs, after which we elaborate on examples and implementations of DNI with various networking functionalities like VLANs, multicast, transitive routing, and IPv6, which will help you understand and design network architectures using AWS Snow Family devices.

Understanding the difference between VNI and DNI

AWS Snow Family devices have two types of interfaces to connect to the network: the Virtual Network Interface (VNI) and the Direct Network Interface (DNI). To learn more about VNIs, please refer to Understanding Virtual Network Interfaces on AWS Snowball Edge. VNIs are suitable for most workloads; however, networking is one of the most complicated aspects of cloud-native telco architecture, and telco NFs demand capabilities that are addressed by the DNI.

The following example has only VNIs configured. The Snowball Edge (SBE) is connected to a switch via RJ45 and has a static IP address of 192.168.26.200/24. There are two Amazon EC2 instances, each assigned a private IP address from the SBE’s internal DHCP server. The private IPs are assigned from the 34.223.14.128/25 subnet with a gateway address of 34.223.14.129 (this is the address of the gateway within the SBE’s IP stack). There is a VNI entry for each Amazon EC2 instance. Instance A has a VNI with IP 192.168.26.220 that translates to the internal IP 34.223.14.194. Instance B has a VNI with IP address 192.168.26.230 that translates to the internal IP 34.223.14.195. From the LAN’s perspective, traffic sent to 192.168.26.220 will be sent to Amazon EC2 instance A. Likewise, traffic sent to 192.168.26.230 will be sent to Amazon EC2 instance B, as shown in the following image.

The example has VNIs configured. The Snowball Edge (SBE) is connected to a switch via RJ45 and has a static IP address of 192.168.26.200/24.

Now let’s add a DNI to each instance as shown in the following diagram:

This diagram illustrates DNIs configured along with the VNIs referenced in the previous diagram.

Within the guest OS, you will see an additional interface reflecting the newly added DNI. This new DNI is associated with a physical network interface (NIC) and will be mapped to the same layer-2 domain as the physical NIC. Notice that the subnet of the DNI is the same as that of the link between the SFP+ NIC and the switch. The instance OS has the flexibility to utilize the new interface (DNI) for various capabilities, including IPv6, multicast, service chaining for Virtual Network Functions (VNFs), and the Data Plane Development Kit (DPDK).

Direct Network Interface attributes and benefits

The primary attributes of Direct Network Interfaces include the following:

  • Must use SnowballEdge CLI to configure (OpsHub not supported).
  • Layer 2 is bridged from physical interface to the DNI.
  • VLAN tags are supported (inter-VLAN routing must be performed externally).
  • Single-root I/O virtualization (SR-IOV) is used to share the physical NIC across multiple instances.
  • Multiple DNIs can be associated with an Amazon EC2 instance.
  • Traffic on DNIs is not protected by security groups.
  • Do not terminate an Amazon EC2 instance with an attached DNI (delete or detach the DNI first).

For AWS Snowcone, DNIs can be associated with RJ45 ports and each port can support up to 63 DNIs. However, the WiFi interface of the Snowcone does not support DNI.

Since DNI provides instances with layer-2 network access without any intermediary translation or filtering, it has some inherent benefits. These benefits include lower latency between workloads and the LAN as well as supporting network protocols other than IP. In addition, there is more flexibility in terms of network configurations such as service chaining, NIC teaming, and segmentation using VLANs.

Direct Network Interfaces configuration examples

Here are some examples of DNI configurations:

  1. DNIs with VLANs

DNIs configured with VLANs

When creating a DNI, you have the option to add a VLAN tag. In this example, instance A has a DNI with VLAN ID 40 and instance B has a DNI with VLAN ID 30. Using VLANs allows us to segment the traffic so that instances A and B cannot communicate directly over their DNIs. Instance A has been configured with IP address 192.168.40.50 and instance B has been configured with IP address 192.168.30.50. For communication beyond the DNI subnet, you can set their gateway to use the switch, which has an IP address of .1 in their respective subnets. The connection to the switch is configured as a trunk port, and the switch performs inter-VLAN routing between the two VLANs.
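The two VLAN-tagged DNIs in this example could be created with the SnowballEdge CLI along the following lines. This is a sketch: the instance and physical interface IDs below are illustrative placeholders, not values from a real device.

```shell
# Attach a DNI tagged with VLAN 40 to instance A (IDs are placeholders;
# use the values reported by your own device).
snowballEdge create-direct-network-interface \
    --instance-id s.i-0aaaaaaaaaaaaaaaa \
    --physical-network-interface-id s.ni-0bbbbbbbbbbbbbbbb \
    --vlan 40

# Attach a DNI tagged with VLAN 30 to instance B.
snowballEdge create-direct-network-interface \
    --instance-id s.i-0cccccccccccccccc \
    --physical-network-interface-id s.ni-0bbbbbbbbbbbbbbbb \
    --vlan 30
```

The full command syntax is covered in the configuration section later in this post.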

  2. DNIs and VNIs using same physical interface

DNIs and VNIs configured using same physical interface with VLANs

In this example, there is a single optical interface for VNIs, DNIs, and management. To accomplish this, the switchport is configured as a trunk with a native VLAN. The native VLAN is untagged and used for the management and VNI traffic. The VNI behaves as before, with VNI traffic being translated (network address translation) across the native VLAN. DNI traffic leaving the Snow device will be tagged appropriately with its VLAN ID.

The next example describes the situation when you use a DNI and VNI on the same subnet. This occurs when you use the same physical interface but do not use VLANs.

  3. DNI and VNI on same physical interface without VLANs

DNI and VNI on same physical interface without VLANs

In this example, instance A’s DNI has an IP address of 192.168.26.50 and its VNI has an outside IP address of 192.168.26.220. Since they are on the same subnet, this can cause some confusion within the network. If laptop C (IP address 192.168.26.10) were communicating with instance A over the VNI address (192.168.26.220), and a DNI was then added with an address of 192.168.26.50, the communication session between laptop C and instance A would be lost. This is because instance A now has a more direct path to the 192.168.26.0/24 subnet (via the DNI). The user can establish a new session to instance A using the 192.168.26.50 address, or they can add a more specific route (to get to 192.168.26.10/32, take the gateway 34.223.14.129).

Configuring Direct Network Interfaces

Configuring DNIs requires the use of the SnowballEdge CLI. First, determine the physical network interface ID. This information can be retrieved with the describe-device option of the SnowballEdge CLI.

SnowballEdge CLI describe-device command to get physical network interface id

For a Snowball Edge device, look for an optical interface. In this case, select the SFP_PLUS interface. You need the PhysicalNetworkInterfaceId from the output.
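For instance, the lookup might look like the following sketch. The output shown here is illustrative and trimmed; the interface ID is a placeholder.

```shell
# Query the device for its configuration, including physical network interfaces.
snowballEdge describe-device

# Illustrative (trimmed) output - note the PhysicalNetworkInterfaceId
# of the SFP_PLUS interface:
#
# "PhysicalNetworkInterfaces": [
#   {
#     "PhysicalNetworkInterfaceId": "s.ni-0bbbbbbbbbbbbbbbb",
#     "PhysicalConnectorType": "SFP_PLUS",
#     ...
#   }
# ]
```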

The syntax to create a DNI is:

SnowballEdge CLI syntax to create DNI
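In general form, the command looks like this (bracketed values are placeholders you supply; optional arguments are in square brackets):

```shell
snowballEdge create-direct-network-interface \
    --instance-id <instance_id> \
    --physical-network-interface-id <physical_network_interface_id> \
    [--vlan <vlan_id>] \
    [--mac <mac_address>]
```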

  • instance-id <instance_id>

The instance-id is required to associate the DNI with the instance.

  • mac <mac_address>

Optional and only allowed for direct network interfaces. Sets the MAC address of the network interface.

  • vlan <vlan_id>

Optional and only allowed for direct network interfaces. Set the assigned VLAN for the interface. When specified, all traffic sent from the interface will be tagged with the specified VLAN ID. Incoming traffic will be filtered for the specified VLAN ID, and will have all VLAN tags stripped before being passed to the instance.

If you do not specify a VLAN ID, there will be no VLAN tag. If you do not specify a MAC address, one will be assigned automatically. The following is an example:

SnowballEdge CLI create DNI command example
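A concrete invocation might look like the following sketch. The instance ID, interface ID, ARN, and MAC address are illustrative placeholders, and the output shape is trimmed for readability.

```shell
snowballEdge create-direct-network-interface \
    --instance-id s.i-0123456789abcdef0 \
    --physical-network-interface-id s.ni-0bbbbbbbbbbbbbbbb \
    --vlan 30

# Illustrative (trimmed) output:
# {
#   "DirectNetworkInterface": {
#     "DirectNetworkInterfaceArn": "arn:aws:snowball-device:::interface/s.ni-0fedcba9876543210",
#     "MacAddress": "1a:2b:3c:4d:5e:6f",
#     ...
#   }
# }
```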

Note that the output gives you the DirectNetworkInterfaceArn and MacAddress.

To delete a DNI, you need the DirectNetworkInterfaceArn. The syntax is:

SnowballEdge CLI syntax to delete DNI
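In general form (the ARN is the one returned when the DNI was created):

```shell
snowballEdge delete-direct-network-interface \
    --direct-network-interface-arn <direct_network_interface_arn>
```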

If you need to delete a DNI, here is an example:

SnowballEdge CLI delete DNI example
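A concrete invocation might look like this (the ARN below is an illustrative placeholder):

```shell
snowballEdge delete-direct-network-interface \
    --direct-network-interface-arn arn:aws:snowball-device:::interface/s.ni-0fedcba9876543210
```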

Operating system drivers

When DNIs are attached to your Amazon EC2 instance, the instance’s operating system sees the physical network interface card (NIC). This requires that your Amazon EC2 instance has the appropriate drivers for the NIC.

For the Snowball Edge, the NICs are:

  • RJ45 (1/10GBASE-T), 2 ports – Intel X550
  • SFP28 (10/25G) – Mellanox ConnectX-4 Lx
  • QSFP28 (40/100G) – Mellanox ConnectX-5

For the Snowcone, the NIC is:

  • RJ45 (1/10GBASE-T), 2 ports – Intel X553

For many Linux distributions (e.g., Ubuntu and Amazon Linux 2), the Mellanox drivers are already installed. For other distributions, you can download the drivers from here.

For Windows, you can download the drivers from here. Select a driver from the WinOF-2 tab.

Configuring the DNI interface of the Instance OS

When you add a new interface, you must configure the operating system for the traffic to flow according to your desired paths. Below are a few examples:

  1. Within an Ubuntu instance
  2. Using IPv6 on Amazon Linux 2
  3. 5G user plane function (UPF) with MEC workload

While there are many possible options (multiple DNIs, routing, service chaining, etc.), covering them all is out of scope for this document.

The first example is for an Ubuntu instance

DNI configurations on Ubuntu OS

There are two interfaces, ens3 and ens5. The ens3 interface is the first interface and was present when the instance was first launched. It is set for DHCP and should not be modified, as it is used by the VNI. It will be assigned an IP address in the 34.223.14.128/25 subnet with a default gateway of 34.223.14.129. Let’s say for this example that the Snow device gave ens3 an IP address of 34.223.14.193. This is the inside NAT address of the VNI. Let’s also say that the outside NAT address is 192.168.26.220.

The network configuration file is located in the /etc/netplan/50-cloud-init.yaml file:

Cloud-init script configuration for DNI on Ubuntu OS

The ens5 interface was created when the DNI was added. This DNI was placed on the 192.168.30.0/24 subnet and assigned a static IP of 192.168.30.35. The MAC address was copied from the output of the DNI creation command.

Two static routes were added:

  • To get to the 172.16.100.0/24 network, take the path using the gateway 192.168.30.1.
  • To get to the 10.10.0.0/16 network, take the path using the gateway 192.168.30.1 (note that this network is beyond the 172.16.100.0/24 network).
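Putting the above together, the netplan file might look like the following sketch. The addresses and routes are the ones from this example; the MAC address is a placeholder for the one returned when the DNI was created.

```yaml
# /etc/netplan/50-cloud-init.yaml - a sketch for this example
network:
    version: 2
    ethernets:
        ens3:
            dhcp4: true          # VNI interface: leave on DHCP, do not modify
        ens5:                    # DNI interface
            match:
                macaddress: "1a:2b:3c:4d:5e:6f"   # placeholder MAC from DNI creation
            addresses:
                - 192.168.30.35/24
            routes:
                - to: 172.16.100.0/24
                  via: 192.168.30.1
                - to: 10.10.0.0/16
                  via: 192.168.30.1
```

Apply the configuration with `sudo netplan apply`.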

Example output of VNI and DNI configured on Ubuntu OS

In our example, if your laptop were 192.168.26.173, it would be able to access the Ubuntu instance via the VNI address (192.168.26.220). The VNI address is used to manage the Ubuntu instance, and the DNI address is used for data plane traffic to the 172.16.100.0/24 and 10.10.0.0/16 networks.

The second example is using IPv6 on Amazon Linux 2 (AL2)

Configure DNI with IPv6 address on Amazon Linux 2

The Amazon EC2 instance runs Amazon Linux 2 and has two interfaces, eth0 and eth1. Similar to the previous example, eth0 is the first interface and was present when the instance was first launched. It is set for DHCP and should not be modified, as it is used by the VNI. The VNI does not support IPv6 and will only be assigned an IPv4 address. The eth0 interface will be assigned an IP address in the 34.223.14.128/25 subnet with a default gateway of 34.223.14.129. Let’s say for this example that the Snow device gave eth0 an IP address of 34.223.14.193. This is the inside NAT address of the VNI. Let’s also say that the outside NAT address is 192.168.26.220.

Eth1 is the newly added DNI. To configure IPv6 on Amazon Linux 2, you first must enable IPv6. To do this, add the following lines to your /etc/sysconfig/network file:

DNI IPv6 configuration in /etc/sysconfig/network file
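A minimal sketch of the addition (exact settings may vary by Amazon Linux 2 version):

```
# /etc/sysconfig/network - enable IPv6 system-wide
NETWORKING_IPV6=yes
```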

Next, create a new configuration file for eth1. Here is an example of /etc/sysconfig/network-scripts/ifcfg-eth1. Notice that there is no configuration for IPv4, since you are only enabling IPv6. It is possible to have both IPv4 and IPv6 on the DNI, since the DNI bridges layer 2.

Example eth1 configuration file under /etc/sysconfig/network-scripts/
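The file might look like the following sketch, assuming an IPv6-only setup as described above:

```
# /etc/sysconfig/network-scripts/ifcfg-eth1 - IPv6-only DNI interface
DEVICE=eth1
NAME=eth1
ONBOOT=yes
BOOTPROTO=none      # no IPv4 configuration
IPV6INIT=yes
IPV6_AUTOCONF=yes   # link-local/SLAAC address configured automatically
```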

For this example, you are only going to use the IPv6 link-local address, which is configured automatically when IPv6 is enabled. Below is the output of the ip a command (only the portion for eth1 is shown).

Output of successfully configured IPv6 address

The IPv6 link-local address for eth1 is fe80::e4a7:50ff:fe22:1329 with an IPv6 netmask of /64.

On your laptop, view the output of ifconfig for interface en10:

Output of your laptop’s IPv6 address

The IPv6 link-local address for en10 is fe80::8c:4648:3cca:4845 with an IPv6 netmask of /64.

Now try an IPv6 ping from the AL2 instance to the laptop:

Ping test over IPv6 address
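The ping might look like the following sketch. Link-local addresses require an interface scope (here %eth1); the target is the laptop's link-local address from this example.

```shell
# From the AL2 instance, ping the laptop's link-local address via eth1.
ping6 -c 3 fe80::8c:4648:3cca:4845%eth1
```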

You now have IPv6 reachability between the AL2 instance running on the SBE and a laptop on your local LAN. This was made possible by attaching a DNI to your AL2 instance and bridging layer 2 from the instance to the local LAN.

The third example is 5G UPF with Multi-access Edge Computing (MEC) workload

Example of 5G UPF data flow with MEC workload

In this final example, we depict a notional telco use case describing a 5G UPF data flow with a MEC workload. You can see how DNIs are used to stitch the path for the NFs within an SBE.

  1. UE traffic reaches the RAN. The RAN converts layer 1 to IP packets.
  2. IP traffic from RAN is backhauled to Telco DC, eventually reaching the top-of-rack switch (ToR).
  3. The ToR forwards traffic towards the SBE on VLAN 100 (within the VLAN trunk interface). The SBE sees traffic with VLAN tag 100 and forwards it to the corresponding DNI with VLAN ID 100. This DNI is segmented towards the N3 interface of the UPF. This is the GTP-U tunnel traffic.
  4. The UPF then forwards the traffic out its N6 interface towards the ToR on the DNI with VLAN ID 200.
  5. The ToR forwards the traffic (inter-VLAN routing) towards the SBE on VLAN 300. The DNI with VLAN ID 300 is segmented to the MEC workload.

Summary

In this blog, we dove deep into DNI and how it behaves within the AWS Snow Family. We examined various deployment models and illustrated, with the help of architecture diagrams, how telco operators can build multiple network connectivity designs. DNI also unlocks the ability for telcos to use more advanced features such as VLANs, multicast, and IPv6. With these flexible options, DNI allows telcos to deploy their network functions within their virtual environments, enabling them to serve their customers.

Additionally, we looked at various attributes and benefits of using DNI, such as allowing multiple interfaces to be attached to an EC2 instance on AWS Snow devices, which lets customers separate Information Technology (IT) and Operational Technology (OT) traffic.

Finally, we demonstrated how to use SnowballEdge CLI calls to configure a DNI and how to configure the interface at the operating system level (Ubuntu with IPv4 and Amazon Linux 2 with IPv6). To use a DNI, the EC2 instance’s operating system must have the appropriate drivers, as covered in this blog. For additional information, please refer to the Snow Device network configuration guide.

* Please note that details of Direct Network Interfaces in this article are accurate at the time of this writing and are subject to change with newer software releases.

Mark Nguyen


Mr. Nguyen is a passionate tech veteran with over 25 years of experience and a proven track record of customer success. Combining vision and experience, he builds pragmatic solutions that are scalable, resilient, secure, and operationally manageable. Mr. Nguyen is a Product Solutions Architect on the AWS Snow team.

Omkar Mukadam


Omkar Mukadam is an Edge Specialist Solutions Architect at Amazon Web Services. He currently focuses on solutions that enable commercial customers to effectively design, build, and scale with AWS Edge service offerings.