
Second generation EFA: improving HPC and ML application performance in the cloud

When we launched the Elastic Fabric Adapter (EFA) at re:Invent 2018, we delivered on a goal of accelerating computational fluid dynamics (CFD) and weather applications in Amazon EC2, without sacrificing the elasticity, regional availability, cost, and instance choice that make EC2 so popular. At launch, EFA was available on C5n and P3dn instance types in 5 regions.

Performance at scale was a step-function improvement, as you can tell from Figure 1. Today, EFA is available on 33 instance types powered by Intel, AMD, and AWS Graviton processors with multiple memory, disk, and accelerator configurations, and at least one EFA-enabled instance type is available in every AWS Region.

The use cases for EFA have expanded to include large-scale distributed machine learning training and real-time uncompressed high-definition video streaming.

Figure 1: Scaling comparison between EFA and TCP on CFD++ using a 24M cell case out to 1500+ cores. In this case, we were simulating a Klingon Bird of Prey vehicle entering an Earth-like atmosphere, but the important part is that as the number of cores grows, the efficiency of EFA over TCP becomes more and more obvious. These results are from C5n instances in April of 2019, shortly after EFA became generally available.

We have continued to iterate on EFA’s performance and capabilities over the last four years, and today we want to talk about the second generation of EFA.

Many of the improvements discussed in this post are already available to customers on any instance that supports EFA, but the recently released Trn1 instance type is the first to bring all the pieces together in a single place. This iterative approach—deploying improvements to EFA on existing instances as they are developed—is critical to how we approach EFA development. Our customers are constantly finding new use cases, and we don’t wait for the next instance generation to address those customers’ needs.

Distributed training

An example of this iterative development process is distributed machine learning training on P4d instances. Two years ago, most machine learning training used a data-parallel model across a small number of instances, with communication consisting primarily of Allreduce operations multiple gigabytes in size. Since then, the machine learning community has adopted larger scales and multiple levels of parallelism, which changed the communication pattern in ways that challenged EFA’s capabilities.

Over the last year on the same P4d hardware, we improved the performance of small and medium message sizes by up to 50% (Figure 2). This work has resulted in observed performance improvements of over 18% for Fully Sharded Data Parallel (FSDP), a popular PyTorch Distributed Training library, and over 8% for Megatron-LM, Nvidia’s open source distributed training library (Figure 3).

Figure 2: Allreduce performance on P4d.24xlarge instance types using the latest EFA software stack in January of 2022 and November of 2022. Results were generated on a 128 GPU (16 instance) cluster. The performance for large messages has increased by 10% and the performance for small messages has increased by 50%. The improvements are due to changes across the EFA stack, from hardware to Libfabric to NCCL itself.

Figure 3: Performance improvements on P4d for FSDP and Megatron-LM over 2022 due to improvements in the EFA stack. FSDP saw over 18% performance improvement and Megatron-LM saw an 8% performance improvement.

Second generation improvements

The second generation of EFA provides another step function in application performance, especially for machine learning applications. For very small collective operations with accelerators like GPUs or AWS Trainium, second generation EFA provides an additional 50% communication-time improvement over the first generation EFA available on P4d. At the same time, we have doubled the throughput of each AWS Nitro System card that hosts the EFA devices, which allowed us to improve large-message collective performance and average latency.

The following sections discuss the improvements we’ve made to the EFA project since the first generation of EFA launched. While subsets of the improvements are available on any EFA-enabled instance, it is only with second generation EFA that all the improvements are available in one place.

AWS Nitro System hardware improvements

The second generation of EFA starts with new hardware: an updated Nitro System card that improves network performance. Endpoint latency – the portion of latency caused by the NIC/host software instead of network cables and switches – is reduced by 30%. And at the same time, available bandwidth per Nitro card has jumped from 100 Gbps to 200 Gbps, with twice the PCIe bandwidth to help keep the network busy.

Second generation EFA also greatly improves support for moving data directly between accelerator memories (like those on AWS Trainium devices or GPUs) – improving distributed machine learning training applications. In the first generation of EFA, we added an RDMA read semantic to support NCCL communication. In second generation EFA, we’ve added a more complete RDMA interface, allowing for more complex completion semantics (like the low-latency LL/LL128 protocols in Nvidia’s NCCL implementation), which further lowers communication time. The new RDMA interface is also available for HPC applications using MPI, improving throughput when there are a small number of communicating MPI processes per instance. This is important for supporting hybrid OpenMP / MPI applications.
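
To make the RDMA read idea concrete, here is a minimal sketch (not code from the EFA stack itself) using MPI one-sided communication, in which one process reads a value straight out of a peer’s exposed memory without the peer posting a receive. When the MPI library runs over Libfabric’s EFA provider with RDMA support, operations like this can map onto the hardware RDMA read path. The file name and values are purely illustrative.

```c
/* rma_read.c -- hypothetical example: each rank exposes one double and reads
 * the value owned by the next rank with a one-sided MPI_Get (RDMA-read style).
 * Build and run: mpicc rma_read.c -o rma_read && mpirun -n 4 ./rma_read
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank exposes a single double through an RMA window. */
    double local = (double)rank;
    double remote = -1.0;
    MPI_Win win;
    MPI_Win_create(&local, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Read the neighbor's value; the target does not post a receive. */
    int target = (rank + 1) % size;
    MPI_Win_fence(0, win);
    MPI_Get(&remote, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    printf("rank %d read %.1f from rank %d\n", rank, remote, target);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```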

Software improvements

On any network, quite a bit of software sits between an HPC or ML application and the network device. In the case of EFA, that includes a kernel module, a package called Libfabric that provides a portable programming interface to RDMA-like network cards, and MPI or NCCL packages. As part of our efforts to improve application performance, we have touched every one of these pieces of software.
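
As a small illustration of the Libfabric layer, the sketch below (a hypothetical stand-alone program, doing roughly what the stock fi_info utility already does) asks Libfabric for the EFA provider and prints the fabrics and domains it exposes. It assumes Libfabric with the EFA provider is installed, as it is when you run the EFA installer.

```c
/* query_efa.c -- hypothetical example: list the EFA endpoints Libfabric sees.
 * Build with: cc query_efa.c -lfabric -o query_efa
 */
#include <stdio.h>
#include <string.h>
#include <rdma/fabric.h>
#include <rdma/fi_errno.h>

int main(void)
{
    struct fi_info *hints = fi_allocinfo();
    struct fi_info *info = NULL;

    if (!hints)
        return 1;

    /* Restrict the query to the EFA provider and the reliable datagram
     * endpoint type that MPI and NCCL use on EFA. */
    hints->fabric_attr->prov_name = strdup("efa");
    hints->ep_attr->type = FI_EP_RDM;

    int ret = fi_getinfo(FI_VERSION(1, 9), NULL, NULL, 0, hints, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo failed: %s\n", fi_strerror(-ret));
        fi_freeinfo(hints);
        return 1;
    }

    for (struct fi_info *cur = info; cur; cur = cur->next)
        printf("provider: %s, fabric: %s, domain: %s\n",
               cur->fabric_attr->prov_name,
               cur->fabric_attr->name,
               cur->domain_attr->name);

    fi_freeinfo(info);
    fi_freeinfo(hints);
    return 0;
}
```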

There were three key changes to the NCCL training stack that resulted in a 75% reduction in communication time for common collective operations.

First, we changed our communication protocol to send data eagerly to “prime the pump” while setting up RDMA transactions. This change lowered the send/receive time of NCCL by almost 40% for small and medium messages – which are becoming more common in large-scale model training.

Second, we’re trading slightly higher memory usage for more network buffers, resulting in more in-flight transactions. This effectively hides the network latency from the application by pipelining successive messages, and still costs us less than 1% of GPU memory.
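
As a rough illustration of that pipelining idea (a generic MPI sketch, not the EFA or NCCL code), the sender below cycles through a small pool of buffers so that several transfers are always in flight, instead of paying the full network latency for every message.

```c
/* pipeline.c -- hypothetical example: overlap transfers by keeping several
 * messages in flight. Run with two ranks: mpirun -n 2 ./pipeline
 */
#include <mpi.h>
#include <string.h>

#define SLOTS 8              /* extra buffers traded for more in-flight work */
#define CHUNK (64 * 1024)    /* bytes per message */
#define NCHUNKS 256

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    static char pool[SLOTS][CHUNK];
    MPI_Request reqs[SLOTS];
    for (int i = 0; i < SLOTS; i++)
        reqs[i] = MPI_REQUEST_NULL;

    if (rank < 2) {            /* rank 0 streams data to rank 1 */
        for (int c = 0; c < NCHUNKS; c++) {
            int slot = c % SLOTS;
            /* Block only when the buffer we want to reuse is still busy. */
            MPI_Wait(&reqs[slot], MPI_STATUS_IGNORE);
            if (rank == 0) {
                memset(pool[slot], c & 0xff, CHUNK);  /* stand-in for data */
                MPI_Isend(pool[slot], CHUNK, MPI_BYTE, 1, c, MPI_COMM_WORLD,
                          &reqs[slot]);
            } else {
                MPI_Irecv(pool[slot], CHUNK, MPI_BYTE, 0, c, MPI_COMM_WORLD,
                          &reqs[slot]);
            }
        }
        MPI_Waitall(SLOTS, reqs, MPI_STATUSES_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```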

Finally, we implemented topology-aware collective routines on our AWS Trainium instances to take advantage of the low-latency on-node network to improve performance for smaller collectives (Figure 4). We also (again) applied these lessons to MPI, allowing MPI users on our accelerator instances to move data directly between accelerator buffers.

Figure 4: Allreduce performance on 16 trn1.32xlarge instances before - and after - implementation of topology-aware collectives. By separating the low-latency on-node communication from the higher-latency off-node communication, we are able to improve scale-out performance of collective routines by more than 75%.

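The sketch below shows the shape of a topology-aware allreduce in plain MPI (a simplified illustration, not the NCCL or Neuron implementation): reduce within each node over the fast on-node path, allreduce only one contribution per node across the network, then broadcast the result back on-node.

```c
/* hier_allreduce.c -- hypothetical example of a topology-aware allreduce.
 * Build and run: mpicc hier_allreduce.c -o hier_allreduce && mpirun ./hier_allreduce
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split ranks into per-node communicators using shared-memory locality. */
    MPI_Comm node_comm, leader_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    /* A communicator containing only the leader (local rank 0) of each node. */
    MPI_Comm_split(MPI_COMM_WORLD, node_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, &leader_comm);

    double value = (double)world_rank, sum = 0.0;

    /* Step 1: reduce onto each node's leader over the on-node fabric. */
    MPI_Reduce(&value, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, node_comm);

    /* Step 2: allreduce across node leaders only -- the off-node traffic. */
    if (node_rank == 0)
        MPI_Allreduce(MPI_IN_PLACE, &sum, 1, MPI_DOUBLE, MPI_SUM, leader_comm);

    /* Step 3: broadcast the global result back within each node. */
    MPI_Bcast(&sum, 1, MPI_DOUBLE, 0, node_comm);

    printf("rank %d: global sum = %.1f\n", world_rank, sum);

    if (leader_comm != MPI_COMM_NULL)
        MPI_Comm_free(&leader_comm);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
```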

Taking advantage of second generation EFA

The second generation of EFA has been available this year on our sixth-generation compute-optimized and memory-optimized instances powered by Intel, AMD, and Graviton processors. Trn1 is the first instance type to support second generation EFA with RDMA semantics.

To take advantage of the software enhancements on these instances, make sure that you’re using at least version 1.19.0 of the EFA installer (see the EFA getting started documentation for more details). For AWS ParallelCluster, version 3.3 or later includes all the EFA software needed to take advantage of the improvements discussed in this post. On Trn1 or P4d/P4de instances, the AWS Deep Learning AMIs also include updated EFA software.

Conclusion

Four years ago, we announced the first generation of EFA. Since launch, we’ve continually improved performance for our customers’ applications through improvements to the EFA device and instance software. This year, we’ve reached an important milestone in performance improvements that, combined with a new generation of Nitro card, makes up our second generation of EFA.

Customers have been enjoying the benefits of the second generation of EFA for the last year on our sixth-generation compute- and memory-optimized instances and, with Trn1, can also benefit from our work to enable full RDMA semantics. We will be rolling out second generation EFA with RDMA semantics to additional instance types throughout the next year.

Of course, second generation EFA is not a finishing line – we’ll continue to improve both the hardware and software that powers EFA for a long time to come.

Brian Barrett

Brian is a Principal Engineer in Annapurna Labs at AWS. He has over two decades of experience building networks for high-performance computing systems and is one of the founding developers of the Open MPI implementation of the Message Passing Interface (MPI) standard. Brian holds a Ph.D. in Computer Science from Indiana University, Bloomington.

Matt Koop

Matt is a Principal Engineer for the high-performance computing team at AWS. He draws on a broad set of experience in large-scale computing from both the commercial and public sectors to develop solutions for AWS customers. Matt holds a Ph.D. in computer science and engineering from Ohio State University.