Posted On: Feb 5, 2021
Starting today, Amazon EC2 M5n, M5dn, R5n, and R5dn bare metal instances are generally available. These instances can utilize up to 100 Gbps of network bandwidth and support Elastic Fabric Adapter (EFA) for HPC/ML workloads. Amazon EC2 bare metal instances provide your applications with direct access to the Intel® Xeon® Scalable processor and memory resources of the underlying server. These instances are ideal for workloads that require access to the hardware feature set (such as Intel® VT-x), or for applications that need to run in non-virtualized environments for licensing or support requirements.
Bare metal instances allow EC2 customers to run applications that benefit from deep performance analysis tools, specialized workloads that require direct access to bare metal infrastructure, legacy workloads not supported in virtual environments, and licensing-restricted applications. Bare metal instances also make it possible for customers to run virtualization-secured containers such as Clear Linux Containers. Workloads on bare metal instances continue to take advantage of all the comprehensive services and features of the AWS Cloud, such as Amazon Elastic Block Store (EBS), Elastic Load Balancing (ELB), and Amazon Virtual Private Cloud (VPC).
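As a sketch of how one of these instances might be launched with an EFA-enabled network interface, the AWS CLI accepts the `.metal` instance types directly; the AMI ID, subnet ID, and security group ID below are placeholders, not values from this announcement:

```shell
# Launch an M5n bare metal instance with an EFA network interface.
# ami-0123456789abcdef0, subnet-0example, and sg-0example are placeholders;
# substitute your own AMI, subnet, and security group.
aws ec2 run-instances \
    --instance-type m5n.metal \
    --image-id ami-0123456789abcdef0 \
    --network-interfaces 'DeviceIndex=0,InterfaceType=efa,SubnetId=subnet-0example,Groups=sg-0example' \
    --count 1
```

The same shape applies to the other instance families announced here (`m5dn.metal`, `r5n.metal`, `r5dn.metal`).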
Based on the next-generation AWS Nitro System, M5n, M5dn, R5n, and R5dn instances make 100 Gbps networking available to network-bound workloads without requiring customers to use custom drivers or recompile applications. Customers can also take advantage of this improved network performance to accelerate data transfer to and from Amazon S3, reducing the data ingestion time for applications and speeding up delivery of results.
M5n, M5dn, R5n, and R5dn instances are powered by custom second-generation Intel® Xeon® Scalable processors (Cascade Lake) with a sustained all-core turbo frequency of 3.1 GHz. They also support the new Intel Vector Neural Network Instructions (AVX-512 VNNI), which help speed up typical machine learning operations like convolution and improve inference performance across a wide range of deep learning workloads.
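On a Linux guest, one way to confirm that the processor exposes AVX-512 VNNI is to look for the `avx512_vnni` feature flag in `/proc/cpuinfo`. A minimal sketch (the helper name `has_avx512_vnni` and the sample flags line are ours, not from this announcement):

```python
def has_avx512_vnni(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in /proc/cpuinfo-style text
    lists the avx512_vnni CPU feature."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and "avx512_vnni" in line.split():
            return True
    return False

# Example with an abridged, hypothetical /proc/cpuinfo snippet:
sample = "processor\t: 0\nflags\t\t: fpu vme sse avx512f avx512_vnni\n"
print(has_avx512_vnni(sample))  # -> True
```

On an actual instance you would pass the contents of `/proc/cpuinfo` (e.g. `open("/proc/cpuinfo").read()`) rather than a sample string.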