Amazon EC2 G5 Instances
Amazon EC2 G5 instances are the latest generation of NVIDIA GPU-based instances that can be used for a wide range of graphics-intensive and machine learning use cases. They deliver up to 3x better performance for graphics-intensive applications and machine learning inference and up to 3.3x higher performance for machine learning training compared to Amazon EC2 G4dn instances.
Customers can use G5 instances for graphics-intensive applications such as remote workstations, video rendering, and gaming to produce high fidelity graphics in real time. With G5 instances, machine learning customers get high performance and cost-efficient infrastructure to train and deploy larger and more sophisticated models for natural language processing, computer vision, and recommender engine use cases.
G5 instances feature up to 8 NVIDIA A10G Tensor Core GPUs and second-generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.6 TB of local NVMe SSD storage.
High performance for graphics-intensive applications
G5 instances deliver up to 3x higher graphics performance and up to 40% better price performance than G4dn instances. They have more ray tracing cores than any other GPU-based EC2 instance, feature 24 GB of memory per GPU, and support NVIDIA RTX technology. This makes them ideal for rendering realistic scenes faster, running powerful virtual workstations, and supporting graphics-heavy applications at higher fidelity.
High performance and cost-efficiency for ML inference
G5 instances deliver up to 3x higher performance and up to 40% better price performance for machine learning inference compared to G4dn instances. They are a highly performant and cost-efficient solution for customers who want to use NVIDIA libraries such as TensorRT, CUDA, and cuDNN to run their ML applications.
Cost-efficient training for moderately complex ML models
G5 instances offer up to 15% lower cost-to-train than Amazon EC2 P3 instances. They also deliver up to 3.3x higher performance for ML training compared to G4dn instances. This makes them a cost-efficient solution for training moderately complex and single node machine learning models for natural language processing, computer vision, and recommender engine use cases.
Maximized resource efficiency
G5 instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor that delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security. With G5 instances, the Nitro System provisions the GPUs in pass-through mode, providing performance comparable to bare metal.
NVIDIA A10G Tensor Core GPUs
G5 instances are the first in the cloud to feature NVIDIA A10G Tensor Core GPUs, which deliver high performance for graphics-intensive and machine learning applications. Each instance features up to 8 A10G GPUs, and each GPU comes with 80 ray tracing cores, 24 GB of memory, and 320 third-generation NVIDIA Tensor Cores delivering up to 250 TOPS, resulting in high performance for ML workloads.
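Taking the per-GPU figures quoted above at face value, the aggregate resources of the largest 8-GPU configuration can be tallied with simple arithmetic (a back-of-the-envelope sketch, not an official specification):

```python
# Per-GPU figures for the NVIDIA A10G, as quoted above.
GPUS = 8            # largest G5 configuration
MEM_GB = 24         # GPU memory per A10G
RT_CORES = 80       # ray tracing cores per A10G
TENSOR_CORES = 320  # third-generation Tensor Cores per A10G
TOPS = 250          # peak per-GPU throughput quoted above

print(GPUS * MEM_GB)        # 192 GB total GPU memory
print(GPUS * RT_CORES)      # 640 ray tracing cores
print(GPUS * TENSOR_CORES)  # 2560 Tensor Cores
print(GPUS * TOPS)          # up to 2000 TOPS aggregate
```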
G5 instances offer NVIDIA RTX Enterprise and gaming drivers to customers at no additional cost. NVIDIA RTX Enterprise drivers can be used to provide high quality virtual workstations for a wide range of graphics-intensive workloads. NVIDIA gaming drivers provide unparalleled graphics and compute support for game development. G5 instances also support CUDA, cuDNN, NVENC, TensorRT, cuBLAS, OpenCL, DirectX 11/12, Vulkan 1.1, and OpenGL 4.5 libraries.
High performance networking and storage
G5 instances come with up to 100 Gbps of networking throughput, enabling them to support the low-latency needs of machine learning inference and graphics-intensive applications. The 24 GB of memory per GPU, along with support for up to 7.6 TB of local NVMe SSD storage, enables local storage of large models and datasets for high-performance machine learning training and inference. G5 instances can also store large video files locally, increasing graphics performance and enabling the rendering of larger, more complex video files.
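To put those figures in perspective, filling the full 7.6 TB of local NVMe storage over a 100 Gbps link takes on the order of ten minutes. A rough estimate that ignores protocol overhead:

```python
# Rough transfer-time estimate: 7.6 TB of local NVMe over a 100 Gbps link.
storage_bytes = 7.6e12            # 7.6 TB of local NVMe SSD
link_bits_per_sec = 100e9         # 100 Gbps network bandwidth
seconds = storage_bytes * 8 / link_bits_per_sec
print(round(seconds))             # 608 seconds, i.e. about 10 minutes
```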
Built on AWS Nitro System
G5 instances are built on the AWS Nitro System, which is a rich collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware and software to deliver high performance, high availability, and high security while also reducing virtualization overhead.
|Instance Size|GPUs|GPU Memory (GiB)|vCPUs|Memory (GiB)|Storage (GB)|Network Bandwidth (Gbps)|EBS Bandwidth (Gbps)|On-Demand Price/hr*|1-yr ISP Effective Hourly (Linux)|3-yr ISP Effective Hourly (Linux)|
|---|---|---|---|---|---|---|---|---|---|---|
|**Single GPU VMs**| | | | | | | | | | |
|g5.xlarge|1|24|4|16|1x250|Up to 10|Up to 3.5|$1.006|$0.604|$0.402|
|g5.2xlarge|1|24|8|32|1x450|Up to 10|Up to 3.5|$1.212|$0.727|$0.485|
|g5.4xlarge|1|24|16|64|1x600|Up to 25|8|$1.624|$0.974|$0.650|
|**Multi GPU VMs**| | | | | | | | | | |
|g5.12xlarge|4|96|48|192|1x3800|40|16|$5.672|$3.403|$2.269|
* Prices shown are for US East (Northern Virginia) AWS Region. Prices for 1-year and 3-year reserved instances are for "Partial Upfront" payment options or "No Upfront" for instances without the Partial Upfront option.
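As a quick sanity check on the pricing above, the effective discount of the 1-year and 3-year rates versus On-Demand can be computed directly from the table. A small sketch using the g5.xlarge prices as listed (not an official pricing calculator):

```python
# Effective hourly prices for g5.xlarge (US East), from the table above.
on_demand = 1.006
plan_1yr = 0.604
plan_3yr = 0.402

def discount_pct(effective: float, base: float) -> float:
    """Percentage saved relative to the On-Demand rate."""
    return round((1 - effective / base) * 100, 1)

print(discount_pct(plan_1yr, on_demand))  # 1-yr plan: ~40% off On-Demand
print(discount_pct(plan_3yr, on_demand))  # 3-yr plan: ~60% off On-Demand
```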
Athenascope uses cutting-edge developments in computer vision and artificial intelligence to analyze gameplay and automatically surface the most compelling gameplay moments to create highlight videos for gamers and content creators.
“To create a seamless video experience, low latency video analysis using our CV models is a foundational goal for us. Amazon EC2 G5 instances offer a 30% improvement in price/performance over previous deployments with G4dn instances.”
Chris Kirmse, CEO & Founder, Athenascope
Netflix is one of the world's leading streaming entertainment services with 214 million paid memberships in over 190 countries enjoying TV series, documentaries, and feature films across a wide variety of genres and languages.
“Building a studio in the cloud to create animation, visual effects, and live action content for our viewers has been a priority for us. We want to give artists the flexibility to access workstations whenever and wherever they need them. We’re constantly looking for ways to help our artists innovate by offering them access to more powerful workstations.”
Stephen Kowalski, Director of Digital Production Infrastructure Engineering, Netflix
“With the new Amazon EC2 G5 instances, we can provision higher-end graphics workstations that offer up to 3x higher performance compared to workstations with EC2 G4dn instances. With G5 instances, content creators have the freedom to create more complex and realistic content for our viewers.”
Ben Tucker, Technical Lead, Animation Production Systems Engineering, Netflix
“For high-end VR/XR applications, Amazon EC2 G5 instances are a game-changer. We’re able to run professional applications in Varjo’s signature human-eye resolution with three times the frame rate compared to the G4dn instances used before, providing our customers with never-before-seen experience quality when streaming from a server.”
Urho Konttori, Founder and Chief Technology Officer, Varjo
AWS Deep Learning AMIs (DLAMI) and AWS Deep Learning Containers (DLC)
AWS Deep Learning AMIs (DLAMI) and AWS Deep Learning Containers (DLC) provide data scientists, ML practitioners, and researchers with machine and container images that come pre-installed with deep learning frameworks, making it easy to get started by skipping the complicated process of building and optimizing software environments from scratch. These images include the NVIDIA drivers and CUDA libraries that the frameworks depend on, enabling you to quickly get started with G5 instances.