Amazon EC2 G6 Instances

High performance GPU-based instances for deep learning inference and graphics-intensive applications

Amazon EC2 G6 instances, powered by NVIDIA L4 Tensor Core GPUs, can be used for a wide range of graphics-intensive and machine learning use cases. G6 instances offer up to 2x better performance for deep learning inference and graphics workloads compared to EC2 G4dn instances.

Customers can use G6 instances for deploying ML models for natural language processing, language translation, video and image analysis, speech recognition, and personalization as well as graphics workloads, such as creating and rendering real-time, cinematic-quality graphics and game streaming.

G6 instances feature up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third-generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.52 TB of local NVMe SSD storage.

Benefits

High performance and cost-efficiency for deep learning inference

G6 instances deliver up to 2x higher performance for deep learning inference compared to G4dn instances. They are powered by L4 GPUs that feature fourth-generation tensor cores and are a highly performant and cost-efficient solution for customers who want to use NVIDIA libraries such as TensorRT, CUDA, and cuDNN to run their ML applications.

High performance for graphics-intensive workloads

G6 instances deliver up to 2x higher graphics performance than G4dn instances. They are powered by L4 GPUs that feature third-generation RT cores and support NVIDIA RTX technology. This makes them ideal for rendering realistic scenes faster, running powerful virtual workstations, and supporting graphics-heavy applications at higher fidelity and resolution. Gr6 instances offer a 1:8 vCPU-to-RAM ratio, which makes them well suited for graphics workloads with higher memory requirements.

Maximized resource efficiency

G6 instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor that delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security. With G6 instances, the Nitro System provisions the GPUs in pass-through mode, providing performance comparable to bare metal.

Features

NVIDIA L4 Tensor Core GPU

G6 instances feature NVIDIA L4 Tensor Core GPUs that deliver high performance for graphics-intensive and machine learning applications. Each instance features up to 8 L4 Tensor Core GPUs that come with 24 GB of memory per GPU, third-generation NVIDIA RT cores, fourth-generation NVIDIA Tensor Cores, and DLSS 3.0 technology. NVIDIA L4 GPUs include four video decoders and two video encoders and have AV1 hardware-encoding capabilities.

NVIDIA drivers and libraries

G6 instances offer NVIDIA RTX Enterprise and gaming drivers to customers at no additional cost. NVIDIA RTX Enterprise drivers can be used to provide high-quality virtual workstations for a wide range of graphics-intensive workloads. NVIDIA gaming drivers provide unparalleled graphics and compute support for game development. G6 instances also support the CUDA, cuDNN, NVENC, TensorRT, cuBLAS, and OpenCL libraries, as well as the DirectX 11/12, Vulkan 1.3, and OpenGL 4.6 graphics APIs.

High performance networking and storage

G6 instances come with up to 100 Gbps of networking throughput, enabling them to meet the low-latency needs of machine learning inference and graphics-intensive applications. 24 GB of memory per GPU, along with support for up to 7.52 TB of local NVMe SSD storage, enables local storage of large models and datasets for high-performance machine learning training and inference. G6 instances can also store large video files locally, increasing graphics performance and enabling the rendering of larger and more complex video files.

Built on AWS Nitro System

G6 instances are built on the AWS Nitro System, which is a rich collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware and software to deliver high performance, high availability, and high security while also reducing virtualization overhead.

Product details

| Instance Size | GPU | GPU Memory (GB) | vCPUs | Memory (GiB) | Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) | On-Demand Price/hr* | 1-yr ISP Effective Hourly (Linux) | 3-yr ISP Effective Hourly (Linux) |
|---|---|---|---|---|---|---|---|---|---|---|
| Single GPU VMs | | | | | | | | | | |
| g6.xlarge | 1 | 24 | 4 | 16 | 1x250 | Up to 10 | Up to 5 | $0.805 | $0.499 | $0.342 |
| g6.2xlarge | 1 | 24 | 8 | 32 | 1x450 | Up to 10 | Up to 5 | $0.978 | $0.606 | $0.416 |
| g6.4xlarge | 1 | 24 | 16 | 64 | 1x600 | Up to 25 | 8 | $1.323 | $0.820 | $0.562 |
| g6.8xlarge | 1 | 24 | 32 | 128 | 2x450 | 25 | 16 | $2.014 | $1.249 | $0.856 |
| g6.16xlarge | 1 | 24 | 64 | 256 | 2x940 | 25 | 20 | $3.397 | $2.106 | $1.443 |
| gr6.4xlarge | 1 | 24 | 16 | 128 | 1x600 | Up to 25 | 8 | $1.539 | $0.954 | $0.654 |
| gr6.8xlarge | 1 | 24 | 32 | 256 | 2x450 | 25 | 16 | $2.446 | $1.517 | $1.040 |
| Multi GPU VMs | | | | | | | | | | |
| g6.12xlarge | 4 | 96 | 48 | 192 | 4x940 | 40 | 20 | $4.602 | $2.853 | $1.955 |
| g6.24xlarge | 4 | 96 | 96 | 384 | 4x940 | 50 | 30 | $6.675 | $4.139 | $2.837 |
| g6.48xlarge | 8 | 192 | 192 | 768 | 8x940 | 100 | 60 | $13.35 | $8.277 | $5.674 |

Prices shown in the preceding table are for the US East (N. Virginia) AWS Region.
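As a rough illustration of the Savings Plans discounts implied by the table above, the following sketch computes the percentage saved by the 1-yr and 3-yr ISP effective hourly rates relative to On-Demand, using the US East (N. Virginia) Linux prices. The instance selection shown is just a sample; any row of the table can be plugged in the same way.

```python
# Savings of 1-yr and 3-yr Instance Savings Plans vs. On-Demand,
# using the US East (N. Virginia) Linux prices from the table above.

PRICES = {
    # size: (on_demand, isp_1yr, isp_3yr) in USD/hr
    "g6.xlarge":   (0.805, 0.499, 0.342),
    "g6.48xlarge": (13.35, 8.277, 5.674),
}

def savings_pct(on_demand: float, effective: float) -> float:
    """Percentage saved relative to the On-Demand hourly rate."""
    return round((1 - effective / on_demand) * 100, 1)

for size, (od, yr1, yr3) in PRICES.items():
    print(f"{size}: 1-yr ISP saves {savings_pct(od, yr1)}%, "
          f"3-yr ISP saves {savings_pct(od, yr3)}%")
```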

Customer testimonials

Varjo

Varjo makes the highest-immersion virtual and mixed reality products for advanced VR users. Their solutions are used to train astronauts, pilots, and nuclear power plant operators, design cars, and conduct pioneering research.  

"For high-end enterprise applications of mixed reality, Amazon EC2 G6 instances are a game-changer. We’re able to train complex machine learning models in a cost-effective and scalable manner, enhancing the user experience for our customers whilst achieving a high price-performance ROI."

Knut Nesheim, Head of Engineering, Varjo

Cloudinary

Cloudinary’s mission is to help companies unleash the full potential of their media to create the most engaging visual experiences.

“For high-efficiency video-processing workflows, Amazon EC2 G6 instances significantly accelerate AV1 video encoding to enable faster delivery of highly optimized content for our customers.”

Amnon Cohen-Tidhar, VP Media Technologies, Cloudinary

SmugMug

Millions of customers use SmugMug’s photo and video sharing service to store billions of valuable photos and videos.

"At SmugMug and Flickr we say, “Your photos look better here”. Since 2013, we’ve used Amazon EC2 GPU instance types to help fulfill that promise. Each EC2 GPU generation brings faster performance and better throughput per dollar, allowing us to cost-effectively scale image rendering for ever-larger photo sizes for higher resolution displays. We’re excited about the g6 instance family and its ability to further optimize our infrastructure performance and economics."

Andrew Shieh, Principal Engineer, SmugMug and Flickr

Getting started with G6 instances for ML

Using DLAMI or Deep Learning Containers

AWS Deep Learning AMIs (DLAMI) provide ML practitioners and researchers with the infrastructure and tools to accelerate DL in the cloud, at any scale. Deep Learning Containers are Docker images preinstalled with DL frameworks that streamline the deployment of custom ML environments by letting you skip the complicated process of building and optimizing your environments from scratch.
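As a minimal sketch of what launching a G6 instance from a DLAMI looks like programmatically, the helper below builds the parameters you might pass to the EC2 RunInstances API (for example via boto3's `ec2.run_instances`). The AMI ID shown is a placeholder, not a real image; look up the current DLAMI ID for your Region and framework in the DLAMI release notes.

```python
# Sketch: build EC2 RunInstances parameters for a single-GPU G6
# instance running a Deep Learning AMI. These would be passed to
# boto3, e.g. boto3.client("ec2").run_instances(**params).

def g6_run_instances_params(ami_id: str,
                            instance_type: str = "g6.xlarge") -> dict:
    """Build RunInstances parameters for a G6-family instance."""
    if not instance_type.startswith(("g6", "gr6")):
        raise ValueError("expected a G6-family instance type")
    return {
        "ImageId": ami_id,            # DLAMI ID, Region-specific
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
    }

# "ami-0123456789abcdef0" is a placeholder, not a real DLAMI ID.
params = g6_run_instances_params("ami-0123456789abcdef0")
```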

Using Amazon EKS or Amazon ECS

If you prefer to manage your own containerized workloads through container orchestration services, you can deploy G6 instances with Amazon EKS or Amazon ECS.
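For the Kubernetes path, a Pod that should run on a G6 node requests GPU capacity through the `nvidia.com/gpu` extended resource, which assumes the NVIDIA device plugin is running on the cluster (as it is with the EKS-optimized GPU AMI). The sketch below builds such a manifest as a Python dict (e.g. for `yaml.safe_dump`); the image name is a hypothetical placeholder.

```python
# Sketch: a Kubernetes Pod manifest requesting one GPU on a G6 node.
# Assumes the NVIDIA device plugin exposes "nvidia.com/gpu" and that
# nodes carry the well-known "node.kubernetes.io/instance-type" label.

def gpu_pod_manifest(name: str, image: str, gpus: int = 1) -> dict:
    """Build a Pod manifest that requests `gpus` GPUs on g6.xlarge nodes."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                # GPU counts are set as resource limits, not requests.
                "resources": {"limits": {"nvidia.com/gpu": str(gpus)}},
            }],
            # Pin scheduling to G6 nodes by instance-type label.
            "nodeSelector": {
                "node.kubernetes.io/instance-type": "g6.xlarge",
            },
        },
    }

# Image name below is a hypothetical placeholder.
manifest = gpu_pod_manifest("inference", "my-registry/inference:latest")
```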

Get started with AWS

Sign up for an AWS account

Instantly get access to the AWS Free Tier.

Learn with 10-minute tutorials

Explore and learn with simple tutorials.

Start building with EC2 in the console

Begin building with step-by-step guides to help you launch your AWS project.