Amazon EC2 G4 Instances

The industry’s most cost-effective GPU instance for machine learning inference and graphics-intensive applications

Amazon EC2 G4 instances deliver the industry’s most cost-effective GPU instance for deploying machine learning models in production and graphics-intensive applications. G4 instances provide the latest generation NVIDIA T4 GPUs, AWS custom Intel Cascade Lake CPUs, up to 100 Gbps of networking throughput, and up to 1.8 TB of local NVMe storage.

G4 instances are optimized for machine learning inference deployments, such as image classification, object detection, recommendation engines, automated speech recognition, and language translation, that need access to low-level GPU software libraries. These instances are also cost-effective solutions for graphics-intensive applications, such as remote graphics workstations, video transcoding, and game streaming in the cloud. G4 instances are offered in multiple sizes with access to one or more GPUs and varying amounts of vCPU and memory, giving you the flexibility to pick the right instance size for your applications.



Increase performance and reduce machine learning inference costs

G4 instances are equipped with NVIDIA T4 GPUs, which deliver up to 40x better low-latency throughput than CPUs, so more requests can be served in real time. G4 instances are also optimized to be cost-effective for machine learning inference, which can represent up to 90% of the overall operational costs of machine learning initiatives.

High performance graphics

Graphics applications running on G4 instances deliver up to 1.8x the graphics performance and up to 2x the video transcoding capability of the previous-generation Amazon EC2 G3 instances. The NVIDIA T4 GPU can also be used for graphics applications and 3D rendering, with support for the latest APIs: DirectX 12, OpenGL 4.6, OpenCL 2.2, CUDA 10.1, and Microsoft DXR.

Cost-effective small scale training

G4 instances are also useful for small-scale or entry-level machine learning training jobs at businesses or institutions that are less sensitive to time-to-train. With up to 260 TFLOPS of FP16 performance, they are a compelling solution for small-scale training jobs.


A choice of configurations

G4 instances come in several configurations offering up to 64 vCPUs, up to 4 NVIDIA T4 GPUs, and up to 256 GB of host memory so that you can right-size your instance. A bare metal instance will also soon be available, offering 96 vCPUs, 8 NVIDIA T4 GPUs, and 384 GB of host memory.
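As a quick sketch of how you might choose among these sizes programmatically, the snippet below picks the smallest G4 size that satisfies a workload's vCPU, memory, and GPU requirements. The figures are copied from the pricing table in the Product Details section; the helper name `pick_g4_size` is illustrative, not an AWS API.

```python
# Illustrative helper (not an AWS API) for right-sizing a G4 instance.
# The (vCPU, memory, GPU) figures below come from the G4 product table.
G4_SIZES = [
    # (name, vCPUs, memory GB, GPUs), ordered smallest to largest
    ("g4dn.xlarge",    4,  16, 1),
    ("g4dn.2xlarge",   8,  32, 1),
    ("g4dn.4xlarge",  16,  64, 1),
    ("g4dn.8xlarge",  32, 128, 1),
    ("g4dn.12xlarge", 48, 192, 4),
    ("g4dn.16xlarge", 64, 256, 1),
]

def pick_g4_size(vcpus, memory_gb, gpus=1):
    """Return the first (smallest) size satisfying all three requirements."""
    for name, v, m, g in G4_SIZES:
        if v >= vcpus and m >= memory_gb and g >= gpus:
            return name
    return None  # no single-VM fit; consider g4dn.metal when available

print(pick_g4_size(8, 32))       # prints g4dn.2xlarge (1-GPU inference server)
print(pick_g4_size(32, 128, 4))  # prints g4dn.12xlarge (4-GPU training job)
```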

Local Instance Storage

G4 instances offer up to 900 GB of local NVMe storage for fast data access, enabling customers to efficiently create photo-realistic and high-resolution 3D content for movies, games, and AR/VR experiences. The G4 bare metal instance that is coming soon will offer 1.8 TB of local NVMe storage.

Up to 100 Gbps Networking

G4 instances offer up to 50 Gbps of networking throughput to remove data transfer bottlenecks. The G4 bare metal instance that is coming soon will offer 100 Gbps of networking throughput.

Use Cases

Machine Learning Inference

The cost of ML inference can represent up to 90% of overall operational costs. G4 instances are an ideal solution for businesses or institutions looking for a more cost-effective platform for ML inference, as well as for machine learning inference applications that need direct access to GPU libraries such as CUDA, cuDNN, and TensorRT.
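To see how instance pricing translates into a per-inference cost, a rough back-of-the-envelope sketch: the hourly rate below is the g4dn.xlarge On-Demand price from the Product Details table, while the throughput figure is a hypothetical placeholder you would replace with your own model's measured requests per second.

```python
# Back-of-the-envelope cost per million inferences.
# $0.526/hr is the g4dn.xlarge On-Demand price from the product table;
# 1000 inferences/sec is a hypothetical placeholder -- measure your model.
def cost_per_million(price_per_hour, inferences_per_second):
    inferences_per_hour = inferences_per_second * 3600
    return price_per_hour * 1_000_000 / inferences_per_hour

print(round(cost_per_million(0.526, 1000), 4))  # prints 0.1461 (dollars per 1M)
```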

Remote Graphics Workstations

Customers who want to run graphics applications, such as Autodesk Maya or 3ds Max, on remote workstations in the cloud can use G4 instances to provision resources on a per-project basis without being limited by the availability of on-premises hardware.

Media and Entertainment

G4 instances can be used for post-production and video playout/broadcast as well as video encoding and transcoding. In addition, G4 instances can support AR/VR applications.

Product Details

|  | Instance Size | vCPUs | Memory (GB) | GPUs | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) | On-Demand Price/hr* | 1-yr Reserved Instance Effective Hourly* (Linux) | 3-yr Reserved Instance Effective Hourly* (Linux) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Single GPU VMs | g4dn.xlarge | 4 | 16 | 1 | 125 | Up to 25 | Up to 3.5 | $0.526 | $0.316 | $0.210 |
|  | g4dn.2xlarge | 8 | 32 | 1 | 225 | Up to 25 | Up to 3.5 | $0.752 | $0.452 | $0.300 |
|  | g4dn.4xlarge | 16 | 64 | 1 | 225 | Up to 25 | Up to 3.5 | $1.204 | $0.722 | $0.482 |
|  | g4dn.8xlarge | 32 | 128 | 1 | 1x900 | 50 | 7 | $2.176 | $1.306 | $0.870 |
|  | g4dn.16xlarge | 64 | 256 | 1 | 1x900 | 50 | 7 | $4.352 | $2.612 | $1.740 |
| Multi GPU VMs | g4dn.12xlarge | 48 | 192 | 4 | 1x900 | 50 | 7 | $3.912 | $2.348 | $1.564 |
|  | g4dn.metal** | 96 | 384 | 8 | 2x900 | 100 | 14 | Coming soon | Coming soon | Coming soon |

* Prices shown are for US East (Northern Virginia) AWS Region. Prices for 1-year and 3-year reserved instances are for "Partial Upfront" payment options or "No Upfront" for instances without the Partial Upfront option.

** Coming soon
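As a worked example of reading the table, the snippet below computes the effective discount of the 1-year and 3-year Reserved Instance rates relative to On-Demand, using the g4dn.xlarge Linux prices for US East (N. Virginia).

```python
# Effective Reserved Instance savings over On-Demand, using the
# g4dn.xlarge Linux prices from the table above (US East, N. Virginia).
on_demand, ri_1yr, ri_3yr = 0.526, 0.316, 0.210

def savings_pct(reserved, base=on_demand):
    """Percent discount of the reserved hourly rate vs. On-Demand."""
    return round(100 * (1 - reserved / base), 1)

print(savings_pct(ri_1yr))  # prints 39.9 (% off On-Demand)
print(savings_pct(ri_3yr))  # prints 60.1 (% off On-Demand)
```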

Global Availability

Amazon EC2 G4 instances are available in the US East (N. Virginia and Ohio), US West (Oregon and N. California), Canada (Central), Europe (Frankfurt, Ireland, London, Paris, and Stockholm), Asia Pacific (Hong Kong, Mumbai, Seoul, Singapore, Sydney, and Tokyo), South America (São Paulo), Middle East (Bahrain), and GovCloud (US-West) AWS Regions as On-Demand, Reserved, or Spot Instances.

Getting Started with G4 Instances

Deep Learning AMI and Deep Learning Containers

AWS Deep Learning AMIs provide machine learning practitioners and researchers with the infrastructure and tools to accelerate deep learning in the cloud, at any scale. You can quickly launch Amazon EC2 instances pre-installed with popular deep learning frameworks and interfaces such as TensorFlow, PyTorch, Apache MXNet, Chainer, Gluon, Horovod, and Keras to train sophisticated, custom AI models, experiment with new algorithms, or learn new skills and techniques. To learn more, visit the AWS Deep Learning AMIs product page.

AWS Deep Learning Containers (AWS DL Containers) are Docker images pre-installed with deep learning frameworks that make it easy to deploy custom machine learning (ML) environments quickly by letting you skip the complicated process of building and optimizing your environments from scratch. AWS DL Containers support TensorFlow and Apache MXNet, with PyTorch and other deep learning frameworks coming soon. To learn more, visit the AWS Deep Learning Containers product page.


NVIDIA Marketplace AMIs

AWS Marketplace offers NVIDIA AMIs featuring NVIDIA Quadro Virtual Workstation software. These AMIs support running up to four 4K displays per GPU on a G4 instance. To find these AMIs, use this link: NVIDIA Marketplace offerings.

AWS Marketplace also offers NVIDIA AMIs featuring NVIDIA vGaming drivers. These AMIs support running a single 4K display per GPU on a G4 instance. To find these AMIs, use this link: NVIDIA Marketplace offerings.


NVIDIA AMI with vGaming Driver

You can download the NVIDIA GRID vGaming drivers from these links: NVIDIA Windows vGaming Driver for G4 Instances and NVIDIA Linux vGaming Driver for G4 Instances.

This download is available to AWS customers only. By downloading, you agree to use the downloaded software only to develop AMIs for use with the NVIDIA Tesla T4 hardware. Upon installation of the software, you are bound by the terms of the NVIDIA GRID Cloud End User License Agreement.

If you own GRID licenses, you should be able to use those licenses on your G4 instances. For more information, see NVIDIA GRID Software Quick Start Guide.


Get started with AWS

Step 1 - Sign up for an AWS account

Instantly get access to the AWS Free Tier.

Learn with 10-minute Tutorials

Explore and learn with simple tutorials.

Start building with AWS

Begin building with step-by-step guides to help you launch your AWS project.

Learn more about other Amazon EC2 instance types
