AWS and NVIDIA have collaborated for over 10 years to continually deliver powerful, cost-effective, and flexible GPU-based solutions for customers. These innovations span from the cloud, with NVIDIA GPU-powered Amazon EC2 instances, to the edge, with services such as AWS IoT Greengrass deployed with NVIDIA Jetson Nano modules.
Customers around the world are using AWS and NVIDIA solutions for machine learning (ML), virtual workstations, high performance computing (HPC), and IoT services. Amazon EC2 instances powered by NVIDIA GPUs deliver the scalable performance needed for fast ML training, cost-effective ML inference, flexible remote virtual workstations, and powerful HPC computations. At the edge, customers can use AWS IoT Greengrass to extend a wide range of AWS cloud services to NVIDIA-based edge devices so the devices can act locally on the data they generate.
GPU instances for fast ML training and cost-effective inference
For data scientists, researchers, and developers who need to speed up ML training, Amazon EC2 P3 instances powered by NVIDIA V100 Tensor Core GPUs are the fastest in the cloud for ML training. You can use multiple Amazon EC2 P3 instances with up to 100 Gbps of networking throughput to scale out your infrastructure and rapidly train ML models. Complementing the P3 instances, Amazon EC2 G4 instances, introduced in 2019, feature NVIDIA T4 Tensor Core GPUs to deliver the most cost-effective GPU instances for ML inference in the cloud.
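As a minimal sketch of how a training node like this might be requested programmatically, the snippet below builds the `run_instances` parameters for a single p3.2xlarge with boto3. The AMI ID and key pair name are placeholders, not real values, and the actual API call (commented out) requires AWS credentials.

```python
# Sketch: requesting a p3.2xlarge ML training instance via the EC2 API.
# The AMI ID and key name are placeholder values for illustration only.

def p3_request_params(ami_id: str, key_name: str) -> dict:
    """Build run_instances parameters for a single-GPU P3 training node."""
    return {
        "ImageId": ami_id,             # e.g. an AWS Deep Learning AMI
        "InstanceType": "p3.2xlarge",  # 1x NVIDIA V100 Tensor Core GPU
        "MinCount": 1,
        "MaxCount": 1,
        "KeyName": key_name,
    }

params = p3_request_params("ami-XXXXXXXX", "my-training-key")

# To launch for real (requires boto3 and AWS credentials):
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.run_instances(**params)
```

Larger P3 sizes (up to 8 GPUs per instance) follow the same pattern with a different `InstanceType` value.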
Adapt your workforce and access creative talent across the globe
Virtual workstations on AWS enable studios to take on bigger projects, work from anywhere, and pay only for what they need. Running on Amazon EC2 G4 instances powered by NVIDIA T4 Tensor Core GPUs, virtual workstations employ the power of NVIDIA Quadro technology, the visual computing platform trusted by creative and technical professionals.
High Performance Computing
Solve large computational problems and gain new insights
High performance computing (HPC) allows scientists and engineers to solve complex, compute-intensive problems. HPC applications often require high network performance, fast storage, large amounts of memory, high compute capabilities, or all of the above. Amazon EC2 P3 instances powered by NVIDIA V100 Tensor Core GPUs are an ideal platform to run engineering simulations, computational finance, seismic analysis, molecular modeling, genomics, rendering, and other GPU compute workloads. AWS enables customers to increase the speed of research and reduce time-to-results by running HPC in the cloud and scaling to larger numbers of parallel tasks than would be practical in most on-premises environments.
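For tightly-coupled HPC jobs, low-latency networking between nodes matters as much as raw GPU throughput. The sketch below builds launch parameters for a small cluster of p3dn.24xlarge instances (the P3 variant with 100 Gbps networking) inside a cluster placement group; the group name and AMI are illustrative placeholders, and the real API calls are commented out.

```python
# Sketch: launch parameters for a tightly-coupled HPC cluster of P3 nodes.
# "hpc-sim" and the AMI ID are placeholder names for illustration.

def hpc_cluster_params(group_name: str, node_count: int) -> dict:
    """Parameters for node_count P3 instances in one cluster placement group."""
    return {
        "InstanceType": "p3dn.24xlarge",         # 8x V100, 100 Gbps networking
        "MinCount": node_count,
        "MaxCount": node_count,
        "Placement": {"GroupName": group_name},  # co-locate for low latency
    }

params = hpc_cluster_params("hpc-sim", 4)

# With boto3 and AWS credentials, the cluster would be launched as:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.create_placement_group(GroupName="hpc-sim", Strategy="cluster")
# ec2.run_instances(ImageId="ami-XXXXXXXX", **params)
```

A cluster placement group asks EC2 to pack the instances close together, which reduces inter-node latency for MPI-style workloads.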
Internet of Things
Seamlessly extend AWS to edge devices so they can act locally
AWS IoT Greengrass seamlessly extends AWS to edge devices such as NVIDIA Jetson devices so they can act locally on the data they generate, while still using the cloud for management, analytics, and durable storage. With AWS IoT Greengrass, NVIDIA Jetson devices can run AWS Lambda functions, Docker containers, or both; execute predictions based on ML models; keep device data in sync; and communicate with other devices securely – even when not connected to the Internet.
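To make the edge pattern concrete, here is a minimal Lambda-style handler of the kind a Jetson device could run under AWS IoT Greengrass. The `classify` function is a hypothetical stand-in for a local ML model; in a deployed Greengrass function, the result would typically be published to an MQTT topic with the Greengrass Core SDK's "iot-data" client (shown commented out) rather than returned.

```python
import json

def classify(reading: float) -> str:
    """Hypothetical stand-in for an on-device ML model inference."""
    return "anomaly" if reading > 0.8 else "normal"

def handler(event, context=None):
    """Lambda-style handler: score a sensor reading locally on the device."""
    label = classify(event["sensor_reading"])
    payload = {"device": event["device_id"], "label": label}
    # In a real Greengrass deployment, publish the result to the cloud:
    # import greengrasssdk
    # client = greengrasssdk.client("iot-data")
    # client.publish(topic="sensors/results", payload=json.dumps(payload))
    return payload

result = handler({"device_id": "jetson-01", "sensor_reading": 0.93})
print(result)
```

Because inference runs locally, the device can keep acting on its data even while disconnected, syncing results when connectivity returns.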
Learn how to integrate NVIDIA DeepStream on Jetson modules with AWS IoT Core and AWS IoT Greengrass.
Aon PathWise (P3)
PathWise uses Amazon EC2 P3 instances to model customer data hundreds of times faster than legacy solutions.
Snap Inc. (G4)
Snap Inc. uses Amazon EC2 G4 instances to deliver Bitmoji TV to millions.
Sway (P3 and G4)
Sway uses Amazon EC2 G4 instances and machine learning to Get People Dancin’!
NerdWallet (P3)
NerdWallet uses machine learning on AWS to power its recommendations platform.
Subtle Medical (P3)
AI-based PET and MRI scans bring life-saving technology to more patients via the AWS Cloud.
The use of Amazon SageMaker and Amazon EC2 P3 instances with NVIDIA V100 Tensor Core GPUs has improved NerdWallet’s flexibility and performance and reduced the time required for data scientists to train ML models. “It used to take us months to launch and iterate on models; now it only takes days.”
– Ryan Kirkman, Senior Engineering Manager
“Our customers rely on us to deliver highly accurate 3D Reality Models computed from multi-angle aerial photography across massive coverage areas. We use around 870 thousand GPU cores per day. We used to run this pipeline on Amazon EC2 G2 instances but switched to Amazon EC2 G4 instances and reduced our costs by 67%.”
– John Corbett, Director of Vision Systems
AWS and NVIDIA Services
Amazon EC2 P3 instances
Amazon EC2 P3 instances feature up to 8 NVIDIA V100 Tensor Core GPUs and up to 100 Gbps of networking throughput for ML and HPC applications. P3 instances have been proven to reduce ML training times from days to minutes, as well as increase the number of simulations completed for HPC by 3-4x.
Amazon EC2 G4 instances
Amazon EC2 G4 instances feature NVIDIA T4 Tensor Core GPUs, providing access to one GPU or multiple GPUs, with different amounts of vCPU and memory. G4 instances provide the industry’s most cost-effective and versatile GPU instance for deploying ML models in production and graphics-intensive applications.
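Since G4 instances come in sizes with one or multiple T4 GPUs, a deployment script often needs to map a desired GPU count to an instance size. The mapping below reflects the g4dn family as commonly documented, but it is an illustrative sketch; verify against the current EC2 instance-type documentation before relying on it.

```python
# Sketch: choosing a g4dn size by NVIDIA T4 GPU count. The mapping is
# illustrative; confirm sizes against the EC2 documentation.
G4_BY_GPU_COUNT = {
    1: "g4dn.xlarge",    # single T4, smallest vCPU/memory pairing
    4: "g4dn.12xlarge",  # 4x T4
    8: "g4dn.metal",     # 8x T4, bare metal
}

def pick_g4(gpus: int) -> str:
    """Return a g4dn size with exactly the requested number of GPUs."""
    try:
        return G4_BY_GPU_COUNT[gpus]
    except KeyError:
        raise ValueError(f"no single g4dn size listed with {gpus} GPUs")

print(pick_g4(1))  # smallest single-GPU inference node
```

Single-GPU sizes also scale vCPU and memory independently (g4dn.xlarge through g4dn.16xlarge), so the GPU count alone does not fix the whole instance shape.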