
Build powerful machine learning applications on the most advanced
and highest performing GPU-accelerated cloud infrastructure

Amazon Web Services and NVIDIA deliver proven, high-performance GPU-accelerated cloud infrastructure to provide every developer and data scientist with the most sophisticated compute resources available today. AWS is the world’s first cloud provider to offer NVIDIA® Tesla® V100 GPUs with Amazon EC2 P3 instances, which are optimized for compute-intensive workloads such as machine learning. With 640 Tensor Cores, NVIDIA Tesla V100 GPUs break the 100-teraflops barrier of deep learning performance.

AWS brings NVIDIA Tesla V100 GPUs to the cloud at scale.


Learn how Toyota Research Institute is improving human lives with advances in AI. Read the blog.

Accelerate Machine Learning with EC2 P3 Instances

With Amazon EC2 P3 instances, powered by the NVIDIA Volta architecture, you can significantly reduce machine learning training times from days to hours. With up to 8 NVIDIA Tesla V100 GPUs, EC2 P3 instances provide up to one petaflop of mixed-precision, 125 teraflops of single-precision, and 62 teraflops of double-precision floating-point performance, as well as a 300 GB/s second-generation NVIDIA NVLink™ interconnect that enables high-speed, low-latency GPU-to-GPU communication.
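As a minimal sketch, a P3 instance can be requested programmatically. The parameters below follow the boto3 `run_instances` request shape; the AMI ID is a placeholder, and the instance type reflects the 8-GPU configuration described above.

```python
# Sketch: request parameters for launching a p3.16xlarge (8x Tesla V100).
# The ImageId is a placeholder -- substitute a real AMI for your region.
params = {
    "ImageId": "ami-EXAMPLE",        # placeholder, e.g. an AWS Deep Learning AMI
    "InstanceType": "p3.16xlarge",   # 8 NVIDIA Tesla V100 GPUs with NVLink
    "MinCount": 1,
    "MaxCount": 1,
}

# To actually launch (requires AWS credentials and the boto3 package):
# import boto3
# boto3.client("ec2").run_instances(**params)
print(params["InstanceType"])
```

Smaller P3 sizes (such as one- or four-GPU configurations) use the same request shape with a different `InstanceType`.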

Build and Train Machine Learning Models Quickly

Amazon SageMaker is a fully managed machine learning platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. To help you select your algorithm, Amazon SageMaker includes the most common machine learning algorithms, optimized to deliver up to 10 times the performance you’ll find running these algorithms anywhere else. Amazon SageMaker also comes pre-configured with the latest versions of TensorFlow, Apache MXNet, Chainer, and CUDA 9 library support for maximum performance with Amazon EC2 P3 instances.
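A training job on P3 hardware is typically described by a small configuration. The sketch below mirrors the argument names of the SageMaker Python SDK's `Estimator`, but the image URI, IAM role, and hyperparameters are placeholders, not working values.

```python
# Hedged sketch of a SageMaker training-job configuration.
# Argument names mirror sagemaker.estimator.Estimator; values are placeholders.
def training_config(image_uri, role_arn):
    """Build keyword arguments for a SageMaker Estimator (illustrative only)."""
    return {
        "image_uri": image_uri,             # ECR image with your training code
        "role": role_arn,                   # IAM role SageMaker assumes
        "instance_count": 1,
        "instance_type": "ml.p3.2xlarge",   # single-V100 SageMaker instance
        "hyperparameters": {"epochs": 10, "learning_rate": 0.001},
    }

cfg = training_config("<ecr-image-uri>", "<iam-role-arn>")
# With the sagemaker SDK installed and real values filled in:
# sagemaker.estimator.Estimator(**cfg).fit(s3_training_inputs)
print(cfg["instance_type"])
```

Scaling up is then a matter of changing `instance_type` (e.g., to an 8-GPU size) or `instance_count` for distributed training.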


Amazon SageMaker provides the fastest way to build, train, and deploy machine learning models. Read the IDC infographic.


Find out why ML practitioners choose AWS for deep learning over other cloud providers. Read the report.

Innovate on Any Deep Learning Framework

AWS Deep Learning AMIs equip you with the infrastructure and tools to accelerate deep learning in the cloud. The AMIs come pre-installed with popular deep learning frameworks such as TensorFlow, Apache MXNet, Microsoft Cognitive Toolkit, Chainer, Caffe, Caffe2, Torch, PyTorch, Gluon, and Keras, so you can train sophisticated AI models and develop custom workflows. To expedite your development and model training, the AWS Deep Learning AMIs include the latest NVIDIA GPU acceleration through pre-configured CUDA and cuDNN for use on Amazon EC2 P3 instances.

Amazon EC2 P3 Instances for Machine Learning

For serious machine learning projects, data science and AI engineering teams need GPU-accelerated compute instances that can handle massively parallel workloads, so they can quickly train and continuously retrain models on large data sets. Amazon EC2 P3 instances, powered by NVIDIA Tesla V100 GPUs, together with Amazon SageMaker are ideally suited to these critical projects and offer the fastest way to train and run machine learning models today.