TensorFlow on AWS

Getting Started with TensorFlow on AWS

Amazon SageMaker

The easiest way to get started with TensorFlow on AWS is Amazon
SageMaker, a fully managed service that lets every developer and data
scientist build, train, and deploy TensorFlow models quickly. SageMaker
assists with each step of the machine learning process to make it
easier to develop high-quality models. Data scientists can also use
SageMaker with TensorBoard to save development time by visualizing the
model architecture and identifying and remediating convergence issues,
such as validation loss that fails to converge or vanishing gradients.
To get started with TensorFlow and TensorBoard on SageMaker, see the
Amazon SageMaker documentation; a minimal training-job sketch follows.
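As an illustration only, not a prescribed workflow, this sketch
launches a TensorFlow training job with the SageMaker Python SDK. The
entry-point script name, IAM role ARN, instance type, S3 prefix, and
framework version are placeholder assumptions to adapt to your account:

    # Minimal sketch of a SageMaker TensorFlow training job (SageMaker
    # Python SDK v2). Script name, role ARN, and versions are placeholders.
    from sagemaker.tensorflow import TensorFlow

    estimator = TensorFlow(
        entry_point="train.py",        # assumed name of your training script
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
        instance_count=1,
        instance_type="ml.p3.2xlarge",
        framework_version="2.11",      # match the TF version your script uses
        py_version="py39",
    )

    # SageMaker provisions the instance, runs train.py, then tears it down.
    estimator.fit("s3://my-bucket/training-data")  # placeholder S3 prefix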

AWS Deep Learning AMI

AWS Deep Learning AMIs are machine images pre-installed with TensorFlow,
allowing you to quickly experiment with new algorithms or learn new
skills and techniques. To get started, see the TensorFlow on AWS Deep
Learning AMIs tutorials; a quick environment check follows.
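Once an instance launched from a Deep Learning AMI is running, a short
check confirms the preinstalled TensorFlow build and GPU visibility,
assuming a TensorFlow 2.x environment is active (for example, the
AMI's preconfigured conda environment):

    # Sanity check on a Deep Learning AMI instance: print the preinstalled
    # TensorFlow version and the GPUs the runtime can see.
    import tensorflow as tf

    print("TensorFlow version:", tf.__version__)
    print("Visible GPUs:", tf.config.list_physical_devices("GPU"))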

AWS Deep Learning Containers

AWS Deep Learning Containers are Docker images pre-installed with
TensorFlow, letting you deploy custom machine learning environments
quickly and skip the complicated process of building and optimizing
environments from scratch. To get started with TensorFlow on AWS DL
Containers, see the AWS Deep Learning Containers documentation; a
sketch for looking up a container image follows.
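As one hedged example, the SageMaker Python SDK can look up the ECR
URI of a TensorFlow Deep Learning Container; the framework version,
region, and instance type below are illustrative assumptions:

    # Look up the ECR image URI of a TensorFlow Deep Learning Container
    # with the SageMaker Python SDK. Version/region values are placeholders.
    from sagemaker import image_uris

    image_uri = image_uris.retrieve(
        framework="tensorflow",
        region="us-east-1",
        version="2.11",
        py_version="py39",
        image_scope="training",        # use "inference" for serving images
        instance_type="ml.p3.2xlarge", # selects the GPU vs. CPU image variant
    )
    print(image_uri)  # pull this with Docker or pass it to a SageMaker job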

Amazon EC2 Inf1 Instances / AWS Inferentia

Amazon EC2 Inf1 instances are built from the ground up to support
machine learning inference applications. Inf1 instances feature up to
16 AWS Inferentia chips, high-performance machine learning inference
chips designed and built by AWS. Inf1 instances deliver up to 3x higher
throughput and up to 40% lower cost per inference than Amazon EC2 G4
instances, which were already the lowest-cost instances for machine
learning inference available in the cloud. Using Inf1 instances, you
can run large-scale machine learning inference with TensorFlow models
at the lowest cost in the cloud. To get started, see our tutorial on
running TensorFlow models on Inf1; a compilation sketch follows.
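Before they run on Inferentia, TensorFlow models are compiled with the
AWS Neuron SDK. A minimal sketch, assuming the tensorflow-neuron
package for TensorFlow 2.x (exact API names can vary across Neuron
releases, so treat this as an outline):

    # Compile a Keras model for AWS Inferentia with tensorflow-neuron
    # (AWS Neuron SDK). tfn.trace is the TF 2.x entry point; details may
    # differ by Neuron release.
    import tensorflow as tf
    import tensorflow.neuron as tfn

    model = tf.keras.applications.ResNet50(weights=None)  # any Keras model
    example_input = tf.random.uniform([1, 224, 224, 3])   # sample input

    # Trace the model and compile the supported graph for NeuronCores.
    model_neuron = tfn.trace(model, example_input)
    model_neuron.save("resnet50_neuron")  # SavedModel to load on Inf1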

Amazon Elastic Inference

Amazon Elastic Inference allows you to attach low-cost GPU-powered
acceleration to Amazon EC2 and SageMaker instances or Amazon ECS tasks,
reducing the cost of running inference with TensorFlow models by up to
75%. To get started with TensorFlow on Elastic Inference, see the
Amazon Elastic Inference documentation; a deployment sketch follows.
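As an illustrative sketch, a trained model can be deployed through the
SageMaker Python SDK with an Elastic Inference accelerator attached;
the artifact path, role ARN, framework version, and accelerator type
below are assumptions:

    # Deploy a trained TensorFlow model with an Elastic Inference
    # accelerator attached (SageMaker Python SDK v2). Paths and versions
    # are placeholders; EI supports only certain TensorFlow versions.
    from sagemaker.tensorflow import TensorFlowModel

    model = TensorFlowModel(
        model_data="s3://my-bucket/model.tar.gz",  # placeholder artifact
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
        framework_version="2.3",       # pick an EI-supported TF version
    )

    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.m5.xlarge",        # CPU host instance
        accelerator_type="ml.eia2.medium",   # attached EI accelerator
    )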