Getting Started
Purpose-built AI chips for the best price-performance
Train large models with Trainium and serve them at scale with Inferentia—faster, cheaper, fully integrated with AWS.
Why Trainium & Inferentia
Lower costs, higher throughput, and tight integration with AWS services, with support for PyTorch, Hugging Face, and vLLM.
Ready to build and scale with AWS’s purpose-built AI chips?
Follow this learning path to go from exploration → setup → training → deployment → optimization. Each step includes hands-on sessions so you can learn by doing.
Explore
Trainium for training. Inferentia for inference. Scale from a single instance to clusters with SageMaker HyperPod or EKS.
Intro to the ecosystem, hardware architecture, and real-world examples (Anthropic, Project Rainier). Includes a live demo on Trainium.
Setup
Start with EC2, SageMaker, or containers. Prebuilt DLAMIs and the Neuron SDK make setup simple and fast.
Learn the Neuron SDK stack, launch DLAMIs, configure EC2, and start working in Jupyter.
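The setup flow above can be sketched with the AWS CLI. Everything instance-specific here is a placeholder or assumption: the AMI name filter, the `ami-…`/`sg-…` IDs, the key pair name, and the exact virtualenv path inside the DLAMI all vary by region and release, so check the Neuron documentation for current values.

```shell
# Find a recent Neuron Deep Learning AMI
# (the name filter is an assumption; confirm the exact AMI name for your region)
aws ec2 describe-images \
  --owners amazon \
  --filters "Name=name,Values=*Neuron*" \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]'

# Launch a Trainium (trn1) instance from that AMI
# (ami-..., my-key, and sg-... below are placeholders for your own values)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type trn1.2xlarge \
  --key-name my-key \
  --security-group-ids sg-0123456789abcdef0

# SSH in, activate the preinstalled Neuron PyTorch virtualenv
# (the exact path varies by DLAMI release), and start Jupyter
ssh -i my-key.pem ubuntu@<instance-public-ip>
source /opt/aws_neuronx_venv_pytorch_2_1/bin/activate
jupyter notebook --no-browser --port 8888
```

From here, notebooks run inside the DLAMI's Neuron environment, so `torch` workloads compile for Trainium without additional driver setup.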
Learn
Train, deploy, and optimize your models.
Practice
Apply your skills with hands-on demos and sample projects:
Fine-tune Llama 3 on Trainium.
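As a rough sketch of what a Llama 3 fine-tuning run on Trainium involves, assuming the Hugging Face `optimum-neuron` package (whose `NeuronTrainer` wraps the standard `transformers` Trainer for Neuron devices) — the model ID, dataset, and hyperparameters below are illustrative assumptions, not the demo's exact recipe:

```python
# Sketch: supervised fine-tuning of Llama 3 on a trn1 instance.
# Assumes: pip install optimum-neuron, plus Hugging Face access to the
# gated meta-llama/Meta-Llama-3-8B checkpoint.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.neuron import NeuronTrainer, NeuronTrainingArguments

model_id = "meta-llama/Meta-Llama-3-8B"  # assumption: any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative instruction-tuning dataset
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

def tokenize(batch):
    return tokenizer(batch["instruction"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

args = NeuronTrainingArguments(
    output_dir="llama3-ft",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    bf16=True,  # Trainium trains natively in bf16
)

trainer = NeuronTrainer(model=model, args=args, train_dataset=tokenized)
trainer.train()
```

On larger trn1 instance sizes the same script is typically launched with `torchrun` so training shards across all NeuronCores.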