Amazon Web Services

This video from AWS re:Invent 2023 explores training and tuning state-of-the-art machine learning models on Amazon SageMaker. Gal Oshri, Emily Webber, and Thomas Kollar discuss the challenges of training large-scale ML models and how SageMaker addresses them. They cover SageMaker's features like distributed training, cluster repair, and the new smart sifting capability. Emily demonstrates fine-tuning and pre-training large language models, including a demo of training Llama 7B on SageMaker. Thomas Kollar shares insights on how Toyota Research Institute leverages SageMaker for various ML use cases, including autonomous driving and robotics. The presenters highlight SageMaker's ability to scale from small experiments to large-scale training jobs efficiently, making it accessible for companies to create and customize their own foundation models.

product-information
skills-and-how-to
generative-ai
ai-ml
sagemaker
