Amazon Web Services

In this video, AWS machine learning specialist Emily Webber introduces pretraining foundation models on AWS. She explains when and why to create a new foundation model rather than fine-tune an existing one, and covers the data requirements, compute resources, and business justification a pretraining project needs. She then turns to distributed training techniques on Amazon SageMaker, including data parallelism and model parallelism. The video concludes with a detailed walkthrough of pretraining a 30-billion-parameter GPT-2 model using SageMaker's distributed training capabilities. Accompanying notebook resources let viewers follow along with the demonstration.
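
As a rough illustration of the kind of setup the video discusses, the sketch below launches a SageMaker training job with the SageMaker model parallelism library enabled via the `distribution` argument of the PyTorch estimator. This is not the video's actual notebook: the entry-point script name, instance type and count, S3 data URI, and the parallelism parameters are all illustrative assumptions; consult the accompanying notebook for the exact configuration used in the demonstration.

```python
# Minimal sketch: a SageMaker training job with model parallelism enabled.
# All names and values below are illustrative assumptions, not the demo's
# actual configuration.
import sagemaker
from sagemaker.pytorch import PyTorch

role = sagemaker.get_execution_role()  # assumes a SageMaker notebook/Studio environment

# Illustrative model-parallel settings: split the model into pipeline
# partitions and shard layers across GPUs, with data parallelism on top.
smp_parameters = {
    "partitions": 4,              # assumed number of pipeline partitions
    "tensor_parallel_degree": 8,  # assumed tensor-parallel sharding degree
    "ddp": True,                  # layer data parallelism over remaining replicas
}

estimator = PyTorch(
    entry_point="train_gpt.py",       # hypothetical training script
    role=role,
    instance_type="ml.p4d.24xlarge",  # 8 GPUs per instance
    instance_count=4,                 # assumed cluster size
    framework_version="1.13",
    py_version="py39",
    distribution={
        "smdistributed": {
            "modelparallel": {"enabled": True, "parameters": smp_parameters}
        },
        "mpi": {"enabled": True, "processes_per_host": 8},
    },
)

# Hypothetical S3 location for the pretraining corpus.
estimator.fit({"train": "s3://my-bucket/gpt2-pretraining-data/"})
```

The key design point is that the training script itself stays largely unchanged; the `distribution` configuration tells SageMaker how to place model partitions and data-parallel replicas across the cluster.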

product-information
skills-and-how-to
generative-ai
ai-ml
gen-ai
