Amazon Web Services

In this comprehensive video, AWS machine learning specialist Emily Webber introduces the process of pretraining foundation models on AWS. She explains when and why to create a new foundation model, comparing that approach to fine-tuning an existing model, and discusses the data requirements, compute resources, and business justification needed for a pretraining project. She then delves into distributed training techniques on Amazon SageMaker, including data parallelism and model parallelism. The video concludes with a detailed walkthrough of pretraining a 30-billion-parameter GPT-2 model using SageMaker's distributed training capabilities. Viewers can access the accompanying notebook resources to follow along with the demonstration.
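For orientation before watching, the sketch below shows the general shape of a SageMaker distributed training launch with the Python SDK: a PyTorch estimator that enables the SageMaker distributed data parallel library through the `distribution` argument. This is a minimal illustration, not the exact code from the video or its notebooks; the entry-point script name, IAM role ARN, and S3 URI are hypothetical placeholders.

```python
# A minimal sketch of launching a distributed pretraining job with the
# SageMaker Python SDK. The script name, IAM role, and S3 URI below are
# hypothetical placeholders, not values from the video.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train_gpt2.py",      # hypothetical training script
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    instance_type="ml.p4d.24xlarge",  # GPU instance type supported by the library
    instance_count=2,                 # scale out across multiple nodes
    framework_version="1.13",
    py_version="py39",
    # Enable the SageMaker distributed data parallel library; the model
    # parallel library is enabled instead via {"modelparallel": {...}}
    # together with an "mpi" key in this same dictionary.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

# Kick off training against a (placeholder) S3 dataset location.
estimator.fit({"train": "s3://example-bucket/gpt2-pretraining-tokens/"})
```

The estimator only declares the cluster shape and which distribution library to enable; the actual model definition and training loop live in the entry-point script, which is where the video's walkthrough spends most of its time.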

product-information
skills-and-how-to
generative-ai
ai-ml
gen-ai

Up Next

T3-2: No-Code Machine Learning with Amazon SageMaker Canvas (Level 200) (30:23, Jun 27, 2025)

T2-3: Generative AI Application Development with AWS (Level 300) (31:49, Jun 27, 2025)

T4-4: Preparing for AWS Certification Exams: AWS Certified Solutions Architect – Associate Edition, Part 2 (26:05, Jun 26, 2025)

T3-1: Your First Container Workload - First Steps with Containers on AWS (32:15, Jun 26, 2025)

BOS-09: Getting Started with Serverless - Serverless Application Development with AWS Lambda (Level 200) (29:37, Jun 26, 2025)