Amazon Web Services

In this video, AWS machine learning specialist Emily Webber introduces the process of pretraining foundation models on AWS. She explains when and why to create a new foundation model, and compares that path with fine-tuning an existing model. Webber discusses the data requirements, compute resources, and business justification needed for a pretraining project. She then delves into distributed training techniques on Amazon SageMaker, including data parallelism and model parallelism. The video concludes with a detailed walkthrough of pretraining a 30-billion-parameter GPT-2 model using SageMaker's distributed training capabilities. Viewers can access accompanying notebook resources to follow along with the demonstration.
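As a concrete starting point, the snippet below is a minimal sketch of how a job like this can be launched with the SageMaker Python SDK: the PyTorch estimator's distribution argument enables the SageMaker model parallel library and layers data parallelism on top of the model partitions. This is not the exact notebook from the video; the entry point train_gpt2.py, the IAM role ARN, the S3 input path, and the parameter values are placeholders for illustration.

```python
# Minimal sketch: hybrid data + model parallel pretraining job on SageMaker.
# Entry point, role ARN, and S3 paths below are hypothetical placeholders.
from sagemaker.pytorch import PyTorch

role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical role

estimator = PyTorch(
    entry_point="train_gpt2.py",      # hypothetical training script
    role=role,
    instance_type="ml.p4d.24xlarge",  # 8 GPUs per instance
    instance_count=4,                 # 32 GPUs total
    framework_version="1.13",
    py_version="py39",
    distribution={
        "smdistributed": {
            "modelparallel": {
                "enabled": True,
                "parameters": {
                    "partitions": 8,     # split the model across 8 GPUs
                    "microbatches": 4,   # pipeline microbatches per minibatch
                    "pipeline": "interleaved",
                    "optimize": "speed",
                    "ddp": True,         # add data parallelism across replicas
                },
            }
        },
        # The model parallel library launches its workers through MPI.
        "mpi": {"enabled": True, "processes_per_host": 8},
    },
)

# Hypothetical S3 location holding the tokenized pretraining corpus.
estimator.fit({"train": "s3://my-bucket/gpt2-pretraining-tokens"})
```

With this shape of configuration, each copy of the model is partitioned across 8 GPUs, and the 4 resulting replicas train in data parallel; the actual partition count, microbatch count, and instance fleet depend on the model size and budget.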

Tags: product-information, skills-and-how-to, generative-ai, ai-ml, gen-ai

Up Next

Contextual Retrieval-Based RAG and AWS Configuration Approaches (37:15) - Jun 27, 2025

Efficient LLM Deployment Strategies for ML Engineers in Cloud Environments: vLLM, Amazon LMI, and SageMaker (40:18) - Jun 27, 2025

A Guide to Advanced Prompt Engineering Methods and Tool Use (35:02) - Jun 27, 2025

Builders Online Series | Connecting Amazon VPC to Your On-Premises Network (30:02) - Jun 27, 2025

Builders Online Series | Is Your Architecture Well-Architected? (26:52) - Jun 27, 2025