Amazon Web Services

In this comprehensive video, AWS Machine Learning specialist Emily Webber explores options for deploying foundation models on AWS, focusing on Amazon SageMaker. She covers online, offline, queued, embedded, and serverless application types, explaining their trade-offs. The video demonstrates how to host distributed models across multiple accelerators and how to optimize performance through techniques like model compression. Emily provides a hands-on walkthrough of deploying a 175-billion-parameter BLOOM model using SageMaker's large model inference container. She discusses key concepts like tensor parallelism and offers practical tips for efficient model deployment and serving. The video concludes with a demo of invoking the deployed model for inference.
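With SageMaker's large model inference (LMI) container, the sharding behavior described above is typically declared in a serving.properties file that accompanies the model. The sketch below is illustrative only: the model ID, engine, and tensor-parallel degree are assumptions for a BLOOM-scale deployment, not the exact values used in the video.

```properties
# Hypothetical serving.properties for the LMI container.
# Inference engine that performs the model sharding.
engine=DeepSpeed
# Hugging Face model ID to load (assumed; the video uses a BLOOM variant).
option.model_id=bigscience/bloom
# Number of accelerators to shard the model across via tensor parallelism.
option.tensor_parallel_degree=8
# Half precision reduces memory footprint, complementing model compression.
option.dtype=fp16
```

The tensor_parallel_degree is the key knob here: it splits each layer's weight matrices across that many GPUs, which is what allows a model too large for any single accelerator to be hosted at all.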

product-information
skills-and-how-to
generative-ai
ai-ml
compute

Up Next

5:35 | AWS WAF - Web Application Firewall: protect your web applications from common web exploits (Jun 26, 2025)

16:03 | A conversation with Hiếu Trần, co-founder of NAB Studio (Jun 26, 2025)

18:40 | Designing shared network infrastructure in a multi-account AWS environment (Level 200) (Jun 26, 2025)

7:59 | Deploying and operating container applications in a multi-account AWS environment (Level 300) (Jun 26, 2025)

7:06 | How to use Amazon S3 (Level 100) (Jun 26, 2025)