Amazon Web Services

Amazon SageMaker provides fully managed deployment features for optimal machine learning inference performance and cost at scale. This workshop explores how to use SageMaker inference capabilities to quickly deploy ML models in production for various use cases, including hyper-personalization, Generative AI, and Large Language Models (LLMs). Learn about different SageMaker inference endpoint options and how to deploy LLMs for inference efficiently.
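The endpoint options mentioned above can be contrasted with a short sketch. The payloads below follow the shape of the SageMaker `CreateEndpointConfig` API (real-time vs. serverless variants); all names such as `my-llm-model` are hypothetical, and in practice these dicts would be passed to boto3's SageMaker client via `create_endpoint_config(**payload)`.

```python
# Sketch: building request payloads for two SageMaker endpoint types.
# Assumption: field names mirror the CreateEndpointConfig API; model and
# config names here are placeholders, not real resources.

def realtime_endpoint_config(name, model_name,
                             instance_type="ml.g5.2xlarge", count=1):
    """Real-time endpoint: a fixed fleet of instances serving the model."""
    return {
        "EndpointConfigName": name,
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": instance_type,
            "InitialInstanceCount": count,
        }],
    }

def serverless_endpoint_config(name, model_name,
                               memory_mb=4096, max_concurrency=5):
    """Serverless endpoint: scales with traffic, billed per inference."""
    return {
        "EndpointConfigName": name,
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "ServerlessConfig": {
                "MemorySizeInMB": memory_mb,
                "MaxConcurrency": max_concurrency,
            },
        }],
    }

# A real-time variant pins capacity (good for steady LLM traffic);
# the serverless variant suits spiky or low-volume workloads.
rt = realtime_endpoint_config("demo-rt-config", "my-llm-model")
sl = serverless_endpoint_config("demo-sl-config", "my-llm-model")
print(rt["ProductionVariants"][0]["InstanceType"])
print(sl["ProductionVariants"][0]["ServerlessConfig"]["MaxConcurrency"])
```

The trade-off the workshop explores is cost versus latency: a provisioned fleet keeps latency predictable, while the serverless configuration avoids paying for idle capacity.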

product-information
skills-and-how-to
cost-optimization
ai-ml
serverless

Up Next

30:23

T3-2 No-Code Machine Learning with Amazon SageMaker Canvas (Level 200)

Jun 27, 2025
31:49

T2-3 Generative AI Application Development with AWS (Level 300)

Jun 27, 2025
26:05

T4-4: How to Prepare for the AWS Certified Solutions Architect – Associate Exam (Part 2)

Jun 26, 2025
32:15

T3-1: Your First Container Workload - A First Step Toward Using Containers on AWS

Jun 26, 2025
29:37

BOS-09: Getting Started with Serverless - Serverless Application Development with AWS Lambda (Level 200)

Jun 26, 2025