Amazon Web Services

In this comprehensive video, AWS expert Emily Webber explores prompt engineering and fine-tuning techniques for pre-trained foundation models. She covers zero-shot, single-shot, and few-shot prompting, as well as instruction fine-tuning and parameter-efficient methods. The video includes a hands-on demonstration using SageMaker JumpStart to fine-tune GPT-J 6B on SEC filing data, showcasing the power of these techniques for various NLP tasks like summarization, classification, and translation. Webber emphasizes the importance of using instruction-tuned models and provides practical tips for improving model performance through prompt engineering and fine-tuning. This video is an essential resource for developers and data scientists looking to leverage generative AI capabilities on AWS.
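To make the prompting styles mentioned above concrete: a zero-shot prompt contains only the instruction and the input, a single-shot prompt prepends one labeled example, and a few-shot prompt prepends several. A minimal sketch in plain Python (the helper name and format are illustrative assumptions, not from the video):

```python
def build_prompt(instruction, examples=None, query=""):
    """Assemble a prompt for a text-generation model.

    No examples -> zero-shot; one example -> single-shot;
    several examples -> few-shot.
    """
    parts = [instruction]
    for inp, out in (examples or []):
        parts.append(f"Input: {inp}\nOutput: {out}")
    # The trailing "Output:" cues the model to complete the answer.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Zero-shot: instruction and query only
zero_shot = build_prompt(
    "Classify the sentiment as positive or negative.",
    query="The service was excellent.")

# Few-shot: labeled examples guide the model toward the expected format
few_shot = build_prompt(
    "Classify the sentiment as positive or negative.",
    examples=[("I loved this product.", "positive"),
              ("The package arrived broken.", "negative")],
    query="The service was excellent.")
```

Instruction-tuned models, which Webber recommends, are trained to follow this instruction-plus-input pattern, which is why they often perform well even in the zero-shot case.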

product-information
skills-and-how-to
generative-ai
ai-ml
gen-ai

Up Next

- Revolutionizing Business Intelligence: Generative AI Features in Amazon QuickSight (15:58, Nov 22, 2024)
- Accelerate ML Model Delivery: Implementing End-to-End MLOps Solutions with Amazon SageMaker (1:01:07, Nov 22, 2024)
- Deploying ASP.NET Core 6 Applications on AWS Elastic Beanstalk Linux: A Step-by-Step Guide for .NET Developers (9:30, Nov 22, 2024)
- Simplifying Application Authorization: Amazon Verified Permissions at AWS re:Invent 2023 (47:39, Nov 22, 2024)
- How to Start, Connect, and Enroll Amazon EC2 Mac Instances into Jamf for Apple Mobile Device Management (2:51, Nov 22, 2024)