Amazon Web Services

This video from AWS re:Invent 2023 introduces Amazon SageMaker Clarify's new foundation model evaluation capability. Mike Diamond, Emily Webber, and Taryn Heilman discuss how this tool helps evaluate large language models for accuracy, responsibility, and risks like hallucinations or biases. They demonstrate using pre-built datasets and metrics to quickly assess models, as well as customizing evaluations for specific use cases. The speakers highlight how these evaluations can be integrated into model selection and customization workflows to ensure responsible AI development. Taryn shares insights on how Indeed uses similar evaluation approaches for their AI applications in job matching and recommendations. The presentation emphasizes the importance of rigorous model assessment to mitigate risks and comply with emerging AI regulations.

Tags: product-information, skills-and-how-to, generative-ai, ai-ml, gen-ai

Up Next

- Revolutionizing Business Intelligence: Generative AI Features in Amazon QuickSight (15:58, Nov 22, 2024)
- Accelerate ML Model Delivery: Implementing End-to-End MLOps Solutions with Amazon SageMaker (1:01:07, Nov 22, 2024)
- Streamlining Patch Management: AWS Systems Manager's Comprehensive Solution for Multi-Account and Multi-Region Patching Operations (2:53:33, Nov 22, 2024)
- Deploying ASP.NET Core 6 Applications on AWS Elastic Beanstalk Linux: A Step-by-Step Guide for .NET Developers (9:30, Nov 22, 2024)
- Simplifying Application Authorization: Amazon Verified Permissions at AWS re:Invent 2023 (47:39, Nov 22, 2024)