[AWS GenAI Loft] Fine-tune and deploy LLM from Hugging Face on AWS AI Chips
GenAI Loft | Paris
HYBRID
Oussama Kandakji | Sr. AI & ML GTM Solutions Architect @ AWS, Syl Taylor | Specialist SA - Software Performance @ AWS
English
200: intermediate, 300: advanced
Suppose you have a business challenge that requires custom-trained or fine-tuned ML models. You need to prepare a dataset, train and deploy your models, and finally integrate those models into your application (and eventually automate the whole process). In the end, you expect a cost-optimized solution that fits your budget.
In this workshop you'll learn how to use AWS AI Chips, AWS Trainium and AWS Inferentia, with Amazon SageMaker and Hugging Face Optimum Neuron to optimize your ML workloads! You'll also learn a methodology to map, qualify, and implement end-to-end solutions for different business challenges: a top-down approach that starts with identifying and mapping the use case or business challenge and ends with a trained model deployed as an API, which can then be integrated into your application.