[AWS GenAI Loft] Fine-tune and deploy LLM from Hugging Face on AWS AI Chips

    GenAI Loft | Paris

    Day:

    -

    Time:

    -

    Type:

    HYBRID

    Speakers:

    Oussama Kandakji | Sr. AI & ML GTM Solutions Architect @ AWS, Syl Taylor | Specialist SA - Software Performance @ AWS

    Language:

    English

    Level(s):

    200 – Intermediate, 300 – Advanced


    Suppose you have a business challenge that requires custom-trained or fine-tuned ML models. You need to prepare a dataset, train and deploy your models, and finally integrate them into your application (ideally automating the whole process). In the end, you expect a cost-optimized solution that fits your budget.

    In this workshop you'll learn how to use AWS AI Chips, AWS Trainium and AWS Inferentia, with Amazon SageMaker and Hugging Face Optimum Neuron, to optimize your ML workloads. You'll also learn a methodology for mapping, qualifying, and implementing end-to-end solutions for different business challenges: a top-down approach that starts with identifying and mapping the use case or business challenge and ends with a trained model deployed as an API, which can then be integrated into your application.
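    To give a flavor of the tooling covered, here is a minimal sketch of compiling a Hugging Face model for AWS AI Chips with Optimum Neuron. This is not from the workshop materials: the model name, sequence length, and batch size are illustrative placeholders, and the commands assume a Trainium/Inferentia instance (e.g. trn1 or inf2) with the AWS Neuron SDK already installed.

    ```shell
    # Install Hugging Face Optimum with the Neuron (neuronx) extras
    # (assumes the AWS Neuron SDK is already set up on the instance)
    pip install "optimum[neuronx]"

    # Ahead-of-time compile a model for the Neuron accelerator.
    # Model ID and shapes below are examples only; pick values that
    # match your own model and workload.
    optimum-cli export neuron \
      --model distilbert-base-uncased-finetuned-sst-2-english \
      --batch_size 1 \
      --sequence_length 128 \
      distilbert_neuron/
    ```

    The exported artifacts in `distilbert_neuron/` can then be loaded with the Optimum Neuron model classes or packaged for a SageMaker endpoint, which is the deployment path the workshop walks through.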