
    [AWS GenAI Loft] Train, Deploy and Govern Generative AI models on AWS

    AWS GenAI Loft | Paris

    Date:

    -

    Time:

    -

    Type:

    HYBRID

    Speakers:

    Oussama Kandakji | Sr. AI & ML GTM Solutions Architect at AWS, Hossam Basudan | Solutions Architect at AWS, Christian Kamwangala | Big Data Cloud Engineer at AWS, Ioan CATANA | AI/ML Specialist Solutions Architect at AWS

    Language:

    English

    Level:

    300: Advanced


    If you want to maintain control of your Foundation Model (FM) lifecycle, join us to discover the methods and innovations in Generative AI around fine-tuning, inference, and MLOps. We will cover model fine-tuning techniques, inference optimizations and patterns, as well as MLOps and lifecycle automation practices.

    Agenda

    8:30 AM UTC

    Mastering Model Inference Strategies on AWS

    Oussama Kandakji | Sr. AI & ML GTM Solutions Architect at AWS

    This session explores AWS model inference strategies, covering batch processing, multi-model hosting, and serverless options. Learn to optimize performance, reduce costs, and meet diverse inference needs. Topics include batch inference for large datasets, multi-model hosting on a single infrastructure, and serverless, event-driven services for scalable, on-demand inference.
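
    The batch-inference pattern mentioned above can be sketched in a few lines of plain Python: instead of invoking the model once per request, requests are grouped into fixed-size batches and the model is called once per batch. Everything here (`run_model`, `BATCH_SIZE`) is illustrative, not a SageMaker API.

```python
# Illustrative batching sketch; run_model and BATCH_SIZE are made up
# for this example and are not part of any SageMaker API.
BATCH_SIZE = 4

def run_model(batch):
    # Stand-in for a real model call; here it just uppercases each input.
    return [text.upper() for text in batch]

def batch_inference(requests, batch_size=BATCH_SIZE):
    """Process a dataset in fixed-size batches: one model call per batch."""
    results = []
    for i in range(0, len(requests), batch_size):
        results.extend(run_model(requests[i:i + batch_size]))
    return results

data = [f"request {n}" for n in range(10)]
print(batch_inference(data)[:2])  # → ['REQUEST 0', 'REQUEST 1']
```

    The same batching idea underlies batch transform jobs, where the batch size is tuned so each model invocation amortizes its overhead over many records.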

    9:30 AM UTC

    Coffee Break

    9:45 AM UTC

    Own, Control, and Optimize your AI Models at Scale

    Kevin FRANCOIS-BOUAOU & Oussama KANDAKJI | Manager AI/Generative AI at Deloitte & Sr. AI & ML GTM Solutions Architect at AWS

    This session covers strategies to effectively manage and optimize foundation models at scale, including model quantization, model formats, orchestration for deployment and training, capacity usage, and MLflow. You will get practical guidance on implementing scalable solutions that run foundation models in production and unlock their full potential.
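
    One of the optimization topics listed, model quantization, can be illustrated with a minimal from-scratch sketch of symmetric int8 quantization in NumPy. This shows the idea only; it is not the implementation used by SageMaker or any particular library.

```python
import numpy as np

# Symmetric per-tensor int8 quantization, written from scratch as an
# illustration of the technique, not any library's implementation.

def quantize_int8(weights):
    """Map float weights onto int8 with a single per-tensor scale."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal((256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
max_error = float(np.max(np.abs(dequantize(q, scale) - w)))
# int8 storage is 4x smaller than float32, and the rounding error per
# weight is bounded by about scale / 2.
```

    In practice, quantization trades a small, bounded approximation error for a 4x (int8) or larger reduction in memory and bandwidth, which is what makes serving large foundation models on fixed capacity feasible.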

    11:00 AM UTC

    Lunch

    12:00 PM UTC

    The art of ML automation (MLOps) using MLflow on Amazon SageMaker

    Christian KAMWANGALA & Ioan CATANA | Big Data Cloud Engineer & AI/ML Specialist Solutions Architect at AWS

    In this session, we'll explore how to leverage MLflow, an open-source ML lifecycle platform, on Amazon SageMaker to automate key ML workflow stages, including experiment tracking, model management, and deployment. Through hands-on examples, you'll learn how to construct a robust, scalable MLOps pipeline that improves the efficiency, reproducibility, and governance of your ML projects.
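
    To show the shape of the experiment-tracking pattern the session demonstrates (runs that record their parameters and metrics so experiments stay comparable and reproducible), here is a deliberately tiny stand-in written from scratch. It mimics the style of MLflow's `log_param`/`log_metric` calls but is NOT the MLflow API, and the run names and values are hypothetical.

```python
# Toy stand-in for experiment tracking: each run records its parameters
# and metrics so runs can be compared later. Mimics the shape of
# MLflow's log_param/log_metric calls but is NOT the MLflow API.

class ToyRun:
    """Minimal record of one training run: a name, params, and metrics."""
    def __init__(self, name):
        self.name = name
        self.params = {}
        self.metrics = {}

    def log_param(self, key, value):
        self.params[key] = value

    def log_metric(self, key, value):
        # Metrics are appended so a run keeps its full history per key.
        self.metrics.setdefault(key, []).append(value)

def best_run(runs, metric):
    """Pick the run whose latest value of `metric` is lowest (e.g. loss)."""
    return min(runs, key=lambda r: r.metrics[metric][-1])

# Two hypothetical fine-tuning runs, differing only in learning rate.
run_a, run_b = ToyRun("lr-1e-4"), ToyRun("lr-1e-3")
run_a.log_param("learning_rate", 1e-4)
run_a.log_metric("loss", 0.42)
run_b.log_param("learning_rate", 1e-3)
run_b.log_metric("loss", 0.57)
print(best_run([run_a, run_b], "loss").name)  # → lr-1e-4
```

    MLflow adds persistence, a UI, and a model registry on top of this basic pattern, which is what turns ad-hoc experiments into a governed, reproducible pipeline.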

    1:15 PM UTC

    Coffee Break

    1:30 PM UTC

    Serve 100s of Fine-Tuned LLMs for the Price of 1 with Amazon SageMaker

    Hossam Basudan | Solutions Architect at AWS

    This session explores serving multiple fine-tuned language models cost-effectively using Low-Rank Adaptation (LoRA) on Amazon SageMaker. Learn how LoRA enables efficient fine-tuning with significant savings. Discover a multi-adapter serving solution for deploying diverse fine-tuned models on a single base model, minimizing performance and cost impact.
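
    The multi-adapter idea behind this session can be sketched in a few lines of NumPy: one frozen base weight W is shared, and each fine-tuned variant is just a small low-rank pair (A, B), so many "models" fit in the memory footprint of one. The sizes, rank, and adapter names below are illustrative assumptions, not values from the session.

```python
import numpy as np

# Minimal NumPy sketch of the LoRA idea behind multi-adapter serving.
# Hidden size, rank, and adapter names are illustrative assumptions.

d, r = 512, 8                            # hidden size and LoRA rank
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # frozen base weight, shared

def make_adapter():
    """One fine-tuned variant = two small matrices, not a full new W."""
    A = rng.standard_normal((r, d)) * 0.01
    B = rng.standard_normal((d, r)) * 0.01
    return A, B

def forward(x, adapter):
    A, B = adapter
    # Base output plus the adapter's low-rank correction x A^T B^T.
    return x @ W.T + x @ A.T @ B.T

adapters = {name: make_adapter() for name in ("legal", "medical", "support")}
x = rng.standard_normal((1, d))
outputs = {name: forward(x, ad) for name, ad in adapters.items()}

# Each adapter stores 2*d*r parameters versus d*d for a full copy of W:
savings = (d * d) / (2 * d * r)          # 512*512 / (2*512*8) = 32x fewer
```

    Because every adapter reuses the same W, serving N fine-tuned variants costs roughly one base model plus N tiny low-rank pairs, which is the source of the "100s for the price of 1" economics.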