
    AWS Gen AI Loft | To Retrieve or to Retrain: RAG vs Fine-tuning Masterclass

    Tags: AI, Amazon Bedrock, AWS GenAI Loft | Bangalore, Generative AI, Machine learning, SageMaker, Startup, Technical

    Date: -

    Time: -

    Type: In person

    Speakers: Supreeth S Angadi | GenAI/ML Startups Solutions Architect, AWS

    Language: English

    Levels: 300: Advanced, 400: Expert

    Are you a Gen AI/ML practitioner, data scientist, or business leader struggling to decide between customizing foundation models and augmenting them with external knowledge? Join us for an immersive hands-on session that tackles a critical question in Gen AI implementation: when to implement Retrieval-Augmented Generation (RAG) versus when to fine-tune your models.

    In this workshop, you'll get practical experience with both approaches using AWS's comprehensive suite of Gen AI/ML services, including Amazon Bedrock, Amazon SageMaker, and model evaluation tools. By the end of the session, you'll have a clear decision framework to guide your organization's model customization strategy.

    Who is this for? This workshop is ideal for:

    • AI/ML practitioners and engineers implementing Gen AI solutions.
    • Technical and business leaders evaluating model customization approaches.
    • Solution architects designing knowledge-intensive applications.
    • Developers working with context-rich Gen AI use cases.
    • Business stakeholders looking to understand tradeoffs in AI customization.

    Key highlights:

    • Demo of RAG pipelines using Amazon Bedrock.
    • Step-by-step fine-tuning of foundation models with Amazon SageMaker and Amazon Bedrock.
    • Practical evaluation frameworks to measure performance of both approaches.
    • Cost-benefit analysis of RAG vs fine-tuning strategies.
    • Real-world case studies illustrating optimal use cases for each approach.
    • Comprehensive decision framework development for your specific business needs.
    • Best practices for knowledge integration in large language models (LLMs).
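    To make the RAG idea concrete before the workshop, here is a minimal toy sketch of the retrieve-then-augment pattern. It is an illustrative assumption, not the workshop's Amazon Bedrock pipeline: a real pipeline would use a managed embedding model and a vector database, whereas this sketch stands in bag-of-words counts and cosine similarity for both.

    ```python
    import re
    from collections import Counter
    from math import sqrt

    # Hypothetical in-memory sketch of a RAG retrieval step -- NOT the
    # Amazon Bedrock Knowledge Bases API. Real pipelines swap embed() for a
    # managed embedding model and the list of docs for a vector database.

    def embed(text: str) -> Counter:
        """Bag-of-words 'embedding' standing in for a real embedding model."""
        return Counter(re.findall(r"[a-z0-9-]+", text.lower()))

    def cosine(a: Counter, b: Counter) -> float:
        """Cosine similarity between two sparse term-count vectors."""
        dot = sum(a[t] * b[t] for t in a)
        na = sqrt(sum(v * v for v in a.values()))
        nb = sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
        """Return the k documents most similar to the query."""
        q = embed(query)
        return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

    def build_prompt(query: str, context_docs: list[str]) -> str:
        """Augment the user query with retrieved context before calling an LLM."""
        context = "\n".join(context_docs)
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

    docs = [
        "Amazon Bedrock offers foundation models through a single API.",
        "Amazon SageMaker supports training and fine-tuning custom models.",
    ]
    query = "Which service supports fine-tuning custom models?"
    print(build_prompt(query, retrieve(query, docs)))
    ```

    The key design point the workshop explores is that the knowledge lives in the retrieved documents, not in the model weights, so updating it means updating the store rather than retraining.
    
    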

    This session provides a balanced perspective on both customization strategies, helping you make informed decisions based on your specific use cases, data availability, and performance requirements. You'll leave with practical implementation knowledge and a strategic framework for choosing between RAG and fine-tuning.
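    As a flavor of what such a decision framework can look like, the sketch below encodes commonly cited rules of thumb: RAG suits fast-changing or source-attributable knowledge, fine-tuning suits stable style or format behavior when labeled data exists, and many production systems combine both. This is a hypothetical illustration, not the framework taught in the session.

    ```python
    # Hypothetical rule-of-thumb helper, not the workshop's actual framework.
    # Encodes common heuristics for choosing between RAG and fine-tuning.

    def recommend_customization(
        knowledge_changes_often: bool,
        needs_source_citations: bool,
        needs_specific_style_or_format: bool,
        has_labeled_training_data: bool,
    ) -> str:
        # Volatile or attributable knowledge points toward retrieval.
        wants_rag = knowledge_changes_often or needs_source_citations
        # Behavioral/stylistic change with training data points toward fine-tuning.
        wants_finetune = needs_specific_style_or_format and has_labeled_training_data
        if wants_rag and wants_finetune:
            return "hybrid: RAG for knowledge, fine-tuning for behavior"
        if wants_rag:
            return "RAG"
        if wants_finetune:
            return "fine-tuning"
        return "prompt engineering first; revisit as requirements grow"

    # Example: a chatbot over daily-updated docs that must cite sources.
    print(recommend_customization(True, True, False, False))  # prints "RAG"
    ```
    
    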

    Prerequisites:

    • Basic to intermediate understanding of LLMs.
    • Familiarity with Python programming.
    • Basic knowledge of vector databases and embeddings.
    • Understanding of prompt engineering concepts.

    By registering, you agree to the AWS Event Terms & Conditions and AWS Code of Conduct.