
    Data for Generative AI Workshop with AWS and Cohere

    AWS GenAI Loft | San Francisco

    Day:

    -

    Time:

    -

    Type:

    IN PERSON

    Speakers:

    Benjamin S. Skrainka | Principal Economist, Amazon, Shayon Sanyal | Principal WW Specialist Solutions Architect for Data and AI, AWS, Payal Singh | Solutions Architect, Cohere, Raj Jayakrishnan | Senior Database Solutions Architect, AWS, Rajeev Sakhuja | Generative AI Specialist, AWS, Hector Lopez | Applied Scientist, Generative AI Innovation Center, AWS, Elliott Choi | Product Manager, Cohere, Brandon Hoang | AI/ML Solutions Architect, Cohere

    Language:

    English

    Level(s):

    200 - Intermediate, 300 - Advanced


    Join a small cohort with subject matter experts and uncover generative AI solution insights through hands-on labs. Develop rapid prototypes, and learn practical strategies for achieving high recall, reducing latency, minimizing hallucinations, and balancing cost and performance at production scale. According to Deloitte's 2024 survey, barriers to generative AI adoption include errors with real-world consequences, failure to achieve expected value, a lack of high-quality data, hallucinations, and inaccuracies.
    In this data and use-case-focused generative AI workshop, developers, architects, and technical decision-makers will learn a framework for building and scaling applications such as real-time conversational AI and recommendation engines with Retrieval-Augmented Generation (RAG).

    Event Prerequisites:

    • Government issued ID required for event check in
    • Bring your laptop for hands-on sessions and labs
    • Please use your business email address for registration

    Agenda

    4:30 PM UTC

    Check-in & Networking

    5:00 PM UTC

    Data to Decisions: Problem framing with data for business value

    In this insightful keynote, data strategy expert Ben Skrainka addresses a crucial challenge: making sure data models deliver real business value. He explores evidence-based methods to validate whether models truly answer key business questions, assess data sufficiency, and establish model trustworthiness. Participants will learn practical approaches to meeting business goals, ensuring that data-driven decisions create measurable impact in today's generative AI-driven enterprise.

    5:15 PM UTC

    Foundations of scalable RAG for generative AI use cases

    Unlock the foundations of enterprise-ready Retrieval-Augmented Generation (RAG) with PostgreSQL pgvector and Amazon Bedrock Knowledge Bases. Explore how developers can efficiently build scalable and cost-effective AI applications, including conversational AI, real-time semantic and hybrid search, and intelligent recommendation systems. Learn how to streamline development using Amazon Bedrock, LLMs, and a vector database to enhance retrieval accuracy and automation. We'll also explore agentic AI architectures, enabling seamless integration with enterprise data while optimizing performance and cost.
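
    As a rough illustration of the pattern this session covers, the Python sketch below embeds a query with an Amazon Bedrock embedding model and runs a cosine-distance search against a pgvector column. The table name, column names, connection string, and model ID are illustrative assumptions, not the workshop's lab code.

    import json

    import boto3
    import psycopg2

    # Client for Amazon Bedrock model invocation (region is an example).
    bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

    def embed(text: str) -> list[float]:
        # Amazon Titan Text Embeddings v2 is assumed here; any embedding
        # model enabled in your account could be substituted.
        response = bedrock.invoke_model(
            modelId="amazon.titan-embed-text-v2:0",
            body=json.dumps({"inputText": text}),
        )
        return json.loads(response["body"].read())["embedding"]

    # Hypothetical Aurora PostgreSQL database with a "documents" table holding
    # a pgvector "embedding" column and a "chunk_text" column.
    conn = psycopg2.connect("dbname=appdb user=app host=localhost")
    with conn, conn.cursor() as cur:
        query_vector = embed("How do I reset my device?")
        # pgvector's <=> operator computes cosine distance; an HNSW or
        # IVFFlat index on the embedding column keeps this fast at scale.
        cur.execute(
            """
            SELECT chunk_text, embedding <=> %s::vector AS distance
            FROM documents
            ORDER BY distance
            LIMIT 5
            """,
            (str(query_vector),),
        )
        for chunk_text, distance in cur.fetchall():
            print(f"{distance:.4f}  {chunk_text[:80]}")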

    6:15 PM UTC

    Rapid Prototyping: Build effective RAG pipelines for generative AI use cases

    In this hands-on session, discover how to quickly prototype a RAG pipeline using Amazon Bedrock and Aurora PostgreSQL pgvector. Building a RAG pipeline involves data ingestion, chunking, embedding, and iterative tuning to optimize data quality. Amazon Bedrock simplifies this process with Knowledge Bases, automating unstructured data handling and providing fine-grained tuning options. Its built-in RAG evaluation features help assess and refine pipelines using custom datasets. We'll explore how to build, manage, and optimize RAG pipelines with Amazon Bedrock and Aurora PostgreSQL pgvector, followed by a live code walkthrough showcasing the end-to-end process in action.
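
    The sketch below shows, in broad strokes, how a prototype might query a Bedrock Knowledge Base from Python: a retrieval-only call to inspect returned chunks, then a combined retrieve-and-generate call. The knowledge base ID, question, and model ARN are placeholders rather than values from the lab.

    import boto3

    # Runtime client for Amazon Bedrock Knowledge Bases (region is an example).
    agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

    KB_ID = "KB1234567890"  # placeholder knowledge base ID
    QUESTION = "What is our refund policy?"

    # Retrieval only: inspect which chunks come back so chunking and
    # embedding settings can be tuned before wiring up generation.
    retrieval = agent_runtime.retrieve(
        knowledgeBaseId=KB_ID,
        retrievalQuery={"text": QUESTION},
        retrievalConfiguration={
            "vectorSearchConfiguration": {"numberOfResults": 5}
        },
    )
    for result in retrieval["retrievalResults"]:
        print(result["score"], result["content"]["text"][:80])

    # Retrieval plus generation in one call: Bedrock fetches relevant chunks
    # and passes them to the chosen foundation model as grounding context.
    answer = agent_runtime.retrieve_and_generate(
        input={"text": QUESTION},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KB_ID,
                "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/"
                            "anthropic.claude-3-haiku-20240307-v1:0",
            },
        },
    )
    print(answer["output"]["text"])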

    7:15 PM UTC

    Lunch & Networking

    8:00 PM UTC

    Partner Session - Enterprise-grade RAG with Cohere’s Embed and Rerank models for generative AI

    Learn how Cohere's LLMs enhance RAG pipelines through Embed and Rerank models, addressing challenges like data quality and hallucinations. Embed generates high-quality embeddings for efficient retrieval, capturing semantic meaning and enabling similarity searches in vector spaces. Rerank optimizes results by reordering them based on relevance, ensuring accurate and contextual information feeds into the generative pipeline. These models integrate seamlessly with vector databases, crucial for scalable storage and retrieval of embeddings in production-grade RAG pipelines. This combination supports low-latency, high-recall performance at enterprise scale.
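
    As a rough sketch of the embed-then-rerank pattern described above, the snippet below uses Cohere's Python SDK to embed a small made-up corpus and rerank candidate passages for a query. The documents, API key, and model versions are illustrative assumptions; in the workshop context these models can also be invoked through Amazon Bedrock.

    import cohere

    co = cohere.Client("YOUR_API_KEY")  # placeholder key

    # A made-up mini corpus standing in for enterprise documents.
    documents = [
        "Our return window is 30 days from the delivery date.",
        "The warranty covers manufacturing defects for one year.",
        "Shipping is free on orders over $50.",
    ]

    # Embed the corpus for storage in a vector database. The input_type flag
    # matters: "search_document" for the corpus, "search_query" for queries.
    doc_embeddings = co.embed(
        texts=documents,
        model="embed-english-v3.0",
        input_type="search_document",
    ).embeddings

    query = "How long do I have to return an item?"
    # In a full pipeline this query embedding would drive the first-pass
    # vector search against the stored document embeddings (not shown here).
    query_embedding = co.embed(
        texts=[query],
        model="embed-english-v3.0",
        input_type="search_query",
    ).embeddings[0]

    # Rerank reorders first-pass candidates by relevance so only the
    # best-grounded passages reach the generation step.
    reranked = co.rerank(
        query=query,
        documents=documents,
        model="rerank-english-v3.0",
        top_n=2,
    )
    for hit in reranked.results:
        print(f"{hit.relevance_score:.3f}  {documents[hit.index]}")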

    9:00 PM UTC

    Practical strategies for production launch with Generative AI Innovation Center

    Using a real-world example of a RAG application, we'll highlight how we help customers quickly develop prototypes and scale them to production. This session explores the journey from proofs of concept to enterprise-ready production solutions, focusing on selecting the optimal LLMs and vector databases for specific business objectives. Join us to learn practical strategies for moving beyond prototypes and building scalable, production-grade generative AI solutions for your use cases and business objectives.

    10:00 PM UTC

    Q&A and Networking
