AI21 Labs: Inside the engine of a large language model
Amazon Bedrock
GenAI Loft | San Francisco
Generative AI
IN PERSON
English
300: Advanced
Join AI21 Labs and AWS for a night of generative AI knowledge sharing as we look under the hood of Jamba, the first production-grade Transformer+Mamba model, and explore techniques for AI-assisted code generation. All AI enthusiasts and learners are welcome.
Agenda
12:30 AM UTC
Networking and light snacks
1:00 AM UTC
Jamba - The Benefits of a Hybrid SSM-Transformer Model
Yuval Belfer | Technical Product Marketing Manager, AI21 Labs
In this talk, we take a brief technical dive into AI21's Jamba model. Jamba, the first production-grade Transformer+Mamba model, combines the quality of Transformers with the speed of Mamba: it interleaves Transformer and SSM (Mamba) layers within a mixture-of-experts (MoE) architecture, enjoying the strengths of both. We describe the key design choices behind this hybrid architecture and the gains it delivers in production.
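As a rough illustration of the interleaving idea described above, here is a minimal PyTorch sketch, not AI21's implementation: it stacks mostly SSM-style layers with an occasional attention layer (the Jamba paper uses roughly one attention layer per eight). The recurrent stand-in for Mamba and the layer ratio are simplifying assumptions, and the MoE MLPs are omitted for brevity.

```python
import torch
import torch.nn as nn

class AttentionBlock(nn.Module):
    """Stand-in for a Transformer (self-attention) layer."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        h = self.norm(x)
        out, _ = self.attn(h, h, h, need_weights=False)
        return x + out

class SSMBlock(nn.Module):
    """Placeholder for a Mamba (selective state-space) layer.
    A real implementation would use a selective-scan kernel; a GRU is
    used here only as a simple recurrent stand-in."""
    def __init__(self, d_model: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.mixer = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, x):
        out, _ = self.mixer(self.norm(x))
        return x + out

class HybridStack(nn.Module):
    """Interleave SSM and attention layers: one attention layer per
    `attn_every` layers, the rest SSM blocks."""
    def __init__(self, d_model: int, n_layers: int = 8, attn_every: int = 8):
        super().__init__()
        self.layers = nn.ModuleList(
            AttentionBlock(d_model) if i % attn_every == attn_every - 1
            else SSMBlock(d_model)
            for i in range(n_layers)
        )

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

x = torch.randn(2, 16, 64)               # (batch, seq_len, d_model)
print(HybridStack(d_model=64)(x).shape)  # torch.Size([2, 16, 64])
```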
2:00 AM UTC
AI Code Generation and Evaluation
Anila Joshi & Kamran Razi | Applied Science Manager, AWS and Data Scientist, AWS
Explore AI-assisted code generation, focusing on the integration of retrieval-augmented generation (RAG) with generative AI services on AWS. Learn best practices for optimizing code repositories, setting up rapid prototyping environments with Amazon Bedrock, and leveraging agentic workflows with LangGraph. We'll also cover the RAGAS framework for evaluating generated code, using custom metrics such as CodeBLEU.
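To give a feel for the RAG-style code generation workflow the session covers, here is a minimal sketch, not the speakers' code: retrieve relevant snippets from a repository index, then ask a model on Amazon Bedrock (via the Converse API) to generate code grounded in that context. The toy keyword retriever, the sample repository index, and the model ID are assumptions for illustration; swap in any Bedrock model you have access to.

```python
import boto3

def retrieve_snippets(query: str, index: dict[str, str], k: int = 2) -> list[str]:
    """Toy keyword retriever; a real setup would use an embedding store."""
    words = query.lower().split()
    scored = sorted(index.items(),
                    key=lambda kv: -sum(w in kv[1].lower() for w in words))
    return [snippet for _, snippet in scored[:k]]

def generate_code(query: str, index: dict[str, str]) -> str:
    # Ground the prompt in retrieved repository context (the "RAG" step).
    context = "\n\n".join(retrieve_snippets(query, index))
    prompt = (f"Repository context:\n{context}\n\n"
              f"Task: {query}\nReturn only the code.")
    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.converse(
        modelId="ai21.jamba-instruct-v1:0",  # example model ID (assumption)
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

# Hypothetical repository index used only for this example.
repo_index = {
    "utils/io.py": "def read_json(path): ...",
    "utils/retry.py": "def with_retries(fn, attempts=3): ...",
}
print(generate_code("add a helper that reads JSON with retries", repo_index))
```

In a fuller setup, the retrieval and generation steps would typically be nodes in a LangGraph workflow, and the outputs would be scored offline with RAGAS plus code-specific metrics such as CodeBLEU.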
3:00 AM UTC
Open Q&A and Networking