AWS Machine Learning Blog
Your guide to generative AI and ML at AWS re:Invent 2024
The excitement is building for the thirteenth edition of AWS re:Invent, and as always, Las Vegas is set to host this spectacular event. This year, generative AI and machine learning (ML) will again be in focus, with exciting keynote announcements and a variety of sessions showcasing insights from AWS experts, customer stories, and hands-on experiences with AWS services. As you continue to innovate and partner with us to advance the field of generative AI, we’ve curated a diverse range of sessions to support you at every stage of your journey. These sessions are strategically organized across multiple learning topics, so there’s something valuable for everyone, regardless of your experience level.
In this attendee guide, we’re highlighting a few of our favorite sessions to give you a glimpse into what’s in store. As you browse the re:Invent catalog, select your learning topic and use the “Generative AI” area of interest tag to find the sessions most relevant to you.
The technical sessions covering generative AI are divided into six areas:
- Amazon Q – We’ll spotlight Amazon Q, the generative AI-powered assistant transforming software development and enterprise data utilization. These sessions, featuring Amazon Q Business, Amazon Q Developer, Amazon Q in QuickSight, and Amazon Q in Connect, span the AI/ML, DevOps and Developer Productivity, Analytics, and Business Applications topics. They showcase how Amazon Q can help you streamline coding, testing, and troubleshooting, as well as make the most of your data to optimize business operations. You will also explore AWS App Studio, a generative AI-powered service that empowers a new set of builders to rapidly create enterprise-grade applications using natural language, generating intelligent, secure, and scalable apps in minutes.
- Amazon Bedrock – We’ll delve into Amazon Bedrock, our fully managed service for building generative AI applications. Learn how you can use foundation models (FMs) from industry leaders and Amazon to build and scale your generative AI applications, and understand customization techniques like fine-tuning and Retrieval Augmented Generation (RAG). We’ll also cover Amazon Bedrock Agents, which can run complex tasks using your company’s systems and data.
- AI infrastructure – We’ll explore the robust AWS infrastructure services powering AI innovation, featuring Amazon SageMaker, AWS Trainium, and AWS Inferentia across the AI/ML and Compute topics. Discover how the fully managed infrastructure of SageMaker enables high-performance, low-cost ML throughout the ML lifecycle, from building and training to deploying and managing models at scale.
- Responsible AI – We’ll address how to build generative AI applications with responsible and transparent practices.
- Industry use cases – We’ll showcase generative AI use cases across industries.
- AWS DeepRacer – Finally, get ready for the AWS DeepRacer League as it takes its final celebratory lap. You don’t want to miss this moment in AWS DeepRacer history, where racers will go head-to-head one last time to become the final champion. Off the race track, we will have dedicated sessions designed to help you continue your learning journey and apply your skills to the rapidly growing field of generative AI.
Visit the Generative AI Zone (GAIZ) at AWS Village in the Venetian Expo Hall to explore hands-on experiences with our newest launches and connect with our generative AI and ML specialists. Through a series of immersive exhibits, you can gain insights into AWS infrastructure for generative AI, learn about building and scaling generative AI applications, and discover how AI assistants are driving business transformation and modernization. As attendees circulate through the GAIZ, subject matter experts and Generative AI Innovation Center strategists will be on hand to share insights, answer questions, present customer stories such as those from HubbleIQ, EBSCOlearning, and Stacklet, walk through an extensive catalog of reference demos, and provide personalized guidance for moving generative AI applications into production. Experience an immersive selection of innovative generative AI exhibits at the Generative AI and Innovations Pavilion through interactive displays spanning the AWS generative AI stack. Additionally, you can deep-dive into your industry-specific generative AI and ML use cases with our industry experts at the AWS Industries Pavilion.
If you’re new to re:Invent, you can attend sessions of the following types:
- Keynotes – Join in person or virtually and learn about all the exciting announcements.
- Innovation talks – Learn about the latest cloud technology from AWS technology leaders and discover how these advancements can help you push your business forward. These sessions will be livestreamed, recorded, and published to YouTube.
- Breakout sessions – These 60-minute sessions are expected to have broad appeal, are delivered to larger audiences, and will be recorded. If you miss them, you can watch them on demand after re:Invent.
- Chalk talks – Enjoy 60 minutes of content delivered to smaller audiences with an interactive whiteboarding session. Chalk talks are where discussions happen, and these offer you the greatest opportunity to ask questions or share your opinion.
- Workshops – In these 2-hour, hands-on learning opportunities, you’ll build a solution to a problem and understand the inner workings of the resulting infrastructure and cross-service interaction. Bring your laptop and be ready to learn!
- Builders’ sessions – These highly interactive 60-minute mini-workshops are conducted in small groups of fewer than 10 attendees. Some of these appeal to beginners, and others are on specialized topics.
- Code talks – These talks are similar to our popular chalk talk format, but instead of focusing on an architecture solution with whiteboarding, the speakers lead an interactive discussion featuring live coding or code samples. These 60-minute sessions focus on the actual code that goes into building a solution. Attendees are encouraged to ask questions and follow along.
If you have reserved your seat at any of the sessions, great! If not, we always set aside some spots for walk-ins, so make a plan and come to the session early.
To help you plan your agenda for this year’s re:Invent, here are some highlights of the generative AI and ML sessions. Visit the session catalog to learn about all our generative AI and ML sessions.
Keynotes
Matt Garman, Chief Executive Officer, Amazon Web Services
Tuesday December 3 | 8:00 AM – 10:30 AM (PST) | The Venetian
Join AWS CEO Matt Garman to hear how AWS is innovating across every aspect of the world’s leading cloud. He explores how we are reinventing foundational building blocks as well as developing brand-new experiences, all to empower customers and partners with what they need to build a better future.
Swami Sivasubramanian, Vice President of AI and Data
Wednesday December 4 | 8:30 AM – 10:30 AM (PST) | The Venetian
Join Dr. Swami Sivasubramanian, VP of AI and Data at AWS, to discover how you can use a strong data foundation to create innovative and differentiated solutions for your customers. Hear from customer speakers with real-world examples of how they’ve used data to support a variety of use cases, including generative AI, to create unique customer experiences.
Innovation talks
Pasquale DeMaio, Vice President & General Manager of Amazon Connect | BIZ221-INT | Generative AI for customer service
Monday December 2 | 10:30 AM – 11:30 AM (PST) | Venetian | Level 5 | Palazzo Ballroom B
Generative AI promises to revolutionize customer interactions, ushering in a new era of automation, cost efficiencies, and responsiveness. However, realizing this transformative potential requires a holistic approach that harmonizes people, processes, and technology. Through customer success stories and demonstrations of the latest AWS innovations, gain insights into operationalizing generative AI for customer service from the Vice President of Amazon Connect, Pasquale DeMaio. Whether you’re just starting your journey or well on your way, leave this talk with the knowledge and tools to unlock the transformative power of AI for customer interactions, the agent experience, and more.
Mai-Lan Tomsen Bukovec, Vice President, Technology | AIM250-INT | Modern data patterns for modern data strategies
Tuesday December 3 | 11:30 AM – 12:30 PM (PST) | Venetian | Level 5 | Palazzo Ballroom B
Every modern business is a data business, and organizations need to stay nimble to balance data growth with data-driven value. In this talk, you’ll understand how to recognize the latest signals in changing data patterns, and adapt data strategies that flex to changes in consumer behavior and innovations in technology like AI. Plus, learn how to evolve from data aggregation to data semantics to support data-driven applications while maintaining flexibility and governance. Hear from AWS customers who successfully evolved their data strategies for analytics, ML, and AI, and get practical guidance on implementing similar strategies using cutting-edge AWS tools and services.
Dilip Kumar, Vice President, Amazon Q Business | INV202-INT | Creating business breakthroughs with Amazon Q
Wednesday December 4 | 11:30 AM – 12:30 PM (PST) | Venetian | Level 5 | Palazzo Ballroom B
Get an overview of Amazon Q Business capabilities, including its ability to answer questions, provide summaries, generate content, and complete assigned tasks. Learn how Amazon Q Business goes beyond search to enable AI-powered actions. Explore how simple it is to build applications using Amazon Q Apps. Then, examine how AWS App Studio empowers a new set of builders to rapidly create business applications tailored to their organization’s needs, and discover how to build richer analytics using Amazon Q in QuickSight.
Baskar Sridharan, VP, AI/ML Services & Infrastructure | AIM276-INT | Generative AI in action: From prototype to production
Wednesday December 4 | 1:00 PM – 2:00 PM (PST) | Venetian | Level 5 | Palazzo Ballroom B
Learn how to transition generative AI from prototypes to production. This includes building custom models, implementing robust data strategies, and scaling architectures for performance and reliability. Additionally, the session will cover empowering business users to drive innovation and growth through this transformative technology.
Adam Seligman, Vice President, Developer Experience | DOP220-INT | Reimagining the developer experience at AWS
Thursday December 5 | 2:00 PM – 3:00 PM (PST) | Venetian | Level 5 | Palazzo Ballroom B
Dive into the pioneering approach AWS takes to integrating generative AI across the entire software development lifecycle. Explore the rich ecosystem of technical resources, networking opportunities, and knowledge-sharing platforms available to you with AWS. Learn from real-world examples of how AWS, developers, and software teams are using the power of generative AI to create innovative solutions that are shaping the future of software development.
Breakout sessions
DOP210: Accelerate multi-step SDLC tasks with Amazon Q Developer Agents
Monday December 2 | 8:30 AM – 9:30 AM PT
While existing AI assistants focus on code generation with close human guidance, Amazon Q Developer has a unique capability called agents that can use reasoning and planning capabilities to perform multi-step tasks beyond code generation with minimal human intervention. Its agent for software development can solve complex tasks that go beyond code suggestions, such as building entire application features, refactoring code, or generating documentation. Join this session to discover new agent capabilities that help developers go from planning to getting new features in front of customers even faster.
AIM201: Maximize business impact with Amazon Q Apps: The Volkswagen AI journey
Monday December 2 | 10:00 AM – 11:00 AM PT
Discover how Volkswagen harnesses generative AI for optimized job matching and career growth with Amazon Q. Learn from the AWS Product Management team about the benefits of Amazon Q Business and the latest innovations in Amazon Q Apps. Then, explore how Volkswagen used these tools to streamline a job role mapping project, saving thousands of hours. Mario Duarte, Senior Director at Volkswagen Group of America, details the journey toward their first Amazon Q application that helps Volkswagen’s Human Resources build a learning ecosystem that boosts employee development. Leave the session inspired to bring Amazon Q Apps to supercharge your teams’ productivity engines.
BSI101: Reimagine business intelligence with generative AI
Monday December 2 | 1:00 PM – 2:00 PM PT
In this session, get an overview of the generative AI capabilities of Amazon Q in QuickSight. Learn how analysts can build interactive dashboards rapidly, and discover how business users can use natural language to instantly create documents and presentations explaining data and extract insights beyond what’s available in dashboards with data Q&A and executive summaries. Hear from Availity on how 1.5 million active users are using Amazon QuickSight to distill insights from dashboards instantly, and learn how they are using Amazon Q internally to increase efficiency across their business.
AIM272: 7 Principles for effective and cost-efficient Gen AI Apps
Monday December 2 | 2:30 PM – 3:30 PM PT
As generative AI gains traction, building effective and cost-efficient solutions is paramount. This session outlines seven guiding principles for building effective and cost-efficient generative AI applications. These principles can help businesses and developers harness generative AI’s potential while optimizing resources. Establishing objectives, curating quality data, optimizing architectures, monitoring performance, upholding ethics, and iterating improvements are crucial. With these principles, organizations can develop impactful generative AI applications that drive responsible innovation. Join this session to hear from ASAPP, a leading contact center solutions provider, as they discuss the principles they used to add generative AI-powered innovations to their software with Amazon Bedrock.
DOP214: Unleashing generative AI: Amazon’s journey with Amazon Q Developer
Tuesday December 3 | 12:00 PM – 1:00 PM
Join us to discover how Amazon rolled out Amazon Q Developer to thousands of developers, trained them in prompt engineering, and measured its transformative impact on productivity. In this session, learn best practices for effectively adopting generative AI in your organization. Gain insights into training strategies, productivity metrics, and real-world use cases to empower your developers to harness the full potential of this game-changing technology. Don’t miss this opportunity to stay ahead of the curve and drive innovation within your team.
AIM229: Scale FM development with Amazon SageMaker HyperPod (customer panel)
Tuesday December 3 | 2:30 PM – 3:30 PM PT
From startups to enterprises, organizations trust AWS to innovate with comprehensive, secure, and price-performant generative AI infrastructure. Amazon SageMaker HyperPod is a purpose-built infrastructure for FM development at scale. In this session, learn how leading AI companies strategize their FM development process and use SageMaker HyperPod to build state-of-the-art FMs efficiently.
BIZ212: Elevate your contact center performance with AI‑powered analytics
Wednesday December 4 | 8:30 AM – 9:30 AM PT
AI is unlocking deeper insights about contact center performance, including customer sentiment, agent performance, and workforce scheduling. Join this session to hear how contact center managers are using AI-powered analytics in Amazon Connect to proactively identify and act on opportunities to improve customer service outcomes. Learn how Toyota utilizes analytics to detect emerging themes and unlock insights used by leaders across the enterprise.
AIM357: Customizing models for enhanced results: Fine-tuning in Amazon Bedrock
Wednesday December 4 | 4:00 PM – 5:00 PM PT
Unleash the power of customized AI by fine-tuning generative AI models in Amazon Bedrock to achieve higher quality results. Discover how to adapt FMs like Meta’s Llama and Anthropic’s Claude models to your specific use cases and domains, boosting accuracy and efficiency. This session covers the technical process, from data preparation to model customization techniques, training strategies, deployment considerations, and post-customization evaluation. Gain the knowledge to take your generative AI applications to new heights, harnessing tailored, high-performance language processing solutions that give you a competitive advantage.
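To get a feel for what this looks like in code before the session, here’s a minimal boto3 sketch of starting a fine-tuning (model customization) job in Amazon Bedrock. The base model ID, IAM role, S3 paths, and hyperparameters are placeholders for illustration only, and the session’s own examples may differ.

```python
import boto3

# Minimal sketch: start a fine-tuning (model customization) job in Amazon Bedrock.
# The S3 URIs, IAM role, and base model ID below are placeholders for illustration;
# check which base models support fine-tuning in your Region.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_model_customization_job(
    jobName="my-finetune-job",                           # hypothetical job name
    customModelName="my-custom-model",                   # name of the resulting custom model
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="meta.llama3-8b-instruct-v1:0",  # example base model
    customizationType="FINE_TUNING",
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
)
print(response["jobArn"])  # track the customization job by its ARN
```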
AIM304: Using multiple agents for scalable generative AI applications
Wednesday December 4 | 5:30 PM – 6:30 PM PT
Join this session to learn how Northwestern Mutual transformed their application development support system using Amazon Bedrock multi-agent collaboration with better planning and communication among agents. Learn how they created specialized agents for different tasks like account management, repos, pipeline management, and more to help their developers go faster. Explore the significant productivity gains and efficiency improvements achieved across the organization.
CMP208: Customer Stories: Optimizing AI performance and costs with AWS AI chips
Thursday December 5 | 12:30 PM – 1:30 PM PT
As you increase the use of generative AI to transform your business at scale, rising costs in your model development and deployment infrastructure can adversely impact your ability to innovate and deliver delightful customer experiences. AWS Trainium and AWS Inferentia deliver high-performance AI training and inference while reducing your costs by up to 50%. Attend this session to hear from AWS customers ByteDance, Ricoh, and Arcee about how they realized these benefits to grow their businesses and deliver innovative experiences to their end-users.
AIM359: Streamline model evaluation and selection with Amazon Bedrock
Friday December 6 | 8:30 AM – 9:30 AM
Explore the robust model evaluation capabilities of Amazon Bedrock, designed to select the optimal FMs for your applications. Discover how to create and manage evaluation jobs, use automatic and human reviews, and analyze critical metrics like accuracy, robustness, and toxicity. This session provides practical steps to streamline your model selection process, providing high-quality, reliable AI deployments. Gain essential insights to enhance your generative AI applications through effective model evaluation techniques.
AIM342: Responsible generative AI: Evaluation best practices and tools
Friday December 6 | 10:00 AM – 11:00 AM
With the newfound prevalence of applications built with large language models (LLMs) including features such as RAG, agents, and guardrails, a responsibly driven evaluation process is necessary to measure performance and mitigate risks. This session covers best practices for a responsible evaluation. Learn about open access libraries and AWS services that can be used in the evaluation process, and dive deep on the key steps of designing an evaluation plan, including defining a use case, assessing potential risks, choosing metrics and release criteria, designing an evaluation dataset, and interpreting results for actionable risk mitigation.
Chalk talks
AIM347-R1: Real-time issue resolution from machine-generated signals with gen AI
Tuesday December 3 | 1:00 PM – 2:00 PM PT
Resolving urgent service issues quickly is crucial for efficient operations and customer satisfaction. This chalk talk demonstrates how to process machine-generated signals into your contact center, allowing your knowledge base to provide real-time solutions. Discover how generative AI can identify problems, provide resolution content, and deliver it to the right person or device through text, voice, and data. Through a real-life IoT company case study, learn how to monitor devices, collect error messages, and respond to issues through a contact center framework using generative AI to accelerate solution provision and delivery, increasing uptime and reducing technician deployments.
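As a rough illustration of the pattern this chalk talk covers, the sketch below queries an Amazon Bedrock knowledge base for a suggested fix when a device reports an error, before the issue is routed to an agent. The knowledge base ID, model ARN, and error-message format are assumptions for illustration, not the talk’s actual architecture.

```python
import boto3

# Hypothetical sketch: look up a resolution for a machine-generated error message
# in a Bedrock knowledge base. The knowledge base ID and model ARN are placeholders.
agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

def suggest_resolution(error_message: str) -> str:
    # Retrieve relevant knowledge base passages and generate a grounded answer
    response = agent_runtime.retrieve_and_generate(
        input={"text": f"Suggest a fix for this device error: {error_message}"},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": "KB12345678",  # placeholder knowledge base ID
                "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
            },
        },
    )
    return response["output"]["text"]

print(suggest_resolution("ERR-42: pump pressure sensor out of range"))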
AIM407-R: Understand the deep security & privacy controls within Amazon Bedrock
Tuesday December 3 | 2:30 PM – 3:30 PM PT
Amazon Bedrock is designed to keep your data safe and secure, with none of your data being used to train the supported models. While the inference pathways are straightforward to understand, there are many nuances of some of the complex features of Amazon Bedrock that use your data for other non-inference purposes. This includes Amazon Bedrock Guardrails, Agents, and Knowledge Bases, along with the creation of custom models. In this chalk talk, explore the architectures, secure data flows, and complete lifecycle and usage of your data within these features, as you learn the deep details of the security capabilities in Amazon Bedrock.
AIM352: Unlock Extensibility in AWS App Studio with JavaScript and Lambda
Wednesday December 4 | 10:30 AM – 11:30 AM PT
Looking for a better way to build applications that boost your team’s productivity and drive innovation? Explore the fastest and simplest way to build enterprise-grade applications—and how to extend your app’s potential with JavaScript and AWS Lambda. Join to learn hands-on techniques for automating workflows, creating AI-driven experiences, and integrating with popular AWS services. You’ll leave with practical skills to supercharge your application development!
CMP329: Beyond Text: Unlock multimodal AI with AWS AI chips
Wednesday December 4 | 1:30 PM – 2:30 PM PT
Revolutionize your applications with multi-modal AI. Learn how to harness the power of AWS AI chips to create intelligent systems that understand and process text, images, and video. Explore advanced models, like Idefics2 and Chameleon, to build exceptional AI assistants capable of OCR, document analysis, visual reasoning, and creative content generation.
AIM343-R: Advancing responsible AI: Managing generative AI risk
Wednesday December 4 | 4:00 PM – 5:00 PM
Risk assessment is an essential part of responsible AI (RAI) development and is an increasingly common requirement in AI standards and laws such as ISO 42001 and the EU AI Act. This chalk talk provides an introduction to best practices for RAI risk assessment for generative AI applications, covering controllability, veracity, fairness, robustness, explainability, privacy and security, transparency, and governance. Explore examples to estimate the severity and likelihood of potential events that could be harmful. Learn about Amazon SageMaker tooling for model governance, bias, explainability, and monitoring, and about transparency in the form of service cards as potential risk mitigation strategies.
AIM366: Bring your gen AI models to Amazon Bedrock using Custom Model Import
Thursday December 5 | 1:00 PM – 2:00 PM
Learn how to accelerate your generative AI application development with Amazon Bedrock Custom Model Import. Seamlessly bring your fine-tuned models into a fully managed, serverless environment, and use the Amazon Bedrock standardized API and features like Amazon Bedrock Agents and Amazon Bedrock Knowledge Bases to accelerate generative AI application development. Discover how Salesforce achieved 73% cost savings while maintaining high accuracy through this capability. Walk away with knowledge on how to build a production-ready, serverless generative AI application with a fine-tuned model.
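If you want a preview of the capability, here’s a minimal boto3 sketch of kicking off a Custom Model Import job; the model artifact location and IAM role are placeholders, and the session will walk through the full production setup.

```python
import boto3

# Minimal sketch of importing a fine-tuned model into Amazon Bedrock with
# Custom Model Import. The S3 path and IAM role are placeholders.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_model_import_job(
    jobName="import-my-finetuned-llama",
    importedModelName="my-finetuned-llama",
    roleArn="arn:aws:iam::123456789012:role/BedrockModelImportRole",
    modelDataSource={"s3DataSource": {"s3Uri": "s3://my-bucket/model-artifacts/"}},
)
print(response["jobArn"])  # monitor the import job, then invoke the imported model by name
```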
Workshops
AIM315: Transforming intelligent document processing with generative AI
Monday December 2 | 8:00 AM – 10:00 AM PT
This workshop covers the use of generative AI models for intelligent document processing tasks. It introduces intelligent document processing and demonstrates how generative AI can enhance capabilities like multilingual OCR, document classification based on content/structure/visuals, document rule matching using RAG models, and agentic frameworks that combine generative models with decision-making and task orchestration. Attendees will learn practical applications of generative AI for streamlining and automating document-centric workflows.
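As a taste of the kind of building block the workshop explores, the hedged sketch below uses the Amazon Bedrock Converse API to classify a snippet of extracted document text. The model ID, category list, and sample text are illustrative assumptions, not workshop code.

```python
import boto3

# Illustrative sketch: classify an extracted document snippet with the Amazon
# Bedrock Converse API. Model ID and categories are assumptions for this example.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

document_text = "Invoice #1043, due 2024-12-31, total $1,250.00 ..."

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{
            "text": "Classify this document as invoice, contract, or receipt, "
                    f"and answer with one word only:\n\n{document_text}"
        }],
    }],
    inferenceConfig={"maxTokens": 10, "temperature": 0},
)
print(response["output"]["message"]["content"][0]["text"])  # e.g. "invoice"
```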
DOP308-R: Accelerating enterprise development with Amazon Q Developer
Monday December 2 | 12:00 PM – 2:00 PM PT
In this workshop, explore the transformative impact of generative AI in development. Get hands-on experience with Amazon Q Developer to learn how it can help you understand, build, and operate AWS applications. Explore the IDE to see how Amazon Q provides software development assistance, including code explanation, generation, modernization, and much more. You must bring your laptop to participate.
BSI204-R1: Hands-on with Amazon Q in QuickSight: A step-by-step workshop
Wednesday December 4 | 1:00 PM – 3:00 PM
In this workshop, explore the generative BI capabilities of Amazon Q in QuickSight. Experience authoring visuals and refining them using natural language. Learn how business users can use natural language to generate data stories to create highly customizable narratives or slide decks from data. Discover how natural language Q&A with Amazon Q helps users gain insights beyond what is presented on dashboards while executive summaries provide an at-a-glance view of data, surfacing trends and explanations. You must bring your laptop to participate.
AIM327: Fine-tune and deploy an LLM using Amazon SageMaker and AWS AI chips
Wednesday December 4 | 3:30 PM – 5:30 PM PT
As deep learning models have grown in size and complexity, there is a need for specialized ML accelerators to address the increasing training and inference demands of these models, while also delivering high performance, scalability, and cost-effectiveness. In this workshop, use AWS purpose-built ML accelerators, AWS Trainium and AWS Inferentia, to fine-tune and then run inference using an LLM based on the Meta Llama architecture. You must bring your laptop to participate.
AIM402: Revolutionizing multimodal data search with Amazon Q Business
Wednesday December 4 | 3:30 PM – 5:30 PM PT
Today’s enterprises deal with data in various formats, including audio, image, video, and text, scattered across different documents. Searching through this diverse content to find useful information is a significant challenge. This workshop explores how Amazon Q Business transforms the way enterprises search and discover data across multiple formats. By utilizing cutting-edge AI and ML technologies, Amazon Q Business helps enterprises navigate their content seamlessly. Find out how this powerful tool accelerates real-world use cases by making it straightforward to extract actionable insights from multimodal datasets. You must bring your laptop to participate.
Builders’ sessions
CMP304-R: Fine-tune Hugging Face LLMs using Amazon SageMaker and AWS Trainium
Tuesday December 3 | 2:30 PM – 3:30 PM
LLMs are pre-trained on vast amounts of data and perform well across a variety of general-purpose tasks and benchmarks without further specialized training. In practice, however, it is common to improve the performance of a pre-trained LLM by fine-tuning the model using a smaller task-specific or domain-specific dataset. In this builders’ session, learn how to use Amazon SageMaker to fine-tune a pre-trained Hugging Face LLM using AWS Trainium, and then use the fine-tuned model for inference. You must bring your laptop to participate.
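For orientation, here’s a rough SageMaker Python SDK sketch of launching a fine-tuning job on a Trainium (trn1) instance. The training script, IAM role, container image, model ID, and hyperparameters are placeholders; the session’s lab provides its own working materials.

```python
from sagemaker.huggingface import HuggingFace

# Rough sketch of launching a fine-tuning job on an AWS Trainium (trn1) instance
# with the SageMaker Python SDK. Script, role, image URI, and hyperparameters
# are placeholders for illustration.
estimator = HuggingFace(
    entry_point="train.py",              # your Neuron-enabled fine-tuning script (assumed)
    source_dir="scripts",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_type="ml.trn1.32xlarge",    # Trainium instance
    instance_count=1,
    image_uri="<huggingface-pytorch-neuronx-training-image>",  # pick a Neuron DLC for your Region
    hyperparameters={"model_id": "meta-llama/Meta-Llama-3-8B", "epochs": 1},
)

estimator.fit({"train": "s3://my-bucket/train/"})  # launches the training job on Trainium
```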
AIM328: Optimize your cloud investments using Amazon Bedrock
Thursday December 5 | 2:30 PM – 3:30 PM
Manually tracking the interconnected nature of deployed cloud resources and reviewing their utilization can be complex and time-consuming. In this builders’ session, see a demo on how you can optimize your cloud investments to maximize efficiency and cost-effectiveness. Explore a novel approach that harnesses AWS services like Amazon Bedrock, AWS CloudFormation, Amazon Neptune, and Amazon CloudWatch to analyze resource utilization and manage unused AWS resources. Using Amazon Bedrock, analyze the source code to identify the AWS resources used in the application. Apply this information to build a knowledge graph that represents the interconnected AWS resources. You must bring a laptop to participate.
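To make the first step of that pattern concrete, the hedged sketch below asks an Amazon Bedrock model which AWS resources an application’s source code uses. The model ID, prompt, and file name are assumptions, and building the Amazon Neptune knowledge graph is not shown.

```python
import boto3

# Hypothetical first step of the pattern described above: ask a Bedrock model
# which AWS resources an application's source code references. The model ID and
# file name are placeholders; graph construction in Neptune is omitted.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

source_code = open("app.py").read()  # assumed application source file

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{
        "role": "user",
        "content": [{
            "text": "List the AWS services and resources this code uses, one per line:\n\n"
                    f"{source_code}"
        }],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```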
AIM403-R: Accelerate FM pre-training on Amazon SageMaker HyperPod
Monday December 2 | 2:30 PM – 3:30 PM
Amazon SageMaker HyperPod removes the undifferentiated heavy lifting involved in building and optimizing ML infrastructure for training FMs, reducing training time by up to 40%. In this builders’ session, learn how to pre-train an LLM using Slurm on SageMaker HyperPod. Explore the model pre-training workflow from start to finish, including setting up clusters, troubleshooting convergence issues, and running distributed training to improve model performance.
Code talks
DOP315: Optimize your cloud environments in the AWS console with generative AI
Monday December 2 | 5:30 PM – 6:30 PM
Available on the AWS Management Console, Amazon Q Developer is the only AI assistant that is an expert on AWS, helping developers and IT pros optimize their AWS Cloud environments. Proactively diagnose and resolve errors and networking issues, provide guidance on architectural best practices, analyze billing information and trends, and use natural language in chat to manage resources in your AWS account. Learn how Amazon Q Developer accelerates task completion with tailored recommendations based on your specific AWS workloads, shifting from a reactive review to proactive notifications and remediation.
AIM405: Learn to securely invoke Amazon Q Business Chat API
Wednesday December 4 | 2:30 PM – 3:30 PM
Join this code talk to learn how to use the Amazon Q Business identity-aware ChatSync API. First, hear an overview of identity-aware APIs, and then learn how to configure an identity provider as a trusted token issuer. Next, discover how your application can obtain an AWS STS token to assume a role that calls the ChatSync API. Finally, see how a client-side application uses the ChatSync API to answer questions from your documents indexed in Amazon Q Business.
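For context before the code talk, here’s a simplified sketch of the ChatSync call itself. In the identity-aware flow described above, the client’s credentials would come from assuming a role with the identity-bearing token; that exchange is omitted here, and the application ID is a placeholder.

```python
import boto3

# Simplified sketch of calling the Amazon Q Business ChatSync API. In the
# identity-aware setup, this client would be created from credentials obtained
# by assuming a role with a trusted token (not shown). Application ID is a placeholder.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")

response = qbusiness.chat_sync(
    applicationId="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",  # placeholder application ID
    userMessage="Summarize our travel reimbursement policy.",
)
print(response["systemMessage"])  # answer grounded in your indexed documents
```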
AIM406: Attain ML excellence with proficiency in Amazon SageMaker Python SDK
Wednesday December 4 | 4:30 PM – 5:30 PM
In this comprehensive code talk, delve into the robust capabilities of the Amazon SageMaker Python SDK. Explore how this powerful tool streamlines the entire ML lifecycle, from data preparation to model deployment. Discover how to use pre-built algorithms, integrate custom models seamlessly, and harness the power of popular Python libraries within the SageMaker platform. Gain hands-on experience in data management, model training, monitoring, and seamless deployment to production environments. Learn best practices and insider tips to optimize your data science workflow and accelerate your ML journey using the SageMaker Python SDK.
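As a quick preview of the lifecycle this talk walks through, here’s a condensed SageMaker Python SDK sketch that trains a model with the built-in XGBoost image and deploys it to a real-time endpoint. The bucket, IAM role, and data paths are placeholders for illustration.

```python
import sagemaker
from sagemaker.estimator import Estimator

# Condensed sketch of the train-and-deploy lifecycle with the SageMaker Python SDK.
# Bucket, role, and data paths are placeholders.
session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

# Look up the built-in XGBoost container image for the current Region
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1")

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output/",
    hyperparameters={"objective": "reg:squarederror", "num_round": 100},
)

estimator.fit({"train": "s3://my-bucket/train/"})  # launch a training job

predictor = estimator.deploy(                      # create a real-time endpoint
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
```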
AWS DeepRacer
ML enthusiasts, start your engines—AWS DeepRacer is back at re:Invent with a thrilling finale to 6 years of ML innovation! Whether you’re an ML pro or just starting out, the AWS DeepRacer championship offers an exciting glimpse into cutting-edge reinforcement learning. The action kicks off on December 2 with the Last Chance Qualifier, followed by 3 days of intense competition as 32 global finalists race for a whopping $50,000 prize pool. Don’t miss the grand finale on December 5, where top racers will battle it out on the challenging Forever Raceway in the Data Pavilion. This year, we’re taking AWS DeepRacer beyond the track with a series of four all-new workshops. These sessions are designed to help you use your reinforcement learning skills in the rapidly expanding field of generative AI. Learn to apply AWS DeepRacer skills to LLMs, explore multi-modal semantic search, and create AI-powered chatbots.
Exciting addition: We are introducing the AWS LLM League—a groundbreaking program that builds on the success of AWS DeepRacer to bring hands-on learning to the world of generative AI. The LLM League offers participants a unique opportunity to gain practical experience in model customization and fine-tuning, skills that are increasingly crucial in today’s AI landscape. Join any of the three DPR-101 sessions to demystify LLMs using your AWS DeepRacer know-how.
Make sure to check out the re:Invent content catalog for all the generative AI and ML content at re:Invent.
Let the countdown begin. See you at re:Invent!
About the authors
Mukund Birje is a Sr. Product Marketing Manager on the AIML team at AWS. In his current role, he’s focused on driving adoption of AWS data services for generative AI. He has over 10 years of experience in marketing and branding across a variety of industries. Outside of work, you can find him hiking, reading, and trying out new restaurants. You can connect with him on LinkedIn.
Dr. Andrew Kane is an AWS Principal WW Tech Lead (AI Language Services) based out of London. He focuses on the AWS Language and Vision AI services, helping our customers architect multiple AI services into a single use-case driven solution. Before joining AWS at the beginning of 2015, Andrew spent two decades working in the fields of signal processing, financial payments systems, weapons tracking, and editorial and publishing systems. He is a keen karate enthusiast (just one belt away from Black Belt) and is also an avid home-brewer, using automated brewing hardware and other IoT sensors.