AWS Innovate: Generative AI + Data
March 06, 2025 | Asia Pacific & Japan

Built for breakthroughs

Join us at Innovate to discover how AWS can help you harness the full potential of generative AI and data.

Conference timings

  • Australia & New Zealand

    Australia: GMT+11 (AEDT)
    Timing 1: 12.00pm - 3.30pm
    Timing 2: 4.30pm - 8.00pm
    Timing 3 (Keynote rebroadcast): 10.00pm - 11.00pm

    New Zealand: GMT+13 (NZDT)
    Timing 1: 2.00pm - 5.30pm
    Timing 2: 6.30pm - 10.00pm

  • ASEAN

    Singapore, Malaysia, Philippines: GMT+8 (SGT/MYT/PHT)
    Timing 1: 9.00am - 12.30pm
    Timing 2: 1.30pm - 4.00pm
    Timing 3 (Keynote rebroadcast): 7.00pm - 8.00pm

    Thailand, Vietnam: GMT+7 (ICT)
    Timing 1: 8.00am - 11.30am
    Timing 2: 12.30pm - 3.00pm
    Timing 3 (Keynote rebroadcast): 6.00pm - 7.00pm

    Indonesia: GMT+7 (WIB)
    Timing 1: 8.00am - 11.30am
    Timing 2: 12.30pm - 3.00pm
    Timing 3 (Keynote rebroadcast): 6.00pm - 7.00pm

    Pakistan: GMT+5 (PKT)
    Timing 1: 6.00am - 9.30am
    Timing 2: 10.30am - 2.00pm
    Timing 3 (Keynote rebroadcast): 4.00pm - 5.00pm

  • India & Sri Lanka

    India: GMT+5:30 (IST)
    Timing 1: 6.30am - 10.00am
    Timing 2: 11.00am - 2.30pm
    Timing 3 (Keynote rebroadcast): 4.30pm - 5.30pm

    Sri Lanka: GMT+5:30 (SLST)
    Timing 1: 6.30am - 10.00am
    Timing 2: 11.00am - 2.30pm
    Timing 3 (Keynote rebroadcast): 4.30pm - 5.30pm

  • Japan: GMT+9 (JST)

    Timing 1: 10.00am - 1.30pm
    Timing 2: 2.30pm - 5.00pm
    Timing 3 (Keynote rebroadcast): 8.00pm - 9.00pm

  • Korea: GMT+9 (KST)

    Timing 1: 10.00am - 1.30pm
    Timing 2: 2.30pm - 5.00pm
    Timing 3 (Keynote rebroadcast): 8.00pm - 9.00pm


Explore six learning tracks designed for you

From technical deep dives for builders to strategic sessions for business leaders, this event equips you with the inspiration and skills to navigate the world of generative AI and data on AWS.
  • Opening Keynote - The Generative AI Mindset

    The Innovate keynote will cover the transformative power of AI and the critical mindsets required to deliver value with this groundbreaking technology. We will explore how data serves as a key differentiator, the importance of a disciplined approach, and the urgency of embracing AI to maintain competitiveness. The keynote will highlight the strategic imperative of identifying optimal AI applications within organizations, sharing customer success stories and key trends from across industries. Through these real-world examples, we will demonstrate how companies are turning ideas into innovations, leveraging the potential of generative AI to drive growth and transformation.

    Innovate with data and machine learning (Level 100)

    By harnessing the collective power of data, generative AI and human intelligence, organizations can unleash new possibilities in efficiency and creativity. One area that is especially critical to get right if you want to see success in generative AI is data. When you want to build generative AI applications that are unique to your business needs, data is the differentiator. Join this session to uncover how technologies like generative AI, machine learning and analytics provide data-driven insights to accelerate innovation, uncover new opportunities and optimize business performance.

  • The Gen AI Journey for Business Decision Makers

    Generative AI opportunities come from across an organization. Line-of-business owners often surface the most salient business needs and use cases that could benefit from generative AI, and they frequently partner with IT to evaluate, select, integrate, and implement new projects. This track guides leaders from across the organization through each step of the generative AI journey, from idea inception to production. The sessions in this track break down each step of that process and how leaders should approach the decisions they will face along the way.

    Gen AI use cases (Level 200)

    Businesses across various industries are faced with the challenge of effectively leveraging generative AI to drive operational efficiency and enhance customer experiences. Identifying the most impactful use cases where generative AI can deliver substantial benefits is crucial. This talk explores how organizations can harness generative AI to develop innovative solutions that can create value for their organizations.

    New York Life: Data platform modernization to generative AI innovation (Level 200)

    New York Life (NYL) modernized its on-premises data platform to enhance analytics, performance, and automation for its critical insurance operations. To meet these objectives, NYL built a scalable data lake and reporting platform on AWS using AWS Lambda, AWS Glue, Amazon RDS, and Amazon Redshift. In this session, NYL shares lessons learned from moving off its legacy platform to a modern data lake and how having a modern data foundation accelerated their generative AI journey. Learn how NYL is using Amazon SageMaker and Amazon Bedrock to improve employee productivity and front-line agent experience.

    The AWS approach to secure generative AI (Level 200)

    At AWS, safeguarding the security and confidentiality of customers’ workloads is a top priority. AWS Artificial Intelligence (AI) infrastructure and services have built-in security and privacy features to give customers control over their data. Join this session to learn how AWS thinks about security across the three layers of our generative AI stack, from the bottom infrastructure layer to the middle layer, which provides easy access to all the models along with tools customers need to build and scale generative AI applications, and the top layer, which includes applications that leverage LLMs and other FMs to make work easier.

    7 principles for effective and cost efficient generative AI apps (Level 200)

    As generative AI gains traction, building effective and cost-efficient solutions is paramount. This session outlines seven guiding principles for building effective and cost-efficient generative AI applications. These principles can help businesses and developers harness generative AI’s potential while optimizing resources. Establishing objectives, curating quality data, optimizing architectures, monitoring performance, upholding ethics, and iterating improvements are crucial. With these principles, organizations can develop impactful generative AI applications that drive responsible innovation. Join this session to hear from ASAPP, a leading contact center solutions provider, as they discuss the principles they used to add generative AI–powered innovations to their software with Amazon Bedrock.

    Responsible AI: From theory to practice with AWS (Level 200)

    The rapid growth of generative AI brings promising innovation but raises new challenges around its safe and responsible development and use. While challenges like bias and explainability were common before generative AI, large language models bring new challenges like hallucination and toxicity. Join this session to understand how your organization can begin its responsible AI journey. Get an overview of the challenges related to generative AI, and learn about responsible AI in action at AWS, including the tools AWS offers. Also hear Cisco share its approach to responsible innovation with generative AI.

  • Building and Scaling with Gen AI for Technical Decision Makers and Developers

    Explore the cutting-edge capabilities of Amazon Bedrock for building and scaling advanced generative AI applications. This track showcases the latest innovations in multi-agent systems, automated reasoning for enhanced AI safety, scalable Retrieval Augmented Generation (RAG), and efficient model selection. Learn how to leverage Amazon Bedrock's new features to create more accurate, secure, and cost-effective AI solutions. From multi-agent collaboration to automated reasoning checks, discover how Amazon Bedrock is pushing the boundaries of generative AI, enabling you to build sophisticated, responsible, and scalable AI applications that drive business value.

    Scaling generative AI workloads with efficient model choice (Level 300)

    As customers build, deploy, and scale generative AI applications, using and managing the right set of models for the outcomes they desire becomes key. Amazon Bedrock is introducing several features designed to help customers find the right models and enhance cost-efficiency while maintaining world-class performance and accuracy. Attend this session to learn about Amazon Bedrock JumpStart, Intelligent Prompt Routing, and Model Distillation.
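
    For a sense of how model choice plays out in practice, the hedged sketch below uses the Amazon Bedrock Converse API, which provides a uniform interface across models so that comparing candidates is largely a matter of swapping the model identifier. The model IDs and prompt are illustrative placeholders, not session material; check model availability and inference-profile requirements in your Region.

      import boto3

      # Bedrock Runtime client; the Converse API gives one call shape across models.
      bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

      def ask(model_id: str, prompt: str) -> str:
          """Send a single-turn prompt to the given model and return its reply."""
          response = bedrock.converse(
              modelId=model_id,
              messages=[{"role": "user", "content": [{"text": prompt}]}],
              inferenceConfig={"maxTokens": 256, "temperature": 0.2},
          )
          return response["output"]["message"]["content"][0]["text"]

      # Hypothetical comparison: run the same prompt against two candidate models
      # and weigh answer quality against per-token cost for your workload.
      for model_id in [
          "anthropic.claude-3-haiku-20240307-v1:0",
          "anthropic.claude-3-sonnet-20240229-v1:0",
      ]:
          print(model_id, "->", ask(model_id, "Summarize our returns policy in one sentence."))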

    An overview of Amazon Nova understanding models (Level 300)

    Amazon Nova is a new generation of foundation models that deliver frontier intelligence and industry-leading price-performance. This session dives into Amazon Nova text and multimodal understanding models, their benchmark performances, and capabilities. Learn more about how these models excel in visual reasoning, agentic workflows, and Retrieval Augmented Generation (RAG). Experience video understanding on Amazon Bedrock and unparalleled customizability through text, image, and video input based fine-tuning and distillation. Join us to learn how Amazon Nova can transform your AI applications, from document analysis to API execution and UI actuation.

    Build scalable RAG applications using Amazon Bedrock Knowledge Bases (Level 300)

    Amazon Bedrock offers a managed Retrieval Augmented Generation (RAG) capability, connecting foundation models to your data. This session explores the latest Amazon Bedrock Knowledge Bases (KBs) techniques to improve response accuracy and optimize costs. Leverage Amazon Bedrock KBs' advanced chunking, parsing, and hallucination-reducing capabilities for improved accuracy. Learn how to build scalable RAG solutions that deliver contextual responses while you pay only for what you use.
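
    As a rough illustration of the managed RAG flow described above, the sketch below queries an existing Amazon Bedrock knowledge base through the RetrieveAndGenerate API. The knowledge base ID, model ARN, and question are placeholders to substitute with your own.

      import boto3

      # Runtime client for Amazon Bedrock Agents and Knowledge Bases.
      client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

      # Placeholders: your knowledge base ID and the ARN of the model that
      # should generate the final, grounded answer.
      KB_ID = "EXAMPLEKBID"
      MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"

      response = client.retrieve_and_generate(
          input={"text": "What does our travel policy say about business-class flights?"},
          retrieveAndGenerateConfiguration={
              "type": "KNOWLEDGE_BASE",
              "knowledgeBaseConfiguration": {
                  "knowledgeBaseId": KB_ID,
                  "modelArn": MODEL_ARN,
              },
          },
      )

      # The generated answer, grounded in chunks retrieved from your data sources.
      print(response["output"]["text"])

      # Citations point back to the retrieved passages, which helps verify answers.
      for citation in response.get("citations", []):
          for ref in citation.get("retrievedReferences", []):
              print("source:", ref.get("location"))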

    Leveraging multiple agents for scalable Gen AI applications (Level 300)

    Amazon Bedrock Agents handle tasks autonomously, streamlining operations for businesses. Discover how Northwestern Mutual is transforming customer support using Amazon Bedrock Agents’ capabilities. In this session, we’ll explore how they have built systems that enable users to engage with support chat, access real-time answers from knowledge bases, and automate actions across external platforms, streamlining operations. With Bedrock’s Guardrails ensuring security and preventing misuse, this approach offers a cutting-edge solution for AI-driven customer engagement. Join us for real-world insights into how coordinated AI agents are redefining efficiency and security in AI-powered support systems.

    Introduction to Automated Reasoning checks in Amazon Bedrock Guardrails (Level 300)

    AWS has launched Automated Reasoning (AR) checks in Amazon Bedrock Guardrails, making AWS the first major cloud provider to use automated reasoning to help customers build transparent, responsible generative AI applications. Join us to learn about AR checks, a new Guardrails policy that uses sound mathematical techniques to reduce hallucinations, validate generative AI responses, and explain them in an auditable way. See how this Guardrails policy can help users generate more accurate LLM responses on highly regulated topics such as operational workflows and HR policies, learn about the different use cases for AR checks, and discover how to get started today.

  • Using Gen AI in the Workplace for Business Decision Makers, Technical Decision Makers, and Developers

    In this track, builders and leaders will learn how businesses of all sizes and across all industries are unlocking the transformational value of Amazon Q. The sessions will feature the latest innovations in the Amazon Q product line-up, helping every employee be more productive, from code development to end-customer interactions. Attendees will learn how to get started with generative AI in their roles, how to automate undifferentiated tasks, and how to make generative AI securely accessible to everyone in their organization.

    What's new with Amazon Q Business (Level 200)

    As enterprises grapple with fast technological change, join this session to learn about the latest product releases with Amazon Q Business. The session dives into the latest features and enhancements of Amazon Q Business, demonstrating how to deploy an Amazon Q Business application that leverages your enterprise content - empowering employees to answer questions, provide summaries, generate content, and securely complete tasks.

    A fast and easy way to build applications with AWS App Studio (Level 200)

    Experience the future of enterprise app development with AWS App Studio, a generative AI-powered service that uses natural language to create enterprise-grade applications. App Studio empowers technical professionals such as IT project managers, data engineers, and enterprise architects to build highly secure, scalable, and performant business applications that solve critical problems in minutes, without professional developer skills.

    Reimagine business intelligence with generative AI (Level 100)

    In this session, get an overview of the generative AI capabilities of Amazon Q in QuickSight. Learn how analysts can build interactive dashboards rapidly, and discover how business users can use natural language to instantly create documents and presentations that explain data, and to extract insights beyond what's available in dashboards with data Q&A and executive summaries. Hear from Availity on how 1.5 million active users are leveraging Amazon QuickSight to distill insights from dashboards instantly, and learn how they are using Amazon Q internally to increase efficiency across their business.

    Unleashing Generative AI with Amazon Q Developer (Level 200)

    Join us to discover how Amazon rolled out Amazon Q Developer to thousands of developers, trained them in prompt engineering, and measured its transformative impact on productivity. In this session, learn best practices for effectively adopting generative AI in your organization. Gain insights into training strategies, productivity metrics, and real-world use cases to empower your developers to harness the full potential of this game-changing technology. Don’t miss this opportunity to stay ahead of the curve and drive innovation within your team.

    Accelerate Multi-step SDLC tasks with Amazon Q Developer Agents (Level 200)

    While existing AI assistants focus on code generation with close human guidance, Amazon Q Developer has a unique capability called agents that can use reasoning and planning capabilities to perform multi-step tasks beyond code generation with minimal human intervention. Its agent for software development can solve complex tasks that go beyond code suggestions, such as building entire application features, refactoring code, or generating documentation. Join this session to discover new agent capabilities that help developers go from planning to getting new features in front of customers even faster.

  • Unified Experience for Your Data and AI for Technical Decision Makers

    In this track, you will learn how to transform your organization's data with the next-generation Amazon SageMaker – your center for all data, analytics, and AI workloads. Discover how to break down data silos, optimize storage, and accelerate query performance with modern data architectures. Master techniques for seamless data integration across sources while maintaining enterprise-grade governance and security. You'll gain practical knowledge to build a future-ready analytics environment that enables faster AI development, reduces operational complexity, and drives more value from your data assets.

    Build with data and AI faster in the next generation of Amazon SageMaker (Level 300)

    The rapid rise of generative AI is transforming how businesses approach data and analytics, blending traditional workflows and converging analytics and AI use cases. This session covers the next generation of Amazon SageMaker, the center for all your data, analytics, and AI, with a specific focus on SageMaker Unified Studio. Learn how Unified Studio brings together familiar tools from AWS analytics and AI/ML services for data processing, SQL analytics, machine learning model development, and generative AI application development into a single environment to enable collaboration and help teams build data products faster.

    Store Apache Iceberg tabular data at scale with Amazon S3 Tables (Level 300)

    Amazon S3 Tables is purpose-built to store tabular data in Apache Iceberg tables. With Amazon S3 Tables, you can create tables and set up table-level permissions with just a few clicks in the Amazon S3 console. These tables are backed by storage specifically built for tabular data, resulting in higher transactions per second and better query throughput compared to unmanaged tables in storage. Join this session to learn how you can automate table management tasks such as compaction, snapshot management, and more with Amazon S3 to continuously optimize query performance and minimize cost.
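
    As a rough sketch of the setup flow (not session content), the example below uses the boto3 s3tables client to create a table bucket, a namespace, and an Apache Iceberg table. The bucket, namespace, and table names are invented, and because S3 Tables is a new service, the exact parameter shapes shown here are assumptions to verify against the current SDK reference.

      import boto3

      # Client for the Amazon S3 Tables API (separate from the regular "s3" client).
      s3tables = boto3.client("s3tables", region_name="us-east-1")

      # 1. Create a table bucket, the container purpose-built for tabular data.
      bucket = s3tables.create_table_bucket(name="analytics-tables")  # hypothetical name
      bucket_arn = bucket["arn"]

      # 2. Create a namespace to group related tables.
      s3tables.create_namespace(tableBucketARN=bucket_arn, namespace=["sales"])

      # 3. Create an Apache Iceberg table inside that namespace.
      s3tables.create_table(
          tableBucketARN=bucket_arn,
          namespace="sales",
          name="daily_orders",
          format="ICEBERG",
      )

      # Maintenance such as compaction and snapshot expiry is handled by the
      # service, which is the "automate table management tasks" point above.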

    Unify all your data with Amazon SageMaker Lakehouse (Level 300)

    Data warehouses, data lakes, or both? Explore how Amazon SageMaker Lakehouse, a unified, open, and secure data lakehouse, simplifies analytics and AI. This session unveils how SageMaker Lakehouse provides unified access to data across Amazon S3 data lakes, Amazon Redshift data warehouses, and third-party sources without altering your existing architecture. Learn how it breaks down data silos and opens your data estate with Apache Iceberg compatibility, offering flexibility to use preferred query engines and tools that accelerate your time to insights. Discover robust security features, including consistent fine-grained access controls, that help democratize data without compromises.

    Build and optimize a data lake on Amazon S3 (Level 400)

    Organizations are building petabyte-scale data lakes on AWS to democratize access for thousands of end users. As customers design their data lake architecture for the right capabilities and performance, many are turning to open table formats (OTF) to improve the performance of their data lakes and to adopt enhanced capabilities, such as time-travel queries and concurrent updates. In this session, learn about recent innovations in AWS that make it easier to build, secure, and manage data lakes. Learn best practices to store, optimize, and use data lakes with industry-leading AWS, open source, and third-party analytics and ML tools.

    Streamline data and AI Governance with Amazon SageMaker Catalog (Level 300)

    Discover how Amazon SageMaker Catalog, built on Amazon DataZone, transforms data and AI governance at scale. This advanced session explores three key capabilities: centralized artifact management, unified access control, and comprehensive lineage tracking. Learn to efficiently organize data and ML assets using semantic search with AI-generated metadata. We will demonstrate implementing fine-grained permissions and setting up collaborative workflows. You will also see how SageMaker Catalog enables automated data quality monitoring and sensitive data detection. Accelerate data analytics and model development, ensure compliance, and foster collaboration - ultimately driving faster time to market for your analytics and AI initiatives while maintaining robust governance.

  • Build and Train Foundation Models and LLMs for Machine Learning Engineers & Data Scientists

    Master the tools and techniques for building, training, and deploying large language models and foundation models at scale. Through detailed sessions, learn how Amazon SageMaker's comprehensive platform enables efficient model development with features like HyperPod and optimized distributed training frameworks. Explore how AWS AI chips (Trainium and Inferentia) can help overcome computational challenges while reducing costs. This track demonstrates practical approaches to accelerate AI development using SageMaker Studio's integrated development environment, from initial experimentation to production deployment.

    Customize FMs with advanced techniques using Amazon SageMaker AI (Level 300)

    Amazon SageMaker allows data scientists and ML engineers to accelerate their generative AI journeys by deeply customizing publicly available foundation models (FMs) and deploying them into production applications. The journey begins with Amazon SageMaker JumpStart, an ML hub that provides access to hundreds of publicly available FMs, such as Llama 3, Falcon, and Mistral. Join this session to learn how you can evaluate FMs, select an FM, customize it with advanced techniques, and deploy it—all while implementing AI responsibility, simplifying access control, and enhancing transparency.
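
    As a hedged sketch of the JumpStart-to-endpoint path mentioned above, the example below deploys a publicly available model with the SageMaker Python SDK. The model ID and instance type are illustrative assumptions; running it requires an AWS account with the appropriate SageMaker permissions and quotas, and it provisions billable infrastructure.

      from sagemaker.jumpstart.model import JumpStartModel

      # Example model ID; browse the JumpStart hub for the current catalog.
      model = JumpStartModel(model_id="meta-textgeneration-llama-3-8b-instruct")

      # Deploy to a real-time SageMaker endpoint (creates billable resources).
      predictor = model.deploy(
          initial_instance_count=1,
          instance_type="ml.g5.2xlarge",
          accept_eula=True,  # many gated models require explicitly accepting the EULA
      )

      # Invoke the endpoint with a simple text-generation payload.
      response = predictor.predict({
          "inputs": "Explain Retrieval Augmented Generation in two sentences.",
          "parameters": {"max_new_tokens": 128},
      })
      print(response)

      # Clean up to stop incurring charges.
      predictor.delete_endpoint()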

    Train generative AI models on Amazon SageMaker for scale and performance (Level 300)

    Amazon SageMaker offers the highest-performing ML infrastructure and a resilient training environment to help you train foundation models (FMs) for months without disruption. Top AI companies, from enterprises to startups, build cutting-edge models with billions of parameters on SageMaker. Discover how you can save up to 40% in training time and costs with state-of-the-art training capabilities such as Amazon SageMaker HyperPod, fully managed training jobs, and optimized distributed training frameworks. Join this session to learn how to run large-scale, cost-effective model training on SageMaker to accelerate generative AI development.

    Conquer AI performance, cost, and scale with AWS AI chips (Level 300)

    Generative AI promises to revolutionize industries, but its immense computational demands and escalating costs pose significant challenges. To overcome these hurdles, AWS designed purpose-built AI chips: AWS Trainium and AWS Inferentia. In this session, get a close look at innovation across silicon, servers, and data centers, and hear how AWS customers built, deployed, and scaled foundation models across various products and services using AWS AI chips.

    Build, train, and deploy ML models, including FMs, for any use case (Level 300)

    Amazon SageMaker AI is a fully managed service that brings together a broad set of tools to enable high-performance, low-cost machine learning (ML) for any use case. With SageMaker AI, you can build, train, and deploy ML models, including foundation models (FMs), at scale using tools like notebooks, debuggers, profilers, pipelines, MLOps, and more, all in one integrated development environment (IDE). In this session, discover how to get started with SageMaker AI and how it works with the rest of the AWS platform.

    Accelerate ML workflows with Amazon SageMaker Studio (Level 300)

    Unlock the power of Amazon SageMaker Studio, a comprehensive IDE for streamlining the machine learning (ML) lifecycle. Explore data exploration, transformation, automated feature engineering with AutoML, and collaborative coding using integrated Jupyter Notebooks. Discover how SageMaker Studio and MLOps integration simplifies model deployment, monitoring, and governance. Through live demos and best practices, learn to leverage SageMaker Studio tools for efficient feature engineering, model development, collaboration, and data security.
  • Building a Data Foundation for Technical Decision Makers

    In this track, you will learn about foundational choices that provide flexibility for employing your data across any workload. You will gain practical skills for customizing and deploying generative AI applications using data from databases, file repositories, and your data lake. You will discover the power of metadata-driven data management with Amazon S3 Metadata and reimagine data streaming with end-to-end managed and serverless capabilities. Finally, you will learn how Amazon DynamoDB was built to overcome the performance and scale limitations of relational databases, delivering consistent single-digit millisecond performance at any scale.

    A practitioner’s guide to data for generative AI (Level 300)

    In this session, gain the skills needed to deploy end-to-end generative AI applications using your most valuable data. While this session focuses on the Retrieval Augmented Generation (RAG) process, the concepts also apply to other methods of customizing generative AI applications. Discover best practice architectures using AWS database services like Amazon Aurora, Amazon OpenSearch Service, or Amazon MemoryDB along with data processing services like AWS Glue and streaming data services like Amazon Kinesis. Learn data lake, governance, and data quality concepts and how Amazon Bedrock Knowledge Bases, Amazon Bedrock Agents, and other features tie solution components together.

    Get started with Amazon Aurora DSQL (Level 400)

    Amazon Aurora DSQL is a new relational database that combines the best of serverless experience, Amazon Aurora performance, and Amazon DynamoDB scale. Aurora DSQL's distributed architecture is designed to make it effortless for organizations of any size to manage distributed workloads with strong consistency. In this session, we guide you through the fundamentals of Aurora DSQL. Learn how Aurora DSQL can work within your architecture, understand key considerations and tradeoffs, explore what an application architecture could look like, and more.

    Unlock the power of your data with Amazon S3 Metadata (Level 300)

    Amazon S3 Metadata revolutionizes data discovery by automatically generating rich metadata for every object in your Amazon S3 buckets. Powered by Amazon S3 Tables, it provides a queryable metadata layer that allows you to curate, discover, and use your Amazon S3 data more efficiently. With Amazon S3 Metadata, you can explore and filter your objects based on attributes like object creation time and storage class to streamline data preparation for analytics, real-time inference, and more. Join this session to learn about the power of metadata-driven data management with Amazon S3 Metadata.

    Build streaming data into your data foundation (Level 300)

    Learn how AWS is reimagining data streaming with end-to-end managed and serverless capabilities across core infrastructure, systems operations, data integration, data processing, and data management for customers to modernize their data platforms. Learn about new and recent innovations for collecting, processing, and analyzing streaming data, including improved scalability, high resiliency, lower latency, and native integrations with many AWS and third-party services. Join this session to discover how you can use AWS streaming solutions to build scalable, resilient data streaming applications for faster insights and improved decision-making.
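
    To make the streaming building blocks concrete, here is a minimal producer sketch that writes JSON events to an Amazon Kinesis Data Stream with boto3. The stream name and event shape are made up for illustration, and the stream is assumed to exist already.

      import json
      import time
      import boto3

      kinesis = boto3.client("kinesis", region_name="us-east-1")

      STREAM_NAME = "clickstream-events"  # hypothetical stream, created ahead of time

      def publish_event(user_id: str, action: str) -> None:
          """Write one event to the stream; the partition key controls shard routing."""
          record = {"user_id": user_id, "action": action, "ts": int(time.time())}
          kinesis.put_record(
              StreamName=STREAM_NAME,
              Data=json.dumps(record).encode("utf-8"),
              PartitionKey=user_id,
          )

      # Downstream, a consumer such as AWS Lambda, Amazon Data Firehose, or an
      # Apache Flink application can process these events as they arrive.
      publish_event("user-123", "add_to_cart")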

    An insider’s look into architecture choices for Amazon DynamoDB (Level 300)

    To overcome the performance and scale limitations of relational databases, AWS built Amazon DynamoDB to deliver consistent single-digit millisecond performance at any scale for the most demanding applications on the planet. In this session, learn about the architecture choices for Amazon DynamoDB. Gain a better understanding of when to use DynamoDB and why it is used by over one million AWS customers to power hundreds of applications that exceed half a million requests per second. Leave with a new perspective on how to design your own applications.
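
    For newcomers to the service, the short sketch below shows the basic key-value access pattern behind that single-digit millisecond performance, using the boto3 resource API against a hypothetical table with partition key "pk" and sort key "sk".

      import boto3

      # High-level DynamoDB interface; the table is assumed to already exist
      # with partition key "pk" and sort key "sk".
      dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
      table = dynamodb.Table("orders")  # hypothetical table name

      # Write a single item; DynamoDB scales this same call pattern from one
      # request per second to hundreds of thousands.
      table.put_item(
          Item={
              "pk": "CUSTOMER#123",
              "sk": "ORDER#2025-03-06",
              "status": "SHIPPED",
              "total_cents": 4599,
          }
      )

      # Read it back by its full primary key, a consistent low-latency operation
      # regardless of table size.
      response = table.get_item(Key={"pk": "CUSTOMER#123", "sk": "ORDER#2025-03-06"})
      print(response.get("Item"))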

Session levels designed for you

INTRODUCTORY
Level 100

Sessions are focused on providing an overview of AWS services and features, with the assumption that attendees are new to the topic.

INTERMEDIATE
Level 200

Sessions are focused on providing best practices, details of service features and demos with the assumption that attendees have introductory knowledge of the topics.

ADVANCED
Level 300

Sessions dive deeper into the selected topic. Presenters assume that the audience has some familiarity with the topic, but may or may not have direct experience implementing a similar solution.

EXPERT
Level 400

Sessions are for attendees who are deeply familiar with the topic, have implemented a solution on their own already, and are comfortable with how the technology works across multiple services, architectures, and implementations.


Keynote speaker

Join Rahul Pathak, VP Data & AI/ML GTM at Amazon Web Services, for an inspiring exploration of AI's transformative potential across industries. This keynote will showcase how organizations are harnessing AI and data to drive innovation and maintain competitive advantage. Through compelling customer success stories and emerging trends, discover how companies are turning groundbreaking ideas into real-world innovations and learn why AI adoption is crucial for future business transformation.


Featured speakers

Hear from the ones who have been there and done that.

Mark Relph

Director, Data & AI Partners, AWS

Join Mark Relph’s insightful session “7 principles for effective and cost efficient generative AI apps” (Level 200), where you’ll discover practical strategies for optimizing your AI investments. Learn from ASAPP’s real-world implementation using Amazon Bedrock, and gain valuable knowledge to enhance your organization’s AI initiatives. This session is ideal for teams looking to build efficient, cost-effective AI solutions.


Mindy Ferguson

VP, AWS Messaging and Streaming, AWS

Mindy Ferguson presents “Build streaming data into your data foundation” (Level 300), offering a comprehensive look at AWS’s latest streaming data innovations. Explore how these advanced capabilities can enhance your data architecture with improved scalability and performance. This session will equip you with practical knowledge to implement robust streaming solutions for your organization’s needs.


Mark Roy

Global Lead Solution Architect - Amazon Bedrock - GenAI, AWS

In “Leveraging multiple agents for scalable Gen AI applications” (Level 300), Mark Roy shares Northwestern Mutual’s successful implementation of Amazon Bedrock Agents. Learn how to design and deploy secure, efficient AI systems that transform customer support operations. This session provides valuable insights for organizations ready to implement advanced AI agent solutions.


Mani Khanuja

Sr. Artificial Intelligence & Machine Learning Specialist Solutions Architect, AWS

Mani Khanuja’s session “Build scalable RAG applications using Amazon Bedrock Knowledge Bases” (Level 300) offers practical guidance on implementing effective RAG solutions. Discover proven techniques for improving response accuracy and optimizing costs using Amazon Bedrock’s Knowledge Bases. This session is essential for teams looking to build sophisticated, scalable RAG applications.


Frequently asked questions

AWS Innovate is a free online conference designed to inspire and educate customers on maximizing innovation through AWS’s cloud infrastructure. Once you register for the event, you’ll receive an email with detailed instructions on how to access the conference platform.

You will create a username and password to complete your registration and access the event on live day. If you have any questions, contact us at aws-apj-marketing@amazon.com.

After completing the online registration process, you will receive a confirmation email.

If you have questions that have not been answered in the FAQs above, please email us at aws-apj-marketing@amazon.com.

Yes, all sessions will be on demand following the event.