February 22, 2024
Innovate with AI and machine learning.
Unlock an intelligent tomorrow, today.

  • 50+ sessions in 3 languages
  • Ask the experts: live 1:1 Q&A
  • Certificate of Attendance: level up your skills
  • Customer Stories: learn from real-world examples and use cases
  • Generative AI Builders’ Zone: get hands-on with technical demos

 Asia Pacific & Japan

Dive into the world of AI and machine learning

Enhance customer experiences, boost creativity, optimize processes, unlock generative AI’s potential, and more. If you are ready to harness the power of AI and ML, join us at AWS Innovate to explore, discover, and learn the practical steps to bring these ideas to life.

Stay ahead in this rapidly evolving tech space with the latest technologies, level up your AI/ML skills, and connect with experts, all at no cost! Secure your spot now.

Agenda 

Embark on a hands-on learning journey, guided by step-by-step architectural and deployment best practices. The sessions are tailored for all skill levels: whether you are starting your AI/ML journey, an advanced user, or simply curious, we have sessions for your experience level and job role. Check out the latest agenda below and join us for a day of immersive learning!

 Download Agenda at a Glance »
Agenda at a glance

Session details

  • Opening keynote

    Innovate with data and machine learning (Level 100)
    By harnessing the collective power of data, generative AI and human intelligence, organizations can unleash new possibilities in efficiency and creativity. One area that is especially critical to get right if you want to see success in generative AI is data. When you want to build generative AI applications that are unique to your business needs, data is the differentiator. Join this session to uncover how technologies like generative AI, machine learning and analytics provide data-driven insights to accelerate innovation, uncover new opportunities and optimize business performance.

    Speakers:
    Olivier Klein, Chief Technologist, APJ, AWS
    Santanu Dutt, Head of Technology, APJ, AWS

  • Accelerating outcomes with AI/ML and generative AI

    About the track

    Find out how AI and ML services are applied to applications and used in real-world use cases across industries and organizations.

    Choosing the right AI/ML and generative AI tools for your use case (Level 200)
AI/ML techniques are important fundamentals for organizations looking to reinvent customer experiences and deliver on their objectives. However, applying the right techniques to each use case is not easy. This session provides guidance on how to apply practical and proven AI/ML techniques to key use cases for real business impact. We outline the existing suite of traditional AI services and emerging generative AI services to help you understand when and which services are best suited to key application requirements or use cases across business functions and industries. We share best practices on how to future-proof your stack and ensure flexibility and control within your organization. This session concludes with how to apply these technologies to your use cases, enabling you to conceptualize new opportunities, achieve competitive advantage, and deliver organizational outcomes.

    Speaker: Vatsal Shah, Principal Solutions Architect, AWS India
    Duration: 30mins


    Architecture patterns for building generative AI applications (Level 200)
    Do you wish to have guidance on the right tools to build cost-effective and high-performance generative AI applications that are customized for your workload and traffic patterns? In this session, learn the usage patterns and techniques for key use cases such as text generation, summarization, Q&A, chatbot, and image generation to improve productivity and create organizational value. We discuss key considerations when to apply RAG, fine-tuning, or prompt engineering to improve generative AI performance. The session also covers when and how to use advanced prompt engineering, fine-tuning, and RLHF options to improve the results. Find out how to leverage generative AI models with AWS services such as Amazon SageMaker and Amazon SageMaker JumpStart in use cases such as text summarization, simplification, and tone augmentation.

    Speaker: Praveen Jayakumar, Head of AI/ML Solutions Architect, AWS India
    Duration: 30mins


    Cost-optimizing AI/ML workloads on AWS (Level 200)
    Many organizations are adopting AI/ML either as a core component or as a supporting workload to achieve high application performance, create delightful user experiences, build sustained competitive advantage, and manage costs effectively. In this session, learn about various ways to cost-optimize your AI/ML workloads. We will delve into ML cost management, highlighting innovative approaches, instance optimizations, lifecycle configurations, and the integration of Amazon SageMaker with Amazon EC2 Spot Instances and Managed Spot Training for Amazon SageMaker and other services. Additionally, we will explore how you can leverage the latest features of Amazon SageMaker to reduce expenses while maximizing the value of your ML workloads.

    Speaker: Yudho Ahmad Diponegoro, Senior Solutions Architect, AWS
    Duration: 30mins
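As a taste of the cost accounting this session covers: for Managed Spot Training jobs, SageMaker reports both the total training time and the billable time, and the headline savings figure follows from the two. A minimal sketch (the job durations here are hypothetical):

```python
def spot_savings_percent(training_time_s: int, billable_time_s: int) -> float:
    """Savings from Managed Spot Training, as a percentage.

    SageMaker reports both total training time and billable time for
    a spot training job; the gap between them is what you saved.
    """
    return round(100 * (1 - billable_time_s / training_time_s), 1)

# Hypothetical job: 5,000s of training wall-clock, only 1,500s billed.
print(spot_savings_percent(5000, 1500))  # 70.0
```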

  • Generative AI fundamentals

    About the track

    Discover the potential of generative AI for your organization with this track. We discuss the techniques, share common use cases, and provide step by step guidance to build generative AI applications on AWS.

    Select the right large language model for your generative AI use case (Level 200)
There is a large number of large language models (LLMs) available, and choosing the right one is critical because of the high cost associated with deploying generative AI models. Join this session as we explain the tools and considerations for evaluating LLMs. We share how to evaluate LLMs for tasks where the output is fact-based and for tasks where the output is creative by nature. With thousands of text generation models to choose from and endless prompt engineering possibilities to use them with, learn how you can quickly and reliably identify the best price-performance solution for your use case. Find out how to build a complete picture of model and prompt-template performance on AWS. We showcase how to use automated tools that work alongside human labelers to create scalable but accurate evaluations, enabling you to build high-quality solutions faster and deploy with confidence.

    Speaker: Alex Thewsey, Senior AI/ML Solutions Architect, AWS
    Duration: 30mins
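To make the price-performance idea above concrete: once each candidate model has an evaluation score and a cost figure, selection can be as simple as filtering by an accuracy bar and taking the cheapest survivor. A toy sketch with hypothetical model names and numbers (a real evaluation, like the one shown in this session, would produce these inputs):

```python
def best_price_performance(candidates, min_accuracy):
    """Cheapest model among those that meet the accuracy bar."""
    eligible = [c for c in candidates if c["accuracy"] >= min_accuracy]
    if not eligible:
        raise ValueError("no model meets the accuracy requirement")
    return min(eligible, key=lambda c: c["usd_per_1k_tokens"])

# Hypothetical evaluation results for three candidate LLMs.
models = [
    {"name": "model-a", "accuracy": 0.91, "usd_per_1k_tokens": 0.030},
    {"name": "model-b", "accuracy": 0.87, "usd_per_1k_tokens": 0.008},
    {"name": "model-c", "accuracy": 0.82, "usd_per_1k_tokens": 0.002},
]
print(best_price_performance(models, min_accuracy=0.85)["name"])  # model-b
```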


LLMOps: Lifecycle of an LLM (Level 200)
The rise of large language models (LLMs) has surfaced new challenges in the development, deployment, and maintenance of these models. While many standard MLOps practices apply, foundation models require additional considerations. Join this session to learn practical tips on adopting LLMs at scale in your organization and maintaining these models in the long run. We explain how you can operationalize generative AI applications using MLOps principles, leading to foundation model operations (FMOps).

    Speakers: 
    Sara van de Moosdijk, Senior AI/ML Partner Solutions Architect, AWS
    Vasileios Vonikakis, Senior AI/ML Partner Solutions Architect, AWS

    Duration: 30mins


    Build an automated large language model evaluation pipeline on AWS (Level 200)
Large language models (LLMs) have gained significant attention as key tools for understanding, generating, and manipulating text with unprecedented proficiency. Their potential applications span from conversational agents to content generation and information retrieval. However, maximizing LLM capabilities while ensuring responsible and effective use of these models hinges on the critical process of LLM evaluation. Join us as we dive into the solution framework and demonstrate how you can efficiently evaluate different LLMs and prompt templates by temporarily launching endpoints and running test sets. We show how the evaluation process is automated by converting LLM evaluation into a classification problem, where a test LLM assesses the output of the first LLM, similar to human evaluators, thus saving significant costs and resources during the evaluation stage.

    Speakers: 
    Melanie Li, PhD, Senior AI/ML Specialist Technical Account Manager, AWS
    Sam Edwards, Cloud Support Engineer, AWS
    Rafa Xu, Senior Cloud Architect, AWS Professional Services

    Duration: 30mins
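The "LLM evaluation as classification" idea above can be illustrated with two small pieces: a grading prompt for the judge model, and a parser that maps its free-text reply onto a label. This is an illustrative sketch, not the session's actual framework; the template wording and the CORRECT/INCORRECT convention are assumptions:

```python
# Hypothetical grading prompt sent to the judge LLM.
JUDGE_TEMPLATE = (
    "You are grading a model's answer against a reference answer.\n"
    "Question: {question}\n"
    "Reference answer: {reference}\n"
    "Model answer: {answer}\n"
    "Reply with a single word, CORRECT or INCORRECT, then a short reason."
)

def parse_verdict(judge_output: str) -> bool:
    """Map the judge LLM's free-text reply onto a binary label."""
    head = judge_output.strip().splitlines()[0].upper()
    if "INCORRECT" in head:  # check first: "INCORRECT" contains "CORRECT"
        return False
    if "CORRECT" in head:
        return True
    raise ValueError(f"unparseable verdict: {judge_output!r}")

print(parse_verdict("CORRECT: matches the reference."))   # True
print(parse_verdict("INCORRECT: dates do not match."))    # False
```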


    Using generative AI responsibly and securely on AWS (Level 200)
Generative AI presents significant opportunities for organizations across multiple industries, and it can be a force for good in areas such as biomedical research and sustainable materials design. But it is important to build generative AI responsibly and securely, and to achieve the right balance between innovation and safety. In this session, we discuss the key considerations for using generative AI responsibly, including fairness, explainability, robustness, privacy, security, governance, and transparency. We dive deep into the operational security features of key AI/ML solutions such as Amazon Bedrock, Amazon CodeWhisperer, and Amazon SageMaker JumpStart. Understand how these solutions provide you with built-in security, privacy, encryption, access control, and compliance, making it easier for you and your organization to build generative AI responsibly into your workflows while maintaining security and adhering to risk and compliance regulations.

    Speaker: Michael Stringer, Principal Security Solutions Architect, AWS
    Duration: 30mins

  • AI/ML use cases solutions track 1

    Discover the various machine learning integration services available on AWS to help you build, deploy, and innovate at scale. We also focus on how AI services are applied to common use cases such as personalized recommendations, adding intelligence to your contact center, and improving customer experience.

    Transform your organization with intelligent document processing (IDP) on AWS (Level 200)
Organizations across many industries handle documents as part of their daily processes, as these documents contain critical information that is required for fast and accurate decision making. However, the majority of these documents are still processed manually to extract information and insights, which is time-consuming, error-prone, expensive, and difficult to scale. In this session, we explain how to build an intelligent document processing (IDP) solution on AWS to automate information extraction from documents of different types and formats, enabling your organization to achieve speed and accuracy without the need for ML skills. Uncover how this solution enables your organization to make quality business decisions faster, free your human resources to work on higher-value tasks, and reduce overall costs.

    Speakers: 
    Melwin Pais, Senior Solutions Architect, AWS
    Leah Schimer, Associate Solutions Architect, AWS

    Duration: 30mins


    Build a generative AI-powered content moderation solution on AWS (Level 200)
Content moderation is serious business today: organizations that have the right strategy in place can ensure a safe, compliant, and inclusive online environment for their customers, ultimately protecting brand reputation and improving user experience. This session outlines how to use generative AI-based content moderation with Amazon SageMaker. We share how the multimodal BLIP-2 model and the Llama 2 model are used to improve content moderation performance. Understand how BLIP-2’s capabilities in assessing content appropriateness offer better performance, especially in policy adaptation efficiency. We discuss ways to achieve a high degree of moderation accuracy in a mixed-media environment, with broader applications in various domains. The session also covers how to apply the capabilities of these advanced multi-modal models in your content moderation strategy, enabling your organization to maintain a safe digital environment for your customers.

    Speaker: Melanie Li, PhD, Senior AI/ML Specialist Technical Account Manager, AWS
    Duration: 30mins


    Personalize content with generative AI and Amazon Personalize (Level 200)
Organizations are focused on ways to deliver highly personalized user experiences at scale to achieve higher customer engagement, conversion, and revenue while creating meaningful differentiation. In this session, we showcase how you can use Amazon Personalize with generative AI to boost your user engagement and provide highly optimized customer interactions. Discover how to use LLM foundation models on Amazon Bedrock with algorithms from Amazon Personalize to automatically generate thematic connections between recommended content for any interface. We also demonstrate how to build a custom solution with personalized content descriptions that can be integrated into your existing websites, applications, and email marketing systems with simple APIs.

    Speakers: 
    Tim Wu, Senior AI Specialist Solutions Architect, AWS
    Tristan Nguyen, Specialist Solutions Architect, AWS

    Duration: 30mins


    Bringing the LLM closer to your SQL databases via Agents for Amazon Bedrock (Level 300)
Organizations today still struggle to get tangible and measurable value from data. One of the key reasons is that the data are in different formats and structures, spread out across data warehouses, cloud databases, and other systems. Working on data sets to get insights by leveraging multiple solutions in different layers across databases and APIs adds further complexity. In this session, we showcase how to access and analyze diverse data sets and generate insights from structured databases. We demonstrate the use of generative AI, analytics, and serverless compute to unlock natural language capabilities, enabling new ways to communicate with your existing systems. Discover how to apply the same approach to data sources across multiple locations using the same machine learning model, without changing the programming language in use. We also explain how this simple, pay-as-you-go (PAYG) pricing model gives you and your stakeholders across the organization the ability to access and analyze diverse data assets and deliver value from your organizational data, without requiring a deep technical skill set and resources.

    Speakers:
    Sam Gordon, Senior Cloud Architect, AWS Professional Services
    Ed Fraga, Cloud Architect, AWS Professional Services

    Duration: 30mins


    Harnessing foundational models for enhanced image creation and search using Amazon Bedrock (Level 200)
Visual content creation is growing in popularity among organizations of all sizes, as it enables them to build stronger customer engagement and drive sales. But there are ensuing challenges in effectively managing, searching, and utilizing this vast array of images across various domains. In this session, we discuss the latest advancements in AI-driven image generation and search technologies, with a focus on foundation models (FMs) on Amazon Bedrock. We explore the ease of integrating and customizing these models, and underline the significance of data privacy and security. The session includes a discussion about responsible AI and approaches to ensure these powerful tools prevent the generation of harmful or biased content. We also demonstrate how you can build transformational experiences using images in Amazon Bedrock, enabling you to efficiently scale your workflow while conserving valuable time and resources.

    Speaker: Suman Debnath, Principal Developer Advocate, Data, AWS
    Duration: 30mins


    Build impactful insights with no code AI/ML and generative BI (Level 200)
Dashboards are meant to answer questions quickly, enabling all users throughout the organization to improve productivity and make better decisions from the data. In this session, learn how generative BI capabilities in Amazon QuickSight enable you to author dashboards using natural language. We demonstrate how business users can easily dive deep into data by simply asking questions. Discover how your users can identify, surface, and leverage meaningful insights quickly with Amazon QuickSight, with no coding required.

    Speaker: Michael Armentano, APJ GTM Lead, Generative BI and Amazon QuickSight, AWS
    Duration: 30mins


    Extend ML capabilities to relational database-driven applications using AWS no-code/low-code solution (Level 200)
Organizations are looking at ways to enrich the data stored in their relational databases and incorporate up-to-the-minute predictions from ML models. However, most ML processing is done offline in separate systems, causing delays in receiving ML inferences for use in applications. In addition, developing, installing, and integrating ML models requires deep domain knowledge, a technical skill set, and appropriate infrastructure. Join this session as we showcase how to extend ML capabilities via a digital payment fraud prevention demo using Amazon Aurora ML integration with Amazon SageMaker. We explain how to train models, host endpoints, and effectively incorporate real-time model inferences in your applications without any ML training.

    Speaker: Darshit Vora, Senior Startup Solutions Architect, AWS India
    Duration: 30mins

  • AI/ML use cases solutions track 2

    Discover the various machine learning integration services available on AWS to help you build, deploy, and innovate at scale. We also focus on how AI services are applied to common use cases such as personalized recommendations, adding intelligence to your contact center, and improving customer experience.

    Improve customer experience with generative AI-powered contact centers (Level 200)
The cloud is changing the way call centers can accelerate innovation, uncover real-time insights from data quickly, and deliver impactful customer experiences at scale. In this session, discover how to use the new generative AI capabilities built into Amazon Connect to deliver immediate outcomes for your contact center. We dive deep into key features such as Amazon Connect Contact Lens, which enables post-contact summarization for increased productivity, and Amazon Q in Connect, which provides you the ability to easily understand customer intent and deliver accurate responses based on data source inputs, all in real time. At the end of this session, learn how these easy-to-build configuration tools enrich your contact center solution capabilities and improve customer experience.

    Speaker: Joely Huang, Senior Cloud Architect, AWS Professional Services
    Duration: 30mins


Build a generative AI-powered chatbot with Amazon Bedrock, LangChain, RAG and Streamlit (Level 300)
Many organizations are looking at using generative AI to build chatbot applications for business outcomes, including answering FAQs, scheduling, enhancing customer experiences through intuitive engagements, and more. Join this session to uncover the key building blocks for a chatbot application powered by LLMs. We demonstrate how you can build chatbots using Amazon Bedrock APIs, LangChain, Streamlit, and RAG, and interact with high-performing foundation models such as Jurassic-2, Claude, Stable Diffusion, Command, and Amazon Titan. Understand how Amazon Bedrock’s new features such as knowledge bases and agents enable you to complete complex tasks for a wide range of use cases. We conclude by outlining the various tools you can use to build the chatbot best suited for your digital journey and business requirements.

    Speaker: Aman Sharma, Senior Solutions Architect, AWS
    Duration: 30mins
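One small building block behind chatbots like these, independent of the specific model or framework, is trimming conversation history so the assembled prompt stays within the model's context window. A hedged sketch, using character counts as a crude stand-in for tokens:

```python
def trim_history(messages, budget_chars):
    """Keep the most recent messages that fit the budget, oldest first."""
    kept, used = [], 0
    for msg in reversed(messages):            # walk from newest to oldest
        cost = len(msg["content"])
        if used + cost > budget_chars:
            break                             # older turns no longer fit
        kept.append(msg)
        used += cost
    return list(reversed(kept))               # restore chronological order

history = [
    {"role": "user", "content": "Tell me about your return policy."},
    {"role": "assistant", "content": "Returns are accepted within 30 days."},
    {"role": "user", "content": "And exchanges?"},
]
# With a small budget, only the most recent turns survive.
print(trim_history(history, budget_chars=60))
```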


    Deliver relevant, accurate, and customized responses with RAG and Amazon Bedrock (Level 200)
Generative AI applications can deliver better responses by incorporating organization-specific data with Retrieval Augmented Generation (RAG). However, implementing RAG requires a specific skill set and time to configure connections to data sources, manage data ingestion workflows, and write custom code to manage the interactions between the foundation model (FM) and the data sources. This session covers how you can simplify the process with Amazon Bedrock knowledge bases and agents. From the user prompt, Amazon Bedrock automatically identifies data sources, retrieves the relevant information, and adds it to the prompt, giving the FM more information to generate responses. At the end of this session, understand how these tools enable you to deliver more relevant, accurate, and customized responses based on your organization’s proprietary knowledge sources.

    Speaker: Xin Chen, Senior Cloud Architect, AWS Professional Services
    Duration: 30mins
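The retrieve-then-augment flow described above can be sketched in a few lines. Word overlap stands in for the vector similarity search a real knowledge base performs, and the documents and prompt wording are hypothetical:

```python
def _words(text):
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query, documents, k=2):
    """Rank documents by naive word overlap with the query (a stand-in
    for the vector similarity search a real knowledge base performs)."""
    scored = sorted(documents,
                    key=lambda d: len(_words(query) & _words(d)),
                    reverse=True)
    return scored[:k]

def build_prompt(query, passages):
    """Augment the user's question with the retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Our return policy allows returns within 30 days of purchase.",
    "Standard shipping takes 5 business days.",
]
query = "What is the return policy?"
prompt = build_prompt(query, retrieve(query, docs, k=1))
print(prompt)
```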


Build a personalized shopping assistant with Agents for Amazon Bedrock and Amazon OpenSearch Service (Level 200)
Do you wish your e-commerce website could quickly provide your users with exactly what is relevant to them? Are you looking to leverage your existing systems, such as databases and microservices, to create a personalized shopping assistant? In this session, we explain how to build personalized search experiences using Amazon OpenSearch Service and Amazon Bedrock to enhance your customer experience and improve conversion rates. We share the key reference architecture and demonstrate how this solution enables your customers to quickly locate the relevant products with the requested features. Understand how the agent can compare product information and provide recommendations. We also explain how to use generative AI with your existing systems to build user-friendly sites and provide your customers with prompt and precise search results.

    Speaker: Tuan Huynh, Senior Cloud Architect, AWS Professional Services
    Duration: 30mins


    Unlock insights from your structured data using generative AI and analytics (Level 200)
Users in many organizations rely on their technical teams to build the dashboards that provide the insights needed for critical decision making. But solutions built in-house at times require these users to know some SQL or Python in order to customize the dashboards to meet their requirements, which may delay the decision-making process. Join this session to learn how to build a simple dashboard using the natural language and code generation capabilities of foundation models with Amazon Bedrock and the AWS Glue Data Catalog. We explain how this easy-to-build solution gives users the ability to ask questions about factual information without requiring in-depth knowledge of where the data is located, the structure of the data, or programming languages. We also demonstrate how to access key insights from the data quickly using plain English.

    Speaker: Kamal Manchanda, Solutions Architect, AWS India
    Duration: 30mins
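A core step in solutions like this one is putting the catalog's view of the schema in front of the model so it can generate SQL against real table and column names. A minimal sketch, with a hypothetical schema standing in for AWS Glue Data Catalog metadata:

```python
def schema_to_prompt(tables, question):
    """Embed a catalog-style schema into a text-to-SQL prompt.

    `tables` maps table name -> list of column names, the shape of
    metadata you might pull from a data catalog.
    """
    lines = [f"{name}({', '.join(cols)})" for name, cols in tables.items()]
    return (
        "Given these tables:\n"
        + "\n".join(lines)
        + f"\n\nWrite a SQL query that answers: {question}"
    )

catalog = {  # hypothetical schema
    "sales": ["region", "amount", "sold_at"],
    "products": ["product_id", "category"],
}
print(schema_to_prompt(catalog, "Total sales by region last month"))
```

The resulting string would be sent to a foundation model (e.g. via Amazon Bedrock), and the generated SQL executed against the data store.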


    Optimize your live video streaming experience with AI/ML (Level 200)
Live video streaming usage has been growing at an exponential rate, and many video streaming content providers are looking to deliver real-time viewer engagement and improve user experience. In this session, we showcase how to run high-quality, low-latency, and resilient live streams with AWS edge solutions. Find out how to optimize user experience in key areas such as content creation with generative AI on AWS, content moderation by leveraging Amazon Rekognition, and content recommendations using Amazon Personalize.

    Speakers: 
    Thomas Sauvage, Senior Edge Services Go-to-Market Specialist, AWS
    Christer Whitehorn, Principal Media Edge Services Specialist Solutions Architect, AWS

    Duration: 30mins


    Build a custom real-time multi-object tracking solution at the edge (Level 200)
    Multi-object tracking (MOT) is a computer vision task that involves tracking movements of multiple objects over time in a video sequence. MOT can be applied in numerous practical use cases such as crowd monitoring, traffic management, security surveillance, production cycles, safety monitoring, and navigation as it offers the ability to analyze physical performance in real-time. Join this session as we demonstrate how to train a custom MOT model in Amazon SageMaker and prepare your model for deployment to the edge with AWS IoT Greengrass. We discuss key practical considerations when building a real-time tracking solution for the edge.

    Speaker: Derrick Choo, Senior Solutions Architect, AWS
    Duration: 30mins

  • Build, train, and deploy ML models

    Learn how to build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.

    Get started with Amazon SageMaker in minutes (Level 200)
Amazon SageMaker provides builders the ability to build, train, and deploy ML models for any use case with fully managed infrastructure, tools, and workflows. In this session, find out how Amazon SageMaker takes away the heavy lifting, enabling high-performance and low-cost ML. Explore the technical details of each of the modules of Amazon SageMaker and the capabilities of the platform. Find out how to build generative AI applications with Amazon SageMaker to accelerate scalable, reliable, and secure foundation model (FM) development. The session features best practices to deploy open-source large language models (LLMs) using Amazon SageMaker and how to access LLMs via Amazon SageMaker JumpStart. The session concludes with key takeaways on how you can analyze, evaluate, test, retrain, and deploy FMs, helping you to quickly get started.

    Speaker: Pauline Kelly, Solutions Architect, AWS
    Duration: 30mins


    Improving productivity - Harness the power of generative AI foundation models on Amazon SageMaker Canvas (Level 200)
    Machine learning can enable organizations of various industries and sizes to solve business challenges and achieve better outcomes. However, many are struggling to implement ML effectively throughout the organization, beyond their technical users. Join this session to discover how to use Amazon SageMaker Canvas to complete the ML lifecycle — from preparing data and creating models to generating predictions — without writing a single line of code. Learn how both technical and non-technical users in your organization can utilize ready-to-use models or create your own to gain insights from your data and ML models with Amazon SageMaker Canvas. We will explain how to access open-source and Amazon LLMs on Amazon SageMaker Canvas through a single interface, and provide guidance on how to simply prompt the models for assistance with tasks such as generating content, summarizing, categorizing documents, and answering questions with just a few clicks. The session also covers how to ask specific questions targeting your dataset and obtain answers to enhance your productivity, all without the need for an ML skillset.

    Speaker: Sahil Verma, Senior Solutions Architect, AWS India
    Duration: 30mins


    Prepare ML data faster and at scale with Amazon SageMaker (Level 200)
Data preparation for ML can be challenging because it requires extracting and normalizing data and performing feature engineering, which can be time-consuming. This session covers how you can simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow, including data selection, cleansing, exploration, bias detection, and visualization, from a single visual interface with Amazon SageMaker Data Wrangler. Find out how to reduce the time it takes to aggregate and prepare data for ML from weeks to minutes. We also share how to simplify data preparation at multiple stages of Retrieval Augmented Generation (RAG) models with Amazon SageMaker Data Wrangler.

    Speaker: Ben Friebe, Senior ISV Solutions Architect, AWS


    Optimize training of foundation models on Amazon SageMaker (Level 200)
Training machine learning models at scale often requires a significant amount of resources, time, and investment. In this session, we explain how to leverage Amazon SageMaker to train and tune machine learning (ML) models without the need to manage infrastructure. We walk through the various strategies to train large language models in a cost-effective and performant manner using Amazon SageMaker. Learn how to optimize training to minimize cost and achieve high performance with distributed training approaches, smart sifting, Amazon SageMaker HyperPod, AWS Trainium, and other optimization methods. The session concludes with tips and best practices you can apply to your own large-scale training workloads.

    Speakers: 
    Gaurav Singh, Senior Solutions Architect, AWS India
    Smiti Guru, Senior Solutions Architect, AWS India

    Duration: 30mins


    Building NLP models with Amazon SageMaker (Level 300)
Organizations have to manage massive volumes of voice and text data from various communication channels. Some use NLP to automatically process this data, analyze the intent or sentiment in the messages, and respond in real time to human communication. But NLP models often consist of hundreds of millions of model parameters, so building, training, and optimizing them may require a lot of time, resources, and skills. This session outlines how Amazon SageMaker helps you to quickly build and train NLP models. We share the different distributed training and inference options for large language models on Amazon SageMaker for use cases such as sentiment analysis, text summarization, and text classification.

    Speaker: Tapan Hoskeri, Principal Solutions Architect, AWS India
    Duration: 30mins


    Implement MLOps practices with Amazon SageMaker (Level 200)
    Implementing the right MLOps practices enables builders to collaborate effectively in preparing, building, training, deploying, and managing models at scale. In this session, we will showcase the MLOps features in Amazon SageMaker that assist in provisioning consistent model development environments, automating ML workflows, implementing CI/CD pipelines for ML, monitoring models in production, and standardizing model governance capabilities. We will demonstrate how to apply these MLOps practices using Amazon SageMaker features — such as SageMaker projects, SageMaker Pipelines, SageMaker Model Registry, and SageMaker Model Monitor — to quickly deliver high-performance production ML models at scale.

    Speakers: 
    Gaurav Singh, Senior Solutions Architect, AWS India
    Smiti Guru, Senior Solutions Architect, AWS India

    Duration: 30mins


    Real-world MLOps for batch inference with model monitoring using Amazon SageMaker (Level 300)
    Maintaining ML workflows in production is not easy, as there is a need to create continuous integration and continuous delivery (CI/CD) pipelines for ML code and models, manage model versioning, and monitor for data and concept drift, as well as model retraining. In addition, there is a manual approval process to ensure new versions of the model satisfy both performance and compliance requirements. Join this session to learn how to create ML workflows for batch inference to automate key steps such as job scheduling, model monitoring, retraining, and registration with Amazon SageMaker. Discover the best practices in MLOps and how to integrate them with your existing CI/CD and IaC (Infrastructure as Code) tools. We will share key approaches you can use to mitigate challenges and common pitfalls when adopting MLOps practices. The session concludes with guidance on how to reduce the complexities and costs associated with running and maintaining batch inference workloads in production.

    Speaker: Indrajit Ghosalkar, Senior Solutions Architect, AWS
    Duration: 30mins
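
    As a toy illustration of the kind of statistic a drift monitor computes, the sketch below calculates a Population Stability Index over one feature. This is purely illustrative: the data and the 0.2 threshold are hypothetical rules of thumb, and Amazon SageMaker Model Monitor automates this class of check against a captured baseline rather than requiring hand-written code.

    ```python
    import math

    def psi(baseline, current, bins=4):
        """Population Stability Index between two samples of one feature.

        Values near 0 mean the distributions match; > 0.2 is a common
        (rule-of-thumb) signal of significant drift.
        """
        lo, hi = min(baseline), max(baseline)
        edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
        edges[-1] = float("inf")  # catch current values above the baseline max

        def frac(sample):
            counts = [0] * bins
            for x in sample:
                for i in range(bins):
                    if x < edges[i + 1]:
                        counts[i] += 1
                        break
            # Laplace-style smoothing avoids log(0) for empty bins
            return [(c + 1) / (len(sample) + bins) for c in counts]

        p, q = frac(baseline), frac(current)
        return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

    baseline = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5, 0.6]   # invented training data
    stable   = [0.15, 0.22, 0.33, 0.41, 0.52, 0.58]          # similar distribution
    shifted  = [0.9, 1.1, 1.2, 1.3, 1.4, 1.5]                # clearly drifted

    assert psi(baseline, stable) < psi(baseline, shifted)
    ```

    In a real batch inference pipeline, a statistic like this would be computed per feature on each batch and compared against the baseline to trigger the retraining step.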

  • AI/ML for builders and developers

    AI/ML for builders and developers

    At AWS, our goal is to put machine learning (ML) in the hands of every builder and developer. Learn and experiment with ML, and discover how it can transform our daily lives.

    Builders guide to AI/ML on AWS (Level 300)
    Join this session to learn strategies for selecting and leveraging machine learning and generative AI solutions on AWS when developing your software applications. We explain where traditional ML solutions, including Amazon Comprehend and Amazon Personalize, are best suited for select use cases, and when to apply generative models via Amazon Bedrock. We then showcase how Amazon CodeWhisperer enables you to accelerate your overall software development process. This session also features a demo of generating code suggestions, ranging from snippets to full functions, in real time in the IDE based on comments and existing code. By the end of this session, you will understand when and how to utilize the right ML and generative AI solutions for your applications.

    Speaker: Matt Coles, Principal Engineer, AWS
    Duration: 30mins


    Application development cookbook: Recipes for building applications with ML and generative AI (Level 300)
    As the popularity of generative AI applications increases, so does the number of ways builders and developers can harness these capabilities and integrate them into their applications. Did you know that you can build applications with generative AI capabilities in languages other than Python? Are you aware that you can call large language models (LLMs) from services such as AWS Lambda and AWS AppSync? Did you know there is reference code available that you can leverage to build your applications more efficiently? Join this session to learn about new approaches to building applications, including an example application written in C#. Discover how to add generative AI capabilities such as vector embeddings and LangChain with Amazon Bedrock to your applications. We will also explore various options for calling Amazon Bedrock from other AWS services. By the end of this session, you will know about the tools available to improve productivity and enhance user experience.

    Speaker: Derek Bingham, Senior Developer Advocate, AWS
    Duration: 30mins
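
    To make the idea of calling an LLM from application code concrete, here is a minimal sketch of the request body the Anthropic Claude models on Amazon Bedrock expect (Messages API). The model ID is illustrative, and the commented-out client call requires AWS credentials and model access in your account; treat this as an assumption-laden sketch, not the session's reference code.

    ```python
    import json

    # Illustrative model ID; check the Bedrock console for the IDs
    # enabled in your account and Region.
    MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

    def build_request(prompt: str, max_tokens: int = 256) -> str:
        """Serialize a Bedrock Messages API request body for a Claude model."""
        body = {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "messages": [{"role": "user", "content": prompt}],
        }
        return json.dumps(body)

    # The actual call (needs AWS credentials and model access):
    # import boto3
    # client = boto3.client("bedrock-runtime")
    # response = client.invoke_model(modelId=MODEL_ID, body=build_request("Hello"))
    # print(json.loads(response["body"].read())["content"][0]["text"])

    payload = json.loads(build_request("Summarize this release note."))
    assert payload["messages"][0]["role"] == "user"
    ```

    The same request shape applies whether the caller is a Lambda function, an AppSync resolver, or a C# application using the AWS SDK.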


    Enrich and turbo charge your generative AI applications with visual workflow (Level 300)
    Developers and builders are constantly looking for ways to accelerate the development of generative AI applications to deliver rapid innovation, without extensive investment in time, budget, and resources. Join this session to learn how to use AWS Step Functions, a visual workflow solution, to build distributed applications, automate processes, and create data and ML pipelines. We demonstrate the use of AWS Step Functions for orchestrating generative AI applications with Amazon Bedrock foundation models. By the end of the session, you will understand how these key building blocks enable you to compose LLMs and integrate various AWS services into production-grade workflows.

    Speaker: Donnie Prakoso, Principal Developer Advocate, AWS
    Duration: 30mins
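
    As a sketch of what such an orchestration looks like, the snippet below builds a minimal Amazon States Language (ASL) definition using the optimized Step Functions integration for Amazon Bedrock. The model ID and state names are illustrative assumptions; a real workflow would add error handling, retries, and downstream states.

    ```python
    import json

    # Minimal ASL definition: one Task state invoking a Bedrock model via
    # the optimized Step Functions integration. Model ID is illustrative.
    state_machine = {
        "StartAt": "GenerateSummary",
        "States": {
            "GenerateSummary": {
                "Type": "Task",
                "Resource": "arn:aws:states:::bedrock:invokeModel",
                "Parameters": {
                    "ModelId": "anthropic.claude-3-sonnet-20240229-v1:0",
                    "Body": {
                        "anthropic_version": "bedrock-2023-05-31",
                        "max_tokens": 256,
                        # ".$" pulls the prompt from the execution input
                        "messages": [{"role": "user", "content.$": "$.prompt"}],
                    },
                },
                "End": True,
            }
        },
    }

    definition = json.dumps(state_machine, indent=2)
    assert "bedrock:invokeModel" in definition
    ```

    Chaining further Task states (for example, a Lambda post-processor or a second model call) is how prompt chains become production workflows.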

  • Infrastructure for AI/ML workloads

    Infrastructure for AI/ML workloads

    AWS offers an ideal combination of high-performance, cost-effective, and energy-efficient compute, with purpose-built tools and accelerators optimized for ML applications.

    Choosing the right compute for ML training and inference (Level 200)
    Organizations are constantly looking for ways to achieve high application performance, deliver outcomes, and manage costs efficiently when moving ML into production. Join this session as we share how to choose the right infrastructure for your AI/ML workload requirements. We delve into the highly performant, scalable, and cost-effective ML infrastructure from AWS, ranging from the latest GPUs to purpose-built accelerators, including AWS Trainium, AWS Inferentia, and Amazon EC2 P5 instances, which are designed for training and running models. The session concludes with key considerations on how to best approach training and inference so your ML workloads run smoothly and efficiently on AWS.

    Speaker: Santhosh Urukonda, Senior Prototyping Engineer, AWS India
    Duration: 30mins


    Accelerate generative AI and AI/ML workloads with AWS storage (Level 200)
    Organizations looking to use pre-built AI models or build their own need a well-thought-out data strategy focusing on how they effectively store, prepare, and access data throughout the AI/ML lifecycle. Across this lifecycle, which includes preparation, building, training, and deployment, selecting the right storage solution makes a huge difference and can bring significant benefits to builders and model consumers. In this session, we outline the different generative AI and AI/ML workload requirements. We then discuss key considerations for selecting the right storage solution for the different workloads and requirements, and uncover the AWS storage solutions available to support them. This session also features best practices on how to leverage the different AWS storage solutions to achieve high performance, availability, security, productivity, cost effectiveness, and faster time to value.

    Speaker: Sandeep Aggarwal, Solutions Architect, Storage, AWS India
    Duration: 30mins


    Improve ML capabilities with pgvector and Amazon Aurora PostgreSQL (Level 200)
    The majority of organizational data resides in relational databases, and the need to make this data accessible for training, and to use ML models to generate predictions from database-backed applications, continues to grow, often demanding more resources and time to support application requirements. In this session, we explain how Amazon Aurora PostgreSQL-Compatible Edition and Amazon RDS for PostgreSQL support pgvector, an open-source extension for PostgreSQL, so you can easily store, search, index, and query huge volumes of ML embeddings to power your generative AI applications. Find out how to leverage pgvector to store and search embeddings from Amazon Bedrock, Amazon SageMaker, and more, giving you the ability to generate new content, enable hyper-personalization, and create interactive experiences. We demonstrate how to build and deploy an AI-powered application with pgvector and Amazon Aurora PostgreSQL-Compatible Edition for a sentiment analysis use case, without the need to build custom integrations, move data around, or learn separate tools.

    Speaker: Roneel Kumar, Senior Database Solutions Architect, AWS
    Duration: 30mins
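
    To ground the idea, here is a sketch of the SQL a pgvector-backed application might run (table, column names, and vector dimensions are hypothetical) alongside a plain-Python version of the cosine distance that pgvector's `<=>` operator computes.

    ```python
    import math

    # Hypothetical schema; vector(3) is a toy dimension, real embedding
    # models produce hundreds or thousands of dimensions.
    SETUP_SQL = """
    CREATE EXTENSION IF NOT EXISTS vector;
    CREATE TABLE reviews (id bigserial PRIMARY KEY, body text, embedding vector(3));
    """
    # `<=>` is pgvector's cosine-distance operator; ordering by it
    # returns the nearest neighbours first.
    QUERY_SQL = "SELECT body FROM reviews ORDER BY embedding <=> %s LIMIT 5;"

    def cosine_distance(a, b):
        """The quantity pgvector's `<=>` operator computes: 1 - cosine similarity."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return 1.0 - dot / norm

    q = [1.0, 0.0, 0.0]
    # A vector pointing the same way is closer than an orthogonal one
    assert cosine_distance(q, [2.0, 0.0, 0.0]) < cosine_distance(q, [0.0, 1.0, 1.0])
    ```

    In the application, the query parameter would be the embedding of the user's input, generated by a model such as those available through Amazon Bedrock or Amazon SageMaker.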


    Generate predictive insights with Amazon Redshift ML (Level 200)
    Organizations are managing more data than ever before. Harnessing data to reinvent the business can be challenging, but it is imperative for organizations that wish to stay relevant now and in the future. In this session, we share how Amazon Redshift ML brings machine learning to the data in your data warehouse, enabling you to go beyond insights to predictions. Understand how Amazon Redshift ML enables you to create, train, and apply ML models using familiar SQL commands. We demonstrate, with step-by-step guidance, how to make inferences on your product feedback data in Amazon Redshift as a SQL function in queries and reports. This data-driven approach eliminates the need for specialized skills or additional infrastructure.

    Speaker: Paul Villena, Senior Redshift Solutions Architect, AWS
    Duration: 30mins
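
    The "familiar SQL commands" pattern can be sketched as follows. The table, column, role, and bucket names are hypothetical; `CREATE MODEL` trains a model (delegating to Amazon SageMaker behind the scenes) and registers a SQL function for inference.

    ```python
    # Hypothetical Redshift ML statements, held as strings for illustration.
    CREATE_MODEL = """
    CREATE MODEL feedback_sentiment
    FROM (SELECT review_text, rating, sentiment FROM product_feedback)
    TARGET sentiment
    FUNCTION predict_sentiment
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftML'
    SETTINGS (S3_BUCKET 'my-redshift-ml-bucket');
    """

    # Once training completes, the generated function is used like any
    # other SQL function in queries and reports:
    PREDICT = """
    SELECT review_text, predict_sentiment(review_text, rating) AS sentiment
    FROM product_feedback;
    """

    for stmt in (CREATE_MODEL, PREDICT):
        assert stmt.strip().endswith(";")
    ```

    The design point is that analysts never leave the warehouse: training, model selection, and inference are all expressed in SQL.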

  • Innovate Builders Zone

    Innovate Builders Zone

    Dive deep into technical stacks, learn how AWS experts have helped solve real-world problems for customers, try out these demos with step-by-step guides, and walk away with the ability to implement these or similar innovative solutions in your own organization.

    Generative AI on AWS (Level 100)
    Join this session to learn about generative AI and its impact on organizations. Uncover the suite of generative AI solutions on AWS, their applications across different use cases, and their meaningful impact on various industries. We explain how to leverage the AI use case explorer, a business-outcome-centric web search tool that enables users to easily find AI use cases, discover relevant customer success stories, and mobilize their teams towards AI deployments. We also cover how to use an expert-curated action plan to realize the power of AI for your organization. By the end of this session, you will understand why generative AI matters, and when and how to use the options from AWS.

    Speaker: Nieves Garcia, AI/ML Specialist Lead, AWS


    Generative AI platform on AWS (Level 200)
    Large language models (LLMs) are growing in popularity, but their adoption also surfaces new challenges: multiple teams need to collaborate and have the right workflows and platforms in place to deploy and manage generative AI solutions in production. Join this session as we dive into generative AI platform capabilities and key considerations for deploying and managing generative AI solutions in production. Learn how to leverage various capabilities at different stages when consuming and iterating on LLMs for different use cases, such as prompt template management, validation systems, feedback systems, conversation management, caching, and more. The session covers the Minimum Viable Service (MVS) approach and best practices for applying different LLM models to your use cases. It also covers approaches to removing the heavy lifting of infrastructure management, enabling your technical teams to focus on core tasks instead.

    Speakers: 
    Hao Fei Feng, Senior Cloud Architect, AWS Professional Services
    Bin Liu, Senior Cloud Architect, AWS Professional Services


    Smart traffic management (Level 200)
    Have you ever been stuck at a traffic light even though there are no vehicles coming from the other direction? Do you wish you could avoid traffic congestion and get to your destination quickly? In this session, we demonstrate how to build a smart traffic management solution, powered by machine learning at the edge with Amazon SageMaker and AWS IoT. Discover how the solution enables you to automatically observe traffic patterns and vehicle loads on the road, and control the lights so that cars move through quickly, reducing traffic congestion. Uncover how it can automatically identify emergency vehicles, including ambulances and police cars, and control the traffic lights to enable these vehicles to reach their destinations in the shortest possible time. We also showcase how this solution can manage traffic flow by automatically tracking accidents, vehicle failures, or other incidents that result in road blockage. The session concludes with guidance on how to develop an analytics dashboard for real-time traffic insights.

    Speakers: 
    Chandra Munibudha, Principal Solutions Architect, AWS India
    Satheesh Kumar, Principal Solutions Architect, AWS India


    Troubleshooting with augmented observability and generative AI (Level 200)
    Join this session to learn how to build an augmented reality (AR) observability dashboard that enables you to identify and resolve application and infrastructure issues through a gamified experience. Utilizing an AR headset, Amazon Transcribe, generative AI, and observability solutions, we will demonstrate how to intentionally induce failures within an application. We will then walk you through creating real-time analysis within the architecture and using generative AI-powered voice interaction to request root causes and solutions. This session also showcases how to pinpoint the root cause, provide enhanced details in the AR-rendered architecture, and recommend ways to resolve the problem.

    Speaker: Vikram Shitole, Prototyping Architect, AWS India


    Brick maestro with AI/ML and HPC on AWS (Level 200)
    In this session, we demonstrate how you can develop a 'brick maestro' solution using AI/ML, IoT, and high-performance computing (HPC) solutions on AWS. Discover how this solution utilizes computer vision models with Amazon SageMaker to identify bricks. We will then explain how to leverage ML models to rank the best builds that resemble real objects, and how to influence these rankings by indicating your preferred objects. Learn how to efficiently run this workload in the cloud with HPC solutions on AWS, which provide virtually unlimited compute capacity, a high-performance file system, and high-throughput networking.

    Speakers: 
    Sakthi Srinivasan, Engagement Manager, AWS India
    Jyoti Sharma, Prototyping Engineer, AWS India


    Generative AI-powered conversational intelligence - audio, chats supporting diverse languages (Level 200)
    Many organizations use contact centers to identify crucial product feedback, improve agent productivity, and boost overall customer experience. However, many still rely on manual methods or solutions to analyze calls in local languages, which are time-consuming, costly, and difficult to scale. In this session, we demonstrate how to build an automated solution with Amazon SageMaker JumpStart, Amazon Comprehend, Amazon Kendra, and Amazon Bedrock (Anthropic Claude V2) to transcribe, translate, and summarize agent-customer conversations across various languages. Discover how you can easily extract issues, actions, and call quality metrics for data-driven insights. We also demonstrate how to build a chatbot to query across conversations and extend it to a prompting engine.

    Speaker: Kousik Rajendran, Principal Solutions Architect, AWS India


    Transform digital experiences with generative AI: Intelligent video/audio Q&A (Level 200)
    Video remains one of the most common and powerful mediums for immersive user experiences and higher engagement. However, the majority of digital assets lack the informative metadata needed for effective content search. As a result, many organizations today still need to analyze each segment of a whole file to discover its concepts, which is time-consuming and requires a lot of manual effort. This session demonstrates how to build an automated solution that enables users to ask questions and get relevant responses from a video database, even when these assets are non-text content or have limited metadata. We demonstrate how this solution responds quickly and locates the videos, with the specific timestamp of the content most relevant to the user's request.

    Speakers: 
    Melanie Li, PhD, Senior AI/ML Specialist Technical Account Manager, AWS
    Sam Edwards, Cloud Support Engineer, AWS
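
    The timestamp-retrieval idea can be sketched in miniature. A real system would embed each transcript chunk with a text-embedding model and search a vector store; in this toy version (all data invented), plain word overlap stands in for embedding similarity.

    ```python
    # Invented transcript: (start time in seconds, chunk text)
    transcript = [
        (0.0,   "welcome and conference agenda overview"),
        (84.5,  "training foundation models on aws accelerators"),
        (210.0, "cost tips for inference workloads"),
    ]

    def answer_timestamp(question: str) -> float:
        """Return the start time of the chunk most similar to the question.

        Word overlap is a stand-in for embedding similarity here.
        """
        q_words = set(question.lower().split())

        def overlap(chunk):
            return len(q_words & set(chunk[1].split()))

        return max(transcript, key=overlap)[0]

    assert answer_timestamp("where are the inference cost tips") == 210.0
    ```

    Swapping the overlap function for embedding similarity (and the list for a vector index) turns this into the timestamp-aware video Q&A the session describes.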


    Build generative AI applications with no code/low code solutions on AWS (Level 200)
    In this session, find out how to build an application using natural language with a conversational chat interface to perform tasks such as creating narratives, reports, answering questions, summarizing notes, and providing explanations, without writing a single line of code. We start by demonstrating how to create your own generative AI-based application without writing any code using PartyRock, an Amazon Bedrock Playground. Learn the techniques and capabilities needed to take full advantage of generative AI, including experimenting with various foundation models, building intuition with text-based prompting, and chaining prompts together. We then dive into how you can build and disseminate information to your users quickly, ensuring this information stays entirely within your environment. We also showcase the steps to assemble reports from tabular data in Excel or CSV format. This session concludes with guidance on how to easily build, test, and compare model outputs, and share the findings with relevant teams.

    Speaker: Priya Jathar, Solutions Architect, AWS India


    Build a personalized registration application using generative AI and AWS serverless (Level 200)
    Generative AI is useful for delivering outcomes such as improved customer experiences and new revenue streams. But what if you want to take it a step further? In this session, we demonstrate how to build a serverless application with personalized content features using AWS Lambda, AWS Amplify, Amazon DynamoDB, and generative AI on AWS. The session also features best practices for observability using Powertools for AWS Lambda, enabling you to achieve greater agility, scalability, resilience, and faster time to market.

    Speakers: 
    Nishant Dhiman, Solutions Architect, AWS India
    Ankush Agrawal, Solutions Architect, AWS India


    Codenator: Enhancing user productivity through AI-powered code generation and secure execution (Level 300)
    Do you wish you had an automated approach to managing code generation and execution? Join this session as we demonstrate how to build executable code effortlessly, speed up development processes, and enhance your productivity. Understand how this solution, built from multiple modular, reusable, and scalable components, lets you reuse some of those components in other generative AI projects. Find out how its secure, isolated sandbox environment ensures the safe execution of code, protecting sensitive data and maintaining system integrity. We also explain the interactive nature of the system, which facilitates continuous improvement of code solutions through user feedback, so you can optimize and refine the coding process to meet changing user needs and preferences. This session also covers other advanced tools you can use to seamlessly integrate with AWS resources, automating the execution of operational actions and efficiently enhancing the manageability of your cloud resources.

    Speakers: 
    Melanie Li, PhD, Senior AI/ML Specialist Technical Account Manager, AWS
    Sam Edwards, Cloud Support Engineer, AWS
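
    As a minimal sketch of the "safe execution" idea (purely illustrative, not the session's implementation): running generated code in a separate interpreter process with a hard timeout. Note that a subprocess alone is not real isolation; a production sandbox would add container or microVM boundaries plus network and filesystem restrictions.

    ```python
    import subprocess
    import sys

    def run_untrusted(code: str, timeout_s: float = 2.0) -> str:
        """Execute generated code in a separate interpreter with a timeout.

        Illustrative only: real sandboxes layer on container/microVM
        isolation and resource limits.
        """
        try:
            result = subprocess.run(
                # -I: isolated mode, ignores environment and user site-packages
                [sys.executable, "-I", "-c", code],
                capture_output=True, text=True, timeout=timeout_s,
            )
            if result.returncode != 0:
                return f"error: {result.stderr.strip().splitlines()[-1]}"
            return result.stdout
        except subprocess.TimeoutExpired:
            return "error: timed out"

    assert run_untrusted("print(6 * 7)").strip() == "42"
    assert run_untrusted("while True: pass", timeout_s=0.5) == "error: timed out"
    ```

    Feeding the captured stdout/stderr back to the model is what enables the feedback loop of generate, execute, and refine.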

  • Closing remarks

    Closing remarks

    Accelerate rapid innovation with data and AI/ML (Level 200)
    Data is the difference between a generic application and one that truly knows your business and your customers. Those who succeed in building differentiated applications are able to improve operational efficiency, invent more compelling customer experiences, and uncover new opportunities. In this session, we provide a recap of the day's sessions and address commonly asked questions. Learn how to build a strong data foundation with a comprehensive set of services to meet all your use case requirements. We demonstrate how integrations across these services break down data silos, and how governance tools span the end-to-end data workflow so you can innovate quickly. The session also features how these tools remove the undifferentiated heavy lifting of data management with automation and intelligence.

    Speakers:
    Matt Coles, Principal Engineer, AWS
    Donnie Prakoso, Principal Developer Advocate, AWS
    Derek Bingham, Senior Developer Advocate, AWS


  • Opening keynote
    • Innovate with data and machine learning (Level 100)
  • Accelerating outcomes with AI/ML
    • Choosing the right AI/ML and generative AI tools for your use case (Level 200)
       
    • Architecture patterns for building generative AI applications (Level 200)
       
    • Cost-optimizing AI/ML workloads on AWS (Level 200)
  • Generative AI fundamentals
    • Select the right large language model for your generative AI use case (Level 200)
       
    • LLMOps: Lifecycle of a LLM (Level 200)
       
    • Build an automated large language model evaluation pipeline on AWS (Level 200)
       
    • Using generative AI responsibly and securely on AWS (Level 200)
  • AI/ML use cases solutions track 1
    • Transform your organization with intelligent document processing (IDP) on AWS (Level 200)
       
    • Build a generative AI-powered content moderation solution on AWS (Level 200)
       
    • Personalize content with generative AI and Amazon Personalize (Level 200)
       
    • Bringing the LLM closer to your SQL databases via Agents for Amazon Bedrock (Level 300)
       
    • Harnessing foundational models for enhanced image creation and search using Amazon Bedrock (Level 200)
       
    • Build impactful insights with no code AI/ML and generative BI (Level 200)
       
    • Extend ML capabilities to relational database-driven applications using AWS no-code/low-code solution (Level 200)
  • AI/ML use cases solutions track 2
    • Improve customer experience with generative AI-powered contact centers (Level 200)

    • Build a generative AI-powered chatbot with Amazon Bedrock, Langchain, RAG and Streamlit (Level 300)

    • Deliver relevant, accurate, and customized responses with RAG and Amazon Bedrock (Level 200)

    • Build a personalized shopping assistant with Agents for Amazon Bedrock and Amazon OpenSearch service (Level 200)

    • Unlock insights from your structured data using generative AI and analytics (Level 200)

    • Optimize your live video streaming experience with AI/ML (Level 200)

    • Build a custom real-time multi-object tracking solution at the edge (Level 200)
  • Build, train, and deploy ML models
    • Get started with Amazon SageMaker in minutes (Level 200)
       
    • Improving productivity - Harness the power of generative AI foundation models on Amazon SageMaker Canvas (Level 200)
       
    • Prepare ML data faster and at scale with Amazon SageMaker (Level 200)
       
    • Optimize training of foundation models on Amazon SageMaker (Level 200)
       
    • Building NLP models with Amazon SageMaker (Level 300)
       
    • Implement MLOps practices with Amazon SageMaker (Level 200)
       
    • Real-world MLOps for batch inference with model monitoring using Amazon SageMaker (Level 300)
  • AI/ML for builders and developers
    • Builders guide to AI/ML on AWS (Level 300)
       
    • Application development cookbook: Recipes for building applications with ML and generative AI (Level 300)
       
    • Enrich and turbo charge your generative AI applications with visual workflow (Level 300)
  • Infrastructure for AI/ML workloads
    • Choosing the right compute for ML training and inference (Level 200)
       
    • Accelerate generative AI and AI/ML workloads with AWS storage (Level 200)
       
    • Improve ML capabilities with pgvector and Amazon Aurora PostgreSQL (Level 200)
       
    • Generate predictive insights with Amazon Redshift ML (Level 200)
  • Innovate Builders Zone
    • Generative AI on AWS (Level 100)
       
    • Generative AI platform on AWS (Level 200)
       
    • Smart traffic management (Level 200)
       
    • Troubleshooting with augmented observability and generative AI (Level 200)

    • Brick maestro with AI/ML and HPC on AWS (Level 200)

    • Generative AI-powered conversational intelligence - audio, chats supporting diverse languages (Level 200)

    • Transform digital experiences with generative AI: Intelligent video/audio Q&A (Level 200)

    • Build generative AI applications with no code/low code solutions on AWS (Level 200)

    • Build a personalized registration application using generative AI and AWS serverless (Level 200)

    • Codenator: Enhancing user productivity through AI-powered code generation and secure execution (Level 300)
  • Closing remarks
    • Accelerate rapid innovation with data and AI/ML (Level 200)

Session levels designed for you

INTRODUCTORY
Level 100

Sessions are focused on providing an overview of AWS services and features, with the assumption that attendees are new to the topic.

INTERMEDIATE
Level 200

Sessions are focused on providing best practices, details of service features and demos with the assumption that attendees have introductory knowledge of the topics.

ADVANCED
Level 300

Sessions dive deeper into the selected topic. Presenters assume that the audience has some familiarity with the topic, but may or may not have direct experience implementing a similar solution.


Conference timings

  • Australia & New Zealand
  • Australia
     GMT+11 (AEDT)

    Timing 1: 10.30am - 4.00pm
    Timing 2: 4.30pm - 10.00pm

    New Zealand
     GMT+13 (NZDT)

    Timing 1: 12.30pm - 6.00pm
    Timing 2: 6.30pm - 12.00am

  • ASEAN
  • Singapore
    Malaysia
    Philippines
     GMT+8 (SGT/MYT/PHT)

    Timing 1: 7.30am - 1.00pm
    Timing 2: 1.30pm - 7.00pm
    Timing 3 Keynote rebroadcast:
    8.00pm - 9.00pm

    Thailand
    Vietnam
     GMT+7 (ICT)

    Timing 1: 6.30am - 12.00pm
    Timing 2: 12.30pm - 6.00pm
    Timing 3 Keynote rebroadcast:
    7.00pm - 8.00pm

    Indonesia
     GMT+7 (WIB)

    Timing 1: 06:30 - 12:00
    Timing 2: 12:30 - 18:00
    Timing 3 Keynote rebroadcast: 
    19:00 - 20:00

    Pakistan
     GMT+5 (PKT)

    Timing 1: 4.30am - 10.00am
    Timing 2: 10.30am - 4.00pm
    Timing 3 Keynote rebroadcast: 
    5.00pm - 6.00pm

  • India & Sri Lanka
  • India
     GMT+5.30 (IST)

    Timing 1: 5.00am - 10.30am
    Timing 2: 11.00am - 4.30pm
    Timing 3 Keynote rebroadcast: 5.30pm - 6.30pm

    Sri Lanka
     GMT+5.30 (SLST)

    Timing 1: 5.00am - 10.30am
    Timing 2: 11.00am - 4.30pm
    Timing 3 Keynote rebroadcast: 5.30pm - 6.30pm

  • Korea
     GMT+9 (KST)

    Timing 1: 8.30am - 2.00pm
    Timing 2: 2.30pm - 8.00pm

  • Japan
     GMT+9 (JST)

    Timing 1: 8.30am - 2.00pm
    Timing 2: 2.30pm - 8.00pm

Featured AWS speakers

Olivier Klein
Chief Technologist, APJ, AWS

Praveen Jayakumar
Head of AI/ML Solutions Architect, AWS India

Donnie Prakoso
Principal Developer Advocate, APJ, AWS

Nieves Garcia
AI/ML Specialist Lead, APJ, AWS

Melanie Li
PhD, Senior AI/ML Specialist Technical Account Manager, APJ, AWS

Derek Bingham
Senior Developer Advocate, APJ, AWS

Santanu Dutt
Head of Technology, APJ, AWS

Matt Coles
Principal Engineer, APJ, AWS

Sara van de Moosdijk
Senior AI/ML Partner Solutions Architect, APJ, AWS

Learn more about machine learning and AI on AWS

Leader in Gartner Magic Quadrant for Cloud AI Developer Services

100,000+ customers use AWS for their AI/ML workloads

1.5 trillion+ inference requests per month

10x increase in team productivity using Amazon SageMaker

40% reduction in data labeling costs using Amazon SageMaker


Frequently Asked Questions

Start building machine learning solutions with AWS Free Tier

Free offers and services for you to build, deploy, and run machine learning applications in the cloud. Sign up for an AWS account to enjoy free offers for Amazon SageMaker, Amazon Comprehend, Amazon Rekognition, Amazon Polly, and over 100 other AWS services.
View AWS Free Tier Details »

Olivier draws on two decades of expertise in Internet technologies, IT architectures, and software engineering to help organizations of all sizes, from start-ups to large enterprises, apply cloud computing technologies, solve business problems, and create innovative, data-driven business models.

Olivier has been working for AWS across Asia Pacific to help customers implement architectural best practices and succeed in their digital transformation journeys. He also advises on how emerging technologies in the artificial intelligence (AI), ML, robotics, and IoT space can help create new products, make existing processes more efficient, and leverage new engagement channels for end consumers.


With over 19 years of experience in the tech industry, Santanu currently leads Customer Solutions Management for Asia Pacific, where he works across a variety of technologies, including infrastructure, big data, ML, microservices, enterprise apps, cloud migrations, and more, with a wide variety of enterprise and startup customers. Santanu is also passionate about thought leadership with CXOs and the developer community alike.


Ali is a software engineering leader living in Auckland, New Zealand, focusing on solving real-world problems with technology. Ali has extensive experience across the software development lifecycle, focusing on building software using JS/TS and AWS services. Ali believes good software is built through collaboration. He also mentors and coaches developers and builders to learn and achieve success in their careers.


As the Managing Director of Machine Learning and AI for AWS across Asia Pacific, Luke leads the business, team, strategy, and go-to-market for one of the fastest-growing and most impactful domains in the technology industry. With over 25 years of experience in technology and digital transformation, Luke helps organizations leverage the power of AI and ML to create new products, services, and solutions that deliver value and outcomes for their customers and stakeholders.

Before joining AWS, Luke advised organizations from start-ups to large enterprises on how to design, implement, and operate technology transformations that enabled business innovation and growth. He also developed alliances and partnerships within the technology ecosystem.