AWS Innovate Data Edition
Unlock the value of your data
with an end-to-end data strategy

80+

Sessions in 3 languages
Ask the
Experts
Live 1:1 Q&A
Certificate of
Attendance
Level up your skills
Customer
Stories
Get inspired by use cases
Generative AI
Builders Zone
Technical demos

 Asia Pacific & Japan

The journey to innovation starts with your data

In today’s ultra-competitive world, data is the golden ticket driving almost every aspect of innovation.

Join us at AWS Innovate - Data Edition. Learn how to unlock value from your data, lead with a data-driven mindset, and build an end-to-end strategy at every step of the journey from ingesting, storing, and querying data to analyzing, visualizing, and running ML models.


Agenda

From big data, analytics, storage, business intelligence, machine learning, and generative AI to anything in between, discover it all at this edition of AWS Innovate! Learn key concepts, use cases, and best practices to help you save time and cost managing data, eliminate data silos, gain accurate insights faster, and build a strong data foundation for rapid innovation.

Agenda at a glance

Session details


  • Opening keynote

    Opening keynote

    Data is dynamic and comes in many different formats, which makes it challenging to extract value. A modern data strategy can help you manage, act on, and react to your data so you can make better decisions, respond faster, and uncover new opportunities. Uncover the latest in databases, data analytics, and AI/ML, and get insights on how organizations are harnessing the power of data to accelerate innovation. Jumpstart a modern data strategy that allows you to consolidate, store, curate, and analyze data at any scale, and share data insights with everyone who needs them.

  • Building a data-driven organization

    Building a data-driven organization

    About the track

    Get inspired and learn how organizations are using AWS to solve business challenges, optimize business performance, and innovate faster. Start leveraging your data as a strategic asset and reinvent your organization with data today.

    Data-Driven Everything - From vision to value (Level 100)
    While data is abundant and growing rapidly, simply producing or storing a lot of it does not automatically create value. Value is realized by creating a data-driven culture that leverages data to invent on behalf of customers using AI/ML, analytics, and actionable insights. However, cultural challenges, outdated governance models, organizational silos, and legacy execution approaches stand in the way of realizing this vision. In this session, find out how Amazon's Data-Driven Everything (D2E) program enables your organization to address these challenges. The D2E program spans mindset, people, processes, and technology to align business and technology leadership, create a compelling vision, deliver value through use cases, increase agility, enhance customer experiences, and enable sustained success. The session also includes a customer case study of PVcomBank's data transformation journey, focusing on setting up the right skill sets and architecture, and getting business and IT to work together to realize their strategic goals of developing new and innovative financial products and services for their customers.

    Speaker: Rohit Dhawan, Head of Data Strategy, AWS
    Duration: 30mins


    Maximizing value: How data architecture aligns with your business architecture to deliver a successful data strategy (Level 200)
    A well-defined data architecture and effective data management approaches are key foundational pillars for organizations looking to build a modern data strategy and quickly deliver insights. In this session, we explore key concepts and strategies for design and implementation, to ensure the outcomes align with your ways of working and your organization's data literacy goals. We also discuss onboarding 'on-ramps' for your data 'learners', in addition to your existing data consumers, to ensure your data architecture and management policies support all levels of data maturity across your organization.

    Speaker: Jason Hunter, WW Principal Data Strategy Tech, AWS
    Duration: 30mins


    Build an intelligent enterprise data platform (Level 200)
    Organizations are looking to connect customer data across all touchpoints of the customer journey to better understand requirements and offer hyper-personalized experiences. Realizing this goal requires enterprise-grade, modern, intelligent data platforms that empower all users in the organization without compromising security, data governance, or compliance requirements. Join this session to learn about DBS Bank’s (DBS) modern data platform, “Advancing DBS through AI” (ADA). Find out how ADA empowers data analysts, scientists, and employees across the bank with tooling, frameworks, and democratized access to bank-wide data, backed by a robust governance framework for data management, data discoverability, and data security. As a native hybrid-cloud platform, ADA provides industrialized configuration and management of AWS services including Amazon SageMaker, Amazon EMR, AWS DataSync, Amazon EC2, and other AWS AI/ML services, giving teams on-demand access to secure data movement and project lifecycle management. We also share the vision for enabling DBS’s ADA platform at scale. Get insights on how DBS managed challenges such as data security, provisioning, and platform management, and achieved results with a positive impact across the DBS community.

    Speakers: 
    Unni Pillai, Head of Technology, FSI, AWS
    Matthew Worthy, Executive Director, DataFirst, Data Security Product Owner, DBS Bank

    Duration: 30mins


    Accelerating data driven outcomes in public sector (Level 200)
    Data is playing a pivotal role in digital transformation across regulated industries, including the public sector. This session outlines how organizations in education, government, utilities, and healthcare are leveraging AWS to cost-effectively manage their growing pools of data, drive delivery of use cases, and pave the way for innovation in world-changing projects. We also explain how to quickly deliver insights while iterating on the required governance and technical frameworks.

    Speaker: Karthik Murugan, Head of Data, Public Sector APJ, AWS
    Duration: 30mins


    Multiply your data value creation possibilities with the right data approaches (Level 100)
    Data is the change agent driving digital transformation. The variety of data and workloads, and the need for resiliency in shared data environments, make storage choices critical to every application. In this session, we outline the data lifecycle and tangible steps to drive your business with the right data storage and management strategies. Discover the force multipliers that enable organizations like yours to multiply data value, respond with agility, and fuel innovation with data-driven insights using AWS storage technologies.

    Speaker: Paul Haverfield, Principal Storage Specialist, AWS
    Duration: 30mins


    Accelerate Customer 360 strategy on AWS (Level 100)
    Customer behaviors and expectations have fundamentally changed, compelling organizations to look at approaches to accelerate digital transformation across the value chain. Customers no longer make purchase decisions linearly; they use many different, disparate channels to discover and research products, from social media and websites to email marketing campaigns and targeted ads, as well as purchasing in brick-and-mortar stores. In this session, learn how to build a 360-degree view of your customers, including their purchase behaviors and preferences, and aggregate interactions across touchpoints throughout the entire customer journey so you can tailor experiences along the way. We share how these insights enable you to strategize personalized product offerings and marketing campaigns, advance commitments to customer centricity, and create exceptional customer experiences.

    Speaker: Pierre Semaan, Head of GTM Strategy and Solutions, SMB, APJ, AWS
    Duration: 30mins


    Navigating data protection, governance, and digital sovereignty on AWS (Level 100)
    At AWS, we recognize that protecting customer data and earning customer trust are key priorities for many organizations. This session shares how to build the right strategy that aligns people, processes, and technology to deliver data governance at scale. Understand how AWS configures appropriate access controls, monitors data for compliance, and layers security to protect data against malicious behavior. We also cover how to leverage data sovereignty controls and features that allow organizations to meet evolving regulatory requirements wherever they operate, without compromising on the capabilities, performance, innovation, and scale of the cloud.

    Speaker: Michael Stringer, Principal Security Solutions Architect, ANZO Public Sector, AWS
    Duration: 30mins

  • Data movement, processing, management and governance

    Data movement, processing, management and governance

    About the track

    Gain best practices and concepts around data movement, eliminating data silos, and analyzing diverse datasets easily while keeping your data secure. Find out how to easily capture, centralize, and access your data in a quick, cost-effective, and secure fashion using AWS.

    Moving data to AWS - Find the right tool and process (Level 200)
    Finding the right approach to moving data workloads to the cloud can be daunting, but it is critical to ensuring organizations can quickly reap benefits such as increased agility, flexibility, and the ability to innovate faster. In this session, we walk you through how to select the right tools, protocols, and mechanisms to move your data efficiently and securely to AWS. We showcase online and offline data transfer methods, including AWS DataSync and the AWS Snow Family, to accelerate moving your data from on-premises, edge, and other environments to AWS. The session explores practical use cases and offers guidance on selecting the methods that suit your requirements, and concludes with a demo on using AWS DataSync and AWS Snow together.

    Speakers: 
    Lily Jang, Storage Business Development Manager, AWS
    Ameen Khan, Storage Specialist Solutions Architect, AWS

    Duration: 30mins
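As a rough illustration of the online-versus-offline decision this session covers, the sketch below estimates how long a transfer would take over the wire. The seven-day cutoff and the 80% link-utilization figure are illustrative assumptions for this sketch, not AWS guidance:

```python
def recommend_transfer(data_tb: float, bandwidth_mbps: float,
                       utilization: float = 0.8, cutoff_days: float = 7.0) -> str:
    """Estimate online transfer time and suggest a transfer method.

    Hypothetical rule of thumb: if moving the data over the available
    link would take longer than `cutoff_days`, an offline device such
    as the AWS Snow Family is usually worth evaluating; otherwise an
    online service such as AWS DataSync keeps things simpler.
    """
    bits = data_tb * 8 * 10**12                        # decimal TB -> bits
    seconds = bits / (bandwidth_mbps * 10**6 * utilization)
    days = seconds / 86400
    method = ("offline (AWS Snow Family)" if days > cutoff_days
              else "online (AWS DataSync)")
    return f"{days:.1f} days over the wire -> {method}"

# 100 TB over a 1 Gbps link takes ~11.6 days -> offline device suggested.
print(recommend_transfer(100, 1000))
# 1 TB over the same link takes ~0.1 days -> online transfer suggested.
print(recommend_transfer(1, 1000))
```

In practice, factors such as link contention, security requirements, and recurring versus one-off transfers also influence the choice; the session's demo covers combining both methods.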


    Millisecond access to archival data on Amazon S3 (Level 200)
    Customers are storing petabytes of archival data in Amazon S3, and the default choices for archival are Amazon S3 Glacier and S3 Glacier Deep Archive for low-cost data archiving and long-term backup. Beyond archival, organizations also want to retrieve archive data with immediate access, such as news content in the media and entertainment vertical or any business-critical archival data with a low RTO. This session outlines how to select the right storage class on Amazon S3. Uncover the decision criteria and the right, cost-effective method to migrate from offline to online archives within the Amazon S3 storage classes. The session also includes a customer use case: an organization that migrated petabytes of offline archive to online archive to deliver a better customer experience and cost savings.

    Speakers: 
    Ameen Khan, Storage Specialist Solutions Architect, AWS
    Manoj Kalyanaraman, CTO, Dropsuite

    Duration: 30mins
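One way the millisecond-access archiving discussed above can be set up is with an S3 lifecycle rule that transitions objects to the S3 Glacier Instant Retrieval storage class (`GLACIER_IR`), which keeps retrievals in milliseconds. The fragment below is a hypothetical sketch; the prefix and day count are illustrative:

```json
{
  "Rules": [
    {
      "ID": "archive-with-millisecond-access",
      "Status": "Enabled",
      "Filter": { "Prefix": "archive/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER_IR" }
      ]
    }
  ]
}
```

Objects under `archive/` would move to Glacier Instant Retrieval 30 days after creation; colder data with relaxed RTOs could instead transition to `GLACIER` or `DEEP_ARCHIVE` at lower storage cost but slower retrieval.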


    Free your data from legacy systems and unlock value with AWS (Level 200)
    Data collection is a major challenge for organizations that are trying to become data-driven. There are a number of factors that can prevent data projects from being successful, including a lack of historical data, new techniques that require larger volumes of data, and teams that don’t have the right tools to get the data they need. This session addresses how to extract valuable data from legacy systems like mainframes, big data systems, and old file systems to help organizations achieve their data maturity goals.

    Speaker: Joao Palma, Senior Cloud Architect, AWS
    Duration: 30mins


    Build an interactive live video streaming experience optimized with data (Level 200)
    Many organizations are now turning to interactive live video streaming to engage with their communities. Whether it is for e-commerce, social apps, virtual events, e-gaming, or e-learning, organizations rely on live video streaming to drive better customer engagement and increase sales revenue. In this session, we share how you can leverage Amazon Interactive Video Service (Amazon IVS) to build a low-latency live video streaming platform with interactive features. We demonstrate how to leverage data generated during the live stream to gain data-driven insights and optimize your user experience.

    Speaker: Thomas Sauvage, Senior Go-to-Market Specialist, Amazon IVS, AWS
    Duration: 30mins


    Optimize streaming viewer experiences with AWS data analytics (Level 200)
    Delivering a high-quality viewer experience is essential to retaining viewers and increasing engagement with your content: poor experiences cause viewers to switch to other channels, leading to churn, eroded brand confidence, and lost revenue. In this session, we discuss common media client data and how to use data analytics for live streaming to work backwards and improve the viewer experience. We demonstrate how to build a serverless framework that processes real-time logs and converts them into custom time-series metrics, enabling you to monitor, analyze, and take action based on content delivery performance.

    Speaker: Julian Ju, Edge Services Specialist Solutions Architect, AWS
    Duration: 30mins


    Build scalable, cost-effective disaster recovery strategy to AWS (Level 200)
    IT disasters such as data centre failures, server corruptions, or cyber-attacks can cause data loss, impact business revenue, and damage reputation. In this session, we outline the common DR patterns many organizations face and how to achieve scalable, cost-effective application recovery with AWS Elastic Disaster Recovery (commonly referred to as DRS). Find out how DRS minimizes downtime and data loss by providing fast, reliable recovery of physical, virtual, and cloud-based servers onto the AWS Cloud. The session also covers how you can continuously replicate servers to AWS using affordable storage, minimal compute, and point-in-time recovery, significantly reducing costs versus on-premises disaster recovery.

    Speaker: Joydipto Banerjee, Solutions Architect, AWS India
    Duration: 30mins


    Create an effective governance strategy for your feature data using AWS Lake Formation (Level 300)
    Organizations consider data to be their most valuable asset, but they need full visibility and control over how they produce and use data, including data used to train artificial intelligence models. Feature stores are becoming increasingly popular as a solution to this problem, but organizations must balance governance and compliance requirements with the need to provide multiple machine learning teams quick access to working environments and features. In this session, we discuss how AWS Lake Formation can help organizations manage feature data while addressing concerns around governance, security, and compliance.

    Speakers: 
    Gaurav Singh, Solutions Architect, AWS India
    Smiti Guru, Senior Solutions Architect, AWS India

    Duration: 30mins

  • Building future-proof applications

    Building future-proof applications

    About the track

    In this track, find out how AWS cloud databases can help you meet your distinct use cases all while delivering operational efficiency, performance, availability, scalability, security, and compliance.

    Choose the right databases for application modernization (Level 200)
    The one-size-fits-all monolithic database no longer fits today's needs, as more organizations build highly distributed applications using many purpose-built databases. The world is evolving, and the categories of databases continue to grow. We increasingly see customers wanting to build internet-scale applications that require diverse data models. In response, we offer the choice of key-value, wide-column, document, in-memory, graph, time-series, and ledger databases, each solving a specific problem or group of problems. In this session, learn more about AWS purpose-built databases that meet the scale, performance, and manageability requirements of modern applications.

    Speakers: 
    William Wong, Principal Database Solutions Architect, AWS
    Surendar Munimohan, Senior Database Solutions Architect, AWS

    Duration: 30mins


    Migrate and modernize commercial databases on AWS (Level 200)
    As data continues to grow, organizations are increasingly burdened with challenges around the scalability, reliability, and cost of running these database instances. Join this session to learn how to migrate and modernize your databases to solve these challenges, ensure application availability, and lower total cost of ownership (TCO). We discuss the key considerations to be aware of before taking your first step in migrating a commercial database to AWS. We dive deep into the migration approaches, tools, and services available to help you migrate your database to Amazon RDS for SQL Server or Oracle, and approaches to modernizing to an open-source database on Amazon Aurora. The session also features common use cases to further optimize your database for agility, performance, and scalability in your modern-day applications.

    Speakers: 
    Barry Ooi, Senior Database Solutions Architect, AWS
    Jay Shin, Senior Database Migration Specialist, AWS

    Duration: 30mins


    Optimize and modernize SQL Server on AWS (Level 300)
    Modernizing legacy SQL Server databases can be time-consuming and resource-intensive, because there is often additional work to migrate the application itself, including rewriting application code that interacts with the database. This session outlines the benefits of running SQL Server on AWS to achieve scalability, high availability, and disaster recovery, and to manage licensing costs. We discuss a strategy that typically involves application changes: modernizing to open-source databases or databases built for the cloud, avoiding expensive licenses (resulting in lower costs), vendor lock-in periods, and audits. The session includes a demo that provides a practical perspective on using Babelfish for Aurora PostgreSQL to start running queries in a fraction of the time associated with traditional database migration and to optimize your licensing spend.

    Speakers: 
    Sriwantha Attanayake, Principal Partner Solutions Architect, AWS
    Rita Ladda, Microsoft Specialist Solutions Architect, AWS

    Duration: 30mins


    Build scalable applications with Amazon Aurora (Level 200)
    For decades, applications have been built on old-guard commercial databases that are expensive and proprietary; many impose punitive licensing terms and are difficult to manage and scale. In this session, we share how to manage your data and build scalable, reliable, high-performance applications with Amazon Aurora, a MySQL- and PostgreSQL-compatible relational database. Aurora combines the performance and availability of a commercial-grade database with the simplicity and cost-effectiveness of an open-source database. We discuss scaling Amazon Aurora to fulfil application scalability, high availability, and disaster recovery requirements. This session also demonstrates how Amazon Aurora Serverless v2 enables you to scale database workloads instantly from hundreds to thousands of transactions per second and adjust capacity in fine-grained increments to provide just the right amount of database resources.

    Speaker: Roneel Kumar, Senior Database Solutions Architect, AWS
    Duration: 30mins


    Building high performance applications at any scale with Amazon DynamoDB (Level 200)
    NoSQL databases are purpose-built for specific data models and optimized for modern applications, such as mobile, web, and gaming applications, that require scalability, low latency, and flexibility. Join this session to learn how Amazon DynamoDB offers an enterprise-ready database that helps you deliver apps with consistent single-digit-millisecond performance and nearly unlimited throughput and storage. We dive deep into DynamoDB's features, indexes, cost components, and how to scale easily. We also demonstrate how to use NoSQL Workbench to design a data model for a highly scalable multiplayer online gaming application that delivers consistent single-digit-millisecond performance at any scale. Find out how you can easily monitor DynamoDB workloads at scale, observing key metrics such as latency, read and write requests per second, hot partitions, throttling, and errors.

    Speaker: Vaibhav Bhardwaj, Senior DynamoDB Specialist Solutions Architect, AWS
    Duration: 30mins
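The gaming data model mentioned above hinges on how partition and sort keys are composed. The sketch below is a hypothetical single-table key design (the `GAME#`/`SCORE#` naming is illustrative, not from the session): zero-padding the score makes DynamoDB's lexicographic sort-key order match numeric order, so a leaderboard is just a query sorted by sort key:

```python
# Hypothetical DynamoDB single-table key design for a game leaderboard.
# PK groups all items for one game session; SK orders players by score,
# so a Query with ScanIndexForward=False would return the leaderboard.

def leaderboard_item(game_id: str, score: int, player: str) -> dict:
    # Zero-pad the score so lexicographic sort order matches numeric order.
    return {
        "PK": f"GAME#{game_id}",
        "SK": f"SCORE#{score:010d}#PLAYER#{player}",
        "player": player,
        "score": score,
    }

items = [leaderboard_item("g1", s, p)
         for p, s in [("ava", 420), ("ben", 1337), ("cy", 77)]]
# Simulate the descending sort-key order DynamoDB would return.
top = sorted(items, key=lambda i: i["SK"], reverse=True)
print([i["player"] for i in top])  # highest score first
```

Appending the player name to the sort key keeps items unique when two players tie on score; the same composite-key idea generalizes to time-ordered feeds and version histories.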


    Operate critical document workloads at scale with Amazon DocumentDB (Level 200) 
    Organizations are increasingly turning to document-oriented databases to easily store and query data that evolves seamlessly with their application needs. Join this session to learn more about document databases and their use cases. Find out how Amazon DocumentDB (with MongoDB compatibility) enables you to run and operate mission-critical JSON workloads at scale, pushing past the scalability limits of traditional databases. This session also shows how the solution requires zero to minimal code changes to your existing applications. We provide guidance on integrating natively with existing AWS services and migration best practices, with a demo on how to get started with DocumentDB.

    Speaker: Gururaj Bayari, Senior Specialist Solutions Architect, AWS
    Duration: 30mins


    Build high performance modern applications using Amazon ElastiCache and MemoryDB for Redis (Level 200)
    Today’s modern applications demand high performance and responsiveness at any scale. In this session, we share how you can build high performance applications that impact revenue, customer experience, and satisfaction using a distributed in-memory data store with Amazon ElastiCache for Redis. Amazon ElastiCache is a fully managed in-memory caching service that accelerates application performance with microsecond latency. Discover how caching can supercharge your workloads, and how to build fast, secure, and highly available applications.

    Speaker: Shirish Kulkarni, Senior ElastiCache/MemDB Specialist, APJ, AWS
    Duration: 30mins
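The core caching technique behind sessions like this one is the cache-aside pattern: read from the cache first, and on a miss, load from the database and populate the cache with a TTL. The sketch below is a minimal, self-contained illustration in which a plain dict with expiry stands in for Redis; with a real Redis client the same pattern would typically use GET and SETEX:

```python
import time

class CacheAside:
    """Minimal cache-aside sketch. A dict with expiry stands in for a
    Redis-style in-memory store; `loader` stands in for a database read."""

    def __init__(self, loader, ttl_seconds=60):
        self.loader, self.ttl = loader, ttl_seconds
        self.store = {}          # key -> (value, expires_at)
        self.db_hits = 0         # counts how often we fell through to the DB

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]                       # cache hit: no DB call
        self.db_hits += 1                         # miss: load and populate
        value = self.loader(key)
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

# Hypothetical loader standing in for a slow database query.
cache = CacheAside(loader=lambda k: f"row-for-{k}")
print(cache.get("user:1"), cache.get("user:1"), cache.db_hits)
```

The second `get` is served from memory, so the loader runs once. The TTL bounds staleness; write-through or invalidation-on-update are the usual refinements when stronger freshness is needed.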

  • Deploy scalable cost-effective analytics workloads track 1

    Deploy scalable cost-effective analytics workloads track 1

    About the track

    Learn the approaches, tools, and frameworks to break down data silos, unify your data, and make data more accessible to everyone who needs it and can seamlessly discover, access, and analyze the data in a secure and governed way.

    Harness the power of data and choose the right analytics for your use case (Level 200)
    With the right analytics approach, organizations can unlock the full potential of their data, derive meaningful insights, and make informed decisions. This session outlines how to select the right analytics service for your use case to find insights from diverse data types, and how to make those insights available to the right people and systems. We cover how to create a strong foundation to manage data, eliminate data silos, and accelerate time to market with data lakes and purpose-built data stores on AWS.

    Speaker: Niladri Bhattacharya, Principal Analytics Solutions Architect, AWS
    Duration: 30mins


    Unlock data across organizational boundaries with built-in governance using Amazon DataZone (Level 200)
    Data systems are often sprawling, siloed, and complex, with diverse data sets spread across data lakes, data warehouses, cloud databases, SaaS applications, IoT devices, and on-premises systems. To gain value from your data, it needs to be accessible to the people and systems that need it for analytics. Join this session to learn how Amazon DataZone allows you to share, search, and discover data at scale across organizational boundaries and collaborate on data projects through a unified data analytics portal that gives you a personalized view of all your data while enforcing your governance and compliance policies.

    Speaker: Vikas Omer, Principal Analytics Specialist, AWS
    Duration: 30mins


    Centralize and unify data access for all data users using AWS Lake Formation (Level 200)
    Many organizations look to data, analytics, and ML solutions to make data available for wide-ranging analytics. This session covers how to build a unified data platform powered by an Amazon S3-based data lake, with purpose-built tools and processing engines to break down data silos, share data across lines of business, and make informed, data-driven decisions. Learn how to ensure tighter control over who can access the most sensitive data, following the principle of least-privilege access. Get insights on how to centrally manage access to available datasets and apply fine-grained permissions for various users with their choice of compute engines.

    Speaker: Praveen Kumar, Senior Analytics Solutions Architect, AWS
    Duration: 30mins


    Run Apache Spark securely and at-scale with minimum operational overhead (Level 200)
    It is essential for organizations to reduce time-to-market for their analytics workloads in an ever-evolving market landscape. Many organizations use Apache Spark to gain a competitive edge, optimize business costs, and accelerate decision-making. In this session, we cover options to run open-source-compatible Apache Spark jobs on AWS, and show how to run Apache Spark analytics workloads securely and at scale, with minimal configuration and operational overhead, on Amazon Athena for Apache Spark and Amazon EMR Serverless.

    Speaker: Amir Shenavandeh, Senior Big Data Solutions Architect, AWS
    Duration: 30mins


    Building a high-performance, transactional serverless data lake on AWS (Level 200)
    A data lake is a central repository to store structured and unstructured data at any scale and in various formats. However, tasks such as updating or deleting a subset of identified records and making concurrent changes can be time-consuming and costly. In this session, we explore the most common transactional data lake formats. Using real-world examples, we demonstrate how to build high-performance transactional data lakes that run analytics queries returning consistent and up-to-date results, with serverless analytics solutions including Apache Iceberg on Amazon EMR Serverless and Amazon Athena. The session addresses how to support ACID (atomicity, consistency, isolation, durability) transactions in a data lake, time travel, schema and partition evolution, and purging of individual records to meet regulatory and compliance needs as data lake use cases grow.

    Speaker: Indrajit Ghosalkar, Senior Solutions Architect, AWS
    Duration: 30mins
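To make the ACID and time-travel capabilities above concrete, the SQL below sketches what an Iceberg-backed table might look like in Athena. Table, column, and bucket names are hypothetical, and the syntax follows Athena's documented Iceberg support:

```sql
-- Hypothetical Iceberg table in Amazon Athena.
CREATE TABLE orders_lake (
  order_id string,
  amount   double,
  ts       timestamp
)
LOCATION 's3://example-bucket/orders/'          -- illustrative bucket
TBLPROPERTIES ('table_type' = 'ICEBERG');

-- Row-level ACID operation, e.g. a right-to-be-forgotten purge:
DELETE FROM orders_lake WHERE order_id = '42';

-- Time travel: query the table as it stood at an earlier point.
SELECT count(*)
FROM orders_lake
FOR TIMESTAMP AS OF TIMESTAMP '2023-01-01 00:00:00';
```

Under the hood the table format tracks snapshots, which is what makes the row-level delete and the point-in-time query possible without rewriting the whole data set.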


    Enhance your applications with Amazon QuickSight (Level 200)
    Every day, users in your organization make decisions that affect business outcomes. When they have the right information at the right time, they can make the choices that move the company in the right direction. This session covers the different embedding approaches you can use to embed Amazon QuickSight within your application and the use cases for embedding QuickSight dashboards. We demonstrate how you can integrate QuickSight into your application architecture, configure permissions and security, and use QuickSight's APIs to embed dashboards and reports.

    Speaker: Olivia Carline, QuickSight Solutions Architect, AWS
    Duration: 30mins


    Log analytics in 20 mins using centralized logging with Amazon OpenSearch solution (Level 200)
    Log analytics is essential for investigating issues, resolving downtime, and increasing application resilience. Applications may have many moving parts, and not all teams have the skills or time to build the centralized logging capabilities needed to correlate events between application tiers. In this session, we share how Amazon OpenSearch Service provides comprehensive log management and simplifies building log analytics pipelines. Learn how to pull logs from your CDN, firewall, network, applications, and databases with just a few clicks, onto a single dashboard, without writing any code. We demonstrate how to build a centralized log analytics platform with Amazon OpenSearch Service on AWS in 20 minutes, and walk through how to ingest and visualize logs from custom applications and other AWS solutions.

    Speaker: Muhammad Ali, Principal Analytics Solutions Architect, AWS
    Duration: 30mins

  • Deploy scalable cost-effective analytics workloads track 2

    Deploy scalable cost-effective analytics workloads track 2

    About the track

    Learn the approaches, tools, and frameworks to break down data silos, unify your data, and make data more accessible to everyone who needs it and can seamlessly discover, access, and analyze the data in a secure and governed way.

    Unlocking analytics capability by migrating your legacy database to an Amazon Redshift data warehouse (Level 200)
    Customers seek to drive more value from their data, which is becoming increasingly difficult with legacy applications. Amazon Redshift, a purpose-built, large-scale analytics service, provides deeper and quicker insights from your data throughout the business. In this session, we share the common scenarios we see from customers looking to migrate to Amazon Redshift and the benefits unlocked by modernizing their analytics platform. We showcase how you can quickly and easily migrate legacy databases to Amazon Redshift, take advantage of Redshift's features, and accelerate your time to insight with fast, easy, and secure analytics at scale.

    Speaker: Sean Beath, Redshift Specialist Solutions Architect, AWS
    Duration: 30mins


    Achieve faster time to value by unifying data silos with Amazon Redshift (Level 200)
    The data you need for insights is not just growing in volume; it is also getting more diverse (log data, clickstream, voice, video). It often sits in various data silos, even in third-party organizations. Users across departments, organizations, and regions are expected to work on transactionally consistent data; however, transforming data across these silos is fraught with issues like data duplication and loss, inconsistencies, inaccuracies, and delays from data movement and network bottlenecks. This session covers how Amazon Redshift, a fully managed cloud data warehouse service, breaks through data silos and enables data sharing across Regions and accounts. Learn the common data integration patterns for Amazon Redshift that leverage its native integration with a wide range of services. The session also demonstrates data sharing capabilities and how to share and access live data securely, without data movement or copying.

    Speaker: Paul Villena, Redshift Solutions Architect, AWS
    Duration: 30mins


    Approaches to simplify data integration for faster insights (Level 200)
    Organizations increasingly need near real-time analytics on operational data to improve user experience and optimize processes. However, operational analytics systems are often limited to a single database, or rely on custom data pipelines that are challenging and costly to manage and can introduce hours-long delays before transactional data is available for analytics. This session dives deep into Amazon Aurora's zero-ETL integration with Amazon Redshift. Learn how this no-code integration between Amazon Aurora and Amazon Redshift enables near real-time analytics and machine learning on petabytes of transactional data.

    Speaker: Partha Sarathi Sahoo, Senior Technical Account Manager, Analytics, AWS
    Duration: 30mins


    Simplify and accelerate data integration and ETL modernization with AWS Glue (Level 200)
    ETL (extract, transform, load) plays a key role in every data transformation journey. The first step in an analytics or machine learning project is to discover and prepare your data to obtain quality results. In this session, learn how AWS Glue, a serverless, scalable data integration solution, provides all-in-one capabilities for users of all skill sets to easily build and manage data pipelines. We demonstrate how to enable self-service data preparation across the organization and how to get started on ETL migration with AWS Glue.

    Speaker: Suman Debnath, Principal Developer Advocate, Data Engineering, AWS
    Duration: 30mins
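The extract, transform, load pattern this session centers on can be illustrated without any AWS service. Below is a minimal, self-contained sketch in plain Python (all names are hypothetical; a list stands in for the destination table, where AWS Glue would write to a real data store):

```python
import csv
import io

def run_etl(raw_csv):
    """Tiny ETL sketch: extract rows from CSV text, transform them
    (clean names, type-convert revenue, drop incomplete rows), and
    load into a list standing in for a destination table."""
    extracted = list(csv.DictReader(io.StringIO(raw_csv)))            # extract
    transformed = [
        {"name": r["name"].strip().title(), "revenue": float(r["revenue"])}
        for r in extracted
        if r["revenue"]                                               # drop rows with no revenue
    ]                                                                 # transform
    destination = []
    destination.extend(transformed)                                   # load
    return destination

raw = "name,revenue\n alice ,120.5\nBOB,80\ncarol,\n"
print(run_etl(raw))
# [{'name': 'Alice', 'revenue': 120.5}, {'name': 'Bob', 'revenue': 80.0}]
```

A managed service like AWS Glue handles the same three stages at scale, plus scheduling, cataloging, and failure handling that this sketch omits.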


    Building a streaming data platform with Amazon Managed Streaming for Apache Kafka and Amazon Kinesis (Level 200)
    Many organizations trying to build streaming analytics architectures from their real-time data sources struggle to find the architectural patterns that work for them. In addition, streaming data analysis requires unique skills, and stream processing concepts like windows, time, and state can be complex to master. This session covers the process of building a streaming data platform with AWS solutions including Amazon Managed Streaming for Apache Kafka and Amazon Kinesis. We dive deep into the architecture and capabilities of these services and demonstrate how they can be used to build a highly scalable, fault-tolerant, and performant data streaming solution. Learn key concepts such as data ingestion, stream processing, data storage, and data analytics, and get practical insights on best practices and use cases. By the end of the session, walk away with the knowledge of how to build a streaming data platform that manages large volumes of real-time data and unlocks valuable insights for your organization.

    Speaker: Masudur Rahaman Sayem, Senior Streaming Solutions Architect, AWS
    Duration: 30mins
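The windowing concept mentioned above is the heart of most stream processing. As a rough intuition builder, here is a minimal tumbling-window aggregation in plain Python (a hypothetical standalone sketch, not Kafka or Kinesis API code; real stream processors also handle late arrivals and state checkpointing):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # align to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Example: click events as (epoch_seconds, page) pairs
events = [(0, "home"), (3, "home"), (7, "cart"), (12, "home")]
print(tumbling_window_counts(events, window_seconds=10))
# {0: {'home': 2, 'cart': 1}, 10: {'home': 1}}
```

Sliding and session windows follow the same idea with different boundary rules, which is why window choice is one of the first design decisions in a streaming platform.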


    Breaking down data silos: Transfer data between SaaS applications and AWS (Level 200)
    Organizations store critical data in multiple locations and in distributed applications. Data ingestion, data preparation, and application integration for reporting and analysis can be complex, costly, and time consuming. In this session, we explain how to securely transfer data between software-as-a-service (SaaS) applications and AWS services such as Amazon S3 and Amazon Redshift in just a few clicks. We also demonstrate the use of other analytics and ML solutions to easily set up data flows in minutes without writing code and derive business insights.

    Speaker: Donnie Prakoso, Principal Developer Advocate, AWS
    Duration: 30mins


    Securely collaborate and analyze your data on AWS (Level 200)
    Organizations are constantly looking for ways to securely share their data and improve collaboration across multiple parties. In this session, learn how to use AWS technologies for secure data sharing and collaboration. We cover how to share data easily and cost effectively, with reduced risk, while preventing data duplication. The session also includes steps to create a secure data clean room in minutes and to add restrictions on the queries run by each participant with built-in, customizable analysis rules and privacy-enhancing controls.

    Speaker: Allison Quinn, Senior Analytics Specialist, AWS
    Duration: 30mins
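One core clean-room idea is that participants see only aggregates, never individual rows. The toy sketch below (hypothetical, in plain Python; not the API of any AWS service) illustrates an aggregation-threshold rule that suppresses groups too small to be safely reported:

```python
def clean_room_count(rows, group_key, min_group_size=3):
    """Return per-group counts, suppressing any group smaller than
    `min_group_size` - mimicking a clean room's aggregation-threshold
    analysis rule that protects small, re-identifiable groups."""
    counts = {}
    for row in rows:
        k = row[group_key]
        counts[k] = counts.get(k, 0) + 1
    return {k: v for k, v in counts.items() if v >= min_group_size}

rows = [{"city": "Sydney"}] * 4 + [{"city": "Perth"}] * 2
print(clean_room_count(rows, "city"))
# {'Sydney': 4}  (Perth suppressed: only 2 rows)
```

Production clean rooms layer many more controls (query allow-lists, join restrictions, differential privacy) on top of this basic thresholding idea.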

  • Data workloads on AWS

    Data workloads on AWS

    About the track

    Find out how to simplify and accelerate the process of building, deploying, and scaling data workloads on AWS.

    Accelerate business value from data with VMware Cloud on AWS and analytics (Level 200)
    Organizations are constantly looking for ways to dismantle data barriers and integrate intelligence from diverse systems and resources to get value from their data. This session focuses on how to integrate VMware Cloud on AWS with AWS analytics services, making it easier to draw meaningful insights from business data. Get insights on how to move existing on-premises applications and databases seamlessly to AWS by leveraging VMware Cloud on AWS, the hybrid cloud service that allows organizations to transition rapidly to the cloud. Once your applications and databases are in AWS, we showcase how to integrate with other AWS solutions to accelerate insights from your data. The session includes a demo of how AWS Lake Formation simplifies building, managing, securing, and sharing your data lakes. We also cover how to surface crucial insights, distribute intelligence across your organization, and publish interactive dashboards with Amazon QuickSight.
     
    Speaker: Greg Vinton, Specialist Solutions Architect, VMware Cloud on AWS
    Duration: 30mins


    Build, deploy, and scale data-intensive workloads with Data on Amazon EKS (Level 200)
    Kubernetes has emerged as a popular platform for running data and machine learning workloads due to improved agility, scalability, and portability. However, deploying and scaling data workloads on Kubernetes remains a challenge for many customers: there are many competing tools with varying levels of maturity, integration, and compatibility with existing platforms, and these workloads are often high-throughput, compute-intensive, and critical to business operations, requiring proper configuration to support their requirements. This session showcases how to leverage Data on EKS (DoEKS) to simplify and speed up the process of building, deploying, and scaling data workloads on Amazon Elastic Kubernetes Service (Amazon EKS). DoEKS builds on the foundation of the Amazon EKS Blueprints project and incorporates guidance and tools for the unique challenges and requirements of data-related workloads on Kubernetes. We dive deep into best practices, examples, and architectures aimed at making it easier to build, deploy, and scale data-intensive workloads on Amazon EKS, enabling you to simplify data processing and analysis, extract valuable insights, drive value creation, gain a competitive edge, and enhance customer experiences.
     
    Speaker: Frank Fan, Senior Containers Specialist Solutions Architect, AWS
    Duration: 30mins


    Spark up your Apache Spark workloads with Amazon EMR, EKS, and Amazon FSx for Lustre (Level 300)
    Builders are using Spark on Kubernetes to run big data and machine learning (ML) workloads for easier management and flexible deployments. Join this session as we dive deep into Amazon EMR on EKS to run your Apache Spark workloads on Amazon Elastic Kubernetes Service (Amazon EKS). Understand how this deployment option enables you to focus on running analytics workloads while Amazon EMR on EKS builds, configures, and manages containers for open-source applications. We demonstrate how to run a performance-optimized runtime for Apache Spark for faster workload execution and reduced running costs. Learn how this option offers robust support for multi-tenancy, enhanced security, and a plethora of options for building a unified observability solution. We also showcase how to efficiently operate batch and streaming workloads with EMR on EKS and Amazon FSx for Lustre to perform data-intensive operations quickly and efficiently.

    Speakers: 
    Vivekanand Tiwari, Cloud Architect, AWS Professional Services
    Haofei Feng, Senior Cloud Architect, AWS Professional Services

    Duration: 30mins

  • Generative AI

    Generative AI

    About the track

    Discover the potential of generative AI for your organization with this track. We discuss the techniques, share common use cases, and provide step-by-step guidance to integrate and implement real-world generative AI projects on AWS.

    Generative AI for Builders: How to build innovative solutions with AWS (Level 200)
    In this session, learn what generative AI on AWS is and uncover the benefits of leveraging these technologies to deliver outcomes, including reinventing your applications, creating innovative customer experiences, and driving unprecedented levels of productivity. We explain the AWS services and tools used to build and deploy generative AI models. Explore examples of generative AI applications across various domains and industries. We also share best practices and tips for designing and testing generative AI solutions.

    Speaker: Vatsal Shah, Senior Solutions Architect, AWS India
    Duration: 30mins


    Generative AI on AWS: What, why, and how (Level 200)
    Recent advances in generative AI make it one of the most disruptive sets of technologies and capabilities to hit the global market in decades. Organizations can use it to reinvent applications, create new customer experiences, drive unprecedented levels of productivity, and deliver innovation. This session provides a quick level-set, explaining what AI, ML, deep learning, and generative AI are. Understand how generative AI works and its benefits. We also explain the AWS services that support generative AI use cases and the resources to get you started quickly.

    Speaker: Pierre Semaan, Head of Partner Solutions Architecture, APJ, AWS


    Creating superhero avatars with generative AI on AWS and Stable Diffusion (Level 200)
    Stable Diffusion is a deep learning model that generates realistic, high-quality images and stunning art in just a few seconds. Join this session as we demonstrate how to create your own superhero avatar with Stable Diffusion and Amazon SageMaker JumpStart. With Amazon SageMaker JumpStart, you can access a pre-trained Stable Diffusion model that generates superhero avatars based on your preferences. We conclude with a step-by-step guide on how to apply these technologies in your own projects.

    Speaker: Vatsal Shah, Principal Solutions Architect, AWS India


    Enhancing chatbot performance with generative AI using Amazon Kendra and Amazon SageMaker hosted LLMs (Level 300)
    In this session, learn how to implement Retrieval Augmented Generation (RAG) using Amazon Kendra and SageMaker-hosted LLMs (large language models) to address data privacy and data residency concerns. We showcase how combining RAG with Amazon Kendra, a semantic search service, can significantly improve the response quality of chatbots and reduce hallucination, while ensuring data privacy. Find out how to deploy LLMs on Amazon SageMaker in a cost-effective and optimized way, and how to integrate RAG and LLMs with existing chatbot infrastructure for a seamless user experience. The session also outlines the benefits and practical applications of RAG enabled by Amazon Kendra and SageMaker-hosted LLMs, empowering you to create secure, responsive, and intelligent chatbots.

    Speaker: Ben Friebe, Senior ISV Solutions Architect, AWS
    Duration: 30mins
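The RAG pattern described above is simple at its core: retrieve relevant passages, then ground the model's prompt in them. Here is a minimal, hypothetical sketch in plain Python with a toy word-overlap retriever standing in for Amazon Kendra (a real system would call a search service or vector index, then send the prompt to a hosted LLM):

```python
def retrieve(query, documents, top_k=2):
    """Toy retriever: rank documents by word overlap with the query.
    Stand-in for a real semantic search service."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Assemble a prompt that grounds the LLM's answer in retrieved
    passages - the step that reduces hallucination."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Amazon Kendra is an intelligent search service.",
    "Amazon SageMaker lets you host large language models.",
    "AWS Lambda runs code without managing servers.",
]
print(build_rag_prompt("How do I host large language models?", docs))
```

Because only the retrieved snippets leave your data store, the same structure is what lets RAG address data privacy and residency concerns.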

  • Innovate with data and machine learning

    Innovate with data and machine learning

    About the track

    Discover the various machine learning integration services available on AWS that can help you build, deploy, and innovate at scale.

    Bringing machine learning to builders through databases and analytics (Level 200)
    Organizations today generate, process, and collect more data than ever to better understand market landscapes and address customers' evolving needs. In this session, learn the different ways AWS empowers builders by adding ML capabilities to databases and analytics solutions such as Amazon Aurora, Amazon Redshift, Amazon Neptune, and Amazon QuickSight. We share how bringing machine learning closer to the data accelerates the data analysis workflow, with deeper, faster, and more comprehensive insights that drive successful outcomes.

    Speaker: Pierre Semaan, GTM Strategy and Solutions, SMB APJ, AWS
    Duration: 30mins


    Accelerate insights from real-time streaming data at scale with databases, analytics and ML (Level 200)
    Processing streaming data can be complex, especially if you need to react in real time. Join this session to learn how to easily collect, process, and analyze real-time streaming data at scale so you can get timely insights and react quickly to new information. We explain common streaming data use cases and how to implement AWS solutions including analytics, data warehousing, serverless, and ML for your data analytics. We demonstrate use case scenarios, including how to detect online transaction fraud in near real time, and show how you can apply this approach to various data streaming and event-driven architectures, depending on the desired outcome and the actions to take to prevent fraud.

    Speakers: 
    Arun Balaji, Principal Prototyping Engineer, AWS India
    Gopalakrishnan Subramanian, Principal Database Specialist Solutions Architect, AWS India

    Duration: 30mins


    Simplify data analysis and anomaly detection across distributed data sets with federated queries and machine learning (Level 200)
    Builders today have to deal with data in various sources such as data lakes, databases, on-premises systems, other clouds, and third-party applications. It becomes more complex when they have to work with multiple cross-functional teams. This session showcases how to build an end-to-end architectural framework with analytics, serverless, and other AWS solutions to gather insights on the data and identify anomalous transactions. See how Amazon Athena Federated Query enables you to overcome these challenges by querying these disparate data sources directly, without moving or copying data, to generate high-performance analytics and insights. We demonstrate how Amazon Athena connects with Amazon SageMaker to run ML inferences with SQL commands on business transactions to gain insights and identify anomalous transactions. The session concludes with how to analyze the results of an Athena federated query in Amazon QuickSight to meet varying analytic needs from the same source of truth through interactive dashboards, paginated reports, embedded analytics, and natural language queries.

    Speakers: 
    Sam Gordon, Senior Cloud Architect, AWS Professional Services
    Ed Fraga, Cloud Architect, AWS Professional Services

    Duration: 30mins
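To build intuition for the anomaly detection step, here is a deliberately simple z-score detector in plain Python. It is a hypothetical stand-in for the ML inference the session runs via SageMaker, useful only to show what "flag transactions far from the norm" means mechanically (note a single large outlier inflates the standard deviation, which is why the threshold here is modest and why real systems use trained models or robust statistics):

```python
from statistics import mean, stdev

def zscore_anomalies(amounts, threshold=2.0):
    """Flag values whose distance from the mean exceeds `threshold`
    standard deviations. A crude stand-in for an ML anomaly model."""
    mu = mean(amounts)
    sigma = stdev(amounts)  # sample standard deviation
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

transactions = [20.0, 25.0, 22.0, 19.0, 24.0, 21.0, 5000.0]
print(zscore_anomalies(transactions))
# [5000.0]
```

The federated-query value proposition is that the `amounts` fed into such a model can come from many disparate sources via one SQL query, without first copying everything into one place.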


    Monitor, operate, and troubleshoot enterprise resources with AWS Chatbot (Level 300)
    Many organizations have disparate enterprise tools and platforms that do not integrate with each other, making data analysis, sharing, and collaboration difficult and time consuming. In this session, discover how to use AWS Chatbot, an interactive agent, to set up ChatOps for AWS resources. With AWS Chatbot, you can communicate and collaborate on IT operational tasks in your preferred collaboration tools. We cover how to centralize the management of infrastructure and applications, as well as how to automate and streamline your workflows. Gain insights into how to create an interactive and collaborative experience as users query and communicate in real time through the chat interface. The session also includes a demo of how to receive alerts, run commands that return diagnostic information, and create support cases, so you can collaborate and respond to events faster without context switching to other AWS tools.

    Speaker: Vikas Awasthi, Principal Cloud Architect, AWS Professional Services
    Duration: 30mins


    Building an intelligent insights discovery solution using analytics and ML (Level 200)
    Many organizations, regardless of size and industry, have to manage growing volumes of data. They need to sift through large amounts of unstructured and semi-structured data to derive meaningful insights, and at times must dive deep into various documents to do in-depth research and mine knowledge from them. To simplify this work, they want to accurately extract key data points and associations across teams and make business decisions in a timely manner. This requires an intelligent discovery and filtering mechanism to get the right information at the right time, which can be very time consuming to build. In this session, we demonstrate how to quickly extract accurate insights from your documents with analytics and ML. Learn how to use AWS technologies including Amazon S3, Amazon Textract, Amazon Comprehend, AWS Lambda, and Amazon Kendra to build a cognitive, intelligent discovery and search platform and extract meaningful insights from unstructured documents.

    Speaker: Darshit Vora, Senior Startup Solutions Architect, AWS India
    Duration: 30mins


    Improve semantic search relevance with analytics and machine learning (Level 200)
    The rise of semantic search engines has made search easier for many users. Semantic search uses ML to understand the intent and contextual meaning of queries, returning results that are more relevant than keyword-based text search. This session covers the importance of search relevance, semantic search, and the underlying architecture. We demonstrate how to build a semantic search engine and improve search relevance with Amazon SageMaker and Amazon OpenSearch Service.

    Speaker: Kamal Manchanda, Solutions Architect, AWS India
    Duration: 30mins
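The mechanics behind semantic relevance can be sketched in a few lines: embed query and documents as vectors, then rank by cosine similarity. The snippet below is a hypothetical illustration with hand-made three-dimensional vectors standing in for real embeddings (which a model on SageMaker would produce, and OpenSearch would index):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_rank(query_vec, named_doc_vecs):
    """Rank (name, vector) pairs by similarity to the query, best first."""
    return sorted(named_doc_vecs, key=lambda nv: cosine(query_vec, nv[1]), reverse=True)

docs = [
    ("returns policy", [0.9, 0.1, 0.0]),
    ("gpu pricing",    [0.1, 0.8, 0.3]),
]
query = [0.85, 0.15, 0.05]  # pretend embedding of "how do I send an item back?"
print([name for name, _ in semantic_rank(query, docs)])
# ['returns policy', 'gpu pricing']
```

Note the query shares no keywords with "returns policy"; the match comes entirely from vector proximity, which is exactly the relevance gain over text search that the session describes.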


    Real-time analytics at the edge and in the cloud (Level 200)
    Machine failures can adversely impact the operational efficiency of plants and factories, but identifying critical failures and examining physical parameters poses a challenge. To improve the fault detection process, it is crucial to monitor production systems and collect performance data in real time. In this session, we discuss and demonstrate various options to securely connect equipment and collect its data to gain real-time insights at the edge and in the cloud using AWS analytics and ML. In addition, we demo a use case where data from multiple pieces of equipment is collected and critical parameters are monitored in real time at the edge. Furthermore, we showcase a centralized dashboard with consolidated data from multiple sites.

    Speakers: 
    Arun Balaji, Principal Prototyping Engineer, AWS India
    Vikram Shitole, Prototyping Engineer, AWS India

    Duration: 30mins

  • Closing remarks

    Closing remarks

    Closing remarks
    To make decisions quickly, organizations want to store any amount of data in open formats, break down disconnected data silos, empower people to run analytics or machine learning using their preferred tool or technique, and manage who has access to specific pieces of data with the proper security and data governance controls. This session provides a recap of the day's sessions and addresses some commonly asked questions related to data analytics and machine learning. Learn how AWS is freeing organizations and builders to solve real-world business problems in any industry and innovate with confidence. Uncover how technologies like machine learning and analytics can unlock opportunities that were previously too difficult or impossible, equipping organizations with insights, transforming industries, and reshaping how customers consume and engage with products and services.

  •  Korean
  •  Japanese

Session levels designed for you

INTRODUCTORY
Level 100

Sessions are focused on providing an overview of AWS services and features, with the assumption that attendees are new to the topic.

INTERMEDIATE
Level 200

Sessions are focused on providing best practices, details of service features and demos with the assumption that attendees have introductory knowledge of the topics.

ADVANCED
Level 300

Sessions dive deeper into the selected topic. Presenters assume that the audience has some familiarity with the topic, but may or may not have direct experience implementing a similar solution.

Conference timings

  • Australia & New Zealand
  • Australia
     GMT+10 (AEST)

    Timing 1: 9.30am - 3.00pm
    Timing 2: 3.30pm - 9.00pm

    New Zealand
     GMT+12 (NZST)

    Timing 1: 11.30am - 5.00pm
    Timing 2: 5.30pm - 11.00pm

  • ASEAN & Pakistan
  • Singapore
    Malaysia
    Philippines
     GMT+8 (SGT/MYT/PHT)

    Timing 1: 7.30am - 1.00pm
    Timing 2: 1.30pm - 7.00pm
    Timing 3 Keynote rebroadcast:
    8.00pm - 9.00pm

    Thailand
    Vietnam
     GMT+7 (ICT)

    Timing 1: 6.30am - 12.00pm
    Timing 2: 12.30pm - 6.00pm
    Timing 3 Keynote rebroadcast:
    7.00pm - 8.00pm

    Indonesia
     GMT+7 (WIB)

    Timing 1: 06:30 - 12:00
    Timing 2: 12:30 - 18:00
    Timing 3 Keynote rebroadcast: 19:00 - 20:00

    Pakistan
     GMT+5 (PKT)

    Timing 1: 4.30am - 10.00am
    Timing 2: 10.30am - 4.00pm
    Timing 3 Keynote rebroadcast:
    5.00pm - 6.00pm

  • India & Sri Lanka
  • India
     GMT+5.30 (IST)

    Timing 1: 5.00am - 10.30am
    Timing 2: 11.00am - 4.30pm
    Timing 3 Keynote rebroadcast:
    5.30pm - 6.30pm

    Sri Lanka
     GMT+5.30 (SLST)

    Timing 1: 5.00am - 10.30am
    Timing 2: 11.00am - 4.30pm
    Timing 3 Keynote rebroadcast:
    5.30pm - 6.30pm

  • Korea
  • Korea
     GMT+9 (KST)

    Timing 1: 8.30am - 2.00pm
    Timing 2: 2.30pm - 8.00pm

  • Japan
  • Japan
     GMT+9 (JST)

    Timing 1: 8.30am - 2.00pm
    Timing 2: 2.30pm - 8.00pm


Featured speakers

Dean Samuels
Chief Technologist, ASEAN, AWS

Kris Howard
Head of Dev Relations, APJ, AWS

Emily Arnautovic
Principal Solutions Architect, APJ, AWS

Olivier Klein
Chief Technologist, APJ, AWS

Learn more about Data on AWS

Leader in IDC MarketScape: APeJ (Asia Pacific excluding Japan) Analytic Data Platforms for Decision Support 2023 Vendor Assessment

3x faster with Amazon EMR than standard Apache Spark

5x better price performance than other cloud data warehouses

200,000+ data lakes run on AWS

70% savings on storage cost for data in data lakes

550,000+ databases migrated to AWS

100,000+ customers use AWS for machine learning

200+ fully featured services for a wide range of technologies, industries, and use cases

99.999999999% of data durability


Frequently Asked Questions

Start building your skills with AWS Free Tier

Get familiar with AWS products and services by signing up for an AWS account and enjoy free offers for Amazon EC2, Amazon S3, Amazon Redshift and over 100 AWS services.
View AWS Free Tier Details »

Olivier is a hands-on technologist with more than 10 years of experience in the industry and has been helping customers build resilient, scalable, secure, and cost-effective applications and create innovative and data-driven business models. He advises on how emerging technologies in the AI, ML, and IoT spaces can help create new products, make existing processes more efficient, provide overall business insights, and leverage new engagement channels for consumers.


Emily works with large enterprises helping them understand the capabilities of cloud computing with AWS and working in partnership across the organisation to take advantage of the speed and agility it can bring. She spends most of her time focusing on financial services customers, where managing and governing cloud workloads securely at scale is at the core. Prior to working with AWS, Emily was a senior technology architect in a large systems integration consultancy. She has many years' consulting and software implementation experience engaging with senior stakeholders and decision makers on large-scale, complex software and infrastructure delivery programmes.


Kristine has twenty years of experience helping companies build, working as a software engineer, business analyst, and team director. She is a frequent speaker at tech events and meetups, including AWS Summits and TEDx Melbourne. Kristine is dedicated to meeting and working with developers across the region and now heads up Developer Relations for AWS in APJ.


Dean comes from an IT infrastructure background and has extensive experience in infrastructure virtualization and automation. He has been with AWS for the past ten years and has had the opportunity to work with businesses of all sizes and industries. Dean is committed to helping customers design, implement, and optimize their application environments for the public cloud to allow them to become more innovative, agile, and secure.