AI Conclave Bengaluru
Agentic AI & Data Edition
Agenda
| Session Time | Session Title |
| --- | --- |
| 8:00 AM - 9:30 AM | Registrations, Expo & Networking Breakfast |
| 9:30 AM - 10:00 AM | Welcome Note |
| 10:00 AM - 11:30 AM | Keynote |
| 11:30 AM - 12:00 PM | AI Awards |
| 12:00 PM - 1:00 PM | Networking Lunch |
| 1:00 PM - 3:30 PM | Breakout Tracks |
| 3:30 PM - 4:30 PM | Closing Keynote |
| Session Time | Session Title |
| --- | --- |
| 1:00 PM - 1:30 PM | **The role of data in agentic AI.** Everyone knows you need good data for effective AI agents and applications, but what does that truly mean? In this session, discover the role your most valuable data plays in agentic AI applications. Learn how Model Context Protocol (MCP) lets you access your data wherever it is, whether you use it for Retrieval Augmented Generation (RAG) or use agents to operate on data. Understand how agentic AI data access patterns map to modern data management practices. Discover best-practice architectures using AWS database services like Amazon Aurora and OpenSearch Service, along with the analytical, data processing, and streaming experiences found in Amazon SageMaker and Amazon S3. Learn data lake, governance, and data quality concepts, and how Amazon Bedrock AgentCore, Bedrock Knowledge Bases, and other features tie an end-to-end agentic AI solution together (see the retrieval sketch after this table). |
| 1:30 PM - 2:00 PM | **AI Agents from POC to production with Amazon Bedrock AgentCore.** Going from a pilot to a secure, scalable, and reliable agent system requires more than great prompts. In this session, we'll show how Amazon Bedrock AgentCore helps teams move from proof of concept to production using enterprise-grade services, including Runtime, Gateway, Identity, Memory, Observability, Code Interpreter, Browser, Policy, and Evaluations. Learn how these services simplify development, reduce operational burden, and improve time to value. |
| 2:00 PM - 2:30 PM | **Master AI model development with Amazon SageMaker AI.** Unlock the complete AI development lifecycle with Amazon SageMaker AI's comprehensive platform. This technical session explores how SageMaker AI provides an integrated end-to-end solution for building, training, deploying, and managing AI models at scale—supporting every stage from initial experimentation to production operations. Dive into key features like serverless model customization, end-to-end development with HyperPod, efficient model inference options, and flexible development environments. Join this session to learn how SageMaker AI accelerates AI innovation while optimizing resource utilization (see the deployment sketch after this table). |
| 2:30 PM - 3:00 PM | **Building an enterprise-grade GenAI platform on AWS.** As enterprises scale GenAI from pilots to production, building secure, reusable platforms is essential. This technical deep dive reveals how to architect enterprise-grade generative AI on AWS using proven patterns and reference architectures. Explore model access, prompt orchestration, RAG, fine-tuning, and agentic workflows with Amazon Bedrock, SageMaker AI, and Bedrock AgentCore. Address critical platform concerns: multi-account strategies, access controls, data isolation, observability, CI/CD pipelines, and guardrails (see the guardrail sketch after this table). Learn how leading enterprises combine AWS services with open-source tools to accelerate deployment, enforce governance, and deliver AI-as-a-Service internally—while maintaining cost control and measurable ROI for sustainable AI innovation at scale. |
| 3:00 PM - 3:30 PM | **The next frontier: Building the agentic future.** Explore the transformative future of AI agents with AWS's Nova ecosystem. Discover Nova Forge's "open training" approach, which enables organizations to build proprietary models infused with their unique data, and Nova Act's reliability in browser-based automation. Learn how three new frontier agents—the Kiro autonomous developer, the Security Agent consultant, and the DevOps operational specialist—work independently for extended periods as seamless extensions of your teams. See how AWS Transform accelerates legacy application modernization, reducing maintenance costs. Explore simplified model customization through Reinforcement Fine-Tuning in Amazon Bedrock and serverless capabilities in SageMaker AI. |
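For the data-and-agents session above, here is a minimal retrieval sketch showing the kind of call an existing Amazon Bedrock Knowledge Base supports for RAG grounding. The knowledge base ID, region, and query are placeholders, not values from the session.

```python
import boto3

# Minimal RAG retrieval sketch against an existing Bedrock Knowledge Base.
# The knowledge base ID and region are placeholders for your own resources.
bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="ap-south-1")

response = bedrock_agent_runtime.retrieve(
    knowledgeBaseId="KB1234567890",  # placeholder ID
    retrievalQuery={"text": "What is our refund policy for enterprise customers?"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {"numberOfResults": 5}
    },
)

# Each result carries the matched chunk and its source location, ready to be
# passed to an agent or model prompt as grounding context.
for result in response["retrievalResults"]:
    print(result["content"]["text"][:200], result.get("location"))
```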
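For the SageMaker AI session, a rough deployment sketch using the SageMaker Python SDK's JumpStart classes. The model ID and instance type are illustrative assumptions, and an execution role is assumed to be available (for example, when running in SageMaker Studio).

```python
# Sketch: deploying a pre-trained JumpStart model to a real-time SageMaker endpoint.
# The model_id and instance type are illustrative; some models also require
# accept_eula=True at deploy time.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-llm-mistral-7b-instruct")
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)

# Invoke the endpoint with a simple text-generation payload.
payload = {
    "inputs": "Summarize the benefits of managed inference.",
    "parameters": {"max_new_tokens": 128},
}
print(predictor.predict(payload))
```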
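For the GenAI platform session, a minimal sketch of attaching an Amazon Bedrock guardrail to an inference call, one of the governance controls the abstract mentions. The model ID, guardrail ID, and guardrail version are placeholders for resources you would create yourself.

```python
import boto3

# Sketch of a governed model call: the guardrail ID/version are placeholders
# for a guardrail already created in Amazon Bedrock.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="ap-south-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "Draft a customer apology email."}]}],
    guardrailConfig={
        "guardrailIdentifier": "gr-example-id",  # placeholder
        "guardrailVersion": "1",
    },
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```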
| Session Time | Session Title |
| --- | --- |
| 1:00 PM - 1:30 PM | **A leader's guide to agentic AI.** In a landscape of rapid AI evolution and daily headlines, senior leaders need clarity on what agentic AI means for their business. This session cuts through the noise to provide a practical understanding of autonomous AI agents and their strategic implications for your organization. We will demystify agentic AI, focus on what matters most to leaders, and share how both management and architecture models will need to evolve to get value out of autonomous AI systems. |
| 1:30 PM - 2:00 PM | **Scaling generative AI and agentic AI use cases across industries.** Discover how leading organizations are using GenAI and agentic AI to solve complex business challenges. This session showcases real-world GenAI implementations across industries, demonstrating proven frameworks for scaling POCs into production-ready solutions that drive measurable business outcomes. |
| 2:00 PM - 2:30 PM | **Transform business intelligence with AI-powered analytics.** Discover how AI transforms traditional business intelligence through integrated analytics. Learn implementation strategies for combining machine learning, natural language processing, and automated insights. Explore architectural frameworks that enable real-time, adaptive BI systems that respond to business needs. See case studies demonstrating how organizations achieve faster, more accurate decisions through comprehensive AI-powered data analysis. |
| 2:30 PM - 3:00 PM | **Responsible AI: Building systems people can trust.** Building AI that people can trust means thinking about safety, fairness, and transparency from day one, not as an afterthought. Organizations that succeed with AI use proven frameworks to guide their decisions, asking important questions like: Is our AI treating everyone fairly? Can we explain how it makes decisions? What happens if something goes wrong? For AI agents that work independently, this becomes even more important. Smart companies build in checkpoints where humans review key decisions, set clear boundaries for what AI can and cannot do, and continuously monitor performance. Companies like Indeed show how this approach works in practice—they've built systems that protect millions of job seekers while still moving fast and innovating. The key is making responsible AI part of your culture, not just a compliance checklist. |
| 3:00 PM - 3:30 PM | **Balance cost, performance & reliability for AI at enterprise scale.** Deploying generative AI at enterprise scale requires balancing performance, cost, and reliability across diverse business purposes and use cases. Amazon Bedrock offers a complete portfolio of inference options: cross-region inference for elastic scaling, on-demand service tiers for balancing performance and cost, optimization options like prompt caching for improving latency while significantly reducing cost, and batch inference for cost-effective bulk processing. This interactive session covers the tools and approaches needed to architect hybrid inference strategies that maximize price-performance as AI workloads scale (see the inference sketch after this table). |
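As a rough illustration of one inference lever this track discusses, the sketch below calls a model through a cross-region inference profile so Bedrock can route requests across Regions. The profile ID is an assumed example; prompt caching and batch inference are separate options configured for supported models.

```python
import boto3

# Sketch: invoking a model via a cross-region inference profile for elastic scaling.
# The profile ID below is an assumed example; use one available in your account.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="us.anthropic.claude-3-5-sonnet-20240620-v1:0",  # cross-region profile ID (assumed)
    messages=[{
        "role": "user",
        "content": [{"text": "Classify this ticket: 'My invoice total looks wrong.'"}],
    }],
    inferenceConfig={"maxTokens": 200},
)
print(response["output"]["message"]["content"][0]["text"])
```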
| Session Time | Session Title |
| --- | --- |
| 1:00 PM - 1:30 PM | **Build modern applications with Amazon Aurora DSQL.** As organizations modernize applications for scalability and agility, database architecture becomes crucial. This session explores Amazon Aurora DSQL, a serverless, distributed SQL database for next-generation workloads. Learn to build modern serverless applications using Amazon Aurora DSQL with Amazon API Gateway and AWS Lambda (see the Lambda sketch after this table). Examine performance characteristics, scaling behavior, and cost considerations, along with current limitations. Gain insights on choosing Amazon Aurora DSQL for modernization efforts, comparing it to alternatives like Amazon DynamoDB and Amazon Aurora Serverless v2. |
| 1:30 PM - 2:00 PM | **Best practices for building Apache Iceberg-based lakehouse architectures on AWS.** Discover advanced strategies for implementing Apache Iceberg on AWS, focusing on Amazon S3 Tables and integration of the Iceberg REST Catalog with the lakehouse in Amazon SageMaker. We'll cover performance optimization techniques for Amazon Athena and Amazon Redshift queries, real-time processing using Apache Spark, and integration with Amazon EMR, AWS Glue, and Trino. Explore practical implementations of zero-ETL, change data capture (CDC) patterns, and the medallion architecture (see the Spark sketch after this table). Gain hands-on expertise in implementing enterprise-grade lakehouse solutions with Iceberg on AWS. |
| 2:00 PM - 2:30 PM | **Deep dive into model serving engines on AWS.** Deploying large language models effectively requires navigating complex choices across inference frameworks and engines—decisions that can impact model performance. This comprehensive session delivers a data-driven evaluation of leading LLM serving frameworks, including vLLM, TensorRT-LLM, TGI, SGLang, SageMaker AI LMI, and Dynamo, with a deep-dive analysis of their performance characteristics and deployment trade-offs. We start with inference fundamentals (KV caching, batching strategies, and quantization), then dive into head-to-head framework comparisons measuring latency, throughput, and cost efficiency. You'll gain practical expertise deploying these frameworks on both Amazon SageMaker and Amazon EKS, with production-ready patterns for auto-scaling, monitoring, and cost optimization (see the vLLM sketch after this table). |
| 2:30 PM - 3:00 PM | **Spec-driven development: Shaping the next generation of AI software.** When you are working through a complex feature, you need to make trade-offs between requirements, system design, and implementation details before you deal with the nitty-gritty of writing the code. In this session, learn how spec-driven development can help you go from idea to production without sacrificing any of the crucial development details. |
| 3:00 PM - 3:30 PM | **Oracle Database@AWS.** Oracle Database@AWS is a multicloud offering resulting from a strategic partnership between Oracle and Amazon Web Services (AWS) that allows customers to run Oracle Database services on Oracle Cloud Infrastructure (OCI) deployed within AWS data centers. This unique architecture provides high-performance, low-latency connectivity between Oracle databases and applications running on AWS. |
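For the Aurora DSQL session, a sketch (not a reference implementation) of an AWS Lambda handler querying a DSQL cluster with psycopg. The cluster endpoint, the orders table, the event shape, and the token-generation call are all assumptions; DSQL authenticates with short-lived IAM tokens used as the database password.

```python
# Sketch of a Lambda handler that queries an Aurora DSQL cluster with psycopg.
# CLUSTER_ENDPOINT, the "orders" table, and the event shape are placeholders.
import os
import boto3
import psycopg

CLUSTER_ENDPOINT = os.environ["DSQL_CLUSTER_ENDPOINT"]  # placeholder cluster endpoint
REGION = os.environ.get("AWS_REGION", "ap-south-1")


def get_dsql_auth_token() -> str:
    # Assumed helper: generates a short-lived IAM auth token to use as the password,
    # following AWS examples for the DSQL SDK; treat the call name as an assumption.
    return boto3.client("dsql", region_name=REGION).generate_db_connect_admin_auth_token(
        CLUSTER_ENDPOINT, REGION
    )


def handler(event, context):
    # Open a short-lived connection per invocation; DSQL is serverless, so there is
    # no instance to manage, though pooling still helps at high request volume.
    with psycopg.connect(
        host=CLUSTER_ENDPOINT,
        user="admin",
        password=get_dsql_auth_token(),
        dbname="postgres",
        sslmode="require",
    ) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT order_id, status FROM orders WHERE customer_id = %s",
                (event["customerId"],),
            )
            rows = cur.fetchall()
    return {"orders": [{"order_id": r[0], "status": r[1]} for r in rows]}
```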
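For the Iceberg session, a minimal PySpark sketch of an Iceberg catalog backed by AWS Glue plus a MERGE-based CDC-style upsert in a medallion layout. The catalog name, warehouse path, and table names are placeholders, and the Iceberg AWS runtime jars are assumed to be on the Spark classpath (for example on Amazon EMR or AWS Glue).

```python
# Sketch: Spark session configured for an Iceberg catalog on AWS Glue, then a
# MERGE INTO for a simple CDC upsert. Names and paths are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-lakehouse-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lakehouse.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.lakehouse.warehouse", "s3://example-bucket/warehouse/")
    .config("spark.sql.catalog.lakehouse.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .getOrCreate()
)

# Bronze-to-silver upsert (medallion style): apply the latest change records.
spark.sql("""
    MERGE INTO lakehouse.silver.customers AS t
    USING lakehouse.bronze.customer_changes AS s
    ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```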
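For the model-serving session, a short vLLM sketch showing offline batch generation, the simplest way to exercise one of the engines being compared. The model ID is an example from the Hugging Face Hub.

```python
# Sketch: offline batch generation with vLLM. Continuous batching and KV caching
# are handled by the engine; the model ID is an illustrative example.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.3")
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

prompts = [
    "Explain KV caching in one sentence.",
    "Why does continuous batching improve throughput?",
]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text.strip())
```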
| Session Time | Session Title |
| --- | --- |
| 1:00 PM - 1:10 PM | Welcome note: Data & AI built on AWS for India |
| 1:10 PM - 1:30 PM | Unlocking business value with a modern data platform – J&K Bank Case Study |
| 1:30 PM - 1:50 PM | Building intelligent operations: How Swiggy leverages agentic AI to enhance app availability and dev velocity |
| 1:50 PM - 2:10 PM | From data foundation to Agentic AI - Freshworks' journey at scale |
| 2:10 PM - 2:30 PM | Merchant QR to intelligent payments at scale on AWS |
| 2:30 PM - 2:50 PM | The agentic lender |
| 2:50 PM - 3:10 PM | Turning emerging online sellers into power brands through a data-driven operating system |
| 3:10 PM - 3:30 PM | Architecting the future of sports entertainment: SportsCraft on AWS |