AWS Database Blog
Category: Artificial Intelligence
Build durable AI agents with LangGraph and Amazon DynamoDB
In this post, we show you how to build production-ready AI agents with durable state management by pairing LangGraph with Amazon DynamoDB through the new DynamoDBSaver connector, an AWS-maintained LangGraph checkpoint library for DynamoDB.
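For orientation, the following is a minimal sketch of how a LangGraph checkpointer plugs into a compiled graph. The DynamoDBSaver import path and constructor arguments shown in the comments are assumptions to verify against the AWS-maintained checkpoint library, not its confirmed API.

# Minimal LangGraph graph; the DynamoDB checkpointer wiring (commented out) is an assumption.
from langchain_core.messages import AIMessage
from langgraph.graph import StateGraph, MessagesState, START, END

def respond(state: MessagesState):
    # Placeholder node; a real agent would call an LLM and tools here.
    return {"messages": [AIMessage(content="ack")]}

builder = StateGraph(MessagesState)
builder.add_node("respond", respond)
builder.add_edge(START, "respond")
builder.add_edge("respond", END)

# Assumed API of the AWS-maintained checkpoint library; verify against its documentation:
# from langgraph_checkpoint_aws import DynamoDBSaver
# saver = DynamoDBSaver(table_name="agent-checkpoints")
# graph = builder.compile(checkpointer=saver)
# graph.invoke(
#     {"messages": [("user", "hi")]},
#     config={"configurable": {"thread_id": "user-123"}},  # thread_id keys the durable state
# )
graph = builder.compile()  # without a checkpointer, state lives only in memory

With a checkpointer attached, every step of the conversation is persisted, so the same thread_id can resume after a crash or a new deployment.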
Introducing Amazon Aurora powers for Kiro
In this post, we show how you can turn your ideas into full-stack applications with Kiro powers for Aurora. We explore how this new capability, Kiro powers, builds Amazon Aurora best practices into your development workflow, automatically implementing configurations and optimizations that help make sure your database layer is production-ready from day one.
Build a fitness center management application with Kiro using Amazon DocumentDB (with MongoDB compatibility)
In this post, we walk through how we used Kiro, an agentic Integrated Development Environment (IDE), to build a complete fitness center management application that digitizes paper-based fitness tracking. We explore Kiro’s spec-driven development workflow and see how it transforms complex application development into a streamlined, iterative process. Our solution uses Amazon DocumentDB as the backend.
Lower cost and latency for AI using Amazon ElastiCache as a semantic cache with Amazon Bedrock
This post shows how to build a semantic cache using vector search on Amazon ElastiCache for Valkey. As detailed in the Impact section of this post, our experiments with semantic caching reduced LLM inference cost by up to 86 percent and improved average end-to-end latency for queries by up to 88 percent.
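To make the idea concrete, here is a hedged sketch of the cache-lookup step, assuming an FT-style vector index named idx:sem_cache already exists on the cache and that the application has boto3 access to Amazon Bedrock for embeddings. The index name, field names, and distance threshold are illustrative, not taken from the post.

import json
import boto3
import numpy as np
import redis
from redis.commands.search.query import Query

bedrock = boto3.client("bedrock-runtime")
cache = redis.Redis(host="my-valkey-endpoint", port=6379, ssl=True)  # hypothetical endpoint

def embed(text: str) -> np.ndarray:
    # Amazon Titan text embeddings via Bedrock
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(resp["body"].read())["embedding"], dtype=np.float32)

def cached_answer(question: str, max_distance: float = 0.2):
    vec = embed(question)
    q = (
        Query("*=>[KNN 1 @embedding $vec AS dist]")
        .return_fields("answer", "dist")
        .dialect(2)
    )
    hits = cache.ft("idx:sem_cache").search(q, query_params={"vec": vec.tobytes()}).docs
    if hits and float(hits[0].dist) <= max_distance:
        return hits[0].answer  # semantic hit: return the cached response and skip LLM inference
    return None  # miss: call the LLM, then store the question vector and answer in the cache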
Accelerate generative AI use cases with Amazon Bedrock and Oracle Database@AWS
In this post, we walk through the steps of integrating Oracle Database@AWS (ODB@AWS) with Amazon Bedrock by creating a Retrieval Augmented Generation (RAG) assistant application that uses an Amazon Titan embedding model in Amazon Bedrock and vectors stored in Oracle AI Database 26ai.
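As a hedged sketch of the retrieval step in such an assistant, the following embeds the user question with an Amazon Titan model on Bedrock and ranks stored vectors in the Oracle database. The table name, column names, and the array.array vector bind are illustrative assumptions rather than the post's actual schema.

import array
import json
import boto3
import oracledb  # python-oracledb driver

bedrock = boto3.client("bedrock-runtime")

def embed(text: str) -> array.array:
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return array.array("f", json.loads(resp["body"].read())["embedding"])

def retrieve_context(conn: oracledb.Connection, question: str) -> list[str]:
    # Return the three chunks whose stored vectors are closest to the question vector.
    qvec = embed(question)
    with conn.cursor() as cur:
        cur.execute(
            """SELECT chunk_text
                 FROM doc_chunks
                ORDER BY VECTOR_DISTANCE(embedding, :qv, COSINE)
                FETCH FIRST 3 ROWS ONLY""",
            qv=qvec,
        )
        return [row[0] for row in cur]

The retrieved chunks would then be passed to a Bedrock text model as context for the final answer.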
AI-powered tuning tools for Amazon RDS for PostgreSQL and Amazon Aurora PostgreSQL databases: PI Reporter
In this post, we explore PI Reporter, an artificial intelligence and machine learning (AI/ML)-powered database monitoring tool for PostgreSQL that works with self-managed databases as well as managed services such as Amazon RDS for PostgreSQL and Amazon Aurora PostgreSQL.
Key components of a data-driven agentic AI application
In this post, we look at the costs, benefits, and drawbacks of replacing services for agentic AI with direct database access, including services that work well and are proven in production as well as new services yet to be built. We then examine the anatomy of an agentic AI application and what would factor into such decisions.
Build a dynamic workflow orchestration engine with Amazon DynamoDB and AWS Lambda
In this post, I show you how to build a serverless workflow orchestration engine that uses Amazon DynamoDB and AWS Lambda. The complete implementation is available in a GitHub repository, which includes two fully functional examples that you can deploy and run immediately to see the orchestration engine in action.
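One building block any engine like this needs is an atomic state transition, so a retried Lambda invocation cannot execute the same step twice. Here is a hedged sketch using a DynamoDB conditional update; the table and attribute names are illustrative assumptions, not the repository's schema.

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("workflow-state")  # hypothetical table keyed by workflow_id

def try_advance(workflow_id: str, expected_step: str, next_step: str) -> bool:
    # Move the workflow to next_step only if it is still on expected_step.
    try:
        table.update_item(
            Key={"workflow_id": workflow_id},
            UpdateExpression="SET current_step = :nxt",
            ConditionExpression="current_step = :expected",
            ExpressionAttributeValues={":nxt": next_step, ":expected": expected_step},
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # another invocation already advanced this workflow
        raise

def handler(event, context):
    # Lambda entry point: advance the step named in the event, then trigger the next unit of work.
    if try_advance(event["workflow_id"], event["step"], event["next_step"]):
        pass  # for example, invoke the next step's Lambda or publish an event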
Raising the bar on Amazon DynamoDB data modeling
In April 2025, we introduced the Amazon DynamoDB data modeling tool for the Model Context Protocol (MCP) server. The tool guides you through a conversation, collects your requirements, and produces a data model that includes tables, indexes, and cost considerations. In this post, we show you how we built an automated evaluation framework for the tool and how it helped us deliver reliable DynamoDB data modeling guidance at scale.
Automating vector embedding generation in Amazon Aurora PostgreSQL with Amazon Bedrock
In this post, we explore several approaches for automating the generation of vector embeddings in Amazon Aurora PostgreSQL-Compatible Edition when data is inserted or modified in the database. Each approach offers different trade-offs in terms of complexity, latency, reliability, and scalability, allowing you to choose the best fit for your specific application needs.
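One common asynchronous pattern in this space is a worker, for example a scheduled Lambda function, that embeds newly written rows with Amazon Bedrock and stores the vectors in a pgvector column. The sketch below assumes a documents table with body and embedding columns; those names, and the use of a NULL embedding as the "needs work" marker, are illustrative assumptions rather than one of the post's specific implementations.

import json
import boto3
import psycopg2

bedrock = boto3.client("bedrock-runtime")

def embed(text: str) -> list:
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]

def backfill_embeddings(dsn: str, batch_size: int = 50) -> None:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        # Rows with a NULL embedding were inserted or modified since the last run.
        cur.execute(
            "SELECT id, body FROM documents WHERE embedding IS NULL LIMIT %s",
            (batch_size,),
        )
        for doc_id, body in cur.fetchall():
            vec = embed(body)
            literal = "[" + ",".join(str(x) for x in vec) + "]"  # pgvector text format
            cur.execute(
                "UPDATE documents SET embedding = %s::vector WHERE id = %s",
                (literal, doc_id),
            )

Trigger-based and queue-based variants follow the same shape; they differ mainly in how rows needing embeddings are detected and how failures are retried.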