AWS Database Blog
Category: Advanced (300)
How Letta builds production-ready AI agents with Amazon Aurora PostgreSQL
With the Letta Developer Platform, you can create stateful agents with built-in context management (compaction, context rewriting, and context offloading) and persistence. Using the Letta API, you can create agents that are long-lived or achieve complex tasks without worrying about context overflow or model lock-in. In this post, we guide you through setting up Amazon Aurora Serverless as a database repository for storing Letta long-term memory. We show how to create an Aurora cluster in the cloud, configure Letta to connect to it, and deploy agents that persist their memory to Aurora. We also explore how to query the database directly to view agent state.
Lower cost and latency for AI using Amazon ElastiCache as a semantic cache with Amazon Bedrock
This post shows how to build a semantic cache using vector search on Amazon ElastiCache for Valkey. As detailed in the Impact section of this post, our experiments with semantic caching reduced LLM inference cost by up to 86 percent and improved average end-to-end latency for queries by up to 88 percent.
Build persistent memory for agentic AI applications with Mem0 Open Source, Amazon ElastiCache for Valkey, and Amazon Neptune Analytics
Today, we’re announcing a new integration between Mem0 Open Source, Amazon ElastiCache for Valkey, and Amazon Neptune Analytics to provide persistent memory capabilities to agentic AI applications. This integration solves a critical challenge when building agentic AI applications: without persistent memory, agents forget everything between conversations, making it impossible to deliver personalized experiences or complete multi-step tasks effectively. In this post, we show how you can use this new Mem0 integration.
Simplify data integration using zero-ETL from self-managed databases to Amazon Redshift
In this post, we demonstrate how to set up a zero-ETL integration between self-managed databases such as MySQL, PostgreSQL, SQL Server, and Oracle to Amazon Redshift. The transactional data from the source gets replicated in near real time on the destination, which processes analytical queries.
Amazon Ads upgrades to Amazon ElastiCache for Valkey to achieve 12% higher throughput and save over 45% in infrastructure costs
Amazon Ads enables businesses to meaningfully engage with customers throughout their shopping journey, reaching an audience of more than 300 million in the US alone. Delivering the right ad to the right customer in real time at a global scale requires highly available, low-latency infrastructure capable of processing tens of millions of requests per second. In this post, […]
Everything you don’t need to know about Amazon Aurora DSQL: Part 5 – How the service uses clocks
In this post, I explore how Amazon Aurora DSQL uses Amazon Time Sync Service to build a hybrid logical clock solution.
Protect sensitive data with dynamic data masking for Amazon Aurora PostgreSQL
Today, we are launching the dynamic data masking feature for Amazon Aurora PostgreSQL-Compatible Edition. In this post, we show how dynamic data masking can help you meet data privacy requirements. We discuss how this feature is implemented and demonstrate how it works with the PostgreSQL role hierarchy.
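The core idea is that the same column returns different results depending on who asks: privileged roles see raw values, while other roles see a masked form. The Python sketch below illustrates that role-based behavior conceptually; the actual Aurora feature enforces masking inside the database using PostgreSQL roles, and the role names here are hypothetical.

```python
# Hypothetical roles that should only ever see masked values.
MASKED_ROLES = {"analyst", "support"}

def mask_email(email: str) -> str:
    """Keep the first character of the local part, mask the rest."""
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}"

def select_email(role: str, email: str) -> str:
    # Privileged roles get the raw value; masked roles get the masked form.
    return mask_email(email) if role in MASKED_ROLES else email
```

With the real feature, a query such as `SELECT email FROM customers` issued by a masked role would return the transformed value with no application changes, which is what makes dynamic masking useful for meeting privacy requirements.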
Optimize database performance using resource governor on Amazon RDS for SQL Server
You can now use resource governor with Amazon RDS for SQL Server Enterprise Edition to optimize your database performance by controlling how compute resources are allocated across different workloads. This post shows you how to optimize your database performance using resource governor on Amazon RDS for SQL Server. We walk you through the step-by-step process of enabling and configuring the feature, including how to set up resource pools, create workload groups, and implement classifier functions for effective resource management.
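At the heart of resource governor is the classifier function, which inspects each incoming session and routes it to a workload group bound to a resource pool. The sketch below mirrors that routing decision in Python for illustration only; in SQL Server this is written in T-SQL, and the group names, CPU limits, and rules here are hypothetical.

```python
# Hypothetical workload groups mapped to resource limits, standing in for
# resource pools defined via T-SQL (CREATE RESOURCE POOL / WORKLOAD GROUP).
WORKLOAD_GROUPS = {
    "reporting": {"max_cpu_percent": 30},
    "oltp": {"max_cpu_percent": 70},
    "default": {"max_cpu_percent": 100},
}

def classify(login_name: str, app_name: str) -> str:
    """Route a session to a workload group, as a classifier function does
    at connection time."""
    if app_name.lower().startswith("report"):
        return "reporting"
    if login_name == "app_service":
        return "oltp"
    return "default"
```

The real classifier runs once per connection, so reporting sessions land in a capped pool and cannot starve OLTP traffic during peak load; that isolation is the performance benefit the post walks through.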
Implement high availability in Amazon RDS for SQL Server Web Edition using block-level replication
Amazon RDS for SQL Server has enhanced SQL Server 2022 Web Edition by introducing high availability through block-level replication in Multi-AZ deployments. With this release, you can quickly set up and maintain highly available databases while significantly reducing operational overhead. In this post, we discuss the benefits of block-level replication and how to get started. For more information, see Licensing Microsoft SQL Server on Amazon RDS.
Accelerating data modeling accuracy with the Amazon DynamoDB Data Model Validation Tool
Today, we’re introducing the Amazon DynamoDB Data Model Validation Tool, a new component of the MCP server that closes the loop between generation, evaluation, and execution. The validation tool automatically tests generated data models against Amazon DynamoDB local, refining them iteratively until every access pattern behaves as intended.
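Testing a data model against its access patterns boils down to checking that every pattern can be served by some key schema the model exposes (the base table or a GSI). The sketch below shows that check in miniature; the actual tool executes generated models against DynamoDB local, and the model and pattern shapes here are illustrative assumptions, not the tool's real input format.

```python
def validate(model: dict, patterns: list[dict]) -> list[str]:
    """Return the names of access patterns no key schema in the model can serve."""
    # Collect every key schema the model exposes: base table plus GSIs.
    key_schemas = [(model["partition_key"], model.get("sort_key"))]
    for gsi in model.get("gsis", []):
        key_schemas.append((gsi["partition_key"], gsi.get("sort_key")))

    failures = []
    for pattern in patterns:
        attrs = set(pattern["query_attributes"])
        # A pattern is servable if some schema's partition key is queried
        # and every queried attribute is a key attribute of that schema.
        servable = any(
            pk in attrs and attrs <= ({pk, sk} - {None})
            for pk, sk in key_schemas
        )
        if not servable:
            failures.append(pattern["name"])
    return failures
```

An iterative workflow like the one the post describes would feed these failures back into the model generator, refining keys and indexes until the failure list is empty.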