AWS Database Blog
Category: Advanced (300)
Optimizing correlated subqueries in Amazon Aurora PostgreSQL
Correlated subqueries can cause performance challenges in Amazon Aurora PostgreSQL, and applications can experience reduced performance as data volumes grow. In this post, we explore the advanced optimization configurations available in Aurora PostgreSQL that can transform these performance challenges into efficient operations without requiring you to modify a single line of SQL code.
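To make the pattern concrete, here is a minimal, hypothetical illustration of the kind of query the post targets: the inner subquery references a column from the outer query, so the planner may re-evaluate it for every outer row. The table names, column names, and connection details below are placeholders and are not taken from the post.

```python
# Hypothetical correlated subquery run against an Aurora PostgreSQL endpoint.
# The inner SELECT references o.customer_id from the outer query, so it may be
# re-executed once per outer row as data volumes grow.
import os
import psycopg2

conn = psycopg2.connect(
    host=os.environ["AURORA_ENDPOINT"],   # placeholder cluster endpoint
    dbname="demo",
    user="demo",
    password=os.environ["PGPASSWORD"],
)
with conn, conn.cursor() as cur:
    cur.execute("""
        EXPLAIN (ANALYZE, BUFFERS)
        SELECT o.order_id, o.total
        FROM orders o
        WHERE o.total > (SELECT AVG(i.total)
                         FROM orders i
                         WHERE i.customer_id = o.customer_id)
    """)
    for (line,) in cur.fetchall():
        print(line)   # check whether the subquery shows up as a per-row SubPlan
```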
Improve Aurora PostgreSQL throughput by up to 165% and price-performance ratio by up to 120% using Optimized Reads on AWS Graviton4-based R8gd instances
In this post, we demonstrate how your workloads can benefit from upgrading from Graviton2-based R6g and R6gd instances to Graviton4-based R8gd instances with Aurora PostgreSQL 17.5 on Aurora I/O-Optimized using an Optimized Reads-enabled tiered cache.
Why Regeneron chose Amazon RDS Custom for Oracle to deploy COTS and GxP applications on AWS
Regeneron, a leading biotechnology company, complements its traditional on-premises solutions with a sophisticated database architecture on AWS to support essential commercial-off-the-shelf (COTS) and GxP business applications. In this post, we highlight why Regeneron chose Amazon RDS Custom for Oracle to deploy COTS and GxP applications on AWS. This decision underscores their commitment to advancing from a legacy architecture to a robust, scalable, and resilient managed service. By doing so, Regeneron not only enhances their backend database infrastructure but also ensures adherence to GxP procedures, demonstrating their dedication to operational excellence and regulatory compliance.
Build and explore Knowledge Graphs faster with Amazon Neptune using Graph.Build and G.V() – Part 2
This is a guest blog by Arthur Bigeard, Founder at gdotv, in partnership with Charles Ivie, Sr Graph Architect at AWS. G.V() is a graph database IDE available for Desktop or on AWS Marketplace, offering extensive graph visualization and querying capabilities for Amazon Neptune and Neptune Analytics. In Part 1 of this series, we demonstrated […]
Build and explore Knowledge Graphs faster with Amazon Neptune using Graph.Build and G.V() – Part 1
This is a guest blog post by Richard Loveday, Head of Product at Graph.Build, in partnership with Charles Ivie, Graph Architect at AWS. The Graph.Build platform is a dedicated, no-code graph model design studio and build factory, available on AWS Marketplace. Knowledge graphs have been widely adopted by organizations, powering use cases such as social […]
Build a fitness center management application with Kiro using Amazon DocumentDB (with MongoDB compatibility)
In this post, we walk through how we used Kiro, an agentic Integrated Development Environment (IDE), to build a complete fitness center management application that digitizes paper-based fitness tracking. We explore Kiro’s spec-driven development workflow and see how it transforms complex application development into a streamlined, iterative process. Our solution uses Amazon DocumentDB as the backend.
Exploring the Optimize CPU feature on Amazon RDS for SQL Server
Amazon RDS for SQL Server now supports the Optimize CPU feature, which lets you define the number of vCPUs when you launch new instances or modify existing database instances. The feature also provides a detailed billing breakdown of RDS infrastructure costs and of licensing costs for SQL Server and the Windows OS, and it is available starting with 7th generation instance classes. In this post, we explore how to use the Optimize CPU feature with Amazon RDS for SQL Server.
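As a rough illustration, the vCPU settings described here map to the ProcessorFeatures parameter in the RDS API. The sketch below uses boto3 to override the core count on an existing instance; the instance identifier and the specific values are placeholders, and the settings your instance class supports may differ.

```python
# Minimal sketch: apply Optimize CPU settings to an existing RDS for SQL Server
# instance by overriding its processor features. Identifier and values are
# placeholders; see the post for instance classes that support this.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

response = rds.modify_db_instance(
    DBInstanceIdentifier="my-sqlserver-instance",   # placeholder identifier
    ProcessorFeatures=[
        {"Name": "coreCount", "Value": "4"},        # fewer cores than the class default
        {"Name": "threadsPerCore", "Value": "1"},   # disable SMT if licensing favors it
    ],
    ApplyImmediately=True,
)
print(response["DBInstance"].get("PendingModifiedValues", {}))
```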
How Letta builds production-ready AI agents with Amazon Aurora PostgreSQL
With the Letta Developer Platform, you can create stateful agents with built-in context management (compaction, context rewriting, and context offloading) and persistence. Using the Letta API, you can create agents that are long-lived or achieve complex tasks without worrying about context overflow or model lock-in. In this post, we guide you through setting up Amazon Aurora Serverless as a database repository for storing Letta long-term memory. We show how to create an Aurora cluster in the cloud, configure Letta to connect to it, and deploy agents that persist their memory to Aurora. We also explore how to query the database directly to view agent state.
Lower cost and latency for AI using Amazon ElastiCache as a semantic cache with Amazon Bedrock
This post shows how to build a semantic cache using vector search on Amazon ElastiCache for Valkey. As detailed in the Impact section of this post, our experiments with semantic caching reduced LLM inference cost by up to 86 percent and improved average end-to-end latency for queries by up to 88 percent.
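To sketch the idea, a semantic cache looks up an incoming prompt by embedding similarity before calling the model, and only invokes the LLM on a miss. The example below is a minimal, assumed lookup using redis-py's search commands against a Valkey endpoint with a pre-created vector index; the index name, field names, embedding model, and distance threshold are all assumptions rather than details from the post.

```python
# Minimal semantic-cache lookup sketch. Assumes a vector index named
# "semantic_cache" with an "embedding" vector field and an "answer" text field
# already exists on the ElastiCache for Valkey endpoint.
import json

import boto3
import numpy as np
import redis
from redis.commands.search.query import Query

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
cache = redis.Redis(host="my-valkey-endpoint", port=6379, ssl=True)  # placeholder endpoint

def embed(text: str) -> bytes:
    # Titan Text Embeddings V2 used here purely as an example embedding model.
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    vec = json.loads(resp["body"].read())["embedding"]
    return np.asarray(vec, dtype=np.float32).tobytes()

def lookup(prompt: str, max_distance: float = 0.2):
    # KNN search for the single closest cached prompt; "score" holds the distance.
    q = (
        Query("*=>[KNN 1 @embedding $vec AS score]")
        .return_fields("answer", "score")
        .dialect(2)
    )
    res = cache.ft("semantic_cache").search(q, query_params={"vec": embed(prompt)})
    if res.docs and float(res.docs[0].score) <= max_distance:
        return res.docs[0].answer   # cache hit: skip the LLM call
    return None                     # cache miss: call Bedrock, then store the new pair
```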
Build persistent memory for agentic AI applications with Mem0 Open Source, Amazon ElastiCache for Valkey, and Amazon Neptune Analytics
Today, we’re announcing a new integration between Mem0 Open Source, Amazon ElastiCache for Valkey, and Amazon Neptune Analytics to provide persistent memory capabilities to agentic AI applications. This integration solves a critical challenge when building agentic AI applications: without persistent memory, agents forget everything between conversations, making it impossible to deliver personalized experiences or complete multi-step tasks effectively. In this post, we show how you can use this new Mem0 integration.