AWS Database Blog
Category: Amazon Aurora
Using the shared plan cache for Amazon Aurora PostgreSQL
In this post, we discuss how the shared plan cache feature of Amazon Aurora PostgreSQL-Compatible Edition can significantly reduce the memory consumed by generic SQL plans in high-concurrency environments.
AWS Organizations now supports upgrade rollout policy for Amazon Aurora and Amazon RDS automatic minor version upgrades
AWS Organizations now supports an upgrade rollout policy, a new capability that provides a streamlined way to manage automatic minor version upgrades across your database fleet. This feature supports Amazon Aurora MySQL-Compatible Edition, Amazon Aurora PostgreSQL-Compatible Edition, and the Amazon RDS database engines MySQL, PostgreSQL, MariaDB, SQL Server, Oracle, and Db2. It eliminates the operational overhead of coordinating upgrades across hundreds of resources and accounts while validating changes in less critical environments before they reach production. In this post, we explore how the upgrade rollout policy works, its key benefits, and how you can use it to implement a systematic approach to database maintenance across your organization.
Unlock Amazon Aurora’s Advanced Features with Standard JDBC Driver using AWS Advanced JDBC Wrapper
In this post, we show how you can enhance your Java application with the cloud-based capabilities of Amazon Aurora by using the AWS Advanced JDBC Wrapper. The simple code changes shared in this post can transform a standard JDBC application to use fast failover, read/write splitting, IAM authentication, AWS Secrets Manager integration, and federated authentication.
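To give a sense of how small the change can be, here is a minimal sketch of switching a standard PostgreSQL JDBC connection to the AWS Advanced JDBC Wrapper. The cluster endpoint, credentials, and plugin selection are placeholders; the post itself walks through the full configuration for each feature.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class AuroraWrapperExample {
    public static void main(String[] args) throws Exception {
        // Requires the aws-advanced-jdbc-wrapper and the PostgreSQL JDBC driver on the classpath.
        // A standard JDBC URL would be: jdbc:postgresql://<cluster-endpoint>:5432/postgres
        // The wrapper changes only the URL prefix; the community PostgreSQL driver still does the work.
        String url = "jdbc:aws-wrapper:postgresql://my-cluster.cluster-example.us-east-1.rds.amazonaws.com:5432/postgres";

        Properties props = new Properties();
        props.setProperty("user", "app_user");          // placeholder credentials
        props.setProperty("password", "app_password");
        // Enable wrapper plugins, for example fast failover and enhanced failure monitoring.
        props.setProperty("wrapperPlugins", "failover,efm2");

        try (Connection conn = DriverManager.getConnection(url, props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT aurora_version()")) {
            while (rs.next()) {
                System.out.println("Connected to Aurora version: " + rs.getString(1));
            }
        }
    }
}
```

Features such as IAM authentication, Secrets Manager integration, and read/write splitting are enabled in a similar way through additional plugin codes and properties, which the post covers in detail.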
Implement multi-Region endpoint routing for Amazon Aurora DSQL
Applications using Aurora DSQL multi-Region clusters should implement a DNS-based routing solution (such as Amazon Route 53) to automatically redirect traffic between AWS Regions. In this post, we show you an automated solution for redirecting database traffic to alternate Regional endpoints without requiring manual configuration changes, particularly in mixed data store environments.
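To make the idea concrete, the following is a minimal, hypothetical sketch of how an application might pick its connection target: it prefers a stable DNS name (for example, a Route 53 record that health checks repoint between Regions) and only falls back to hard-coded Regional endpoints if that name fails. The hostnames, cluster identifiers, and token handling are placeholders, not the post's implementation, which is built around Route 53.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

public class DsqlEndpointRouting {
    // The application connects to a single, stable hostname (a hypothetical Route 53 record)
    // instead of hard-coding one Regional cluster endpoint. Health checks repoint the record
    // to a healthy Region, so no application change is needed during a Regional impairment.
    private static final String DNS_ALIAS = "dsql.example.internal";

    // Placeholder Regional cluster endpoints, used only as a last-resort fallback.
    private static final List<String> REGIONAL_ENDPOINTS = List.of(
            "abc123cluster1.dsql.us-east-1.on.aws",
            "def456cluster2.dsql.us-west-2.on.aws");

    public static Connection connect(String authToken) {
        Properties props = new Properties();
        props.setProperty("user", "admin");
        props.setProperty("password", authToken); // IAM auth token, generated elsewhere
        props.setProperty("sslmode", "require");

        List<String> candidates = new ArrayList<>();
        candidates.add(DNS_ALIAS);
        candidates.addAll(REGIONAL_ENDPOINTS);

        for (String host : candidates) {
            try {
                return DriverManager.getConnection(
                        "jdbc:postgresql://" + host + ":5432/postgres", props);
            } catch (Exception e) {
                System.err.println("Connection to " + host + " failed: " + e.getMessage());
            }
        }
        throw new IllegalStateException("No Aurora DSQL endpoint reachable");
    }
}
```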
Optimizing correlated subqueries in Amazon Aurora PostgreSQL
Correlated subqueries can create performance challenges in Amazon Aurora PostgreSQL, causing applications to slow down as data volumes grow. In this post, we explore the advanced optimization configurations available in Aurora PostgreSQL that can transform these performance challenges into efficient operations without requiring you to modify a single line of SQL code.
Improve Aurora PostgreSQL throughput by up to 165% and price-performance ratio by up to 120% using Optimized Reads on AWS Graviton4-based R8gd instances
In this post, we demonstrate how your workloads can benefit from upgrading from Graviton2-based R6g and R6gd instances to Graviton4-based R8gd instances running Aurora PostgreSQL 17.5 on Aurora I/O-Optimized with an Optimized Reads-enabled tiered cache.
Introducing Amazon Aurora powers for Kiro
In this post, we show how you can turn your ideas into full-stack applications with Kiro powers for Aurora. We explore how this new capability builds Amazon Aurora best practices into your development workflow, automatically implementing configurations and optimizations that help make sure your database layer is production-ready from day one.
Netflix consolidates relational database infrastructure on Amazon Aurora, achieving up to 75% improved performance
Netflix operates a global streaming service that serves hundreds of millions of users through a distributed microservices architecture. In this post, we examine the technical and operational challenges their Online Data Stores (ODS) team encountered with its then-current self-managed, distributed PostgreSQL-compatible database, the evaluation criteria used to select a database solution, and why the team chose to migrate to Amazon Aurora PostgreSQL to meet its current and future performance needs. The migration to Aurora PostgreSQL improved their database infrastructure, achieving up to a 75% increase in performance and 28% cost savings across critical applications.
How Letta builds production-ready AI agents with Amazon Aurora PostgreSQL
With the Letta Developer Platform, you can create stateful agents with built-in context management (compaction, context rewriting, and context offloading) and persistence. Using the Letta API, you can create agents that are long-lived or that accomplish complex tasks without worrying about context overflow or model lock-in. In this post, we guide you through setting up Amazon Aurora Serverless as a database repository for storing Letta long-term memory. We show how to create an Aurora cluster in the cloud, configure Letta to connect to it, and deploy agents that persist their memory to Aurora. We also explore how to query the database directly to view agent state.
Everything you don’t need to know about Amazon Aurora DSQL: Part 5 – How the service uses clocks
In this post, I explore how Amazon Aurora DSQL uses Amazon Time Sync Service to build a hybrid logical clock solution.