AWS Database Blog

Category: Advanced (300)

Getting started with Change Data Capture in Amazon Aurora DSQL

In this post, we demonstrate how to configure Aurora DSQL change data capture (CDC) and stream database changes into Amazon Kinesis Data Streams. You will learn how CDC works, how to configure a streaming pipeline, and how to consume change events. By the end of this post, you will have a working CDC pipeline that delivers database changes into a durable event stream that downstream applications can process.
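To make the consumer side concrete, here is a minimal sketch of decoding one change event from a Kinesis record payload. The field names ("op", "table", "after") are a hypothetical CDC payload shape for illustration only; the actual Aurora DSQL event schema is described in the post itself.

```python
import json

def decode_change_event(data: bytes) -> dict:
    """Decode one Kinesis record payload into a change event.

    The keys "op", "table", and "after" below are hypothetical and
    stand in for whatever schema your CDC stream actually emits.
    """
    event = json.loads(data.decode("utf-8"))
    return {
        "operation": event["op"],    # e.g. "insert" | "update" | "delete"
        "table": event["table"],
        "row": event.get("after"),   # new row image, if present
    }

# In a real consumer you would page through shards with the AWS SDK
# (for example, boto3's "kinesis" client: get_shard_iterator and
# get_records) and pass each record's Data blob to decode_change_event.
sample = json.dumps(
    {"op": "insert", "table": "orders", "after": {"id": 1, "total": 42.5}}
).encode("utf-8")
print(decode_change_event(sample)["operation"])  # insert
```

The decoding logic is deliberately separated from the Kinesis polling loop so it can be unit tested without AWS connectivity.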

Upgrade strategies for Amazon RDS for MySQL 8.0 to 8.4

This post is part of a two-part series on upgrading RDS for MySQL 8.0 to 8.4. Here, we cover the end-of-standard-support timeline, Extended Support costs, upgrade methods, and key best practices. For a step-by-step implementation guide, see Best practices for upgrading RDS for MySQL 8.0 to 8.4 with prechecks, Blue/Green, and rollback.

How HotelTrader cut inter-AZ costs by 95% and latency by 49% with Valkey GLIDE on Amazon ElastiCache

In this post, you learn how HotelTrader reduced inter-Availability Zone data transfer costs by 95% and improved average latency by 49% by migrating from the Redis Lettuce client to Valkey GLIDE on Amazon ElastiCache. The post walks through how HotelTrader identified hidden cross-AZ data transfer costs in its multi-AZ ElastiCache cluster, implemented Valkey GLIDE’s AZ-affinity read strategy to route requests to replicas in the local Availability Zone, optimized throughput with request batching, and executed a zero-downtime migration validated by A/B testing over 15 days.
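The core idea behind the AZ-affinity read strategy can be sketched in a few lines: prefer a replica in the client's own Availability Zone, and fall back to any replica otherwise. This is a conceptual illustration of the routing behavior, not Valkey GLIDE's actual API or implementation; the endpoint and AZ names are made up.

```python
import random

def pick_read_replica(replicas: dict, client_az: str) -> str:
    """Choose a replica endpoint for a read, preferring the client's AZ.

    `replicas` maps endpoint -> Availability Zone. This sketches the
    AZ-affinity routing idea (the behavior GLIDE's AZ-affinity read
    mode provides); it is not GLIDE's API.
    """
    local = [ep for ep, az in replicas.items() if az == client_az]
    if local:
        # Same-AZ read: avoids cross-AZ data transfer charges and an
        # extra network hop, which is where the cost and latency wins
        # described in the post come from.
        return random.choice(local)
    return random.choice(list(replicas))  # no local replica: read anywhere

replicas = {
    "replica-a.cache.example:6379": "us-east-1a",
    "replica-b.cache.example:6379": "us-east-1b",
}
print(pick_read_replica(replicas, "us-east-1a"))
```

In practice the client library does this selection per request, so applications get local reads without changing their query code.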

Filter, transform, and load your DynamoDB table exports using AWS Glue

In this post, we show how you can load (import) an Amazon DynamoDB full or incremental table export into a second DynamoDB table with precise control over what gets loaded and at what write rate, and with visibility into progress. This technique helps drive large-scale data migrations and synchronizations where you need maximum control.
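The "controlled write rate" part of this approach can be illustrated with a plain pacing loop that emits fixed-size batches at a bounded rate. This is a simplified sketch of the idea, not the Glue job from the post; in the real pipeline the pacing happens inside AWS Glue while writing to DynamoDB (25 items is DynamoDB's BatchWriteItem limit).

```python
import time

def paced_batches(items, batch_size=25, writes_per_second=100.0):
    """Yield fixed-size batches of items at a bounded write rate.

    Sleeps between batches so that, on average, no more than
    `writes_per_second` items are emitted per second.
    """
    interval = batch_size / writes_per_second  # seconds per full batch
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        yield batch
        # Scale the pause for a short final batch.
        time.sleep(interval * (len(batch) / batch_size))

total = 0
for batch in paced_batches(list(range(60)), batch_size=25, writes_per_second=5000):
    total += len(batch)
print(total)  # 60
```

Observing progress then reduces to counting emitted batches, which is the same visibility the post achieves with job metrics.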

Migrating Amazon RDS for PostgreSQL to Amazon Aurora using seeded logical replication

In this post, we show you how to migrate from an Amazon RDS for PostgreSQL instance to Amazon Aurora PostgreSQL-Compatible Edition using seeded logical replication. For live migrations with minimal downtime, AWS provides several approaches, including Aurora read replicas, the snapshot/restore method combined with ongoing replication, and AWS DMS.

Amazon Aurora DSQL connections: Drivers, strings, and best practices

Connecting to Amazon Aurora DSQL requires a different approach than traditional PostgreSQL databases. Instead of long-lived passwords, you use short-lived IAM authentication tokens. Instead of static endpoints, you work with distributed cluster endpoints that route connections across Availability Zones. In this post, you learn how to configure connection strings, set up drivers in Python, Java, and Node.js, and implement best practices for authentication, connection pooling, and lifecycle management with Amazon Aurora DSQL.
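As a small illustration of the token-as-password pattern, here is a sketch that assembles a libpq-style connection string for Aurora DSQL. The defaults shown (user "admin", sslmode=require) follow common DSQL examples but should be verified for your cluster, and in practice you would fetch a fresh token from the AWS SDK (for example, boto3's "dsql" client) shortly before each connection, because tokens expire.

```python
def build_dsql_conninfo(host: str, token: str, database: str = "postgres") -> str:
    """Assemble a libpq-style conninfo string for Aurora DSQL.

    Aurora DSQL uses a short-lived IAM authentication token in place of
    a long-lived password. The caller supplies the token; this helper
    only formats the connection parameters. Note that tokens containing
    characters special to libpq may need quoting.
    """
    return (
        f"host={host} user=admin dbname={database} "
        f"password={token} sslmode=require"
    )

# The endpoint below is a made-up example; "<token>" stands in for a
# freshly generated IAM auth token.
conninfo = build_dsql_conninfo("example-cluster.dsql.us-east-1.on.aws", "<token>")
print("sslmode=require" in conninfo)  # True
```

Because tokens are short-lived, connection pools should generate a new token on each new physical connection rather than caching one at startup.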

Query billion-scale vectors with SQL: Integrating Amazon S3 Vectors and Aurora PostgreSQL

In this post, you’ll learn how to query Amazon S3 Vectors from Amazon Aurora PostgreSQL-Compatible Edition using standard SQL, and how to combine vector similarity results with relational filters in a single query: for example, finding the most semantically similar products and then filtering by price, stock status, or tenant in one SQL statement.
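The shape of such a hybrid query can be sketched as follows. The function name `s3vector_topk`, its arguments, and the table and column names are all hypothetical placeholders (the actual integration surface is covered in the post); what matters is the pattern: a similarity search producing candidate keys, joined and filtered with ordinary SQL predicates.

```python
def hybrid_search_sql(top_k: int = 50) -> str:
    """Return a parameterized SQL statement combining vector similarity
    with relational filters.

    `s3vector_topk` is a hypothetical table-valued function standing in
    for the S3 Vectors query interface; `products` and its columns are
    illustrative. %(query_vector)s and %(max_price)s are bind
    parameters in psycopg style.
    """
    return f"""
    SELECT p.id, p.name, p.price, v.distance
    FROM s3vector_topk('product-index', %(query_vector)s, {top_k}) AS v
    JOIN products AS p ON p.id = v.key
    WHERE p.price <= %(max_price)s
      AND p.in_stock
    ORDER BY v.distance
    """

sql = hybrid_search_sql(10)
print("JOIN products" in sql)  # True
```

Pushing the relational filtering into the same statement lets the database narrow similarity candidates without a second round trip to the application.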