AWS Database Blog
Category: Advanced (300)
Ingest CSV data to Amazon DynamoDB using AWS Lambda
In this post, we explore a streamlined solution that uses AWS Lambda and Python to read and ingest CSV data into an existing Amazon DynamoDB table. This approach adheres to organizational security restrictions, supports infrastructure as code (IaC) for table management, and provides an event-driven process for ingesting CSV datasets into DynamoDB.
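The full post walks through the event-driven setup; as a minimal sketch of the pattern (not the post's exact implementation), the snippet below assumes a CSV file uploaded to Amazon S3 triggers the Lambda function, which batch-writes each row into an existing DynamoDB table. The TABLE_NAME environment variable and the one-to-one mapping of CSV columns to item attributes are hypothetical.

```python
import csv
import io
import os

import boto3

# Hypothetical table name supplied through the function's environment.
TABLE = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])
s3 = boto3.client("s3")


def handler(event, context):
    """Triggered by an S3 ObjectCreated event; loads the CSV into DynamoDB."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        rows = csv.DictReader(io.StringIO(body))

        # batch_writer buffers writes and retries unprocessed items automatically.
        with TABLE.batch_writer() as batch:
            for row in rows:
                batch.put_item(Item=row)
```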
Perform OS upgrades for Amazon RDS Custom for SQL Server CEV with Multi-AZ
Amazon Relational Database Service (Amazon RDS) Custom for SQL Server gives you enhanced control through operating system (OS) shell-level access and database administrator privileges. With this control comes the shared responsibility model, which requires you to manage your own OS and database patching. OS changes made after instance creation aren’t persistent. To maintain OS-level customizations, […]
Extract and migrate data from nested tables with user-defined nested types from Oracle to PostgreSQL
In Oracle, user-defined types (UDTs) can have member functions written in PL/SQL that are integrated directly into the UDT. In contrast, PostgreSQL currently doesn’t allow member functions within UDTs. In this post, we dive deep into these differences and provide guidance for a smooth migration, helping ensure that the integrity of your data models is maintained throughout the process. We also walk you through the details of converting complex member functions of multi-nested UDTs from Oracle to PostgreSQL.
AWS DMS implementation guide: Building resilient database migrations through testing, monitoring, and SOPs
In this post, we present proactive measures for optimizing AWS DMS implementations from the initial setup phase. By using strategic planning and architectural foresight, organizations can enhance their replication system’s reliability, improve performance, and avoid common pitfalls.
Understanding transaction visibility in PostgreSQL clusters with read replicas
On April 29, 2025, Jepsen published a report about transaction visibility behavior in Amazon RDS for PostgreSQL Multi-AZ clusters. We appreciate Jepsen’s thorough analysis and would like to provide additional context about this behavior, which exists both in Amazon RDS and community PostgreSQL. In this post, we dive into the specifics of the issue to provide further clarity, discuss what classes of architectures it might affect, share workarounds, and highlight our ongoing commitment to improving community PostgreSQL in all areas, including correctness.
How Habby enhanced resiliency and system robustness using Valkey GLIDE and Amazon ElastiCache
Habby is a game studio that creates interactive entertainment to connect players worldwide. We adopted Valkey GLIDE, a client library for Amazon ElastiCache for Valkey and Redis OSS, to address our system challenges. Our system uses the Amazon ElastiCache for Redis OSS publish/subscribe (Pub/Sub) functionality to send chat messages. However, we faced challenges with connection stability during infrastructure changes, such as instance scaling, Redis OSS version upgrades, and hardware failures. This post describes our messaging system architecture and explains how we improved system reliability by using Valkey GLIDE as the client communicating with Amazon ElastiCache.
Migrate SQL Server user databases from Amazon EC2 to Amazon RDS Custom using Amazon EBS snapshots
In this post, we present a practical approach to one of the most significant challenges organizations face when adopting Amazon RDS Custom for SQL Server: migrating large datasets from SQL Server on Amazon EC2 to Amazon RDS Custom for SQL Server efficiently and cost-effectively. By using SQL Server’s native detach and attach method combined with EBS snapshots, you can migrate your databases without requiring Amazon S3 or AWS DMS.
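The post covers the end-to-end procedure; as a rough sketch of the EBS snapshot portion only, the snippet below assumes the data volume ID and the target Availability Zone are known and that the SQL Server databases on that volume have already been detached. The volume ID and Availability Zone are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder identifiers for the source data volume and the target AZ.
SOURCE_VOLUME_ID = "vol-0123456789abcdef0"
TARGET_AZ = "us-east-1a"

# 1. Snapshot the EC2 instance's data volume after detaching the databases.
snapshot = ec2.create_snapshot(
    VolumeId=SOURCE_VOLUME_ID,
    Description="SQL Server data files for RDS Custom migration",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# 2. Create a new volume from the snapshot in the RDS Custom instance's AZ.
volume = ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone=TARGET_AZ,
    VolumeType="gp3",
)
print(f"Attach {volume['VolumeId']} to the RDS Custom host, then attach the databases.")
```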
Best practices to handle AWS DMS tasks during PostgreSQL upgrades
When you decide to upgrade a PostgreSQL database that is configured as a source or target for an ongoing AWS DMS task, it’s important to factor this into your upgrade planning. In this post, we discuss best practices for handling AWS DMS tasks during minor and major PostgreSQL version upgrades.
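A recurring step in this kind of planning is pausing replication before the upgrade window and resuming it afterward. The snippet below is a minimal sketch of that step using the AWS SDK for Python; the task ARN is a placeholder, and whether you can resume from the existing checkpoint or must restart the task depends on the upgrade type, as the post discusses.

```python
import boto3

dms = boto3.client("dms")

# Placeholder ARN for the replication task that uses the PostgreSQL endpoint.
TASK_ARN = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE"

# Stop the task before the PostgreSQL upgrade window begins.
dms.stop_replication_task(ReplicationTaskArn=TASK_ARN)
dms.get_waiter("replication_task_stopped").wait(
    Filters=[{"Name": "replication-task-arn", "Values": [TASK_ARN]}]
)

# After validating the upgraded database, resume replication.
# For major version upgrades, a fresh start position may be required instead.
dms.start_replication_task(
    ReplicationTaskArn=TASK_ARN,
    StartReplicationTaskType="resume-processing",
)
```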
How Amazon Finance Automation built an operational data store with AWS purpose-built databases to power critical finance applications
In this post, we discuss how the Amazon Finance Automation team used AWS purpose-built databases, such as Amazon DynamoDB, Amazon OpenSearch Service, and Amazon Neptune, coupled with serverless compute such as AWS Lambda, to build an operational data store (ODS) that stores financial transaction data and supports FinOps applications with millisecond latency. This data store is a key enabler for the FinOps business.
How Heroku migrated hundreds of thousands of self-managed PostgreSQL databases to Amazon Aurora
In this post, we discuss how Heroku migrated their multi-tenant PostgreSQL database fleet from self-managed PostgreSQL on Amazon Elastic Compute Cloud (Amazon EC2) to Amazon Aurora PostgreSQL-Compatible Edition. Heroku completed this migration with no customer impact, increasing platform reliability while simultaneously reducing operational burden. We dive into Heroku and their previous self-managed architecture, the new architecture, how the migration of hundreds of thousands of databases was performed, and the enhancements to the customer experience since its completion.