AWS Database Blog
Category: Advanced (300)
Connect Amazon Bedrock Agents with Amazon Aurora PostgreSQL using Amazon RDS Data API
In this post, we describe a solution that integrates generative AI applications with relational databases like Amazon Aurora PostgreSQL-Compatible Edition, using RDS Data API (Data API) for simplified database interactions, Amazon Bedrock for AI model access, Amazon Bedrock Agents for task automation, and Amazon Bedrock Knowledge Bases for contextual information retrieval.
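As a hedged illustration of this pattern, the sketch below shows how a Lambda function backing a Bedrock agent action group might query Aurora PostgreSQL through the Data API. The cluster ARN, secret ARN, database name, orders table, and event shape are assumptions for the example, not the post's exact implementation.

```python
import boto3

rds_data = boto3.client("rds-data")

# Placeholder ARNs and database name; replace with your own resources.
CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster"
SECRET_ARN = "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret"


def lambda_handler(event, context):
    # Assumed event shape: the agent passes parameters as a list of name/value pairs.
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    # Run a parameterized query over the Data API; no database driver or
    # connection pooling is needed inside the Lambda function.
    response = rds_data.execute_statement(
        resourceArn=CLUSTER_ARN,
        secretArn=SECRET_ARN,
        database="sales",
        sql="SELECT order_id, total FROM orders WHERE customer_id = :cid LIMIT 10",
        parameters=[{"name": "cid", "value": {"stringValue": params["customer_id"]}}],
    )

    # Return the raw records for the agent to summarize in natural language.
    return {"records": response["records"]}
```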
Run SQL Server post-migration activities using Cloud Migration Factory on AWS
In this post, we show you essential post-migration tasks to perform after migrating your SQL Server database to Amazon EC2, such as validating database status, configuring performance settings, and running consistency checks. We also explore how Cloud Migration Factory on AWS (CMF) can automate these tasks, providing efficiency, scalability, and heightened visibility to simplify and expedite your migration process.
How to configure a Linked Server between Amazon RDS for SQL Server and Teradata database
In this post, we demonstrate how to configure a linked server between Amazon RDS for SQL Server and a Teradata database instance. We guide you through the step-by-step process to establish this connection and show you how to verify its functionality.
How Amazon maintains accurate totals at scale with Amazon DynamoDB
Amazon’s Finance Technologies Tax team (FinTech Tax) manages mission-critical services for tax computation, deduction, remittance, and reporting across global jurisdictions. The application processes billions of transactions annually across multiple international marketplaces. In this post, we show how the team implemented tiered tax withholding using Amazon DynamoDB transactions and conditional writes.
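To make the technique concrete, here is a minimal sketch of pairing a conditional write with a transactional counter update in boto3; the table name, key schema, and attribute names are illustrative, not the FinTech Tax team's actual schema.

```python
import boto3

dynamodb = boto3.client("dynamodb")


def record_withholding(marketplace: str, txn_id: str, amount_cents: int) -> None:
    dynamodb.transact_write_items(
        TransactItems=[
            {
                # Insert the transaction only if it has not been processed before,
                # so a retry cannot double-count the amount.
                "Put": {
                    "TableName": "TaxTransactions",
                    "Item": {
                        "pk": {"S": f"MARKETPLACE#{marketplace}"},
                        "sk": {"S": f"TXN#{txn_id}"},
                        "amount_cents": {"N": str(amount_cents)},
                    },
                    "ConditionExpression": "attribute_not_exists(sk)",
                }
            },
            {
                # Atomically add the amount to the marketplace running total in
                # the same transaction, keeping the total accurate.
                "Update": {
                    "TableName": "TaxTransactions",
                    "Key": {
                        "pk": {"S": f"MARKETPLACE#{marketplace}"},
                        "sk": {"S": "TOTAL"},
                    },
                    "UpdateExpression": "ADD total_cents :amt",
                    "ExpressionAttributeValues": {":amt": {"N": str(amount_cents)}},
                }
            },
        ]
    )
```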
Build an AI-powered text-to-SQL chatbot using Amazon Bedrock, Amazon MemoryDB, and Amazon RDS
Text-to-SQL can automatically transform analytical questions into executable SQL code for enhanced data accessibility and streamlined data exploration, from analyzing sales data and monitoring performance metrics to assessing customer feedback. In this post, we explore how to use Amazon Relational Database Service (Amazon RDS) for PostgreSQL and Amazon Bedrock to build a generative AI text-to-SQL chatbot application using Retrieval Augmented Generation (RAG). We also show how to use Amazon MemoryDB with vector search to provide semantic caching that further accelerates this solution.
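As a rough sketch of the generation step, the example below passes a RAG-retrieved schema snippet to a Bedrock model and asks for a single SQL statement. The model ID, prompt wording, and helper name are assumptions; the full solution adds retrieval, query execution, and semantic caching around this call.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")


def question_to_sql(question: str, schema_context: str) -> str:
    # Ground the model with the retrieved schema so it only references real tables.
    prompt = (
        "You are a PostgreSQL expert. Using only the tables described below, "
        "return a single SQL query that answers the question.\n\n"
        f"Schema:\n{schema_context}\n\nQuestion: {question}\nSQL:"
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model choice
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0},
    )
    return response["output"]["message"]["content"][0]["text"].strip()
```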
Amazon DynamoDB data modeling for Multi-tenancy – Part 3
In this series of posts, we walk through the process of creating a DynamoDB data model using an example multi-tenant application, a customer issue tracking service. The goal of this series is to explore areas that are important for decision-making and provide insights into the influences to help you plan your data model for a multi-tenant application. In this last part of the series, we explore how to validate the chosen data model from both a performance and a security perspective. Additionally, we cover how to extend the data model as new access patterns and requirements arise.
Amazon DynamoDB data modeling for Multi-tenancy – Part 2
In this series of posts, we walk through the process of creating a DynamoDB data model using an example multi-tenant application, a customer issue tracking service. The goal of this series is to explore areas that are important for decision-making and provide insights into the influences to help you plan your data model for a multi-tenant application. In this post, we continue the design process, selecting a partition key design and creating our data schema. We also show how to implement the access patterns using the AWS Command Line Interface (AWS CLI).
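The post implements the access patterns with the AWS CLI; as a hedged boto3 equivalent, the sketch below shows one tenant-scoped query over a tenant-prefixed partition key. The table name, key attribute names, and sort key layout are illustrative choices, not the series' exact schema.

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("IssueTracker")  # illustrative table name


def list_open_issues(tenant_id: str):
    # Prefixing the partition key with the tenant ID keeps each tenant's items in
    # their own item collection, so a single query can never cross tenants.
    response = table.query(
        KeyConditionExpression=Key("PK").eq(f"TENANT#{tenant_id}")
        & Key("SK").begins_with("ISSUE#OPEN#")
    )
    return response["Items"]
```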
Scaling Amazon RDS for MySQL performance for Careem’s digital platform on AWS
Careem powers rides, deliveries, and payments across the Middle East, North Africa, and South Asia. As Careem grew, so did its data infrastructure challenges. Its monolithic 270 TB Amazon RDS for MySQL database, consisting of one writer and five read replicas, experienced performance issues due to increased storage utilization, slow queries, high replica lag, and rising Amazon RDS costs. In this post, we provide a step-by-step breakdown of how Careem successfully implemented a phased data purging strategy, improving database performance while addressing key technical challenges.
Amazon CloudWatch Database Insights applied in real scenarios
In this post, we show how you can use Amazon CloudWatch Database Insights to troubleshoot your Amazon RDS and Amazon Aurora resources. CloudWatch Database Insights is a database observability solution that offers a tailored experience for DevOps engineers, application developers, and database administrators. It is designed to accelerate database troubleshooting and address issues across entire database fleets, enhancing overall operational efficiency.
Ingest CSV data to Amazon DynamoDB using AWS Lambda
In this post, we explore a streamlined solution that uses AWS Lambda and Python to read and ingest CSV data into an existing Amazon DynamoDB table. This approach adheres to organizational security restrictions, supports infrastructure as code (IaC) for table management, and provides an event-driven process for ingesting CSV datasets into DynamoDB.
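A minimal sketch of that event-driven flow follows, assuming the CSV is uploaded to Amazon S3 and the bucket's event notification invokes the function. The table name and the assumption that the CSV headers match the table's attribute names (including the key attributes) are illustrative.

```python
import csv

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("Orders")  # existing table managed via IaC


def lambda_handler(event, context):
    # Read the CSV object referenced by the S3 event notification.
    record = event["Records"][0]["s3"]
    obj = s3.get_object(Bucket=record["bucket"]["name"], Key=record["object"]["key"])
    rows = csv.DictReader(obj["Body"].read().decode("utf-8").splitlines())

    # Batch-write the rows; batch_writer handles chunking into 25-item batches
    # and retrying unprocessed items.
    count = 0
    with table.batch_writer() as batch:
        for row in rows:
            batch.put_item(Item=row)
            count += 1

    return {"status": "ok", "rows_ingested": count}
```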