AWS Database Blog

Category: Technical How-to

A hybrid approach for homogeneous migration to an Amazon DocumentDB elastic cluster

Today, customers use document databases for many different types of applications. For example, gaming companies use them to handle users’ attribute information, while stock trading applications use them to store chronological quote data. As the number of documents grows over time, you need more compute and storage than what is traditionally offered through […]

Scale your connections with Amazon DocumentDB using mongobetween

Amazon DocumentDB (with MongoDB compatibility) is a fully managed native JSON document database that makes it easy and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure. You can use the same application code written using MongoDB API (versions 3.6, 4.0, and 5.0) compatible drivers and tools to run, manage, and […]
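The premise above is that existing MongoDB driver code carries over to Amazon DocumentDB unchanged. Here is a minimal sketch of what that looks like with the Python driver, assuming a placeholder cluster endpoint, credentials, and a locally downloaded CA bundle (none of these values come from the post):

```python
# Minimal pymongo connection sketch for Amazon DocumentDB.
# The endpoint, credentials, and database/collection names are placeholders;
# global-bundle.pem is assumed to be downloaded to the working directory.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://appuser:secret@my-cluster.cluster-xxxxxxxx.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem"
    "&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
)

# Standard MongoDB API calls work the same way against DocumentDB.
players = client["gamedb"]["players"]
players.insert_one({"playerId": 42, "level": 7})
print(players.find_one({"playerId": 42}))
```

With mongobetween deployed as a connection-pooling proxy in front of the cluster, the connection string would typically point at the proxy’s local listener rather than the cluster endpoint; the application code itself stays the same.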

Use AWS DMS to migrate data from IBM Db2 DPF to an AWS target

AWS has introduced a new feature in AWS Database Migration Service (AWS DMS) that simplifies the migration of data from IBM Db2 databases that use the Database Partitioning Feature (DPF) to Amazon Simple Storage Service (Amazon S3), a highly scalable and durable object storage service. With this new capability, you can now migrate your data from IBM Db2 DPF databases to Amazon S3, paving the way for building robust data lakes in the cloud. This feature streamlines the migration process, helps maintain data integrity, and minimizes the risk of data loss or corruption, even when dealing with large volumes of data distributed across multiple partitions and databases of varying sizes. In this post, we delve into the details of this new AWS DMS feature and demonstrate how to implement it. We also explore best practices for orchestrating data flows and optimizing the migration process, so you can achieve a smooth transition from on-premises IBM Db2 DPF databases to a cloud-based data lake on Amazon S3.
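As a rough illustration of one building block involved, the sketch below uses boto3 to define an Amazon S3 target endpoint for an AWS DMS task; the bucket name, role ARN, and identifier are placeholders, and the actual Db2 DPF source configuration and task settings are what the post walks through.

```python
# Hedged boto3 sketch: create an Amazon S3 target endpoint for AWS DMS.
# The identifier, bucket, and IAM role ARN below are placeholders.
import boto3

dms = boto3.client("dms", region_name="us-east-1")

response = dms.create_endpoint(
    EndpointIdentifier="db2-dpf-datalake-target",
    EndpointType="target",
    EngineName="s3",
    S3Settings={
        "BucketName": "my-datalake-bucket",
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-access",
        "DataFormat": "parquet",  # columnar output for data lake queries
    },
)
print(response["Endpoint"]["EndpointArn"])
```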

Create a fallback migration plan for your self-managed MySQL database to Amazon Aurora MySQL using native bi-directional binary log replication

In this post, we show you how to set up bi-directional binary log replication between an on-premises MySQL instance and an Aurora MySQL instance, and we address important operational concepts such as monitoring, troubleshooting, and high availability. In certain use cases, native bi-directional binary log replication can provide a simpler fallback plan for your migration, or a way to migrate applications or schemas individually rather than all at the same time.
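To make the direction of one replication stream concrete, here is a hedged sketch of the on-premises half of the setup, issuing the standard MySQL 8.0 replication statements from Python; the hostnames, credentials, and binary log coordinates are placeholders, and the Aurora-side configuration (done through the RDS-provided stored procedures) is covered in the post.

```python
# Hedged sketch: point the self-managed MySQL instance at the Aurora MySQL
# writer so that changes flow back on premises (one half of the bi-directional
# setup). Hosts, credentials, and binlog coordinates are placeholders.
import pymysql

conn = pymysql.connect(host="onprem-mysql.example.com", user="admin", password="...")
with conn.cursor() as cur:
    cur.execute(
        """
        CHANGE REPLICATION SOURCE TO
          SOURCE_HOST = 'aurora-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com',
          SOURCE_PORT = 3306,
          SOURCE_USER = 'repl_user',
          SOURCE_PASSWORD = '...',
          SOURCE_LOG_FILE = 'mysql-bin-changelog.000123',
          SOURCE_LOG_POS = 4
        """
    )
    cur.execute("START REPLICA")
conn.close()
```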

How LeadSquared accelerated chatbot deployments with generative AI using Amazon Bedrock and Amazon Aurora PostgreSQL

LeadSquared is a new-age software as a service (SaaS) customer relationship management (CRM) platform that provides end-to-end sales, marketing, and onboarding solutions. Tailored for sectors like BFSI (banking, financial services, and insurance), healthcare, education, real estate, and more, LeadSquared provides a personalized approach for businesses of every scale. LeadSquared Service CRM goes beyond basic ticketing, […]

Migrate logins, database roles, users, and object-level permissions from Azure SQL Database to Amazon RDS for SQL Server

In this post, we demonstrate how to migrate SQL logins, database roles, users, and object-level permissions from Azure SQL Database to Amazon Relational Database Service (Amazon RDS) for SQL Server using T-SQL. Within SQL Server, a SQL login acts as a security principal, allowing a user or application to connect to a SQL Server instance. […]
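For a sense of what re-creating a principal on the target looks like, here is a hedged sketch that runs representative T-SQL against the RDS for SQL Server endpoint through pyodbc; the login, database, role, and object names are placeholders, and the post covers how to extract the real definitions from Azure SQL Database.

```python
# Hedged sketch: re-create a login, database user, role membership, and an
# object-level grant on the RDS for SQL Server target. All names and the
# password are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myinstance.xxxxxxxx.us-east-1.rds.amazonaws.com;"
    "UID=admin;PWD=...",
    autocommit=True,
)
cur = conn.cursor()

cur.execute("CREATE LOGIN app_login WITH PASSWORD = 'Str0ngP@ssw0rd!'")  # server-level principal
cur.execute("USE appdb")                                                 # switch to the target database
cur.execute("CREATE USER app_user FOR LOGIN app_login")                  # database-level principal
cur.execute("ALTER ROLE db_datareader ADD MEMBER app_user")              # role membership
cur.execute("GRANT EXECUTE ON OBJECT::dbo.usp_get_orders TO app_user")   # object-level permission
```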

Provision and manage Amazon RDS for Oracle using Terraform

This is the first post in a multi-part series where we discuss how you can set up Amazon Relational Database Service (Amazon RDS) for Oracle with Terraform. Terraform by HashiCorp allows you to define your infrastructure as code, simplifying and automating the setup process instead of doing everything manually. Overview of […]

A generative AI use case using Amazon RDS for SQL Server as a vector data store

Generative artificial intelligence (AI) has reached a turning point, capturing everyone’s imagination. Integrating generative capabilities into customer-facing services and solutions has become critical. Current generative AI offerings are the culmination of a gradual evolution from machine learning and deep learning models. The leap from deep learning to generative AI is enabled by foundation models. Amazon […]

Enable fine-grained access control and observability for API operations in Amazon DynamoDB

Customers choose Amazon DynamoDB to improve their applications’ performance, scalability, and resiliency. DynamoDB’s serverless architecture simplifies operations by abstracting hardware management, scaling, patching, and maintenance. Managing data access and security in DynamoDB differs from instance-based database solutions: DynamoDB uses AWS Identity and Access Management (IAM) to authenticate and authorize access to resources, whereas RDBMS solutions rely on firewall rules, […]
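To illustrate the IAM-based model the excerpt contrasts with firewall rules, here is a hedged sketch of a fine-grained access policy that limits a role to items whose partition key matches the calling identity, attached with boto3; the table, role, and policy names are placeholders.

```python
# Hedged sketch: attach an inline IAM policy that restricts DynamoDB access
# to items whose partition key equals the caller's Cognito identity ID.
# The table ARN, role name, and policy name are placeholders.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
            }
        },
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="app-user-role",
    PolicyName="dynamodb-fine-grained-access",
    PolicyDocument=json.dumps(policy),
)
```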

Migrate a multi-TB SQL Server database to Amazon RDS Custom for SQL Server using Amazon FSx for Windows File Server

This is the second part in a two-part series on how to migrate a multi-TB database to Amazon Relational Database Service (Amazon RDS) Custom for SQL Server. RDS Custom for SQL Server is a managed database service that automates database setup, operation, backups, high availability, and scalability while providing access to both the database and […]
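As a hedged sketch of the kind of step the series builds toward, the snippet below restores a native backup staged on an FSx for Windows File Server share into the RDS Custom instance through pyodbc; the share path, database name, and file locations are placeholders rather than values from the post.

```python
# Hedged sketch: restore a native SQL Server backup from an FSx for Windows
# File Server share onto the RDS Custom for SQL Server instance.
# The UNC path, database name, and data/log file paths are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=rds-custom.xxxxxxxx.us-east-1.rds.amazonaws.com;"
    "UID=admin;PWD=...",
    autocommit=True,  # RESTORE cannot run inside a user transaction
)
cur = conn.cursor()
cur.execute(r"""
    RESTORE DATABASE SalesDB
    FROM DISK = N'\\fsx-dns-name.example.com\share\SalesDB.bak'
    WITH MOVE 'SalesDB_Data' TO N'D:\rdsdbdata\DATA\SalesDB.mdf',
         MOVE 'SalesDB_Log'  TO N'D:\rdsdbdata\DATA\SalesDB_log.ldf',
         STATS = 10
""")
```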