AWS Database Blog

Category: Technical How-to

Explore the prerequisites required to create an Amazon RDS Custom for SQL Server instance

Customers often ask us how they can create an RDS Custom for SQL Server database in their existing networking infrastructure. They want to ensure that the database servers are created within the security perimeter designed by their networking teams, and they want to understand the different components and services involved. In this post, we demonstrate how to create an RDS Custom for SQL Server instance and how to set up the required prerequisites within an existing networking infrastructure. Amazon RDS Custom requires these prerequisites to create the necessary resources in your AWS account.
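
As a rough illustration only (not the post's exact walkthrough), the following boto3 sketch creates an RDS Custom for SQL Server instance assuming the prerequisites already exist in your account; the DB subnet group, security group, KMS key, and instance profile names are placeholders.

```python
import boto3

# Minimal sketch: launch RDS Custom for SQL Server into existing networking,
# assuming the DB subnet group, security group, KMS key, and IAM instance
# profile (the RDS Custom prerequisites) were created beforehand.
rds = boto3.client("rds", region_name="us-east-1")

response = rds.create_db_instance(
    DBInstanceIdentifier="my-rds-custom-sqlserver",        # placeholder name
    Engine="custom-sqlserver-se",                          # RDS Custom for SQL Server (Standard Edition)
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_SECRET",               # retrieve from AWS Secrets Manager in practice
    DBSubnetGroupName="existing-db-subnet-group",           # placeholder: your existing subnet group
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],           # placeholder security group
    KmsKeyId="alias/my-rds-custom-key",                     # customer managed KMS key required by RDS Custom
    CustomIamInstanceProfile="AWSRDSCustomInstanceProfile", # placeholder instance profile for the underlying host
)
print(response["DBInstance"]["DBInstanceStatus"])
```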

Handle traffic spikes with Amazon DynamoDB provisioned capacity

If you’re using Amazon DynamoDB tables with provisioned capacity, one challenge you might face is how best to handle a sudden request traffic increase (spike) without being throttled. The more sudden and extended the traffic spike, the more likely a table will experience throttles. However, throttles aren’t inevitable even for spiky traffic. Here we walk you through eight designs to handle traffic spikes, and present their advantages and disadvantages.
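
As a rough illustration of one common design of this kind (lowering the auto scaling target utilization so capacity is provisioned ahead of demand), the following boto3 sketch registers a table for read-capacity auto scaling; the table name and capacity limits are placeholders, and the post itself compares eight designs in detail.

```python
import boto3

# Sketch of one spike-handling design: DynamoDB auto scaling with a target
# tracking policy so provisioned read capacity grows before sustained spikes
# cause throttling. Table name and limits are placeholders.
autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/MySpikyTable",                       # placeholder table
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=100,
    MaxCapacity=4000,
)

autoscaling.put_scaling_policy(
    PolicyName="read-capacity-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/MySpikyTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 50.0,                               # scale early: keep utilization near 50%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```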

Migrate logins, database roles, users and object-level permissions to Amazon RDS for SQL Server using T-SQL

In this post, we explain how to migrate logins, database roles, users, and object-level permissions from an on-premises or Amazon Elastic Compute Cloud (Amazon EC2) SQL Server instance to Amazon Relational Database Service (Amazon RDS) for SQL Server using T-SQL.
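
The post itself uses T-SQL; as a hedged, simplified sketch of the same idea, the following Python script reads server-level logins from the source instance and emits CREATE LOGIN statements to replay on the target. Connection strings are placeholders, and handling of SQL-authenticated passwords, roles, and object-level permissions is left to the post's full scripts.

```python
import pyodbc

# Sketch: enumerate source logins and generate CREATE LOGIN statements.
# The source connection string is a placeholder.
src = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=source-host;DATABASE=master;Trusted_Connection=yes;"
)

rows = src.cursor().execute(
    """
    SELECT name, type_desc
    FROM sys.server_principals
    WHERE type IN ('S', 'U', 'G')   -- SQL logins, Windows logins, Windows groups
      AND name NOT LIKE '##%'       -- skip internal principals
    """
).fetchall()

for name, type_desc in rows:
    if type_desc == "SQL_LOGIN":
        # The post's T-SQL preserves SIDs and password hashes; this is a placeholder.
        print(f"CREATE LOGIN [{name}] WITH PASSWORD = '<set-a-password>';")
    else:
        print(f"CREATE LOGIN [{name}] FROM WINDOWS;")
```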

Amazon RDS for Oracle Transportable Tablespaces using RMAN

In this post, we show you how to use the RMAN XTTS functionality to migrate from an Oracle database hosted on Amazon Elastic Compute Cloud (Amazon EC2) to Amazon RDS for Oracle. Combined with Amazon Elastic File System (Amazon EFS) integration, XTTS can help reduce the complexity of your migration strategy, reduce the number of data and backup copies required (as well as the associated storage consumption), and reduce the application downtime associated with completing the migration of your data.

Model hierarchical automotive component data using Amazon DynamoDB

In this post, we discuss an automotive manufacturing information management use case where we store information about components within a vehicle as well as the hierarchy between the components. For our automotive use case, we use Amazon DynamoDB to deliver transactional queries, such as component attribute lookups. We also show how to use DynamoDB for larger responses, such as a recursive query for all the components in a vehicle. While recursive object relationships can be represented in graph databases and possibly traditional RDBMS (with complex joins), these deeper queries can also be represented in DynamoDB.
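
As a minimal sketch of this pattern (assuming a hypothetical table named VehicleComponents keyed on a vehicle ID partition key and a materialized-path sort key; the post's actual schema may differ), a single query with a sort-key prefix can return an entire component subtree:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table: PK = vehicle ID, SK = materialized component path,
# for example "chassis#suspension#shock_absorber".
table = boto3.resource("dynamodb").Table("VehicleComponents")

# Transactional lookup: attributes of a single component.
item = table.get_item(Key={"PK": "VIN-123", "SK": "chassis#suspension#shock_absorber"})
print(item.get("Item"))

# "Recursive" fetch in one query: every descendant of the suspension assembly
# shares the same sort-key prefix.
subtree = table.query(
    KeyConditionExpression=Key("PK").eq("VIN-123") & Key("SK").begins_with("chassis#suspension#")
)
for component in subtree["Items"]:
    print(component["SK"])
```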

Use the DBMS_CLOUD package in Amazon RDS Custom for Oracle for direct Amazon S3 integration

In this post, we demonstrate how to use the DBMS_CLOUD package to transfer files between S3 buckets and directories in an RDS Custom for Oracle database. We also show how you can access data in Amazon S3 directly using Oracle features such as external tables and hybrid partitioned tables. The features provided by DBMS_CLOUD can vary between Oracle releases, so pay close attention to the steps in the post and make sure you reference DBMS_CLOUD in the Oracle Database 19c documentation. Note that the option discussed in this post is for RDS Custom for Oracle, not RDS for Oracle, which offers its own S3 integration.
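
As a hedged sketch of the file-transfer piece (the credential, directory, bucket, and connection details below are placeholders, and the post covers the credential setup), a DBMS_CLOUD.PUT_OBJECT call can be driven from Python like this:

```python
import oracledb

# Copy a file from an Oracle directory on the RDS Custom for Oracle instance
# to an S3 bucket via DBMS_CLOUD. Connection details are placeholders.
conn = oracledb.connect(user="admin", password="REPLACE_ME", dsn="rds-custom-host:1521/ORCL")

with conn.cursor() as cur:
    cur.execute("""
        BEGIN
          DBMS_CLOUD.PUT_OBJECT(
            credential_name => 'S3_CRED',                  -- created beforehand with DBMS_CLOUD credentials
            object_uri      => 'https://my-bucket.s3.us-east-1.amazonaws.com/exports/data_dump.dmp',
            directory_name  => 'DATA_PUMP_DIR',
            file_name       => 'data_dump.dmp');
        END;""")
```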

Archival solutions for Oracle database workloads in AWS: Part 1

This is a two-part series. In this post, we explain three archival solutions that allow you to archive Oracle data into Amazon Simple Storage Service (Amazon S3). In Part 2 of this series, we explain three archival solutions using native Oracle products and utilities. All of these options allow you to join current Oracle data with archived data.

Archival solutions for Oracle database workloads in AWS: Part 2

This post is a continuation of Archival solutions for Oracle database workloads in AWS: Part 1. Part 1 explains three archival solutions that allow you to archive Oracle data into Amazon Simple Storage Service (Amazon S3). In this post, we explain three archival solutions using native Oracle products and utilities.

Data modeling best practices to unlock the value of your time-series data

Amazon Timestream is a fast, scalable, and serverless time-series database service that makes it easier to store and analyze trillions of events per day. In this post, we guide you through the essential concepts of Timestream and demonstrate how to use them to make critical data modeling decisions. We walk you through how data modeling contributes to query performance and cost-efficient usage. We explore a practical example of modeling video streaming data, showcasing how these concepts are applied and the resulting benefits. Lastly, we provide more best practices that directly or indirectly relate to data modeling.
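
As a minimal sketch of one such modeling choice (placeholder database and table names, not the post's exact schema), the following boto3 snippet writes a video-streaming event with dimensions for the attributes you filter and group by and a multi-measure record for the metrics:

```python
import time
import boto3

# Write one playback event to Timestream as a multi-measure record.
write_client = boto3.client("timestream-write")

write_client.write_records(
    DatabaseName="video_streaming",          # placeholder database
    TableName="playback_events",             # placeholder table
    Records=[{
        "Dimensions": [
            {"Name": "session_id", "Value": "session-42"},
            {"Name": "device_type", "Value": "smart_tv"},
            {"Name": "region", "Value": "eu-west-1"},
        ],
        "MeasureName": "playback_metrics",
        "MeasureValueType": "MULTI",
        "MeasureValues": [
            {"Name": "bitrate_kbps", "Value": "4500", "Type": "BIGINT"},
            {"Name": "rebuffer_ms", "Value": "120", "Type": "BIGINT"},
        ],
        "Time": str(int(time.time() * 1000)),  # milliseconds since epoch
    }],
)
```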

Troubleshoot networking issues during database migration with the AWS DMS diagnostic support AMI

In this post, we introduce the key functionalities, architecture, and configurations of the AWS DMS diagnostic support AMI. Then, we show you how to launch the AMI with the proper networking configuration and AWS Identity and Access Management (IAM) permissions using AWS CloudFormation. Finally, we demonstrate how network latency can result in significant replication lag and how to use the AMI to diagnose the issue.
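
As a hedged sketch (not the post's CloudFormation template), launching the diagnostic AMI amounts to starting an EC2 instance in the same subnet as the replication instance with an instance profile attached; the AMI ID, subnet, security group, and role names below are placeholders you would look up for your Region.

```python
import boto3

# Launch the diagnostic support AMI next to the DMS replication instance so it
# can reach the source and target endpoints. All identifiers are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",                    # placeholder: diagnostic support AMI ID in your Region
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",                # same subnet/VPC as the DMS replication instance
    SecurityGroupIds=["sg-0123456789abcdef0"],          # must allow traffic to the source and target databases
    IamInstanceProfile={"Name": "dms-diagnostic-role"}, # placeholder: grants the required IAM permissions
)
print(response["Instances"][0]["InstanceId"])
```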