AWS Database Blog
Category: Database
Build aggregations for Amazon DynamoDB tables using Amazon DynamoDB Streams
In this post, we discuss how to perform aggregations on a DynamoDB table using Amazon DynamoDB Streams and AWS Lambda. The content includes a reference architecture, a step-by-step guide on enabling DynamoDB Streams for a table, sample code that implements the solution for an example scenario, and an accompanying AWS CloudFormation template for easy deployment and testing.
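To illustrate the pattern, here is a minimal sketch of a Lambda handler that consumes DynamoDB Streams records and maintains a running aggregate. The table, key, and attribute names are assumptions for illustration only; the post's CloudFormation template defines its own schema.

```python
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
# Hypothetical table that holds one aggregate item per customer.
aggregates = dynamodb.Table("OrderAggregates")

def handler(event, context):
    """Consume DynamoDB Streams records and keep a running total per customer."""
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue  # a real implementation would also handle MODIFY and REMOVE
        new_image = record["dynamodb"]["NewImage"]  # stream images use DynamoDB typed JSON
        customer_id = new_image["CustomerId"]["S"]
        amount = Decimal(new_image["Amount"]["N"])
        # ADD is atomic, so concurrent invocations don't lose updates.
        aggregates.update_item(
            Key={"CustomerId": customer_id},
            UpdateExpression="ADD OrderTotal :amt, OrderCount :one",
            ExpressionAttributeValues={":amt": amount, ":one": 1},
        )
```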
Amazon RDS for Oracle Transportable Tablespaces using RMAN
In this post, we show you how to use the RMAN XTTS functionality to migrate from an Oracle database hosted on Amazon Elastic Compute Cloud (Amazon EC2) to Amazon RDS for Oracle. Combined with Amazon Elastic File System (Amazon EFS) integration, XTTS can help reduce the complexity of your migration strategy, reduce the number of data and backup copies required (as well as the associated storage consumption), and reduce the application downtime associated with completing the migration of your data.
Model hierarchical automotive component data using Amazon DynamoDB
In this post, we discuss an automotive manufacturing information management use case where we store information about components within a vehicle as well as the hierarchy between the components. For our automotive use case, we use Amazon DynamoDB to deliver transactional queries, such as component attribute lookups. We also show you how to use DynamoDB for larger responses, such as a recursive query for all the components in a vehicle. While recursive object relationships can be represented in graph databases and, with complex joins, in a traditional RDBMS, these deeper queries can also be represented in DynamoDB.
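One way to represent such a hierarchy in DynamoDB is to give every component item the vehicle ID as its partition key and a path-encoded sort key, so a single Query can return a whole subtree. The sketch below illustrates that idea; the table name, key names, and path format are assumptions, not necessarily the exact design used in the post.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
# Hypothetical table: partition key VehicleId, sort key ComponentPath,
# where the path encodes the hierarchy, e.g. "ENGINE#PISTON#RING".
table = dynamodb.Table("VehicleComponents")

# Transactional lookup: attributes of a single component.
component = table.get_item(
    Key={"VehicleId": "VIN123", "ComponentPath": "ENGINE#PISTON"}
)["Item"]

# "Recursive" retrieval: one Query returns every component under ENGINE
# because the hierarchy is materialized in the sort key.
subtree = table.query(
    KeyConditionExpression=Key("VehicleId").eq("VIN123")
    & Key("ComponentPath").begins_with("ENGINE#")
)["Items"]
```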
Use the DBMS_CLOUD package in Amazon RDS Custom for Oracle for direct Amazon S3 integration
In this post, we demonstrate how to use the DBMS_CLOUD package to transfer files between S3 buckets and directories in an RDS Custom for Oracle database. We also show how you can access data in Amazon S3 directly using Oracle features such as external tables and hybrid partitioned tables. The features provided by DBMS_CLOUD can vary between Oracle releases, so pay close attention to the steps in the post and refer to the DBMS_CLOUD documentation for Oracle Database 19c. To avoid confusion, the option discussed in this post applies to RDS Custom for Oracle, not RDS for Oracle, which offers its own S3 integration.
Archival solutions for Oracle database workloads in AWS: Part 1
This is a two-part series. In this post, we explain three archival solutions that allow you to archive Oracle data into Amazon Simple Storage Service (Amazon S3). In Part 2 of this series, we explain three archival solutions using native Oracle products and utilities. All of these options allow you to join current Oracle data with archived data.
Archival solutions for Oracle database workloads in AWS: Part 2
This post is a continuation of Archival solutions for Oracle database workloads in AWS: Part 1. Part 1 explains three archival solutions that allow you to archive Oracle data into Amazon Simple Storage Service (Amazon S3). In this post, we explain three archival solutions using native Oracle products and utilities.
Data Modeling Best Practices to Unlock the Value of your Time-series Data
Amazon Timestream is a fast, scalable, and serverless time-series database service that makes it easier to store and analyze trillions of events per day. In this post, we guide you through the essential concepts of Timestream and demonstrate how to use them to make critical data modeling decisions. We walk you through how data modeling decisions affect query performance and cost. We explore a practical example of modeling video streaming data, showcasing how these concepts are applied and the resulting benefits. Lastly, we provide additional best practices that relate, directly or indirectly, to data modeling.
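As a flavor of the modeling decisions involved, the sketch below writes a record for the video streaming example, separating dimensions (series metadata) from the measure (the value analyzed over time). The database, table, dimension, and measure names are illustrative assumptions, not the post's exact schema.

```python
import time

import boto3

write_client = boto3.client("timestream-write")

record = {
    # Dimensions identify the time series being recorded.
    "Dimensions": [
        {"Name": "session_id", "Value": "abc-123"},
        {"Name": "device_type", "Value": "smart_tv"},
        {"Name": "region", "Value": "us-east-1"},
    ],
    # The measure is the value you aggregate and analyze over time.
    "MeasureName": "rebuffer_ratio",
    "MeasureValue": "0.03",
    "MeasureValueType": "DOUBLE",
    "Time": str(int(time.time() * 1000)),  # milliseconds since the epoch
}

write_client.write_records(
    DatabaseName="video_streaming",
    TableName="playback_metrics",
    Records=[record],
)
```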
Troubleshoot networking issues during database migration with the AWS DMS diagnostic support AMI
In this post, we introduce the key functionalities, architecture, and configurations of the AWS DMS diagnostic support AMI. Then, we show you how to launch the AMI with the proper networking configuration and AWS Identity and Access Management (IAM) permissions using AWS CloudFormation. Lastly, we demonstrate an example of how network latency results in significant replication lag and how to use the AMI to diagnose the issue.
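Before turning to the diagnostic AMI, you can confirm the replication-lag symptom from the task's CloudWatch metrics. The sketch below is illustrative only: the replication instance and task identifiers are placeholders, and it observes the lag rather than diagnosing its network cause.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

# CDCLatencySource reports how many seconds the task is behind the source
# during ongoing replication.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/DMS",
    MetricName="CDCLatencySource",
    Dimensions=[
        {"Name": "ReplicationInstanceIdentifier", "Value": "my-replication-instance"},
        {"Name": "ReplicationTaskIdentifier", "Value": "my-task-resource-id"},
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```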
Improve availability of Amazon Neptune during engine upgrade using blue/green deployment
Amazon Neptune is a fully managed graph database service built for the cloud that makes it easier to build and run graph applications that work with highly connected datasets. Neptune provides built-in security, continuous backups, serverless compute, and integrations with other AWS services. Neptune supports in-place upgrades of cluster and database instances. You can upgrade a Neptune cluster either manually or automatically (during the database maintenance window).
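For reference, a manual in-place upgrade (the baseline that the blue/green approach in this post improves on) can be requested with a single API call. The cluster identifier and target engine version below are placeholders, so treat this as a sketch only.

```python
import boto3

neptune = boto3.client("neptune")

# Request an in-place engine upgrade for the cluster. Without
# ApplyImmediately, the change waits for the next maintenance window.
neptune.modify_db_cluster(
    DBClusterIdentifier="my-neptune-cluster",
    EngineVersion="1.3.1.0",
    ApplyImmediately=True,
)
```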
Introducing the Advanced JDBC Wrapper Driver for Amazon Aurora
Modern applications are expected to be scalable and resilient. At the top of this list is scalability, which, depending on the size of the application workload, can mean the ability to handle millions of users on demand. For stateful applications such as ecommerce, financial services, and games, this means having highly available databases. With the release of Amazon Aurora in 2015, customers could run relational databases in an Aurora cluster comprising one writer and up to 15 low-latency reader nodes, which enables applications to scale reads significantly. However, as with any database supporting multiple instances, developers have had to build complex application logic to deal with events such as switchover or failover.