Category: Amazon RDS

AWS Database Migration Service

Do you currently store relational data in an on-premises Oracle, SQL Server, MySQL, MariaDB, or PostgreSQL database? Would you like to move it to the AWS cloud with virtually no downtime so that you can take advantage of the scale, operational efficiency, and the multitude of data storage options that are available to you?

If so, the new AWS Database Migration Service (DMS) is for you! Since we first announced it at AWS re:Invent last fall, our customers have used it to migrate over 1,000 on-premises databases to AWS. You can move live, terabyte-scale databases to the cloud, with options to stick with your existing database platform or to upgrade to a new one that better matches your requirements. If you are migrating to a new database platform as part of your move to the cloud, the AWS Schema Conversion Tool will convert your schemas and stored procedures for use on the new platform.

The AWS Database Migration Service works by setting up and then managing a replication instance on AWS. This instance unloads data from the source database and loads it into the destination database. It can perform a one-time migration, or it can follow the initial load with ongoing replication to support a migration with minimal downtime. Along the way, DMS handles many of the complex details associated with migration, including data type transformation and conversion from one database platform to another (Oracle to Aurora, for example). The service also monitors the replication and the health of the instance, notifies you if something goes wrong, and automatically provisions a replacement instance if necessary.

The service supports many different migration scenarios and networking options. One of the endpoints must always be in AWS; the other can be on-premises, running on an EC2 instance, or running on an RDS database instance. The source and destination can reside within the same Virtual Private Cloud (VPC) or in two separate VPCs (if you are migrating from one cloud database to another). You can connect to an on-premises database via the public Internet or via AWS Direct Connect.

Migrating a Database
You can set up your first migration with a couple of clicks! You create the target database, migrate the database schema, set up the data replication process, and initiate the migration. After the target database has caught up with the source, you simply switch to using it in your production environment.

I start by opening up the AWS Database Migration Service Console (in the Database section of the AWS Management Console as DMS) and clicking on Create migration.

The Console provides me with an overview of the migration process:

I click on Next and provide the parameters that are needed to create my replication instance:

For this blog post, I selected one of my existing VPCs and unchecked Publicly accessible. My colleagues had already set me up with an EC2 instance to represent my “on-premises” database.

After the replication instance has been created, I specify my source and target database endpoints and then click on Run test to make sure that the endpoints are accessible (truth be told, I spent some time adjusting my security groups in order to make the tests pass):

Now I create the actual migration task. I can (per the Migration type) migrate existing data, migrate and then replicate, or replicate going forward:

I could have clicked on Task Settings to set some other options (LOBs are Large Objects):

The migration task is ready, and will begin as soon as I select it and click on Start/Resume:

I can watch for progress, and then inspect the Table statistics to see what happened (these were test tables and the results are not very exciting):

At this point I would do some sanity checks and then point my application to the new endpoint. I could also have chosen to perform an ongoing replication.

The AWS Database Migration Service offers many options and I have barely scratched the surface. You can, for example, choose to migrate only certain tables. You can also create several different types of replication tasks and activate them at different times.  I highly recommend you read the DMS documentation as it does a great job of guiding you through your first migration.

If you need to migrate a collection of databases, you can automate your work using the AWS Command Line Interface (CLI) or the Database Migration Service API.
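For example, a migration task might be assembled like this. This is a minimal sketch using the parameter shape of boto3's DMS client; the ARNs and identifiers are placeholders, not real values:

```python
import json

# Build the parameters for a DMS replication task. The table-mapping JSON
# below selects every table in every schema; narrow the "%" wildcards to
# migrate only certain tables.
def build_migration_task(instance_arn, source_arn, target_arn):
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }
    return {
        "ReplicationTaskIdentifier": "my-first-migration",
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": "full-load-and-cdc",  # migrate existing data, then replicate
        "TableMappings": json.dumps(table_mappings),
    }

# With boto3 installed and AWS credentials configured, the task could then
# be created and started with:
#   dms = boto3.client("dms")
#   dms.create_replication_task(**build_migration_task(...))
#   dms.start_replication_task(ReplicationTaskArn=...,
#                              StartReplicationTaskType="start-replication")

params = build_migration_task("arn:aws:dms:...:rep:EXAMPLE",
                              "arn:aws:dms:...:endpoint:SRC",
                              "arn:aws:dms:...:endpoint:TGT")
print(params["MigrationType"])
```

The other migration types are "full-load" (migrate existing data only) and "cdc" (replicate going forward only), matching the three choices in the Console.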

Price and Availability
The AWS Database Migration Service is available in the US East (Northern Virginia), US West (Oregon), US West (Northern California), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore),  and Asia Pacific (Sydney) Regions and you can start using it today (we plan to add support for other Regions in the coming months).

Pricing is based on the compute resources used during the migration process, with a charge for longer-term storage of logs. See the Database Migration Service Pricing page for more information.



Amazon RDS Update – Support for MySQL 5.7

You can now launch Amazon RDS database instances that run MySQL 5.7.

This release of MySQL offers a number of performance, scalability, and security enhancements. Here are some of the most important and relevant ones:

  • A Performance Schema that provides access to new and improved performance metrics.
  • Optimizer improvements for better parsing, EXPLAINing, and querying.
  • GIS with native InnoDB spatial indexes and integration with Boost.Geometry (read MySQL 5.7 and GIS, an Example and Making Use of Boost Geometry in MySQL GIS to learn more).
  • Improved parallel replication using a new logical clock mode (read Multi-threaded Replication Performance in MySQL 5.7 to learn more).
  • Improved InnoDB scalability and temporary table performance. Improved tablespace discovery during crash recovery, and dynamic buffer pool resizing.

Read the MySQL 5.7 Release Notes for more information!

Launching a Database Instance
As always, you can launch these instances from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, the RDS API (CreateDBInstance), or via a CloudFormation template. Here’s how you launch a database instance from the Console:

After I launched my instance, I edited its security group to include the public IP address of one of my EC2 instances. Then I connected to it in the usual way:

Then I took a quick look at the new Performance Schema:
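A query along these lines is a reasonable starting point for exploring the Performance Schema: it summarizes the most expensive statement digests. The connection details are placeholders, and running it requires a MySQL driver such as PyMySQL (an assumption; any driver will do):

```python
# Summarize the slowest statement digests from the MySQL Performance Schema.
# SUM_TIMER_WAIT is reported in picoseconds, so divide by 1e12 for seconds.
QUERY = """
SELECT digest_text,
       count_star,
       sum_timer_wait / 1e12 AS total_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_timer_wait DESC
LIMIT 10;
"""

# import pymysql
# conn = pymysql.connect(host="my-instance.xxxxxx.us-east-1.rds.amazonaws.com",
#                        user="admin", password="...", database="mydb")
# with conn.cursor() as cur:
#     cur.execute(QUERY)
#     for row in cur.fetchall():
#         print(row)
```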

Time was kind of tight and my MySQL is kind of rusty so I didn’t have a chance to exercise any of the new features. I’ll leave that up to you!

Available Now
Amazon RDS for MySQL 5.7 is available in the US East (Northern Virginia), US West (Northern California), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), China (Beijing), South America (São Paulo), and AWS GovCloud (US) Regions.


PS – Many of you have asked about an in-place upgrade from version 5.6 to version 5.7. I checked with the development team and they confirmed that this is in the works. They did want to make version 5.7 available as quickly as possible, and recommended two upgrade options that you can use now: dump and reload, or read replicas.
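The dump-and-reload path mentioned above amounts to piping a logical dump of the 5.6 instance into the new 5.7 instance. A sketch of the command it builds; the hostnames are placeholders and the flags shown are one reasonable choice, not a prescription:

```python
# Build a dump-and-reload pipeline between two RDS MySQL instances.
# --single-transaction takes a consistent snapshot of InnoDB tables
# without locking them for the duration of the dump.
def dump_and_reload_command(source_host, target_host, database, user):
    dump = (f"mysqldump -h {source_host} -u {user} -p "
            f"--single-transaction {database}")
    load = f"mysql -h {target_host} -u {user} -p {database}"
    return f"{dump} | {load}"

print(dump_and_reload_command("mydb-56.xxxxxx.rds.amazonaws.com",
                              "mydb-57.xxxxxx.rds.amazonaws.com",
                              "mydb", "admin"))
```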


Amazon RDS Update – Share Encrypted Snapshots, Encrypt Existing Instances

We want to make it as easy as possible for you to secure your AWS environment. Some of our more recent announcements in this area include encrypted EBS boot volumes, encryption at rest for Amazon Aurora, and support for AWS Key Management Service (KMS) across several different services.

Today we are giving you some additional options for data stored in Amazon Relational Database Service (RDS). You can now share encrypted database snapshots with other AWS accounts. You can also add encryption to a previously unencrypted database instance.

Sharing Encrypted Snapshots
When you are using encryption at rest for a database instance, automatic and manual database snapshots of the instance are also encrypted. Up until now, encrypted snapshots were private to a single AWS account and could not be shared. Today we are giving you the ability to share encrypted snapshots with up to 20 other AWS accounts. You can do this from the AWS Management Console, AWS Command Line Interface (CLI), or via the RDS API. You can share encrypted snapshots within an AWS region, but you cannot share them publicly. As is the case with the existing sharing feature, today’s release applies to manual snapshots.

To share an encrypted snapshot, select it and click on Share Snapshot. This will open up the Manage Snapshot Permissions page. Enter one or more account IDs (click on Add after each one) and click on Save when you have entered them all:

The accounts could be owned by your organization (perhaps you have separate accounts for dev, test, staging, and production) or by your business partners. Backing up your mission-critical databases to a separate AWS account is a best practice, and one that you can implement using this new feature while also gaining the benefit of encryption at rest.

After you click on Save, the other accounts have access to the shared snapshots. The easiest way to locate them is to visit the RDS Console and filter the list using Shared with Me:

The snapshot can be used to create a new RDS database instance. To learn more, read about Sharing a Database Snapshot.
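The same sharing operation is available through the API. A minimal sketch of the parameters, using the shape of boto3's RDS client; the snapshot identifier and account IDs are placeholders:

```python
# Share a manual snapshot with specific accounts by adding them to the
# snapshot's "restore" attribute.
def share_snapshot_params(snapshot_id, account_ids):
    return {
        "DBSnapshotIdentifier": snapshot_id,
        "AttributeName": "restore",
        "ValuesToAdd": list(account_ids),
    }

# With boto3 installed and credentials configured:
#   rds = boto3.client("rds")
#   rds.modify_db_snapshot_attribute(
#       **share_snapshot_params("mydb-final", ["123456789012"]))
#
# For an encrypted snapshot, the target accounts also need permission to
# use the KMS key that encrypted it, which you grant in the key's policy.

print(share_snapshot_params("mydb-final", ["123456789012"]))
```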

Adding Encryption to Existing Database Instances
You can now add encryption at rest using KMS keys to a previously unencrypted database instance. This is a simple, multi-step process:

  1. Create a snapshot of the unencrypted database instance.
  2. Copy the snapshot to a new, encrypted snapshot. Enable encryption and specify the desired KMS key as you do so:
  3. Restore the encrypted snapshot to a new database instance:
  4. Update your application to refer to the endpoint of the new database instance:

And that’s all you need to do! You can use a similar procedure to change encryption keys for existing database instances. To learn more, read about Copying a Database Snapshot.
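The four steps can be sketched as a sequence of RDS API calls. This is an illustrative outline using boto3 method names; the identifiers and KMS key ARN are placeholders:

```python
# The encrypt-an-existing-instance procedure as (method, kwargs) pairs;
# each pair maps to a call on boto3.client("rds"). The final step, pointing
# your application at the new endpoint, happens in your own configuration.
def encrypt_existing_instance_calls(instance_id, kms_key_id):
    snapshot_id = instance_id + "-unencrypted"
    encrypted_id = instance_id + "-encrypted"
    return [
        ("create_db_snapshot", {
            "DBInstanceIdentifier": instance_id,
            "DBSnapshotIdentifier": snapshot_id}),
        ("copy_db_snapshot", {
            "SourceDBSnapshotIdentifier": snapshot_id,
            "TargetDBSnapshotIdentifier": encrypted_id,
            "KmsKeyId": kms_key_id}),  # copying with a key encrypts the copy
        ("restore_db_instance_from_db_snapshot", {
            "DBInstanceIdentifier": instance_id + "-enc",
            "DBSnapshotIdentifier": encrypted_id}),
    ]

for method, kwargs in encrypt_existing_instance_calls(
        "mydb", "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE"):
    print(method, sorted(kwargs))
```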



New – Enhanced Monitoring for Amazon RDS (MySQL 5.6, MariaDB, and Aurora)

Amazon Relational Database Service (RDS) makes it easy for you to set up, run, scale, and maintain a relational database. As is often the case with the high-level AWS model, we take care of all of the details in order to give you the time to focus on your application and your business.

Enhanced Monitoring
Advanced RDS users have asked us for more insight into the inner workings of the service and we are happy to oblige with a new Enhanced Monitoring feature!

After you enable this feature for a database instance, you get access to over 50 new CPU, memory, file system, and disk I/O metrics. You can enable the feature on a per-instance basis and choose the granularity (all the way down to 1 second). Here is the list of available metrics:

And here are some of the metrics for one of my database instances:

You can enable this feature for an existing database instance by selecting the instance in the RDS Console and then choosing Modify from the Instance Options menu:

Turn the feature on, pick an IAM role, select the desired granularity, check Apply Immediately, and then click on Continue.

The Enhanced Metrics are ingested into CloudWatch Logs and can be published to Amazon CloudWatch. To do this you will need to set up a metrics extraction filter; read about Monitoring Logs to learn more. Once the metrics are stored in CloudWatch Logs, they can also be processed by third-party analysis and monitoring tools.
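An extraction filter along these lines pulls one value out of the JSON log entries and publishes it as a custom CloudWatch metric. This is a hedged sketch: the log group name (RDSOSMetrics) and the JSON field path are assumptions based on the Enhanced Monitoring log format, so check them against your own log entries:

```python
# Parameters for a CloudWatch Logs metric filter that extracts total CPU
# utilization from Enhanced Monitoring log entries into a custom metric.
filter_params = {
    "logGroupName": "RDSOSMetrics",                  # assumed log group name
    "filterName": "rds-cpu-total",
    "filterPattern": "{ $.cpuUtilization.total = * }",  # match entries with the field
    "metricTransformations": [{
        "metricName": "CPUTotal",
        "metricNamespace": "Custom/RDS",
        "metricValue": "$.cpuUtilization.total",     # value to publish
    }],
}

# With boto3 installed and credentials configured:
#   logs = boto3.client("logs")
#   logs.put_metric_filter(**filter_params)

print(filter_params["filterPattern"])
```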

Available Now
The new Enhanced Metrics feature is available today in the US East (Northern Virginia), US West (Northern California), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) regions. It works for MySQL 5.6, MariaDB, and Amazon Aurora, on all instance types except t1.micro and m1.small.

You will pay the usual ingestion and data transfer charges for CloudWatch Logs (see the CloudWatch Logs Pricing page for more info).


Amazon RDS Update – Cross-Account Snapshot Sharing

Today I would like to tell you about a new cross-account snapshot sharing feature for Amazon Relational Database Service (RDS). You can share the snapshots with specific AWS accounts or you can make them public.

Cross-Account Snapshot Sharing
I often create snapshot backups as part of my RDS demos:

The snapshots are easy to create and can be restored to a fresh RDS database instance with a couple of clicks.

Today’s big news is that you can now share unencrypted MySQL, Oracle, SQL Server, and PostgreSQL snapshots with other AWS accounts. If you, like many sophisticated AWS customers, use separate AWS accounts for development, testing, and production, you can now share snapshots between AWS accounts in a controlled fashion. If a late-breaking bug is discovered in a production system, you can create a database snapshot and then share it with select developers so that they can diagnose the problem without needing access to the production account or system.

Each snapshot can be shared with up to 20 other accounts (we can raise this limit for your account if necessary; just ask). You can also mark snapshots as public so that any RDS user can restore a database containing your data. This is a great way to share data sets and research results!

Here is how you share a snapshot with another AWS account using the RDS Console (you can also do this from the command line or the RDS API):

Here’s how a snapshot appears in the accounts that it is shared with (again, this functionality is also accessible from the command line and the RDS API):

Here is how you create a public snapshot:

Snapshot sharing is available in all AWS regions, with the exception of the China (Beijing) region and AWS GovCloud (US).



Amazon RDS Update – Oracle + Brazil + Larger Volumes + More

I love to demo Amazon Relational Database Service (RDS) to live audiences! They always appreciate the fact that I can launch a MySQL, Oracle, SQL Server, PostgreSQL, or Amazon Aurora database instance with a couple of clicks.

Today I would like to bring you up to date on a bunch of improvements that we have recently made to the service. I was not able to blog about these at launch time so this might not be news, but I did want to make sure that you didn’t miss anything important. Here’s a quick summary of what I want to share with you:

  • The t2.large database instance type is now available.
  • Support for Oracle Database 12c and the latest patches is now available.
  • R3 and T2 database instances can now run Oracle.
  • The R3 database instances are now available in Brazil.
  • Database instances running MySQL, Oracle, SQL Server, and PostgreSQL can now be provisioned with even more storage (4 – 6 TB, depending on the database engine).
  • Tags on database instances are now copied to snapshots, and from there to instances restored from the snapshots.
  • You now have access to a license-included offering for SQL Server Enterprise Edition.

Availability of t2.large Database Instances
The T2 instances provide you with a baseline level of CPU performance and the ability to burst above the baseline. They are designed for workloads that do not need the entire CPU on a full or consistent basis, and are priced lower than comparable M3 DB instances.

In addition to the existing instance types (db.t2.micro, db.t2.small, and db.t2.medium), you can now run all supported database engines on the new db.t2.large instance type. This instance type offers twice as much memory and 50% more CPU credits per hour than the db.t2.medium.  It is available in the US East (Northern Virginia), US West (Northern California), US West (Oregon), South America (São Paulo), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), and China (Beijing) regions.

The t2.large also supports encryption at rest. You can set this up on the Configure Advanced Settings page:

Support for Oracle
RDS for Oracle now supports Oracle Database 12c. You can use the new In-Memory option to store a subset of your data in an in-memory column format that is optimized for performance. This is a great fit for the newly available R3 database instances described in the next section.

As part of this update, we also applied the April 2015 Oracle Patch Set Updates (PSU) for Oracle Database 11g and 12c and enabled access to the DBMS_REPAIR package. We also improved the integration with AWS CloudHSM; you can now access a single CloudHSM partition from multiple RDS accounts and you can store TDE master keys for multiple RDS Oracle databases on a single CloudHSM partition.

You now have access to the following versions of Oracle through RDS:


Oracle on R3 and T2 Database Instances
The R3 instances are optimized for memory-intensive applications and have the lowest cost per GiB of RAM of any DB instance. The instances deliver high sustained memory bandwidth and offer lower network latency, all at prices that are up to 28% lower than comparable M2 DB instances.

You can now run Oracle Database on the R3 and T2 instances:

R3 in Brazil
The R3 database instances are now available in the South America (São Paulo) Region, and can be used with the MySQL, Oracle, SQL Server, and PostgreSQL database engines.

Provision Even More Storage
Earlier this year we increased the amount of storage that you can provision when you use Provisioned IOPS or General Purpose (SSD) storage for an RDS database instance. Here are the new limits:

  • MySQL, PostgreSQL, and Oracle database instances can now be provisioned with up to 6 TB of storage.
  • SQL Server database instances can now be provisioned with up to 4 TB of storage and up to 20,000 IOPS (double the former limit).

Instance Tags to Snapshots, and Back
If you add tags to your database instances, create snapshots of those instances, and then use the snapshots to create fresh instances, the tags now appear on the new instances.

SQL Server Enterprise, License Included
You can now run SQL Server Enterprise Edition as a License Included offering on RDS. In other words, you do not need to purchase a separate license for the product; the pricing includes the software license, the underlying hardware resources, and the RDS management capabilities.

Available Now
These options are available now (some of them have been around for a month or two) and you can start using them today!



New – Simplified Reserved Instance Options for Amazon RDS

Reserved Instances have been a part of the AWS pricing model for quite some time. When you purchase a Reserved Instance (RI), you receive a significant discount along with a capacity reservation.

You don’t need to make any code or administrative changes in order to benefit from Reserved Instances. We’ll automatically apply Reserved Instance rates first when we compute your bill in order to minimize your costs.

New Payment Options
We are simplifying the Reserved Instance options that are available to users of Amazon Relational Database Service (RDS). The new model is payment-based and provides a single type of Reserved Instance, with three payment options:

  • No Upfront – No upfront payment is required. This option provides a substantial discount (typically about 30%) over the one year term of the RI when compared to On-Demand.
  • Partial Upfront – With a balance of payments between upfront and hourly, this option replaces the previous Heavy Utilization RI and provides a high discount (typically about 60% for a three year term) over the course of a one or three year term.
  • All Upfront – This option allows you to pay for the entire RI term (one or three years) with one upfront payment, and provides the best discount (typically about 63% for a three year term).
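The three options trade upfront payment against hourly rate. A rough comparison for a hypothetical instance; the on-demand rate and the upfront/hourly figures are made-up illustrative numbers chosen to land near the typical discounts above, so consult the RDS Pricing page for real rates:

```python
# Compare the effective hourly cost of the three RI payment options.
HOURS_PER_YEAR = 8766  # 365.25 days x 24 hours

def effective_hourly(upfront, hourly, years):
    """Amortize the upfront payment over the term and add the hourly rate."""
    return upfront / (years * HOURS_PER_YEAR) + hourly

on_demand = 0.100  # $/hour, hypothetical
options = {
    "No Upfront (1 yr)":      effective_hourly(0.0,   0.070, 1),  # ~30% off
    "Partial Upfront (3 yr)": effective_hourly(500.0, 0.021, 3),  # ~60% off
    "All Upfront (3 yr)":     effective_hourly(973.0, 0.0,   3),  # ~63% off
}
for name, rate in options.items():
    discount = (1 - rate / on_demand) * 100
    print(f"{name}: ${rate:.3f}/hr ({discount:.0f}% vs On-Demand)")
```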

Purchase Now
These options are available for all of the database engines supported by RDS (MySQL, PostgreSQL, Oracle and SQL Server) in the US East (Northern Virginia), US West (Oregon), US West (Northern California), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and South America (São Paulo) regions. They are not available for SQL Server License Included.

As part of this simplification, you will no longer be able to purchase the Light or Medium Utilization Reserved Instances on or after August 15, 2015.

More Info
For complete information on these new cost-saving options, please take a look at the RDS Pricing page.




Look Before You Leap – The Coming Leap Second and AWS (Updated)

My colleague Mingxue Zhao sent me a guest post designed to make sure that you are aware of an important time / clock issue.

Note: This post was first published on May 18, 2015. We made some important additions and corrections on May 25, 2015.

— Jeff;

The International Earth Rotation and Reference Systems Service (IERS) recently announced that an extra second will be injected into civil time at the end of June 30th, 2015. This means that the last minute of June 30th, 2015 will have 61 seconds. If a clock is synchronized to standard civil time, it should show an extra second, 23:59:60, between 23:59:59 and 00:00:00 on that day. This extra second is called a leap second. There have been 25 such leap seconds since 1972. The last one took place on June 30th, 2012.

Clocks in IT systems do not always follow the standard above and can behave in many different ways. For example:

  • Some Linux kernels implement a one-second backwards jump instead of the extra “:60” second, repeating the 59th second (see the article, Resolve Leap Second Issues in Red Hat Enterprise Linux for more information).
  • Windows time servers ignore the leap second signal and will sync to the correct time after the leap second (see How the Windows Time Service Treats a Leap Second for more information).
  • Some organizations, including Amazon Web Services, plan to spread the extra second over many hours surrounding the leap second by making every second slightly longer.
  • If a clock doesn’t connect to a time synchronization system, it drifts on its own and will not implement any leap second or an adjustment for it.

If you want to know whether your applications and systems can properly handle the leap second, contact your providers. If you run time-sensitive workloads and need to know how AWS clocks will behave, read this document carefully. In general, there are three affected parts:

  • The AWS Management Console and backend systems
  • Amazon EC2 instances
  • Other AWS managed resources

For more information about comparing AWS clocks to UTC, see the AWS Adjusted Time section of this post.

AWS Management Console and Backend Systems
The AWS Management Console and backend systems will NOT implement the leap second. Instead, we will spread the one extra second over a 24-hour period surrounding the leap second by making each second slightly longer. During these 24 hours, AWS clocks may be up to 0.5 second behind or ahead of the standard civil time (see the AWS Adjusted Time section for more information).

You can see adjusted times in consoles (including resource creation timestamps), metering records, billing records, Amazon CloudFront logs, and AWS CloudTrail logs. You will not see a “:60” second in these places and your usage will be billed according to the adjusted time.

Amazon EC2 Instances
Each EC2 instance has its own clock and is fully under your control; AWS does not manage instance clocks. An instance clock can have any of the behaviors listed at the beginning of this post. Contact your OS provider to understand the expected behavior of your operating system.

If you use the Amazon Linux AMI, your instance will implement the one-second backwards jump and will see “23:59:59” twice.

If you use SUSE Linux Enterprise Server, take a look at Fixes and Workaround to Avoid Issues Caused by Leap Second 2015.

Other AWS Managed Resources
Other AWS resources may also have their own clocks. Unlike EC2 instances, these resources are fully or partially managed by AWS.

The following resources will implement the one-second backwards jump and will see “23:59:59” twice:

  • Amazon CloudSearch clusters
  • Amazon EC2 Container Service instances
  • Amazon EMR Clusters
  • Amazon RDS instances
  • Amazon Redshift instances

To enable time synchronization on EMR clusters, your VPC has to allow access to NTP. Make sure that your EMR clusters have access to the Internet, and that your security groups and network ACLs allow outbound UDP traffic on port 123.

AWS Adjusted Time
This section provides specific details on how clocks will behave in the AWS Management Console and backend systems.

Starting at 12:00:00 PM on June 30th, 2015, we will slow down AWS clocks by 1/86400. Every second on AWS clocks will take 1 + 1/86400 seconds of “real” time, until 12:00:00 PM on July 1st, 2015, by which point AWS clocks will have fallen behind by a full second. Meanwhile, the standard civil time (UTC) will implement the leap second at the end of June 30th, 2015 and fall behind by a full second, too. Therefore, at 12:00:00 PM on July 1st, 2015, AWS clocks will be synchronized to UTC again. The table below illustrates these changes.

| UTC | AWS Adjusted Clock | UTC - AWS | Notes |
|---|---|---|---|
| 11:59:59 AM June 30th, 2015 | 11:59:59 AM June 30th, 2015 | +0 | AWS clocks are synchronized to UTC. |
| 12:00:00 PM | 12:00:00 PM | +0 | |
| 12:00:01 | | | Each AWS second is 1/86400 longer, so AWS clocks fall behind UTC. The gap gradually increases to up to 1/2 second. |
| | 12:00:01 | +1/86400 | |
| | 12:00:02 | +2/86400 | |
| … | … | … | |
| | 23:59:59 | +43199/86400 | |
| 23:59:60 | | | Leap second injected into UTC. |
| 00:00:00 AM July 1st, 2015 | | -1/2 | AWS clocks are now 1/2 second ahead of UTC. |
| | 00:00:00 AM July 1st, 2015 | | AWS clocks keep falling behind and the gap with UTC shrinks gradually. |
| | 00:00:01 | -43199/86400 | |
| | 00:00:02 | -43198/86400 | |
| … | … | … | |
| | 11:59:59 AM | -1/86400 | |
| 11:59:59 AM | | | |
| 12:00:00 PM July 1st, 2015 | 12:00:00 PM July 1st, 2015 | +0 | The gap shrinks to zero. AWS clocks synchronize to UTC again. |
| 12:00:01 | 12:00:01 | +0 | |
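The adjustment can be checked with a little arithmetic. A short, self-contained sketch that reproduces the gaps shown in the table, measuring time as seconds elapsed since 12:00:00 PM on June 30th:

```python
# Gap between UTC and the AWS adjusted clock during the 24-hour smear.
# t_aws is seconds shown elapsed on the AWS clock since noon June 30th.
# Each AWS second consumes 1 + 1/86400 real (SI) seconds, and UTC falls
# back by one second once the leap second (real seconds 43200-43201) ends.
def utc_minus_aws(t_aws):
    real = t_aws * (1 + 1 / 86400)             # real seconds elapsed
    utc = real - 1 if real >= 43201 else real  # UTC after absorbing the leap second
    return utc - t_aws

assert abs(utc_minus_aws(0)) < 1e-9                      # in sync at noon June 30th
assert abs(utc_minus_aws(43199) - 43199 / 86400) < 1e-9  # nearly 1/2 s behind UTC
assert abs(utc_minus_aws(43201) + 43199 / 86400) < 1e-9  # nearly 1/2 s ahead after the leap
assert abs(utc_minus_aws(86400)) < 1e-9                  # back in sync at noon July 1st
print("AWS adjusted clock re-synchronizes after 86400 AWS seconds")
```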

If you have any questions about this upcoming event, please contact AWS Support or post in the EC2 Forum.

Mingxue Zhao, Senior Product Manager

Amazon RDS Update – Oracle Database 12c Now Available

I’ve got good news for users of Amazon Relational Database Service (RDS). You can now launch Database Instances that run Oracle Database 12c.

This version of Oracle Database introduces over 500 new features! Here are a few that I found interesting:

Data Redaction – You can mask sensitive data fields such as SSNs or credit card numbers to hide them in query results. Read Introduction to Oracle Data Redaction to learn more.

Adaptive Query Optimization – This feature allows the query optimizer to adjust execution plans based on statistics gathered at statement execution time, improving query performance. Read the Optimizer with Oracle Database 12c white paper to learn more.

Inline PL/SQL Functions and Procedures – You can now define stored procedures and functions in the WITH clause and use these inline objects in your queries. Read about With Clause Enhancements in Oracle Database 12c to learn more.

Top-N Queries – You can now retrieve the top or bottom N rows from an ordered set and more easily page through the results. See the blog post, Row Limiting Clause for Top-N Queries in Oracle Database 12c for some helpful examples.

APEX Update – You can take advantage of version 4.2.6 of Oracle Application Express. Read the Application Express Release Notes to learn about what’s new and read the APEX Options to see how to get started.

Launch an Instance Today
You can launch an RDS Database Instance running Oracle Database 12c today using the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, the CreateDBInstance API function, or an AWS CloudFormation template. You can read the Oracle Concepts section of the RDS documentation to learn about parameter changes, APEX option changes, and other information on how to use this new release of Oracle.



Amazon RDS Update – Data at Rest Encryption using AWS KMS Keys

You can now encrypt your Amazon RDS for SQL Server and Amazon RDS for Oracle databases using keys that you manage through AWS Key Management Service (KMS) (this feature was already available for Amazon RDS for MySQL and Amazon RDS for PostgreSQL).

The encryption applies to data at rest on the underlying storage for the database instance, as well as to automated backups, read replicas, and snapshots. It is applied transparently and you don’t need to make any changes to your application. You can enable encryption and choose your keys (or create new ones) when you create a new database instance:
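The same options are available when creating an instance through the API. A hedged sketch using the parameter shape of boto3's RDS client; the identifiers, instance class, and key ARN are placeholders:

```python
# Parameters for an encrypted SQL Server instance: StorageEncrypted turns
# on encryption at rest, and KmsKeyId selects the KMS key to use (omit it
# to use the default RDS key for your account).
params = {
    "DBInstanceIdentifier": "mydb-encrypted",
    "Engine": "sqlserver-se",
    "DBInstanceClass": "db.m3.large",
    "AllocatedStorage": 100,
    "MasterUsername": "admin",
    "MasterUserPassword": "REPLACE_ME",
    "StorageEncrypted": True,
    "KmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE",
}

# With boto3 installed and credentials configured:
#   rds = boto3.client("rds")
#   rds.create_db_instance(**params)

print(sorted(params))
```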

Amazon RDS encryption can be used concurrently with the Transparent Data Encryption (TDE) option that is already available for Oracle and SQL Server.

To learn more about the use of KMS with RDS, read Encrypting Amazon RDS Resources.