Q: What is Amazon Aurora?

Amazon Aurora is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora with MySQL-compatibility delivers up to five times the performance of MySQL without requiring any changes to most MySQL applications. Amazon RDS manages your Amazon Aurora database, handling time-consuming tasks such as provisioning, patching, backup, recovery, failure detection and repair. You pay a simple monthly charge for each Amazon Aurora database instance you use. There are no upfront costs or long-term commitments required. Amazon Aurora with PostgreSQL compatibility is now available in preview. FAQs on PostgreSQL compatibility for Amazon Aurora are available here.

Q: What does "MySQL-compatible" mean?

It means that most of the code, applications, drivers and tools you already use today with your MySQL databases can be used with Aurora with little or no change. The Amazon Aurora database engine is designed to be wire-compatible with MySQL 5.6 using the InnoDB storage engine. Certain MySQL features like the MyISAM storage engine are not available with Amazon Aurora.
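
As an illustration of that wire compatibility, a standard MySQL client library can connect to an Aurora endpoint without modification. The sketch below uses Python's pymysql driver; the endpoint, credentials, and database name are placeholders.

```python
import pymysql

# Connect to an Aurora cluster exactly as you would to MySQL 5.6.
# The endpoint, user, password, and database below are placeholders.
connection = pymysql.connect(
    host="mydb-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    port=3306,                 # Aurora listens on the standard MySQL port by default
    user="admin",
    password="my-secret-password",
    database="mydb",
)

try:
    with connection.cursor() as cursor:
        cursor.execute("SELECT VERSION()")   # reports a MySQL 5.6-compatible version string
        print(cursor.fetchone())
finally:
    connection.close()
```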

Q: How do I try Amazon Aurora?

Amazon Aurora is now generally available. To try Amazon Aurora, sign in to the console, select RDS under the Database category, and choose Amazon Aurora as your database engine.
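
The same steps can be scripted with the AWS SDK. The sketch below uses boto3 to create an Aurora DB cluster and its first (primary) instance; the identifiers, instance class, and credentials are illustrative placeholders, not required values.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# An Aurora database consists of a DB cluster plus one or more DB instances.
rds.create_db_cluster(
    DBClusterIdentifier="my-aurora-cluster",       # placeholder identifier
    Engine="aurora",                               # MySQL-compatible Aurora engine
    MasterUsername="admin",
    MasterUserPassword="my-secret-password",
)

# The first instance created in the cluster becomes the primary (writer).
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-instance-1",
    DBClusterIdentifier="my-aurora-cluster",
    DBInstanceClass="db.r3.large",                 # any supported Aurora instance class
    Engine="aurora",
)
```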

Q: How much does Amazon Aurora cost?

Please see our pricing page for current pricing information.

Q. Amazon Aurora replicates each chunk of my database volume six ways across three Availability Zones. Does that mean that my effective storage price will be three or six times what is shown on the pricing page?

No. Amazon Aurora’s replication is bundled into the price. You are charged based on the storage your database consumes at the database layer, not the storage consumed in Amazon Aurora’s virtualized storage layer.

Q. In which AWS regions is Amazon Aurora available?

Amazon Aurora is currently available in the US West (Oregon), US East (N. Virginia), EU (Ireland), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Mumbai) AWS Regions.

Q: How can I migrate from MySQL to Amazon Aurora and vice versa?

You have several options. You can use the standard mysqldump utility to export data from MySQL and the mysqlimport utility to import data into Amazon Aurora, and vice versa. You can also use Amazon RDS’s DB Snapshot migration feature to migrate an RDS MySQL DB Snapshot to Amazon Aurora using the AWS Management Console. Migration completes for most customers in under an hour, though the duration depends on format and data set size. For more information, see Amazon Aurora’s Data Export and Import guide.
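
As a rough sketch of the mysqldump path, the commands below export a database from a source MySQL server and load it into an Aurora cluster endpoint. Host names, credentials, and the database name are placeholders, the target schema is assumed to exist already, and in practice you would supply passwords through a configuration file rather than the command line.

```python
import subprocess

SOURCE_HOST = "source-mysql.example.com"                                        # placeholder
AURORA_HOST = "my-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com"    # placeholder

# Export the source database as a logical dump (assumes the MySQL client tools are installed).
with open("mydb-dump.sql", "w") as dump:
    subprocess.run(
        ["mysqldump", "--host", SOURCE_HOST, "--user", "admin",
         "--password=my-secret-password", "--single-transaction", "mydb"],
        stdout=dump, check=True,
    )

# Load the dump into the Aurora cluster endpoint; the target schema "mydb" must already exist.
with open("mydb-dump.sql") as dump:
    subprocess.run(
        ["mysql", "--host", AURORA_HOST, "--user", "admin",
         "--password=my-secret-password", "mydb"],
        stdin=dump, check=True,
    )
```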

Q: Does Amazon Aurora participate in the AWS Free Tier?

The AWS Free Tier for Amazon RDS offers benefits for Micro DB Instances; Amazon Aurora does not currently offer Micro DB Instance support. Please see our pricing page for current pricing information.

Q: What are IOs in Amazon Aurora and how are they calculated?

IOs are input/output operations performed by the Aurora database engine against its SSD-based virtualized storage layer. Every database page read operation counts as one IO. The Aurora database engine issues reads against the storage layer in order to fetch database pages not present in the buffer cache. Each database page is 16KB.


Aurora was designed to eliminate unnecessary IO operations in order to reduce costs and to ensure resources are available for serving read/write traffic. Write IOs are only consumed when pushing transaction log records to the storage layer for the purpose of making writes durable. Write IOs are counted in 4KB units. For example, a transaction log record that is 1024 bytes will count as one IO operation. However, concurrent write operations whose transaction log is less than 4KB can be batched together by the Aurora database engine in order to optimize I/O consumption. Unlike traditional database engines, Amazon Aurora never pushes modified database pages to the storage layer, resulting in further IO consumption savings.

You can see how many IOs your Aurora instance is consuming by going to the AWS Console. To find your IO consumption, go to the RDS section of the console, look at your list of instances, select your Aurora instances, then look for the “Billed read operations” and “Billed write operations” metrics in the monitoring section.
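
If you prefer to pull the same numbers programmatically, Aurora's volume-level IO counts are also published to Amazon CloudWatch. The sketch below assumes the metric names VolumeReadIOPs and VolumeWriteIOPs in the AWS/RDS namespace with a DBClusterIdentifier dimension; verify the exact names against the monitoring section of the console.

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.utcnow()
for metric in ("VolumeReadIOPs", "VolumeWriteIOPs"):       # assumed metric names
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric,
        Dimensions=[{"Name": "DBClusterIdentifier", "Value": "my-aurora-cluster"}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,                                         # 5-minute datapoints
        Statistics=["Sum"],
    )
    total = sum(point["Sum"] for point in stats["Datapoints"])
    print(metric, total)
```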


Q: What does "five times the performance of MySQL" mean?

Amazon Aurora delivers significant increases over MySQL performance by tightly integrating the database engine with an SSD-based virtualized storage layer purpose-built for database workloads, reducing writes to the storage system, minimizing lock contention and eliminating delays created by database process threads. Our tests with SysBench on r3.8xlarge instances show that Amazon Aurora delivers over 500,000 SELECTs/sec and 100,000 updates/sec, five times higher than MySQL running the same benchmark on the same hardware. Detailed instructions on this benchmark and how to replicate it yourself are provided in the Amazon Aurora Performance Benchmarking Guide.

Q: How do I optimize my database workload for Amazon Aurora?

Amazon Aurora is designed to be compatible with MySQL 5.6, so that existing MySQL applications and tools can run without requiring modification. However, one area where Amazon Aurora improves upon MySQL is with highly concurrent workloads. In order to maximize your workload’s throughput on Amazon Aurora, we recommend building your applications to drive a large number of concurrent queries.
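
To illustrate what driving a large number of concurrent queries can look like from the application side, the sketch below fans a read query out over a pool of worker threads, each with its own connection. The endpoint, credentials, and the "customers" table are placeholders; the right pool size depends on your instance class and workload.

```python
from concurrent.futures import ThreadPoolExecutor
import pymysql

AURORA_ENDPOINT = "my-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com"  # placeholder

def fetch_customer(customer_id):
    # One connection per worker thread; Aurora is built to serve many concurrent sessions.
    conn = pymysql.connect(host=AURORA_ENDPOINT, user="admin",
                           password="my-secret-password", database="mydb")
    try:
        with conn.cursor() as cursor:
            # "customers" is a hypothetical table used only for illustration.
            cursor.execute("SELECT id, name FROM customers WHERE id = %s", (customer_id,))
            return cursor.fetchone()
    finally:
        conn.close()

# Drive 64 queries concurrently instead of issuing them one at a time.
with ThreadPoolExecutor(max_workers=64) as pool:
    results = list(pool.map(fetch_customer, range(1, 65)))
```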

Q: What are the minimum and maximum storage limits of an Amazon Aurora database?

The minimum storage is 10GB. Based on your database usage, your Amazon Aurora storage will automatically grow, up to 64 TB, in 10GB increments with no impact to database performance. There is no need to provision storage in advance.

Q: How do I scale the compute resources associated with my Amazon Aurora DB Instance?

You can scale the compute resources allocated to your DB Instance, up to 32 vCPUs and 244 GiB Memory, in the AWS Management Console (selecting the desired DB Instance and clicking the Modify button). Memory and CPU resources are modified by changing your DB Instance class.

When you modify your DB Instance class, your requested changes will be applied during your specified maintenance window. Alternatively, you can use the "Apply Immediately" flag to apply your scaling requests immediately. Both of these options will have an availability impact for a few minutes as the scaling operation is performed. Bear in mind that any other pending system changes will also be applied.
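
The same operation can be performed with the AWS SDK. The sketch below changes the DB Instance class through boto3 and applies it immediately; the identifier and target class are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Scale the instance to a larger class; ApplyImmediately skips the maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="my-aurora-instance-1",   # placeholder identifier
    DBInstanceClass="db.r3.8xlarge",               # target class (up to 32 vCPUs / 244 GiB)
    ApplyImmediately=True,                         # set False to wait for the maintenance window
)
```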

Q: How do I enable backups for my DB Instance?

Automated backups are always enabled on Amazon Aurora DB Instances. Backups do not impact database performance.

Q: Can I take DB Snapshots and keep them around as long as I want?

Yes, and there is no performance impact when taking snapshots. Note that restoring data from DB Snapshots requires creating a new DB Instance.
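
For reference, Aurora snapshots are taken at the cluster level in the API. A manual snapshot can be created and later restored into a new cluster with the boto3 calls sketched below; all identifiers are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Take a manual snapshot of the whole Aurora cluster (kept until you delete it).
rds.create_db_cluster_snapshot(
    DBClusterSnapshotIdentifier="my-aurora-snapshot-2016-12-01",
    DBClusterIdentifier="my-aurora-cluster",
)

# Restoring always creates a new cluster; add DB instances to it afterwards.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="my-aurora-cluster-restored",
    SnapshotIdentifier="my-aurora-snapshot-2016-12-01",
    Engine="aurora",
)
```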

Q: If my database fails, what is my recovery path?

Amazon Aurora automatically maintains 6 copies of your data across 3 Availability Zones and will automatically attempt to recover your database in a healthy AZ with no data loss. In the unlikely event your data is unavailable within Amazon Aurora storage, you can restore from a DB Snapshot or perform a point-in-time restore operation to a new instance. Note that the latest restorable time for a point-in-time restore operation can be up to 5 minutes in the past.
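
A point-in-time restore likewise creates a new cluster. The sketch below uses boto3 and requests the latest restorable time; identifiers are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Restore to the most recent restorable time (typically within the last 5 minutes).
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="my-aurora-cluster-recovered",    # new cluster to create
    SourceDBClusterIdentifier="my-aurora-cluster",
    UseLatestRestorableTime=True,
)
# As with snapshot restores, create a DB instance in the new cluster before connecting to it.
```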

Q: What happens to my automated backups and DB Snapshots if I delete my DB Instance?

You can choose to create a final DB Snapshot when deleting your DB Instance. If you do, you can use this DB Snapshot to restore the deleted DB Instance at a later date. Amazon Aurora retains this final user-created DB Snapshot along with all other manually created DB Snapshots after the DB Instance is deleted. Only DB Snapshots are retained after the DB Instance is deleted (i.e., automated backups created for point-in-time restore are not kept).

Q: Can I share my snapshots with another AWS account?

Aurora gives you the ability to create snapshots of your databases, which you can use later to restore a database. You can share this snapshot with a different AWS account, and the owner of the recipient account can use your snapshot to restore a DB that contains your data. You can even choose to make your snapshots public – that is, anybody can restore a DB containing your (public) data. You can use this feature to share data between your various environments (production, dev/test, staging, etc.) that have different AWS accounts, as well as keep backups of all your data secure in a separate account in case your main AWS account is ever compromised.
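
Sharing is done by editing the snapshot's restore attribute. The boto3 sketch below shares a manual cluster snapshot with one account and shows how making it public would look; the snapshot identifier and account ID are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Share a manual cluster snapshot with a specific AWS account.
rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier="my-aurora-snapshot-2016-12-01",
    AttributeName="restore",
    ValuesToAdd=["123456789012"],        # placeholder 12-digit account ID
)

# To make the snapshot public instead, pass the special value "all":
# rds.modify_db_cluster_snapshot_attribute(
#     DBClusterSnapshotIdentifier="my-aurora-snapshot-2016-12-01",
#     AttributeName="restore",
#     ValuesToAdd=["all"],
# )
```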

Q: Will I be billed for shared snapshots?

There is no charge for sharing snapshots between accounts. However, you may be charged for the snapshots themselves, as well as any databases you restore from shared snapshots. Learn more about Aurora pricing.

Q: Can I automatically share snapshots?

We do not support sharing automatic DB snapshots. To share an automatic snapshot, you must manually create a copy of the snapshot, and then share the copy.

Q: How many accounts can I share snapshots with?

You may share manual snapshots with up to 20 AWS account IDs. If you want to share the snapshot with more than 20 accounts, you can either share the snapshot as public, or contact support to increase your quota.

Q: In which regions can I share my Aurora snapshots?

You can share your Aurora snapshots in all AWS regions where Aurora is available.

Q. Can I share my Aurora snapshots across different regions?

No. Your shared Aurora snapshots will only be accessible by accounts in the same region as the account that shares them.

Q: Can I share an encrypted Aurora snapshot?

No. Sharing an encrypted Aurora snapshot is not supported at this time.

Q: How does Amazon Aurora improve my database’s fault tolerance to disk failures?

Amazon Aurora automatically divides your database volume into 10GB segments spread across many disks. Each 10GB chunk of your database volume is replicated six ways, across three Availability Zones. Amazon Aurora is designed to transparently handle the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and repaired automatically.

Q: How does Aurora improve recovery time after a database crash?

Unlike other databases, after a database crash Amazon Aurora does not need to replay the redo log from the last database checkpoint (typically 5 minutes) and confirm that all changes have been applied before making the database available for operations. This reduces database restart times to less than 60 seconds in most cases. Amazon Aurora moves the buffer cache out of the database process and makes it available immediately at restart time. As a result, you do not need to throttle access while the cache repopulates in order to avoid brownouts.

Q: What kind of replicas does Aurora support?

Amazon Aurora supports two kinds of replicas. Amazon Aurora Replicas share the same underlying volume as the primary instance. Updates made by the primary are visible to all Amazon Aurora Replicas. You can also create MySQL Read Replicas based on MySQL’s binlog-based replication engine. In MySQL Read Replicas, data from your primary instance is replayed on your replica as transactions. For most use cases, including read scaling and high availability, we recommend using Amazon Aurora Replicas.

You have the flexibility to mix and match these two replica types based on your application needs:

| Feature | Amazon Aurora Replicas | MySQL Replicas |
| --- | --- | --- |
| Number of replicas | Up to 15 | Up to 5 |
| Replication type | Asynchronous (milliseconds) | Asynchronous (seconds) |
| Performance impact on primary | Low | High |
| Act as failover target | Yes (no data loss) | Yes (potentially minutes of data loss) |
| Automated failover | Yes | No |
| Support for user-defined replication delay | No | Yes |
| Support for different data or schema vs. primary | No | Yes |
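
Creating an Amazon Aurora Replica amounts to adding another DB instance to the same cluster, since all instances in a cluster share the same storage volume. A boto3 sketch, with placeholder identifiers:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Any instance added to an existing Aurora cluster beyond the primary acts as an Aurora Replica.
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-replica-1",    # placeholder identifier
    DBClusterIdentifier="my-aurora-cluster",       # existing cluster; storage is shared
    DBInstanceClass="db.r3.large",
    Engine="aurora",
)
```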

Q. Can I have cross-region replicas with Amazon Aurora?

Yes, you can set up a cross-region Aurora Replica from the RDS console. Cross-region replication is based on single-threaded MySQL binlog replication, so the replication lag will be influenced by the change/apply rate and by delays in network communication between the specific regions selected.
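
Programmatically, a cross-region Aurora Replica is created as a new cluster in the destination region whose replication source is the ARN of the source cluster. A hedged boto3 sketch with placeholder names, regions, and ARN:

```python
import boto3

# Create the replica cluster in the destination region, pointing at the source cluster's ARN.
rds_west = boto3.client("rds", region_name="us-west-2")
rds_west.create_db_cluster(
    DBClusterIdentifier="my-aurora-cluster-replica",
    Engine="aurora",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster"  # placeholder ARN
    ),
)

# Then add at least one DB instance to the replica cluster so it can serve reads.
rds_west.create_db_instance(
    DBInstanceIdentifier="my-aurora-replica-west-1",
    DBClusterIdentifier="my-aurora-cluster-replica",
    DBInstanceClass="db.r3.large",
    Engine="aurora",
)
```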

Q. Can I create Aurora Read Replicas on the cross-region replica cluster?

Yes. You can add Aurora Replicas to the cross-region replica cluster; they will share the same underlying storage as the cross-region replica. The cross-region replica acts as the primary on the cluster, and the Aurora Replicas on the cluster will typically lag behind the primary by 10s of milliseconds.

Q. Can I fail over my application from my current primary to the cross-region replica?

Yes, you can promote your cross-region replica to be the new primary from the RDS console. The promotion process typically takes a few minutes depending on your workload. The cross-region replication will stop once you initiate the promotion process.
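
The same promotion can be triggered from the SDK. The sketch below promotes a cross-region replica cluster to a standalone primary; the identifier is a placeholder.

```python
import boto3

rds_west = boto3.client("rds", region_name="us-west-2")

# Detach the replica cluster from its source and promote it to a standalone, writable cluster.
rds_west.promote_read_replica_db_cluster(
    DBClusterIdentifier="my-aurora-cluster-replica"    # placeholder identifier
)
# Replication from the old primary stops as soon as promotion begins; repoint your
# application at the promoted cluster's endpoint once promotion completes.
```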

Q: Can I prioritize certain replicas as failover targets over others?

Yes. You can assign a promotion priority tier to each instance on your cluster. When the primary instance fails, Amazon RDS will promote the replica with the highest priority to primary. If there is contention between two or more replicas in the same priority tier, then Amazon RDS will promote the replica that is the same size as the primary instance. For more information on failover logic, read the Amazon Aurora User Guide.

Q: Can I modify priority tiers for instances after they have been created?

You can modify the priority tier for an instance at any time. Simply modifying priority tiers will not trigger a failover.
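
Promotion tiers can be set when an instance is created or changed afterwards. The boto3 sketch below lowers a replica's priority; the identifier is a placeholder, tier 0 is the highest priority and 15 the lowest, and changing the tier alone does not cause a failover.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Give this replica a lower promotion priority (tier 0 is highest, 15 is lowest).
rds.modify_db_instance(
    DBInstanceIdentifier="my-aurora-replica-1",   # placeholder identifier
    PromotionTier=10,
    ApplyImmediately=True,
)
```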

Q: Can I prevent certain replicas from being promoted to the primary instance?

You can assign lower priority tiers to replicas that you don’t want promoted to the primary instance. However, if the higher priority replicas on the cluster are unhealthy or unavailable for some reason, then Amazon RDS will promote the lower priority replica.

Q: How can I improve upon the availability of a single Amazon Aurora database?

You can add Amazon Aurora Replicas. Amazon Aurora Replicas share the same underlying storage as the primary instance. Any Amazon Aurora Replica can be promoted to become primary without any data loss and therefore can be used for enhancing fault tolerance in the event of a primary DB Instance failure. To increase database availability, simply create 1 to 15 replicas, in any of 3 AZs, and Amazon RDS will automatically include them in failover primary selection in the event of a database outage.

Q: What happens during failover and how long does it take?

Failover is automatically handled by Amazon Aurora so that your applications can resume database operations as quickly as possible without manual administrative intervention.

  • If you have an Amazon Aurora Replica, in the same or a different Availability Zone, when failing over, Amazon Aurora flips the canonical name record (CNAME) for your DB Instance to point at the healthy replica, which in turn is promoted to become the new primary. Start-to-finish, failover typically completes within a minute.
  • If you do not have an Amazon Aurora Replica (i.e., a single instance), Aurora will first attempt to create a new DB Instance in the same Availability Zone as the original instance. If unable to do so, Aurora will attempt to create a new DB Instance in a different Availability Zone. From start to finish, failover typically completes in under 15 minutes.

Your application should retry database connections in the event of connection loss.
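
Because failover is implemented as a DNS (CNAME) change, a simple reconnect loop in the application is usually enough to ride through it. A sketch with pymysql and placeholder connection details:

```python
import time
import pymysql

def connect_with_retry(attempts=30, delay=2.0):
    """Retry until the cluster endpoint resolves to the newly promoted primary."""
    for attempt in range(attempts):
        try:
            return pymysql.connect(
                host="my-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
                user="admin", password="my-secret-password",
                database="mydb", connect_timeout=5,
            )
        except pymysql.err.OperationalError:
            time.sleep(delay)          # endpoint not ready yet; wait and retry
    raise RuntimeError("could not reconnect after failover")

conn = connect_with_retry()
```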

Q: If I have a primary database and an Amazon Aurora Replica actively taking read traffic and a failover occurs, what happens?

Amazon RDS will automatically detect a problem with your primary instance and begin routing your read/write traffic to an Amazon Aurora Replica. On average, this failover will take less than a minute. In addition, the read traffic that your Amazon Aurora Replicas were serving will be briefly interrupted.

Q: How far behind the primary will my replicas be?

Since Amazon Aurora Replicas share the same data volume as the primary, there is virtually no replication lag. We typically observe lag times in the 10s of milliseconds. For MySQL Read Replicas, the replication lag can grow indefinitely based on change/apply rate as well as delays in network communication. However, under typical conditions, under a minute of replication lag is common.

Q: Can I use Amazon Aurora in Amazon Virtual Private Cloud (Amazon VPC)?

Yes, all Amazon Aurora DB Instances must be created in a VPC. With Amazon VPC, you can define a virtual network topology that closely resembles a traditional network that you might operate in your own datacenter. This gives you complete control over who can access your Amazon Aurora databases.

Q: Does Amazon Aurora encrypt my data in transit and at rest?

Yes. Amazon Aurora uses SSL (AES-256) to secure data in transit. Amazon Aurora allows you to encrypt your databases using keys you manage through AWS Key Management Service (KMS). On a database instance running with Amazon Aurora encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, snapshots, and replicas in the same cluster. Encryption and decryption are handled seamlessly. For more information about the use of KMS with Amazon Aurora, see the Amazon RDS User's Guide.

Q: Can I encrypt an existing unencrypted database?

Currently, encrypting an existing unencrypted Aurora instance is not supported. To use Amazon Aurora encryption for an existing unencrypted database, create a new DB Instance with encryption enabled and migrate your data into it.
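
Creating the new encrypted target looks like the cluster-creation sketch earlier, with encryption enabled. The KMS key identifier is optional (the default RDS key is used if omitted), and all identifiers below are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create an encrypted Aurora cluster to migrate the unencrypted data into.
rds.create_db_cluster(
    DBClusterIdentifier="my-aurora-cluster-encrypted",
    Engine="aurora",
    MasterUsername="admin",
    MasterUserPassword="my-secret-password",
    StorageEncrypted=True,
    KmsKeyId="alias/my-aurora-key",      # placeholder; omit to use the default RDS KMS key
)
```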

Q: How do I access my Amazon Aurora database?

Access to Amazon Aurora databases must go through the database port specified when the database was created. This provides an additional layer of security for your data. Step-by-step instructions on how to connect to your Amazon Aurora database are provided in the Amazon Aurora Connectivity Guide.

Amazon Aurora with PostgreSQL compatibility is now available in preview, and delivers up to twice the performance of PostgreSQL running on the same hardware without requiring any modification to your existing PostgreSQL applications. This section of the Amazon Aurora FAQs applies specifically to the preview of Amazon Aurora with PostgreSQL compatibility. Sign up for the preview.

Q: What does "PostgreSQL-compatible" mean?

It means that all of the code, applications, drivers and tools you use today with your PostgreSQL databases can be used with Amazon Aurora with no change. Amazon Aurora is designed to be wire-compatible with PostgreSQL 9.6.
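
As with the MySQL-compatible edition, a standard driver works unchanged. The sketch below connects with psycopg2; the endpoint, credentials, and database name are placeholders.

```python
import psycopg2

# Connect to a PostgreSQL-compatible Aurora endpoint with the standard psycopg2 driver.
conn = psycopg2.connect(
    host="my-aurora-pg.cluster-abc123.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    port=5432,                      # standard PostgreSQL port
    user="postgres",
    password="my-secret-password",
    dbname="mydb",
)

cur = conn.cursor()
cur.execute("SELECT version()")     # reports a PostgreSQL 9.6-compatible version string
print(cur.fetchone())
cur.close()
conn.close()
```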

Q: Once I sign up for the preview, how long will it take for me to get access, and how will I know?

We will be admitting people to the preview based on capacity. When you are admitted, you will receive an email with details on how to access the preview and the private forum.

Q: Will I be charged for my usage during the preview?

There is no charge for your preview database instances or storage used by your preview databases during the preview period.

Q: What are the unique features of the PostgreSQL-compatible edition of Amazon Aurora that are not available with the community edition of PostgreSQL?

• Consistently High Throughput: Amazon Aurora uses a variety of software and hardware techniques to ensure the database engine is able to fully leverage available compute, memory and networking. I/O operations use distributed systems techniques such as quorums to improve performance consistency. Testing on standard benchmarks such as pgbench has shown up to a 3X increase over stock PostgreSQL 9.5 on the same hardware.

• Fault-tolerant and Self-healing Storage: Your data is replicated six ways, across three Availability Zones. Amazon Aurora storage is fault-tolerant, transparently handling the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing: data blocks and disks are continuously scanned for errors and replaced automatically.

• Storage Auto-scaling: Amazon Aurora will automatically grow the size of your database volume as your database storage needs grow. Your volume will grow in increments of 10 GB up to a maximum of 64 TB or a maximum volume size you define. You don't need to provision excess storage for your database to handle future growth. You only pay for the storage you actually consume.

• Instance Monitoring and Repair: Amazon RDS continuously monitors the health of your Amazon Aurora database and underlying EC2 instance. If a database issue occurs, Amazon RDS will automatically restart the database and associated processes. Amazon Aurora does not require crash recovery replay of database redo logs, greatly reducing restart times. Amazon Aurora also isolates the database buffer cache from the database processes, allowing the cache to survive a database restart. If there are issues with the underlying instance, Amazon RDS uses RDS Multi-AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you have created in any of three Availability Zones. If no Amazon Aurora Replicas have been provisioned, in the case of a failure, Amazon RDS will attempt to create a new Amazon Aurora DB instance for you automatically.

Q. Amazon Aurora replicates my database six ways across three Availability Zones. Does that mean that my effective storage price will be more than what is shown on the pricing page?

No. Amazon Aurora’s replication is included in the price. You are charged based only on the storage your database uses.

Q: How can I migrate from PostgreSQL to Amazon Aurora and vice versa?

To migrate from PostgreSQL on premises, PostgreSQL on EC2, or RDS for PostgreSQL, you can use the standard pg_dump utility to export data from PostgreSQL and the pg_restore utility to import data into Amazon Aurora, and vice versa. You can also move data into Amazon Aurora using the AWS Database Migration Service, which can migrate data without application downtime from any Oracle, SQL Server, MySQL, or PostgreSQL database, whether running on-premises or within AWS, directly into a MySQL-compatible or PostgreSQL-compatible instance of Amazon Aurora. Database schemas and database code (such as code written in Oracle PL/SQL) can also be migrated from Oracle and SQL Server to Amazon Aurora using the AWS Schema Conversion Tool, a self-service database schema and code migration tool.

Q: Do I need to change client drivers to use Amazon Aurora with PostgreSQL compatibility?

Amazon Aurora will work with standard PostgreSQL database drivers.

Q: Do PostgreSQL extensions work with Amazon Aurora?

Amazon Aurora supports the same popular extensions that are available for Amazon RDS for PostgreSQL including PostGIS, dblink, and many data type, index, search, and other useful extensions. We will continue to make additional extensions available based on customer need.

Q: What does "twice the performance of PostgreSQL" mean?

Amazon Aurora delivers significant increases over PostgreSQL performance by tightly integrating the database engine with an SSD-based virtualized storage layer purpose-built for database workloads, reducing writes to the storage system, minimizing lock contention, and eliminating delays created by database process threads.

Q: Will you continue to support Amazon RDS for PostgreSQL (Community Edition)?

Yes, we will continue to support and enhance future releases of PostgreSQL on Amazon RDS.

Q. What will the pricing be for Amazon Aurora with PostgreSQL compatibility?

The pricing for PostgreSQL compatibility will be the same as the pricing for MySQL compatibility.

Q: Can I use Amazon Aurora with PostgreSQL compatibility in Amazon Virtual Private Cloud (Amazon VPC)?

Yes, all Amazon Aurora database instances must be created in a VPC. With Amazon VPC, you can define a virtual network topology that closely resembles a traditional network that you might operate in your own datacenter. This gives you complete control over who can access your Amazon Aurora databases.

Q: What security features does Amazon Aurora with PostgreSQL compatibility provide?

Amazon Aurora supports SSL to protect data in transit and transparent database encryption to protect data at rest. You can choose to use AWS KMS to manage your encryption key or supply your own encryption keys using a hardware security module (HSM). Amazon Aurora integrates with AWS CloudTrail and AWS Identity and Access Management (IAM), enabling tracking of all API calls and integration via federation with your LDAP and Active Directory systems.

Q: Will all Amazon Aurora data be encrypted at rest?

Yes. At your option, all data stored on PostgreSQL-compatible Amazon Aurora database instances can be encrypted at rest, using keys you manage through the AWS Key Management Service (KMS).

Q: What are the minimum and maximum storage limits of an Amazon Aurora database?

The minimum storage is 10GB. Based on your database usage, your Amazon Aurora storage will automatically grow up to 64 TB in 10GB increments with no impact to database performance. There is no need to provision storage in advance.

Q: How do I enable backups for my DB Instance?

Automated backups are taken continuously and are always enabled on Amazon Aurora database instances. Backups do not impact database performance.

Q: Can I take DB Snapshots and keep them around as long as I want?

Yes. There is no performance impact when taking snapshots. Note that restoring data from database snapshots requires creating a new database instance.

Q: Will I have data loss if my Amazon Aurora instance fails?

Amazon Aurora automatically maintains 6 copies of your data across 3 Availability Zones (AZs) and will attempt to recover your database in a healthy AZ with no data loss. In the unlikely event your data is unavailable within Amazon Aurora storage, you can restore from a snapshot or perform a point-in-time restore operation to a new instance. Note that the latest restorable time for a point-in-time restore operation can be up to 5 minutes in the past.

Q: What happens to my automated backups and DB Snapshots if I delete my DB Instance?

You can choose to create a final DB Snapshot when deleting your DB Instance. If you do, you can use this DB Snapshot to restore the deleted DB Instance at a later date. Amazon Aurora retains this final user-created DB Snapshot along with all other manually created DB Snapshots after the DB Instance is deleted. Only DB Snapshots are retained after the DB Instance is deleted. Automated backups are deleted when you delete your DB Instance.

Q: What kind of replicas does Aurora support?

At the beginning of the preview period, the PostgreSQL-compatible edition of Amazon Aurora does not support read replicas. Support for both Amazon Aurora read replicas and external read replicas will be added at a later date. Please reach out to your preview contact for more information.

Q: How does Amazon Aurora improve my database’s fault tolerance to disk failures?

Amazon Aurora automatically divides your database volume into 10GB segments spread across many disks. Each 10GB chunk of your database volume is replicated six ways, across three Availability Zones. Amazon Aurora is designed to transparently handle the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and repaired automatically.