AWS Database Blog

Estimate cost savings for the Amazon Aurora I/O-Optimized feature using Amazon CloudWatch

Amazon Aurora is a relational database service that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Aurora supports MySQL and PostgreSQL open-source database engines. Aurora storage consists of a shared cluster storage architecture that makes it highly available, durable, scalable, and performant by design.

As of this writing, you can choose between two storage configurations for Aurora that impact billing:

  • Aurora Standard – This offers cost-effective pricing for applications with moderate I/O usage. In this mode, the total cost for using Aurora depends on the number and type of compute instances and storage of the DB cluster. Additionally, there is a charge for I/O operations made.
  • Aurora I/O-Optimized – This is a new configuration that provides improved pricing for I/O-intensive applications. In the I/O-Optimized configuration, you only pay for the compute and storage usage, with no additional charges for I/O operations. This enables I/O-intensive applications to run at a lower cost and enables applications to have more predictable pricing.

Cost savings provided by Aurora I/O-Optimized can be significant when I/O spending accounts for 25% or more of total Aurora database spending. With the general availability of Aurora I/O-Optimized in May 2023, Aurora database users want to identify Aurora clusters in their AWS environment that will benefit from the cost savings of the I/O optimization feature.

You can use AWS Cost Explorer to check the costs incurred for different AWS services in your environment. In Cost Explorer, cost details are cumulative for all Aurora clusters in a Region; it is not possible to segregate the cost details for Aurora at the individual cluster level. If you have multiple Aurora clusters in a single Region, there are a few alternative ways to determine I/O usage cost at the individual cluster level:

  • AWS cost allocation tags – You can use cost allocation tags to track cost details for the tagged resources. This method requires you to tag each resource, and cost details will be visible only after tagging occurs.
  • AWS Cost and Usage Reports dashboard – You can create dashboards that help visualize comprehensive cost and usage details from AWS CUR reports. This method requires familiarity with writing customized SQL queries to return the storage and I/O usage.
  • Amazon CloudWatch – You can use CloudWatch metrics to get storage and I/O usage details for your Aurora cluster and generate cost estimates.

In this post, we explain how you can break down the cost of a single Aurora cluster, especially when you have multiple Aurora clusters in a Region. We do this in four steps: First, we analyze CloudWatch metrics to gather total IOPS and storage usage for an individual Aurora database cluster. Next, we derive high-level cost estimates for the storage and I/O components on both Aurora Standard and Aurora I/O-Optimized. Then we add the compute costs, assuming a stable workload. Finally, we compare the cost of the cluster in both storage configurations to inform the decision.

Analyze CloudWatch metrics

To estimate the costs of the cluster in Standard and I/O-Optimized, we analyze CloudWatch metrics to determine the storage and I/O usage. We then use the CloudWatch metrics to derive the cost estimates for each configuration. For this example, we use an Aurora cluster that has a consistent workload. Because IOPS and storage are variable components, we analyze the CloudWatch metrics of the cluster and calculate cost estimates for a 1-month period.

You can view CloudWatch metrics for your Aurora cluster using the AWS Management Console or AWS Command Line Interface (AWS CLI) commands.

On the console, complete the following steps to find IOPS and storage usage details for your individual Aurora database cluster:

  1. On the Amazon RDS console, choose Databases in the navigation pane.
  2. Note the details of the Aurora cluster’s DB identifier and size.

For this example, we look at the cluster aurora-database, which has two db.r6g.large instances.

  3. On the CloudWatch console, choose Dashboards in the navigation pane.
  4. Choose Create dashboard.

  5. For Dashboard name, enter a name (for example, aurora-iops-and-storage-details).
  6. Choose Create dashboard.

  7. In the Add Widget dialog box, select Number and choose Next.

  8. In the Add metric graph dialog box, choose Source and enter the following code:
    {
      "sparkline": true,
      "metrics": [
        [ { "expression": "m1+m2", "label": "Total-IOPs", "id": "e1", "period": 2592000, "stat": "Sum" } ],
        [ "AWS/RDS", "VolumeWriteIOPs", "DBClusterIdentifier", "aurora-database", { "id": "m1" } ],
        [ ".", "VolumeReadIOPs", ".", ".", { "id": "m2" } ],
        [ ".", "VolumeBytesUsed", ".", ".", { "id": "m3", "stat": "Average" } ]
      ],
      "view": "singleValue",
      "stacked": false,
      "region": "us-west-2",
      "stat": "Sum",
      "period": 2592000,
      "singleValueFullPrecision": true,
      "start": "2023-05-01T00:00:00.000Z",
      "end": "2023-05-31T23:59:00.000Z"
    }

Make the following changes in the preceding code:

      • Replace the DBClusterIdentifier value (aurora-database) with the name of your Aurora database cluster
      • Replace the region value (us-west-2) with the Region of your cluster
      • Replace start and end with the start date and end date of the month whose cost estimate you want to generate

  9. Choose Save before proceeding to the next step.

  10. In the Add metric graph dialog box, choose the Options tab and select the Number widget format.
  11. Choose Create widget.

Now you can observe the Total-IOPs and VolumeBytesUsed metrics on the dashboard.

Alternatively, you can use the AWS CLI to find IOPS and storage usage details for your individual Aurora DB cluster. To get started, refer to Configure the AWS CLI. Make sure the AWS CLI is configured with the Region of your Aurora cluster.

Before you run the AWS CLI command, create an input.json file with the following script:

[
  {
    "Id": "e1",
    "Expression": "m1+m2",
    "Label": "Total IOPS"
  },
  {
    "Id": "e2",
    "Expression": "(m3/1024/1024/1024)",
    "Label": "Storage Volume in GB"
  },
  {
    "Id": "m1",
    "MetricStat": {
      "Metric": {
        "Namespace": "AWS/RDS",
        "MetricName": "VolumeWriteIOPs",
        "Dimensions": [{
          "Name": "DBClusterIdentifier",
          "Value": "aurora-database"
        }]
      },
      "Period": 2592000,
      "Stat": "Sum",
      "Unit": "Count"
    },
    "ReturnData": false
  },
  {
    "Id": "m2",
    "MetricStat": {
      "Metric": {
        "Namespace": "AWS/RDS",
        "MetricName": "VolumeReadIOPs",
        "Dimensions": [{
          "Name": "DBClusterIdentifier",
          "Value": "aurora-database"
        }]
      },
      "Period": 2592000,
      "Stat": "Sum",
      "Unit": "Count"
    },
    "ReturnData": false
  },
  {
    "Id": "m3",
    "MetricStat": {
      "Metric": {
        "Namespace": "AWS/RDS",
        "MetricName": "VolumeBytesUsed",
        "Dimensions": [{
          "Name": "DBClusterIdentifier",
          "Value": "aurora-database"
        }]
      },
      "Period": 2592000,
      "Stat": "Average"
    },
    "ReturnData": false
  }
]

Then run the following command. Replace the input parameters --start-time and --end-time with the start date and end date of the month whose cost estimate you want to generate:

aws cloudwatch get-metric-data --metric-data-queries file://input.json --start-time 2023-05-01T00:00:00Z --end-time 2023-05-31T23:59:00Z
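If you prefer to script this step, the same query can be issued with boto3. The following is a minimal sketch, equivalent to the input.json file and CLI command above; the helper names build_queries and fetch_usage are illustrative, not part of any AWS API:

```python
from datetime import datetime, timezone


def build_queries(cluster_id, period=2592000):
    """MetricDataQueries equivalent to the input.json file above."""
    dims = [{"Name": "DBClusterIdentifier", "Value": cluster_id}]

    def metric(qid, name, stat):
        # One raw metric; ReturnData is False because only the
        # expression results e1/e2 need to be returned.
        return {
            "Id": qid,
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/RDS",
                    "MetricName": name,
                    "Dimensions": dims,
                },
                "Period": period,
                "Stat": stat,
            },
            "ReturnData": False,
        }

    return [
        {"Id": "e1", "Expression": "m1+m2", "Label": "Total IOPS"},
        {"Id": "e2", "Expression": "(m3/1024/1024/1024)",
         "Label": "Storage Volume in GB"},
        metric("m1", "VolumeWriteIOPs", "Sum"),
        metric("m2", "VolumeReadIOPs", "Sum"),
        metric("m3", "VolumeBytesUsed", "Average"),
    ]


def fetch_usage(cluster_id, start, end):
    """Call GetMetricData; requires configured AWS credentials and Region."""
    import boto3

    cw = boto3.client("cloudwatch")
    resp = cw.get_metric_data(
        MetricDataQueries=build_queries(cluster_id),
        StartTime=start,
        EndTime=end,
    )
    return {r["Label"]: r["Values"] for r in resp["MetricDataResults"]}


# Example invocation (needs live AWS access, so it is not run here):
# usage = fetch_usage("aurora-database",
#                     datetime(2023, 5, 1, tzinfo=timezone.utc),
#                     datetime(2023, 5, 31, 23, 59, tzinfo=timezone.utc))
```

Building the queries in code keeps the cluster name and period parameterized, which makes it straightforward to loop over every cluster in a Region.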

After you capture the total I/O and storage usage details at the individual Aurora cluster level, you can use the following sample calculation to estimate cost savings per month.

Transform CloudWatch metrics into cost estimates

The three main components that determine the cost of an Aurora cluster in the Standard and I/O-Optimized configurations are compute, storage, and I/O operations. In this example, we have two DB instances of the db.r6g.large instance class, total I/O operations (read and write) of 5,974 million, and total storage (VolumeBytesUsed) of 4.1 TB. We describe the cost calculation for each component using these values.

The next step is to transform the values of the CloudWatch metrics to cost.

Compute

The compute cost is based on the DB instance class of the DB instances in the Aurora cluster. For this example, we use two r6g.large On-Demand instances in the us-east-1 Region. We use the pricing details in the following table to calculate pricing estimates for compute. You can refer to Amazon Aurora Pricing for pricing details for different instance types for On-Demand or Reserved Instances.

Memory Optimized Instances – Current Generation  Aurora Standard (Price Per Hour)  Aurora I/O-Optimized (Price Per Hour)
 db.r6g.large  $0.26  $0.34

The following table summarizes the cost calculation.

 Configuration  Calculation  Cost
 Aurora Standard  2 * db.r6g.large (at $0.26 per hour) * 30 days * 24 hours  $374.40
 Aurora I/O-Optimized  2 * db.r6g.large (at $0.34 per hour) * 30 days * 24 hours  $489.60
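The compute line items above can be reproduced with a short sketch (the helper name compute_cost is illustrative):

```python
# 30-day month at 24 hours per day, as used in the table above
HOURS_PER_MONTH = 30 * 24  # 720


def compute_cost(instance_count, hourly_rate):
    """Monthly On-Demand compute cost for the cluster's instances."""
    return round(instance_count * hourly_rate * HOURS_PER_MONTH, 2)


standard = compute_cost(2, 0.26)      # Aurora Standard: $374.40
io_optimized = compute_cost(2, 0.34)  # Aurora I/O-Optimized: $489.60
```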

Storage

For this example, we have calculated our storage cost for 4.1 TB in the us-east-1 Region. The pricing may be different for different Regions for both Standard and I/O-Optimized. Refer to Amazon Aurora Pricing before you begin the cost analysis for your Aurora cluster.

 Configuration  Storage Rate
 Aurora Standard  $0.10 per GB-month
 Aurora I/O-Optimized  $0.225 per GB-month

The following table summarizes the cost calculation.

 Configuration  Calculation  Cost
 Aurora Standard  4.1 TB (4,198.4 GB) * $0.10 per GB-month  $419.84
 Aurora I/O-Optimized  4.1 TB (4,198.4 GB) * $0.225 per GB-month  $944.64
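The storage line items follow from converting 4.1 TB to GB first; a minimal sketch (exact cents can differ slightly depending on how intermediate values are rounded):

```python
# 4.1 TB expressed in GB for the GB-month rate
STORAGE_GB = 4.1 * 1024  # 4,198.4 GB

standard = round(STORAGE_GB * 0.10, 2)    # Aurora Standard storage cost
optimized = round(STORAGE_GB * 0.225, 2)  # Aurora I/O-Optimized storage cost
```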

IOPS

For this example, we have used the following IOPS rate for cost estimates based on the total number of IOPS (read and write) calculated using CloudWatch metrics.

 Configuration  I/O Rate
 Aurora Standard  $0.20 per 1 million requests
 Aurora I/O-Optimized  Included

The following table summarizes the cost calculation.

 Configuration  Calculation  Cost
 Aurora Standard  5,974 million I/Os * $0.20 per 1 million I/Os  $1,194.80
 Aurora I/O-Optimized  –  Included

Cluster cost comparison

The following table summarizes the final cost estimates for Standard and I/O-Optimized for our Aurora cluster with DB instance class r6g.large, storage size 4.1 TB, and total IOPS usage of 5,974 million.

Cost Component  Aurora Standard (Current Cost)  Aurora I/O-Optimized (Revised Cost)
 Compute  $374.40  $489.60
 Storage  $419.84  $944.64
 IOPS  $1,194.80  $0.00
 Total  $1,989.04  $1,434.24

In this example, switching to the I/O-Optimized configuration reduces the total cluster cost by about 28%.
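The full comparison can be reproduced end to end. The following sketch recomputes each component from the stated inputs (2 x db.r6g.large, 4.1 TB of storage, 5,974 million I/Os), so individual line items may differ from the tables by a few cents of rounding:

```python
HOURS = 30 * 24          # billing hours in the 30-day month
STORAGE_GB = 4.1 * 1024  # 4.1 TB expressed in GB
IO_MILLIONS = 5974       # total read + write I/Os, in millions


def monthly_cost(hourly_rate, storage_rate, io_rate):
    """Compute + storage + I/O for two instances over one month."""
    compute = 2 * hourly_rate * HOURS
    storage = STORAGE_GB * storage_rate
    io = IO_MILLIONS * io_rate
    return round(compute + storage + io, 2)


total_standard = monthly_cost(0.26, 0.10, 0.20)   # ~ $1,989
total_optimized = monthly_cost(0.34, 0.225, 0.0)  # ~ $1,434; I/O included
savings_pct = round(100 * (total_standard - total_optimized) / total_standard)
```

Substituting your own cluster's instance count, rates, and CloudWatch-derived usage into these three inputs yields the same style of comparison for any Aurora cluster.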

Note that the preceding costs are high-level estimates for the whole month, where storage costs are calculated at the end of the month (not on a daily basis).

You can use this method for calculating your total cost estimates and make an informed decision between choosing Aurora Standard and Aurora I/O-Optimized.

Clean up

After you have calculated the costs, delete the CloudWatch dashboard using the following steps:

  1. On the CloudWatch console, choose Dashboards in the navigation pane.
  2. Select your dashboard (aurora-iops-and-storage-details) and choose Delete.

Summary

In this post, we discussed how to use CloudWatch metrics to determine I/O and storage usage for an Aurora cluster at the individual cluster level and decide if switching to Aurora I/O-Optimized provides cost benefits as compared to running on the standard configuration.

If you have any questions or suggestions, leave your feedback in the comments section.


About the Authors

Sarabjeet Singh is a Database Specialist Solutions Architect at Amazon Web Services. He works with our customers to provide guidance and technical assistance on database projects, helping them improve the value of their solutions when using AWS.

Poulami Maity is a Database Specialist Solutions Architect at Amazon Web Services. She works with AWS customers to help them migrate and modernize their existing databases to the AWS Cloud.