AWS Cloud Operations & Migrations Blog

Build an AWS Config Custom Rule to Optimize Amazon EBS Volume Types

This blog provides step-by-step instructions for building an AWS Config custom rule and a custom remediation action so that you can optimize your EBS volumes by moving them to Amazon EBS gp3 volumes.

AWS Config is a service that lets you assess, audit, and evaluate your AWS resource configurations. AWS Config provides AWS Managed Rules, which are predefined, customizable rules to evaluate whether your AWS resources follow AWS best practices. AWS Config also lets you remediate noncompliant resources evaluated by AWS Config Rules, which are applied using AWS Systems Manager Automation documents. These documents define the actions to be conducted on noncompliant AWS resources evaluated by AWS Config Rules.

Often, AWS customers want to go beyond the managed rules and build their own custom Config rules. Each custom rule is associated with an AWS Lambda function, which contains the logic that evaluates whether your AWS resources comply with the rule.

The AWS Config Rule Development Kit (RDK) is an open-source tool that helps you set up AWS Config, author rules, and test them against various AWS resource types. As a result, you can focus on rule development and easily create your own custom Config rules. The AWS Config RDK is available for download via the aws-config-rdk GitHub repo.

Solution Overview

This blog guides you through using AWS Config to optimize EBS volumes using next generation General Purpose SSD gp3 volumes. With gp3 volumes, AWS customers can meet the IOPS and throughput requirements for transaction-intensive workloads, such as virtual desktops, test and development environments, low-latency interactive applications, and boot volumes.

With existing General Purpose SSD (gp2) volumes, performance is tied to storage capacity: you get higher IOPS and throughput for your applications by provisioning a larger volume. However, you likely want to scale performance and throughput without paying for storage you don’t need. gp3 volumes are the lowest-cost SSD volumes, balancing price and performance for a variety of workloads. Note that gp3 offers SSD performance at a 20% lower cost per GB than gp2 volumes.
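To make the price-performance trade-off concrete, the following back-of-the-envelope comparison sketches gp2 versus gp3 for a given volume size. The per-GB prices are assumptions based on public us-east-1 list pricing and may differ in your Region or change over time; check the EBS pricing page before relying on them.

```python
# Rough gp2 vs. gp3 comparison. Prices below are ASSUMED us-east-1 list
# prices (USD per GB-month) and may not match your Region or current pricing.
GP2_PRICE_PER_GB = 0.10
GP3_PRICE_PER_GB = 0.08  # ~20% lower than gp2


def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline performance scales at 3 IOPS per GiB,
    with a floor of 100 IOPS and a ceiling of 16,000 IOPS."""
    return min(max(3 * size_gib, 100), 16_000)


def monthly_cost(size_gib: int, price_per_gb: float) -> float:
    return round(size_gib * price_per_gb, 2)


size = 500  # GiB
print(f"gp2: {gp2_baseline_iops(size)} baseline IOPS, "
      f"${monthly_cost(size, GP2_PRICE_PER_GB)}/month")
print(f"gp3: 3000 baseline IOPS, "
      f"${monthly_cost(size, GP3_PRICE_PER_GB)}/month")
```

For a 500 GiB volume, gp2 gives only 1,500 baseline IOPS at the higher per-GB price, while gp3 provides its 3,000 IOPS baseline regardless of size.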

The following image shows this solution’s architecture:

The AWS user creates a new EBS volume of type gp2 (or another non-optimized type). This creation triggers AWS Config to evaluate the volume. Because the rule is a custom Config rule, it is backed by a Lambda function that holds the logic to evaluate the rule. The rule is evaluated on the EBS volume. Since the volume is non-compliant, an AWS Systems Manager Automation document runs as the remediation, making an API call to modify the volume to the optimized type.

Figure 1. Solution’s Architecture

In this post, you learn to:

  • Install the RDK in a Cloud9 instance.
  • Create a custom Config rule to check for existing EBS volumes. Set the desired volume type as gp3, so that other volume types will be marked NON-COMPLIANT.
  • Build a custom Remediation Action so that non-compliant volumes can be modified to gp3.
  • Test the solution by provisioning a gp2 volume that will be modified.


This solution uses AWS Cloud9 – an IDE that lets you write, run, and debug your code with just a browser – running on a managed Amazon EC2 instance.

To complete the steps, you need the following:

Learn How to build an AWS Custom Config Rule with Custom Remediations in order to Optimize EBS Volume Type

Install the RDK

In this step, you install the RDK in your existing Cloud9 environment.

  1. Navigate to your Cloud9 environment and install the RDK by running pip install rdk in the Cloud9 terminal.
  2. Verify that the RDK is properly installed by running rdk -h.

If properly installed, you will see information about RDK usage, positional arguments, and optional arguments. You will also see:

“The RDK is a command-line utility for authoring, deploying, and testing custom AWS Config rules.”

  3. Run the command rdk init to create an S3 bucket that stores the Config rule you create later in the post.

Create a Custom Config Rule

Now that you successfully installed the RDK, you can create a custom rule.

  1. When you create the rule, specify python3.8 as the runtime, an EBS volume as the resource type, and the proper input parameters. In this example, the desired type is gp3.

Next generation gp3 volumes let you independently provision IOPS and throughput separately from storage capacity. This lets you scale performance for transaction-intensive workloads without needing to provision more capacity. Therefore, you only pay for the resources you need. The new gp3 volumes also deliver a baseline performance of 3,000 IOPS and 125 MB/s at any volume size.

Enter the following command:

rdk create ebs-volume_desired_type --runtime python3.8 --resource-types AWS::EC2::Volume --input-parameters '{"desiredvolumeType":"gp3"}'
  2. Your local rule files are now created. Navigate to the parameters.json file. Confirm you set up the proper parameters.

Ensure that "SourceEvents" says "SourceEvents": "AWS::EC2::Volume", NOT "SourceEvents": "AWS::EC2::Instance". See the correct example shown in the following image:

In the Parameters file, the source event shows the resource type as an EC2 volume.

Figure 2. Parameters File

  3. Now, you must add the custom logic so that the EBS volume is marked as non-compliant if it is not the gp3 type.
  4. To do this, open the rule's Python file that the RDK generated alongside parameters.json.
  5. Navigate to line 50, or where it says #Add your custom logic here.
  6. Delete the line that says return 'NOT_APPLICABLE' and copy and paste the following code:
    if configuration_item['resourceType'] != 'AWS::EC2::Volume':
        return 'NOT_APPLICABLE'
    if configuration_item['configuration']['volumeType'] == valid_rule_parameters['desiredvolumeType']:
        return 'COMPLIANT'
    return 'NON_COMPLIANT'

You have now specified that the rule should be executed on EBS volumes. The resource becomes marked as compliant if the proper volume type is passed into the parameters.
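The snippet above plugs into the evaluate_compliance hook that the RDK generates for the rule. As a rough, self-contained sketch of how that logic behaves (the real generated file contains additional boilerplate; the configuration items below are simplified examples shaped like what AWS Config records):

```python
def evaluate_compliance(configuration_item, valid_rule_parameters):
    """Simplified sketch of the rule's evaluation logic: only EBS volumes
    are evaluated, and a volume is COMPLIANT only when its recorded type
    matches the desiredvolumeType rule parameter (gp3 in this post)."""
    if configuration_item['resourceType'] != 'AWS::EC2::Volume':
        return 'NOT_APPLICABLE'
    if configuration_item['configuration']['volumeType'] == valid_rule_parameters['desiredvolumeType']:
        return 'COMPLIANT'
    return 'NON_COMPLIANT'


# Illustrative configuration items (not real recorded items):
gp2_volume = {'resourceType': 'AWS::EC2::Volume',
              'configuration': {'volumeType': 'gp2'}}
gp3_volume = {'resourceType': 'AWS::EC2::Volume',
              'configuration': {'volumeType': 'gp3'}}
params = {'desiredvolumeType': 'gp3'}

print(evaluate_compliance(gp2_volume, params))  # NON_COMPLIANT
print(evaluate_compliance(gp3_volume, params))  # COMPLIANT
```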

  7. Save your rule by selecting File, then Save.
  8. Test your rule by running the command:
rdk test-local ebs-volume_desired_type
  9. After you see “OK”, deploy the rule by running the command:
rdk deploy ebs-volume_desired_type

In the background, when deploying a custom rule with the RDK, you are provisioning an AWS CloudFormation stack that deploys a Lambda function with the custom rule.

  10. After you see “Config deploy complete,” navigate to the AWS Config console.
  11. Click on Rules, and you should now see the ebs-volume_desired_type rule:

In the AWS Config console, under Rules, there is a new custom rule showing "ebs-volume_desired_type".

Figure 3. Ebs-volume_desired_type Rule Created

You have now successfully created a custom Config rule using the RDK.

Build a Remediation Action

To modify EBS volumes marked as non-compliant, you can build a remediation action by using an AWS Systems Manager Automation runbook. An Automation runbook defines the actions that Systems Manager conducts on your AWS resources when an automation runs. AWS Config uses these documents to remediate non-compliant Config rules. While AWS provides various pre-built automation documents, in this solution you build one from scratch that specifically modifies EBS volumes to gp3.

  1. Go to the AWS Systems Manager console.
  2. Select Documents on the left pane. Select Create document.
  3. Click Automation to build a new automation document.
  4. Name the document config-modifytogp3.
  5. Under Document attributes, navigate to the section named “assume role.” Note that if you plan to use this remediation action in an AWS Config Conformance Pack or as an automatic remediation, you must also create a role with the IAM permissions needed to make the API calls in your automation runbook.

As shown in the following image, you can provide the role as {{ AutomationAssumeRole }} with the output as ['ModifyVolume.Output'].

  6. Add AutomationAssumeRole for the Parameter name. Select String for the data type and No for Required. Lastly, enter ^arn:aws(-cn|-us-gov)?:iam::\d{12}:role\/[\w+=,.@_\/-]+|^$ as the allowed pattern, which matches the ARN of the appropriate IAM role.
  7. Next, click Add a parameter and enter volumeid for the Parameter name. Select String for the data type and Yes for Required. See the following image for guidance:

The role is provided as "{{ AutomationAssumeRole }}" with the output as ["ModifyVolume.Output"]. The first parameter name is "AutomationAssumeRole", with "String" for the data type, "No" for the Required option, and ^arn:aws(-cn|-us-gov)?:iam::\d{12}:role\/[\w+=,.@_\/-]+|^$ as the allowed pattern. The second parameter name is "volumeid", with "String" for the data type and "Yes" selected to indicate a required parameter.

Figure 4. Automation Document Parameters

  8. Below Add Step, enter ModifyVolume for Step name. Select “Call and run AWS API actions” for Action type.
  9. Below Inputs, enter ec2 for Service and ModifyVolume for API.
  10. Select Additional Inputs. Enter VolumeId for Input Name and {{volumeid}} for Input value.
  11. Click to add an optional input. Enter VolumeType for Input Name and gp3 for Input value.
  12. Under Outputs, add Output for Name, $ for Selector, and StringMap for Type.
  13. Click Create automation once finished.

Below "Add Step", "ModifyVolume" is shown for the step name. The action type is "Call and run AWS API actions". Below Inputs, the service is "ec2" and the API is "ModifyVolume". Below "Outputs", "Output" is shown under name, "$" for Selector, and "StringMap" for Type.

Figure 5. Automation Document Steps
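The console steps above produce an automation document roughly equivalent to the following YAML sketch. Field names follow the Systems Manager schemaVersion 0.3 document format; treat this as an illustrative starting point and compare it against the document the console actually generates.

```yaml
# Sketch of the config-modifytogp3 automation document (schemaVersion 0.3).
description: Modify an EBS volume to the gp3 volume type
schemaVersion: '0.3'
assumeRole: '{{ AutomationAssumeRole }}'
outputs:
  - ModifyVolume.Output
parameters:
  AutomationAssumeRole:
    type: String
    default: ''
    allowedPattern: ^arn:aws(-cn|-us-gov)?:iam::\d{12}:role\/[\w+=,.@_\/-]+|^$
  volumeid:
    type: String
mainSteps:
  - name: ModifyVolume
    action: aws:executeAwsApi
    inputs:
      Service: ec2
      Api: ModifyVolume
      VolumeId: '{{ volumeid }}'
      VolumeType: gp3
    outputs:
      - Name: Output
        Selector: $
        Type: StringMap
```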

  14. Now that you created the automation, navigate back to the AWS Config console.
  15. Select Rules.
  16. Select the custom rule ebs-volume_desired_type.
  17. Click Actions, and then select Manage remediation.
  18. Keep the method as manual remediation.
  19. Below Remediation action details, select “config-modifytogp3”.
  20. Below Resource ID parameter, select “volumeId”.
  21. Click Save changes once finished.

You now successfully created a remediation action to modify the non-compliant EBS volumes and change them to the gp3 type.

Test the Solution

To test the solution, create a gp2 EBS volume, which should be marked as non-compliant. Then, manually remediate the non-compliant resource so that the EBS volume changes to a gp3 type.

  1. Navigate to the EC2 console.
  2. Below Elastic Block Store, select Volumes.
  3. Select Create Volume.
  4. Keep every setting as default, and note the Volume Type is gp2.
  5. Select Create Volume, as shown in the following image:

Using the EC2 console, create a new volume where the default settings are selected. Note the Volume type is gp2.

Figure 6. Create gp2 Volume

  6. Note the volume ID of the newly created volume.
  7. Navigate back to the AWS Config console.
  8. Select Rules, and then select the custom rule ebs-volume_desired_type.
  9. Click Actions.
  10. Select Re-evaluate. Now that you created a new EBS volume, Config must evaluate the rule on this new resource.
  11. Under Resources in Scope, find the volume you just created, which should be marked non-compliant.
  12. Select this resource, and select Remediate.
  13. Navigate back to the EC2 console so you can check whether the volume type was successfully modified.
  14. Select Volumes.
  15. Select the recently created volume, and it should now show the gp3 volume type, as shown in the following image:

In the recently remediated volume, under its description, the volume type now reads "gp3".

Figure 7. Gp3 Volume

At this point, you successfully created a custom Config rule, built an automation document, and properly remediated the non-compliant volume.

Clean up

This post creates a number of AWS resources. Should you choose to provision these resources within your own AWS account, some nominal monthly charges will be incurred. To avoid any costs, please make sure to delete the resources you created, including the custom Config rule, the Systems Manager Automation document, and the EBS volume. To remove the custom Config rule and its CloudFormation stack, run rdk undeploy ebs-volume_desired_type.

About the Author

Chloe Goldstein Headshot

Chloe Goldstein

Chloe Goldstein is a Partner Solutions Architect at AWS. Working with AWS Consulting partners and Independent Software Vendors, Chloe helps these organizations leverage AWS best practices to improve the security, availability, and performance of their cloud applications and workloads.