AWS Management Tools Blog

Use new resource types in AWS Resource Groups to support day-to-day operations

AWS Resource Groups recently announced support for additional resource types, including Amazon DynamoDB tables, AWS CloudTrail trails, Amazon SageMaker models, and many more. This blog post walks you through examples of how you can use AWS Resource Groups and its new resource type support to drive some of your day-to-day operations.

AWS includes a variety of resource types you can work with, each serving a particular purpose. Examples include an Amazon EC2 instance, an AWS Lambda function, or an Amazon S3 bucket. If you work with multiple resources, you might find it useful to act on them as a group rather than move from one AWS service to another for each task. This is where AWS Resource Groups comes in. It helps you organize your AWS resources so that you can address them as a single unit. Resource groups make it easier to drive tasks on large numbers of resources at one time. A resource group is a collection of AWS resources in the same AWS Region that match tag-based criteria provided in a search query. You can define these queries in the Resource Groups console or by using the AWS CLI. The search query includes lists of resource types and tag key/value pairs.
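For example, a tag-based resource query for the DynamoDB tables used later in this post could look like the following. The TAG_FILTERS_1_0 query type is Resource Groups' tag-based query format, and the Query value is itself a JSON string listing resource type filters and tag filters (the env:test tag shown here is the one used in the walkthrough below):

```json
{
  "Type": "TAG_FILTERS_1_0",
  "Query": "{\"ResourceTypeFilters\":[\"AWS::DynamoDB::Table\"],\"TagFilters\":[{\"Key\":\"env\",\"Values\":[\"test\"]}]}"
}
```

You could pass a query like this as the --resource-query argument of the aws resource-groups create-group command.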

A resource group can represent an application, a software component, a business unit, an environment, a team, or even an area of ownership. For example, if you are developing a web application, you can maintain separate sets of resources for your alpha, beta, and release stages. You can use resource groups to indicate that resources are a part of each of these release stages, and then you can act on them separately. Resource groups can also be nested, to represent more complex, multi-tier applications.

You can also use resource groups to perform bulk actions. For example, if you manage large numbers of related resources, such as EC2 instances that make up an application layer, you might need to perform bulk actions on these resources at one time. Other examples of bulk actions include the following:

  • Applying updates or security patches.
  • Upgrading an application version.
  • Opening or closing ports to network traffic.
  • Collecting specific log and monitoring data.

Let’s look at how you can use the AWS Management Console to create and manage a resource group that contains a collection of Amazon DynamoDB tables that are a part of your database application tier. In this example, we have a running 3-tier application called MobileApp1 in our test environment.

This 3-tier application contains a backend tier, a frontend tier, and a database tier, each with its own matching resource group and underlying AWS resources. Now let’s see how to use the AWS CLI to enable Amazon DynamoDB Auto Scaling on our database tier.

Navigating to the Database resource group, you can see we have three Amazon DynamoDB tables. Note that the query that makes up this resource group includes all DynamoDB tables (AWS::DynamoDB::Table) that have been assigned a particular tag key/value pair (env:test).

First, we’ll use the following CLI command to list the members of the Database resource group.

CLI Input:
aws resource-groups list-group-resources --group-name "Database" \
         --filters Name=resource-type,Values=AWS::DynamoDB::Table
CLI Output:
{
    "ResourceIdentifiers": [
        {
            "ResourceType": "AWS::DynamoDB::Table",
            "ResourceArn": "arn:aws:dynamodb:us-east-1:242482887137:table/Table1"
        },
        {
            "ResourceType": "AWS::DynamoDB::Table",
            "ResourceArn": "arn:aws:dynamodb:us-east-1:242482887137:table/Table2"
        },
        {
            "ResourceType": "AWS::DynamoDB::Table",
            "ResourceArn": "arn:aws:dynamodb:us-east-1:242482887137:table/Table3"
        }
    ]
}

Now we’ll use the same CLI command to list the DynamoDB tables, parse that list into resource IDs, and use the result set as input for our DynamoDB Auto Scaling command.
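The parsing in the next command works because every ARN shares the same colon-delimited layout, so the "table/TableName" resource ID that Application Auto Scaling expects is simply everything from the sixth colon-delimited field onward. Here is that extraction step in isolation, using one of the ARNs from the output above:

```shell
# Illustrative: parse a DynamoDB table ARN into the "table/TableName"
# resource-id expected by the application-autoscaling commands.
arn="arn:aws:dynamodb:us-east-1:242482887137:table/Table1"

# Keep colon-delimited fields 6 and onward, i.e. "table/Table1"
echo "$arn" | cut -d : -f 6-
```

The same `cut -d : -f 6-` expression appears in the pipeline below, applied to each ARN returned by list-group-resources.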

CLI Input:
# Extract the "table/Name" resource-id from each table ARN in the group
TABLES=$(aws resource-groups list-group-resources \
    --group-name "Database" \
    --filters Name=resource-type,Values=AWS::DynamoDB::Table \
    --output text \
    | awk '{print $2}' | cut -d : -f 6-)

# Register each table's write capacity as a scalable target
for table in $TABLES
do
    echo "$table"
    aws application-autoscaling register-scalable-target \
        --service-namespace dynamodb \
        --scalable-dimension "dynamodb:table:WriteCapacityUnits" \
        --min-capacity 5 \
        --max-capacity 10 \
        --resource-id "$table"
done
CLI Output:
table/Table1
table/Table2
table/Table3

This registers the three DynamoDB tables as scalable targets (to complete the Auto Scaling configuration, you would also attach a scaling policy with the put-scaling-policy command), which we can verify using the following CLI command:

CLI Input:
TABLES=$(aws resource-groups list-group-resources \
    --group-name "Database" \
    --filters Name=resource-type,Values=AWS::DynamoDB::Table \
    --output text \
    | awk '{print $2}' | cut -d : -f 6-)

# Confirm that a scalable target is registered for each table
for table in $TABLES
do
    echo "$table"
    aws application-autoscaling describe-scalable-targets \
        --service-namespace dynamodb \
        --resource-id "$table" --output text
done
CLI Output:
table/Table1
SCALABLETARGETS 1536137274.85   10      5       table/Table1    arn:aws:iam::(..truncated..)e      dynamodb:table:WriteCapacityUnits       dynamodb
table/Table2
SCALABLETARGETS 1536141329.56   10      5       table/Table2    arn:aws:iam::(..truncated..)e      dynamodb:table:WriteCapacityUnits       dynamodb
table/Table3
SCALABLETARGETS 1536141330.13   10      5       table/Table3    arn:aws:iam::(..truncated..)e      dynamodb:table:WriteCapacityUnits       dynamodb

Conclusion

In this example, you’ve seen how you can take action against a collection of resources. Bulk actions can also be automated using tools such as AWS Systems Manager Automation. With Automation, you can stage multiple API calls or CLI commands to any AWS service, directly through the Systems Manager console. To learn more about how the same scaling actions shown in this blog post can be used in Systems Manager, read how AWS Systems Manager Automation supports invoking AWS APIs.

About the author

Florian

Florian leads the team behind AWS Resource Groups. Before joining AWS, he worked as an interim CTO and freelance engineer for companies ranging from small startups to large organizations. He is always excited when someone uses the feedback button in the bottom left of the console.