AWS Cloud Operations Blog
Four ways to retrieve any AWS service property using AWS CloudFormation (Part 3 of 3)
This post is the last in a series on how to build customizations using AWS CloudFormation. In part 1, we introduced you to cfn-response and crhelper and discussed the scenarios they are best suited for. In part 2, we addressed a coverage gap in our public roadmap and showed you how to build an AWS CloudFormation macro. In this post, we’ll show you how to use AWS CloudFormation resource types to build customizations.
When we talk to customers, we realize that many of them avoid using customizations because they add complexity and risk to their infrastructure code. Injecting home-grown code into mission-critical enterprise deployments is not trivial. In enterprises with tens of thousands of AWS resources, it can have a large impact. Because customer requirements are so varied, we provide multiple approaches to better meet their needs. You can pick the option that best fits your scenario or challenge. Sometimes, you need to use cfn-response to quickly triage and resolve an urgent production problem. Other times, a robust resource type is more appropriate, especially for very large, multi-account, and multi-Region deployments. Understanding all four options allows you to solve problems in a versatile, case-specific way.
Prerequisites
We use the cloudformation-cli (cfn), which allows you to build your own resource types with the same tools that AWS now uses to build AWS CloudFormation native resource types. Because cloudformation-cli was designed to be language-agnostic, we will use the cloudformation-cli-python-plugin. We will also use YAML and Docker Engine for platform-independent packaging.
Option 4: AWS CloudFormation resource types
About this blog post
Time to read | 15 minutes
Time to complete | ~30 minutes
Learning level | Expert (400)
AWS services | AWS CloudFormation; Amazon Relational Database Service (Amazon RDS); AWS Secrets Manager
Software tools | AWS CLI version 2; AWS CloudFormation CLI Python plugin; Docker Engine (latest stable version); Linux, macOS, or Windows Subsystem for Linux; Python 3.7+ (includes the pip installer)
With the release of AWS CloudFormation resource types, you now have the option to create, provision, and manage resources in the same way that AWS developers create native resources. In this context, a resource type includes both a specification and handlers that control API interactions with the underlying AWS or third-party service. To create new private resource types, you model, develop, and register them. To speed up development, you use the language-agnostic AWS CloudFormation CLI tool, which you invoke with the cfn command. The tool generates a base model and scaffolding code automatically, and it assists with local testing, validation, and registration of your resource types.
Resources created with this method inherit features like rollback, change sets, the ability to write custom error messages that are displayed in event logs, resource import, drift detection, and other planned enhancements. Resource types require less management overhead, because there is no AWS Lambda function to provision or manage. You can also customize timeouts beyond 60 minutes, which you cannot do when you use the other options we’ve discussed.
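For example, a long-running create handler could declare its own timeout directly in the resource schema through the handler-level timeoutInMinutes key. The following fragment is only an illustrative sketch; the permission and the 120-minute value are hypothetical, and it is not part of the schema used later in this post:
"handlers": {
    "create": {
        "permissions": ["rds:CreateDBInstance"],
        "timeoutInMinutes": 120
    }
}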
The cfn tool minimizes the work required by developers: you only implement the lifecycle handlers your resource needs (CREATE, READ, UPDATE, DELETE, and LIST). In the example used in this post, only the READ handler contains real logic. After a resource type is tested and registered, you can reference it in any AWS CloudFormation template in the same way you refer to any other resource.
The following example addresses the request in GitHub issue 105 to fetch the DbiResourceId of an AWS::RDS::DBInstance using the resource type method. To keep this consistent with the other options, we use the Python plugin for the AWS CloudFormation CLI, but you can choose other supported languages such as Java or Go.
Getting started
To avoid any Python library version conflicts, create a virtual environment. The following commands create an option4 project directory and a virtual environment under custom_app/env inside it.
$ cd ~ && pwd
/home/my-userid
$ mkdir option4
$ cd option4
$ python3 -m venv custom_app/env
$ source custom_app/env/bin/activate
Use pip to install the cloudformation-cli-python-plugin. The installation also pulls in the other required dependencies.
(env) $ pip install cloudformation-cli-python-plugin
Create a directory named demo-rds-detail, and then run cfn init to initialize a new resource type. When initialization starts, you are asked for a resource type name; in this post, we use Demo::RDS::Detail. When prompted for a language, choose Python 3.7. Finally, choose Docker to package all Python dependencies. We recommend Docker because it resolves any platform-specific dependencies for you.
(env) $ pwd
/home/my-userid/option4
(env) $ mkdir demo-rds-detail
(env) $ cd demo-rds-detail
(env) $ cfn init
Initializing new project
Do you want to develop a new resource(r) or a module(m)?.
>> r
What's the name of your resource type?
(Organization::Service::Resource)
>> Demo::RDS::Detail
Select a language for code generation:
[1] python36
[2] python37
(enter an integer):
>> 2
Use docker for platform-independent packaging (Y/n)?
This is highly recommended unless you are experienced
with cross-platform Python packaging.
>> Y
Initialized a new project in /home/my-userid/option4/demo-rds-detail
(env) $
You have now successfully initialized the project!
Modeling resource types
Take a moment to inspect the files that cfn init generated under the demo-rds-detail folder. Start by updating the default resource type schema in the demo-rds-detail.json file. Replace it with the following content:
{
"typeName": "Demo::RDS::Detail",
"description": "An example resource schema demonstrating some basic constructs and validation rules.",
"sourceUrl": "https://github.com/aws-cloudformation/aws-cloudformation-rpdk.git",
"properties": {
"DBInstanceIdentifier": {
"description": "DBInstanceIdentifier of DB Instance",
"type": "string"
},
"DbiResourceId": {
"description": "The AWS Region-unique, immutable identifier for the DB instance",
"type": "string"
}
},
"additionalProperties": false,
"required": [],
"readOnlyProperties": [
"/properties/DBInstanceIdentifier",
"/properties/DbiResourceId"
],
"primaryIdentifier": [
"/properties/DBInstanceIdentifier"
],
"handlers": {
"create": {
"permissions": []
},
"read": {
"permissions": [
"rds:DescribeDBInstances"
]
},
"update": {
"permissions": []
},
"delete": {
"permissions": []
},
"list": {
"permissions": []
}
}
}
Resource type schemas
The resource schema defines the shape of the resource: its inputs, outputs, and other metadata. To simplify authoring and provide a consistent modeling experience, this meta-schema approach defines all properties that are accepted and returned by the resource type.
The Properties section maps to the properties you allow when the resource is declared in a template. A resource must contain at least one property. In this section, you can explicitly define validation rules that are evaluated on your behalf before your execution logic starts. You can specify the following rules:
- Whether the attribute is treated as a string.
- Whether the attribute accepts an array of values (a list or enum).
- Regular expression (regex) patterns that the property value must comply with.
- Minimum or maximum lengths that the property value must comply with.
By explicitly defining these validation rules in the schema, you avoid adding input validation logic to your handlers and can focus on more critical exception handling. For all allowed property elements, see Resource type schema in the CloudFormation Command Line Interface User Guide.
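For illustration only (these constraints are not part of the schema used in this post), the DBInstanceIdentifier property could enforce the same rules that the sample template later applies through its parameter definition:
"DBInstanceIdentifier": {
    "description": "DBInstanceIdentifier of the DB instance",
    "type": "string",
    "pattern": "^[a-zA-Z][a-zA-Z0-9]*$",
    "minLength": 1,
    "maxLength": 63
}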
In our example, we identify DBInstanceIdentifier as the input and DbiResourceId as the attribute we will retrieve with !GetAtt. To get the missing DbiResourceId, we need the DBInstanceIdentifier, which is also the primaryIdentifier of our type and can therefore be retrieved through !Ref. We set additionalProperties to false so that no undeclared properties are accepted. By listing DBInstanceIdentifier and DbiResourceId as readOnlyProperties, we explicitly state that they are not mutable.
The following table lists what these schema options do.
Schema section | Description | Comments
properties | All input and output properties allowed when the resource is used. | This is where you define validation rules.
additionalProperties | Whether properties not declared in the schema are accepted. Resource type schemas require this to be set to false. | Setting it to false ensures that only the declared properties can be passed to the resource.
required | Input properties that must be supplied; a subset of the allowed properties. | If the resource declaration omits a required property, you get an error, even if other properties are included.
readOnlyProperties | Properties that are set by the service and returned by read requests, but that cannot be specified by the user. | In this example, DbiResourceId is not an input; it is retrieved by a !GetAtt call and is not mutable.
primaryIdentifier | The property that is returned by a !Ref operation. | In this example, DBInstanceIdentifier can be retrieved by both !Ref and !GetAtt, but DbiResourceId is only accessible through !GetAtt.
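To make the !Ref and !GetAtt distinction concrete, here is a hypothetical Outputs fragment that uses the DbiIdRetriever logical ID from the template later in this post; it is a sketch rather than part of the walkthrough:
Outputs:
  RetrievedInstanceIdentifier:
    # !Ref on the custom type returns its primaryIdentifier (DBInstanceIdentifier)
    Value: !Ref DbiIdRetriever
  RetrievedDbiResourceId:
    # DbiResourceId is read-only, so it is only reachable through !GetAtt
    Value: !GetAtt DbiIdRetriever.DbiResourceId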
Next, the handlers section specifies the operations performed by the resource type. Here we need only one permission, rds:DescribeDBInstances for the read handler, because we only describe the Amazon RDS for MariaDB instance.
Now run cfn validate to confirm that the schema adheres to the resource type schema specification. After it validates successfully, run cfn generate to regenerate the resource-role.yaml template (which creates the required IAM execution role) and the data model in models.py, so that both reflect the schema's latest changes. Every time you change the schema file, run cfn generate to keep your project code in sync with the schema.
(env) $ pwd
/home/my-userid/option4/demo-rds-detail
(env) $ cfn validate
Resource schema for Demo::RDS::Detail is valid
(env) $ cfn generate
Generated files for Demo::RDS::Detail
(env) $
Develop the handler
Replace the demo-rds-detail/src/handlers.py file generated by the Python plugin with the following code snippet:
import logging
from typing import Any, MutableMapping, Optional
from cloudformation_cli_python_lib import (
Action,
HandlerErrorCode,
OperationStatus,
ProgressEvent,
Resource,
SessionProxy,
exceptions,
)
from .models import ResourceHandlerRequest, ResourceModel
# Use this logger to forward log messages to CloudWatch Logs.
LOG = logging.getLogger(__name__)
TYPE_NAME = "Demo::RDS::Detail"
resource = Resource(TYPE_NAME, ResourceModel)
test_entrypoint = resource.test_entrypoint
@resource.handler(Action.CREATE)
def create_handler(
session: Optional[SessionProxy],
request: ResourceHandlerRequest,
callback_context: MutableMapping[str, Any],
) -> ProgressEvent:
model = request.desiredResourceState
progress: ProgressEvent = ProgressEvent(
status=OperationStatus.SUCCESS,
resourceModel=model,
)
return progress
@resource.handler(Action.UPDATE)
def update_handler(
session: Optional[SessionProxy],
request: ResourceHandlerRequest,
callback_context: MutableMapping[str, Any],
) -> ProgressEvent:
model = request.desiredResourceState
progress: ProgressEvent = ProgressEvent(
status=OperationStatus.SUCCESS,
resourceModel=model,
)
return progress
@resource.handler(Action.DELETE)
def delete_handler(
session: Optional[SessionProxy],
request: ResourceHandlerRequest,
callback_context: MutableMapping[str, Any],
) -> ProgressEvent:
model = request.desiredResourceState
progress: ProgressEvent = ProgressEvent(
status=OperationStatus.SUCCESS,
resourceModel=model,
)
return progress
@resource.handler(Action.READ)
def read_handler(
session: Optional[SessionProxy],
request: ResourceHandlerRequest,
callback_context: MutableMapping[str, Any],
) -> ProgressEvent:
model = request.desiredResourceState
progress: ProgressEvent = ProgressEvent(
status=OperationStatus.IN_PROGRESS,
resourceModel=model,
)
try:
if isinstance(session, SessionProxy):
# 1. retrieve boto3 client
client = session.client("rds")
# 2. Invoke describe/retrieve function using ResourceRef
response = client.describe_db_instances(DBInstanceIdentifier=model.DBInstanceIdentifier)
# 3. Parse and return required attributes
model.DbiResourceId = response.get('DBInstances')[0].get('DbiResourceId')
LOG.info('Retrieved DBiResourceId!')
# Setting Status to success will signal to cfn that the operation is complete
progress.status = OperationStatus.SUCCESS
except TypeError as e:
# exceptions object lets CloudFormation know the type of failure that occurred
raise exceptions.InternalFailure(f"was not expecting type {e}")
return progress
@resource.handler(Action.LIST)
def list_handler(
session: Optional[SessionProxy],
request: ResourceHandlerRequest,
callback_context: MutableMapping[str, Any],
) -> ProgressEvent:
return ProgressEvent(
status=OperationStatus.SUCCESS,
resourceModels=[],
)
Each resource type handler must always return a ProgressEvent object. As the documentation illustrates, the ProgressEvent object includes, among other execution control details, a ResourceModel object. As you write and execute your handlers, you must pass back a ResourceModel object that conforms to the schema we defined.
Let's inspect the logic in the read_handler method. We create a boto3 RDS client and make a single describe_db_instances API call to retrieve the missing DbiResourceId property. By setting the missing property on the model object, you can now access DbiResourceId using the !GetAtt intrinsic function from any AWS CloudFormation template. The ProgressEvent object also allows authors to set the operation status to IN_PROGRESS, SUCCESS, or FAILED to signal AWS CloudFormation and to populate the event status in the console. Because our scenario only requires reading a property, all other handler methods in the code snippet simply return SUCCESS.
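If you want clearer failure reporting, the read handler could also translate a missing database instance into CloudFormation's NotFound error code instead of a generic internal failure. The following is a minimal sketch of such a variant, reusing the imports, resource object, and TYPE_NAME already defined in handlers.py; it assumes that exceptions.NotFound in cloudformation-cli-python-lib accepts a type name and an identifier, so verify the signature against the version you install:
@resource.handler(Action.READ)
def read_handler(
    session: Optional[SessionProxy],
    request: ResourceHandlerRequest,
    callback_context: MutableMapping[str, Any],
) -> ProgressEvent:
    model = request.desiredResourceState
    progress: ProgressEvent = ProgressEvent(
        status=OperationStatus.IN_PROGRESS,
        resourceModel=model,
    )
    if not isinstance(session, SessionProxy):
        raise exceptions.InternalFailure("No AWS session was provided to the handler")
    client = session.client("rds")
    try:
        response = client.describe_db_instances(
            DBInstanceIdentifier=model.DBInstanceIdentifier
        )
        model.DbiResourceId = response["DBInstances"][0]["DbiResourceId"]
        progress.status = OperationStatus.SUCCESS
    except client.exceptions.DBInstanceNotFoundFault:
        # Surfaces as a NotFound handler error code in the stack events
        raise exceptions.NotFound(TYPE_NAME, model.DBInstanceIdentifier)
    except (KeyError, IndexError, TypeError) as e:
        raise exceptions.InternalFailure(f"Unexpected describe_db_instances response: {e}")
    return progress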
Register the resource type
Now that you have modeled the schema and developed your handler's logic, verify that requirements.txt refers to the latest version of the cloudformation-cli-python-lib support library. As of this writing, the demo-rds-detail/requirements.txt file should contain the following:
cloudformation-cli-python-lib>=2.1.3
Now it’s time to register the resource type using the cfn submit command. Your terminal output should look something like this:
(env) $ cfn submit --set-default
Starting Docker build. This may take several minutes if the image 'lambci/lambda:build-python3.7' needs to be pulled first.
Successfully submitted type. Waiting for registration with token ‘01234567890-random-unique-registration-token’ to complete.
Registration complete.
{'ProgressStatus': 'COMPLETE', 'Description': 'Deployment is currently in DEPLOY_STAGE of status COMPLETED; ', 'TypeArn': 'arn:aws:cloudformation:YOUR-REGION:123456789101:type/resource/Demo-RDS-Detail', 'TypeVersionArn': 'arn:aws:cloudformation:YOUR-REGION:123456789101:type/resource/Demo-RDS-Detail/00000001', 'ResponseMetadata': {'RequestId': '123456789-random-request-id-123456789', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': ‘123456789-random-request-id-123456789‘, 'content-type': 'text/xml', 'content-length': '669', 'date': 'Thu, 03 Dec 2020 22:06:09 GMT'}, 'RetryAttempts': 0}}
Set default version to 'arn:aws:cloudformation:YOUR-REGION:123456789101:type/resource/Demo-RDS-Detail/00000001
(env) $
The first time cfn submit runs, it creates the IAM execution role from the resource-role.yaml file that the tool generated and keeps in sync with your resource's schema. You can override this and use an existing IAM role with the --role-arn parameter. The resource runs under this role, with the permissions you specified in the schema, regardless of the calling user's permissions. Much like how AWS SAM packaged and deployed our macro in Option 3 (part 2 of this series), cfn submit runs a series of steps to build, package, and upload your code, and then registers the extension with the AWS CloudFormation registry, which makes the resource type available for use in stacks in your account. If you encounter errors, check the rpdk.log file, which logs all activities performed by the CLI.
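For example, if your organization already maintains a dedicated execution role, the registration call might look like the following; the account ID and role name are placeholders:
(env) $ cfn submit --set-default --role-arn arn:aws:iam::123456789012:role/demo-rds-detail-execution-role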
You should now see the Demo::RDS::Detail resource type in the private AWS CloudFormation registry. You can use the describe-type command to verify the registration:
(env) $ aws cloudformation describe-type --type RESOURCE --type-name Demo::RDS::Detail
{
"SourceUrl": "https://github.com/aws-cloudformation/aws-cloudformation-rpdk.git",
"Description": "An example resource schema demonstrating some basic constructs and validation rules.",
"TimeCreated": "2020-11-19T23:22:15.142000+00:00",
"Visibility": "PRIVATE",
"TypeName": "Demo::RDS::Detail",
"LastUpdated": "2020-12-03T03:51:14.351000+00:00",
"DeprecatedStatus": "LIVE",
"ProvisioningType": "FULLY_MUTABLE",
"Type": "RESOURCE",
"Arn": "arn:aws:cloudformation:us-east-1:123456789012:type/resource/Demo-RDS-Detail/00000001",
"Schema": "[details omitted]"
}
Take it for a spin!
Here is an AWS CloudFormation template based on the Amazon RDS example from the public documentation to demonstrate that we have indeed resolved the missing attribute issue.
cfn-rds-resource.yml
AWSTemplateFormatVersion: 2010-09-09
Description: >-
AWS CloudFormation Sample Template for creating an Amazon RDS DB instance:
**WARNING** This template creates an Amazon RDS DB instance. You will be billed for the AWS
resources used if you create a stack from this template.
Parameters:
DBInstanceID:
Default: mydbinstance
Description: My database instance
Type: String
MinLength: '1'
MaxLength: '63'
AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
ConstraintDescription: >-
Must begin with a letter and must not end with a hyphen or contain two
consecutive hyphens.
DBInstanceClass:
Default: db.t2.micro
Description: DB instance class
Type: String
DBAllocatedStorage:
Default: '20'
Description: The size of the database (GiB)
Type: Number
MinValue: '20'
MaxValue: '16384'
ConstraintDescription: must be between 20 and 16,384 GiB.
DBEngineVersion:
Default: '10.5.8'
Description: The MariaDB engine version
Type: String
Resources:
DBSecret:
Type: AWS::SecretsManager::Secret
Properties:
Description: 'Password MariaDB database access'
GenerateSecretString:
SecretStringTemplate: '{"username": "customchangeme"}'
GenerateStringKey: 'password'
PasswordLength: 16
ExcludeCharacters: '"@/\'
MyDemoDB:
Type: AWS::RDS::DBInstance
Properties:
DBInstanceIdentifier: !Ref DBInstanceID
DBInstanceClass: !Ref DBInstanceClass
AllocatedStorage: !Ref DBAllocatedStorage
BackupRetentionPeriod: 0
Engine: mariadb
EngineVersion: !Ref DBEngineVersion
MasterUsername: !Join ['', ['{{resolve:secretsmanager:', !Ref DBSecret, ':SecretString:username}}' ]]
MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref DBSecret, ':SecretString:password}}' ]]
DbiIdRetriever:
Type: Demo::RDS::Detail
DependsOn: MyDemoDB
Properties:
DBInstanceIdentifier: !Ref DBInstanceID
Outputs:
InstanceId:
Description: InstanceId of the newly created RDS Instance
Value: !Ref MyDemoDB
DbiResourceId:
Description: DbiResourceId from custom resource type
Value: !GetAtt DbiIdRetriever.DbiResourceId
What the template does
We start with the Parameters section to capture input values: DBInstanceID, DBInstanceClass, DBAllocatedStorage, and DBEngineVersion. By default, when you run this template without changing any parameters, it creates a 20 GiB Amazon RDS for MariaDB database on a db.t2.micro instance.
In the Resources section, we use AWS Secrets Manager to generate and store the database's administrative user name and password, which we dynamically reference in the MyDemoDB resource. We then create the AWS::RDS::DBInstance using the input parameters. Our last resource invokes the Demo::RDS::Detail resource type that we just created to retrieve the missing DbiResourceId property. Notice that there is no AWS Lambda function to reference and no macro to import in order to use the resource type.
The Outputs section displays the Amazon RDS for MariaDB instance name and the DbiResourceId using the !Ref and !GetAtt intrinsic functions.
Now that you know what this template does, deploy it! Copy and paste the template code into an empty file named cfn-rds-resource.yml, and then deploy and view the stack outputs using the following AWS CLI commands. Because it takes about 10 minutes to create the MariaDB AWS::RDS::DBInstance, the operation should take approximately 10 to 15 minutes to complete.
# deploy stack
(env) $ aws cloudformation deploy --stack-name rds-demo-retrieve-stack --template-file cfn-rds-resource.yml
# validate stack output
(env) $ aws cloudformation describe-stacks --stack-name rds-demo-retrieve-stack --query "Stacks[0].Outputs"
[
{
"OutputKey": "InstanceId",
"OutputValue": "mydbinstance",
"Description": "InstanceId of the newly created RDS Instance"
},
{
"OutputKey": "DbiResourceId",
"OutputValue": "db-12345EXAMPLE",
"Description": "DbiResourceId from custom resource type"
}
]
You have now successfully created a new AWS CloudFormation resource type that looks and operates like any other resource created by AWS or third-party contributors. Among its possible uses, we showed how you can use resource types to retrieve any additional attribute or property.
AWS CloudFormation resource types are still new, but because the code is open source and available on GitHub, the tools that fast-track development are maturing quickly. You incur nominal handler operation charges when you create resource types outside of the AWS::*, Alexa::*, and Custom::* namespaces. Although macros and resource types are more advanced than the options explained in part 1, they result in more durable solutions.
Clean up resources
To clean up the resources created in this option, first delete the rds-demo-retrieve-stack, and then deregister the resource type from the AWS CloudFormation registry. Also delete the stack that cfn submit created to provision the execution role. Finally, you can deactivate and remove the Python virtual environment.
(env) $ aws cloudformation delete-stack --stack-name rds-demo-retrieve-stack
(env) $ aws cloudformation deregister-type --type-name Demo::RDS::Detail --type RESOURCE
(env) $ aws cloudformation update-termination-protection --stack-name demo-rds-detail-role-stack --no-enable-termination-protection
(env) $ aws cloudformation delete-stack --stack-name demo-rds-detail-role-stack
(env) $ deactivate
$
After you remove the stacks, you might notice that a stack named CloudFormationManagedUploadInfrastructure remains. This stack was created the first time you ran cfn submit. It provisions the infrastructure required to upload artifacts to AWS CloudFormation as you create future resource types: two Amazon Simple Storage Service (Amazon S3) buckets, an IAM role, and an AWS Key Management Service (AWS KMS) key, all of which are reused as you build other resource types. Keeping this stack saves you from creating and maintaining multiple buckets, roles, and keys. If you decide to delete it, make sure that the buckets are empty first, as shown in the sketch that follows.
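This sketch shows one way to find and empty the buckets before deleting the stack; the bucket name is a placeholder, and versioned buckets may also require you to delete old object versions before the stack can be removed:
$ aws cloudformation describe-stack-resources --stack-name CloudFormationManagedUploadInfrastructure --query "StackResources[?ResourceType=='AWS::S3::Bucket'].PhysicalResourceId"
$ aws s3 rm s3://REPLACE-WITH-BUCKET-NAME --recursive
$ aws cloudformation delete-stack --stack-name CloudFormationManagedUploadInfrastructure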
Conclusion
You now have multiple options for addressing the roughly 70 coverage issues tracked in the AWS CloudFormation public roadmap. In this blog series, we demonstrated four distinct ways to customize AWS CloudFormation, regardless of the skill level of your infrastructure automation teams. There are even more options for custom programming with other open source and partner tools. When you use AWS CloudFormation, you benefit from a mature community of hundreds of thousands of customers across large and small organizations. Because we see many users with varied preferences, unique requirements, and challenging constraints, we offer customization options for all types of AWS Cloud practitioners.
As we demonstrated with the macros and resource type examples, you can use AWS CloudFormation templates to better deal with dependencies and parameterization. You can do so while following the good programming practices we mentioned in part 1, including handling exceptions, restricting runtimes with timeouts, and ensuring all expected events are handled properly. With macros (part 2) and resource types, you have solutions that are suitable for deployment in large organizations that follow best practices related to the distribution and maintenance of production code.
Further reading
Resource types expose features like resource import and drift detection to your custom code even if the target resource is outside of the ones natively provided by AWS. You can even create resource types to integrate third-party services, local data center applications, and various other API-driven solutions alongside native AWS resources in your stack. We have barely scratched the surface of what we can do with macros and resource types.
For more information about resource types, see Creating resource types in the CloudFormation Command Line Interface User Guide. You’ll also find the resource type implementations and the AWS CloudFormation CLI tool and Python plugin we used here on GitHub.
Other posts in this three-part series:
Get in touch with us on our social media channels and let us know how you have used these tools. Happy coding!
About the Author
Gokul Sarangaraju is a Senior Technical Account Manager at AWS. He helps customers adopt AWS services and provides guidance on AWS cost and usage optimization. His areas of expertise include delivering solutions using AWS CloudFormation and various other automation techniques. Outside of work, he enjoys playing volleyball and poker – Set, Spike, All-In! You can find him on Twitter at @saranggx.
Luis Colon is a Senior Developer Advocate at AWS specializing in CloudFormation. Over the years he has been a conference speaker, an agile methodology practitioner, open source advocate, and engineering manager. When he's not chatting about all things related to infrastructure as code, DevOps, Scrum, and data analytics, he's golfing or mixing progressive trance and deep house music. You can find him on Twitter at @luiscolon1.