Latest updates to AWS Fargate for Amazon ECS
Recently, we announced features that improve the configuration and metric-gathering experience for tasks deployed via AWS Fargate for Amazon ECS. Based on customer feedback, we added the following features:
- Environment file support
- Deeper integration with AWS Secrets Manager using secret versions and JSON keys
- More granular network metrics, as well as additional data available via the task metadata endpoint
Throughout this post, we will dive into these updates and explain where they bring value when deploying your containers on AWS Fargate for Amazon ECS. We will start by deploying a simple demo application, and then walk through each of these features.
What are we deploying?
We’re going to build a web app that displays the various features via separate routes. As we progress through the post, we will walk through each route in the application to help spell out the functionality behind each feature. The web application is built in Python using the Flask framework. Below is the application code, which gives an overview of what we expect each route to produce.
#!/usr/bin/env python3
from flask import Flask, render_template
from flask_nav import Nav
from flask_nav.elements import Navbar, View, Subgroup, Text
from flask_bootstrap import Bootstrap
from os import getenv, environ
from requests import get
import json

app = Flask(__name__)
Bootstrap(app)
nav = Nav(app)

# Navigation bar with one view per feature demonstrated in this post
nav.register_element('frontend', Navbar(
    'ECS Demo',
    View('Home', '.index'),
    View('Environment Variables', '.env_vars'),
    View('Secrets', '.secrets'),
    View('Metadata', '.metadata'),
    Text(getenv('DEPLOY_ENVIRONMENT', 'DEPLOY_ENV UNAVAILABLE')),
))


@app.route('/')
def index():
    return render_template('index.html')


@app.route('/env-vars')
def env_vars():
    # Show every environment variable whose key contains DEPLOY_ENVIRONMENT
    environment_variables = {x: y for x, y in environ.items() if "DEPLOY_ENVIRONMENT" in x}
    return render_template('env-vars.html', ev_dict=environment_variables)


@app.route('/secrets')
def secrets():
    # Show every environment variable whose key contains SECRETS_MANAGER
    secret_kvs = {x: y for x, y in environ.items() if "SECRETS_MANAGER" in x}
    return render_template('secrets.html', secret_dict=secret_kvs)


@app.route('/metadata')
def metadata():
    # The task metadata endpoint URI is exposed to the container as an environment variable
    metadata_endpoint = getenv('ECS_CONTAINER_METADATA_URI_V4')
    json_response = json.loads(get("{}/task".format(metadata_endpoint)).text)
    launch_type = json_response['LaunchType']
    container_arn = json_response['Containers'][0]['ContainerARN']
    log_driver = json_response['Containers'][0]['LogDriver']
    metrics = json.loads(get("{}/stats".format(metadata_endpoint)).text)
    return render_template(
        'metadata.html', launch_type=launch_type, container_arn=container_arn,
        log_driver=log_driver, rx=metrics['network_rate_stats']['rx_bytes_per_sec'],
        tx=metrics['network_rate_stats']['tx_bytes_per_sec']
    )


if __name__ == '__main__':
    app.run(host='0.0.0.0')
The Dockerfile is straightforward; a minimal version might look like the following (it assumes the application code lives in app.py and its dependencies are listed in requirements.txt):
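# Minimal illustrative Dockerfile; the file names (app.py, requirements.txt) are assumptions
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python3", "app.py"]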
We’re now going to deploy our Docker image and build our demo environment using the AWS Cloud Development Kit (CDK). The code below creates two ECS services, as well as the dependent infrastructure and resources such as the VPC network, IAM roles, ECS cluster, Application Load Balancer, and more. The CDK application file is named cdk_app.py.
#!/usr/bin/env python3
from aws_cdk import core, aws_ecs, aws_ecs_patterns, aws_ecr_assets, aws_s3, aws_secretsmanager


class FargateDemo(core.Stack):

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Existing resources: the S3 bucket holding the environment files and the Secrets Manager secret
        s3_bucket = aws_s3.Bucket.from_bucket_name(self, "EnvConfigBucket", bucket_name="ecs-demo-env-files")
        sm_secret = aws_secretsmanager.Secret.from_secret_name_v2(self, "SecretJson", secret_name="ecs-demo")

        # Build and publish the demo application image
        container_image = aws_ecr_assets.DockerImageAsset(self, "Image", directory=".", exclude=["cdk.out"])

        # Shared task image options: environment variables and Secrets Manager references
        task_def = aws_ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
            image=aws_ecs.ContainerImage.from_docker_image_asset(asset=container_image),
            container_port=5000,
            environment={
                "TASK_DEF_SHARED_DEPLOY_ENVIRONMENT": "SAME_VALUE_SHARED_BETWEEN_ENVS",
            },
            secrets={
                "SECRETS_MANAGER_SECRET_ALL_PREVIOUS_VERSION": aws_ecs.Secret.from_secrets_manager(secret=sm_secret),
                "SECRETS_MANAGER_SECRET_ALL_CURRENT_VERSION": aws_ecs.Secret.from_secrets_manager(secret=sm_secret),
                "SECRETS_MANAGER_SECRET_JSON_KEY": aws_ecs.Secret.from_secrets_manager(secret=sm_secret, field="SECRET_KEY_2"),
            },
        )

        # One load-balanced Fargate service per deployment environment
        for deploy_env in ["test", "production"]:
            fargate_ecs_service = aws_ecs_patterns.ApplicationLoadBalancedFargateService(
                self, "FargateService{}".format(deploy_env),
                cpu=256,
                memory_limit_mib=512,
                platform_version=aws_ecs.FargatePlatformVersion.VERSION1_4,
                task_image_options=task_def
            )
            fargate_ecs_service.target_group.set_attribute(
                key='deregistration_delay.timeout_seconds',
                value='5'
            )

            # Add the environment file and the AWSPREVIOUS secret version via CloudFormation overrides
            cfn_task_definition = fargate_ecs_service.task_definition.node.default_child
            cfn_task_definition.add_override(
                "Properties.ContainerDefinitions.0.EnvironmentFiles",
                [
                    {
                        "Type": "s3",
                        "Value": "{}/{}.env".format(s3_bucket.bucket_arn, deploy_env)
                    }
                ]
            )
            cfn_task_definition.add_override(
                "Properties.ContainerDefinitions.0.Secrets.0.ValueFrom",
                "{}::AWSPREVIOUS:".format(sm_secret.secret_arn)
            )

            # Output the load balancer URL for each environment
            _lb_url = "http://{}".format(fargate_ecs_service.load_balancer.load_balancer_dns_name)
            core.CfnOutput(
                self, "Output{}".format(deploy_env),
                value=_lb_url,
                export_name="EnvAppUrl{}".format(deploy_env)
            )

            # The task execution role needs read access to the environment files in S3
            s3_bucket.grant_read(fargate_ecs_service.task_definition.execution_role)


app = core.App()
FargateDemo(app, "fargate-features-demo")
app.synth()
Before we deploy the environment, we will create an S3 bucket as well as a secret in AWS Secrets Manager. We’ll explain more about how we’ll use these when we dive into the features.
# Create S3 Bucket
aws s3 mb s3://ecs-demo-env-files
# Create two environment files: production.env & test.env
echo -e "DEPLOY_ENVIRONMENT=TEST\nDEPLOY_ENVIRONMENT_FILE_NAME=test.env" > test.env
echo -e "DEPLOY_ENVIRONMENT=PRODUCTION\nDEPLOY_ENVIRONMENT_FILE_NAME=production.env" > production.env
# Copy files to Amazon S3 Bucket
aws s3 cp --recursive --exclude "*" --include "*.env" ./ s3://ecs-demo-env-files/
# Create secret with JSON object
aws secretsmanager create-secret \
--name ecs-demo \
--secret-string '{ "SECRET_KEY_1": "SECRET_VALUE_1", "SECRET_KEY_2": "SECRET_VALUE_2" }'
# Update the secret by adding a new SECRET_KEY_3 key to the JSON
aws secretsmanager update-secret \
--secret-id ecs-demo \
--secret-string '{ "SECRET_KEY_1": "SECRET_VALUE_1", "SECRET_KEY_2": "SECRET_VALUE_2", "SECRET_KEY_3": "SECRET_VALUE_3" }'
# Deploy our demo environment and applications
cdk deploy --app "python3 cdk_app.py"
# Store the application URLs as environment variables for reference throughout the demo
prod_url=$(aws cloudformation describe-stacks --stack-name fargate-features-demo --query 'Stacks[].Outputs[?ExportName == `EnvAppUrlproduction`].OutputValue' --output text)
test_url=$(aws cloudformation describe-stacks --stack-name fargate-features-demo --query 'Stacks[].Outputs[?ExportName == `EnvAppUrltest`].OutputValue' --output text)
Our environment will take a few minutes to deploy. Once done, let’s begin walking through the features.
Environment files
Related containers-roadmap issue: https://github.com/aws/containers-roadmap/issues/371
A common practice when working with containers is to expose dynamic environment values to the application. These values can change depending on the environment, the deployment, and other factors. This is done by exposing those values as environment variables to the container at runtime. With Amazon ECS, there are two ways that this can be achieved:
- Add environment variable key value pairs to the task definition.
- Add a path to an environment file, which stores the environment variables, in the task definition.
With the first approach, your environment variables are stored in the task definition. If you want to change those values, a new version of the task definition needs to be created with the updated data, and then a service deployment or new task launch is required. This approach tightly couples the configuration of your task with the deployment. A container definition that declares environment variables inline might look like the following (the variable names and values here are purely illustrative):
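"environment": [
  {
    "name": "DEPLOY_ENVIRONMENT",
    "value": "TEST"
  },
  {
    "name": "LOG_LEVEL",
    "value": "INFO"
  }
]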
As more environment variables are needed, this could become cumbersome to manage in the task definition. There may also be other teams that want to expose environment variables to a container, but don’t have direct access to update the task definitions. This is where the second approach comes in.
Using an environment file decouples the environment variables from the task definition. This can be used for managing variables between environments like test and production, or to meet separation of duty concerns. When a change is made to the data inside an environment file, changes don’t have to be made to the task definition itself, as long as the name and location of the environment file it points to remain the same. This means that the next time a task is launched, it will reflect the changes in the latest environment file. Ultimately, this enables teams to manage environment values separately from the service configurations.
Tutorial:
For our demonstration, we created two environment files and stored them in the same S3 bucket. We deployed our production and test services, with each service’s task definition pointing to the environment file for its deployment environment (production.env and test.env). The services share a common environment variable, which is referenced directly in the task definition. The environment files are where we differentiate between the production and test environment variables.
Here are the relevant snippets from the generated container definition that reference the environment file and the shared environment variable for the test environment (abridged for clarity):
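"environmentFiles": [
  {
    "value": "arn:aws:s3:::ecs-demo-env-files/test.env",
    "type": "s3"
  }
],
"environment": [
  {
    "name": "TASK_DEF_SHARED_DEPLOY_ENVIRONMENT",
    "value": "SAME_VALUE_SHARED_BETWEEN_ENVS"
  }
]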
And here are the environment file contents for reference:
# test.env file contents
DEPLOY_ENVIRONMENT=TEST
DEPLOY_ENVIRONMENT_FILE_NAME=test.env
# production.env file contents
DEPLOY_ENVIRONMENT=PRODUCTION
DEPLOY_ENVIRONMENT_FILE_NAME=production.env
Grabbing the URLs of each service, we will navigate to the /env-vars route. The application will print out all of the environment variables that have DEPLOY_ENVIRONMENT in the key. The output should show the same value for the TASK_DEF_SHARED_DEPLOY_ENVIRONMENT key, while showing different values for the DEPLOY_ENVIRONMENT* variables, which came from the environment files based on the deployment environment.
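To spot-check this from the command line, you could hit the route on each service using the URL variables we stored earlier (a quick sketch; the routes return the rendered HTML):
# Fetch the /env-vars route for the test and production services
curl -s "${test_url}/env-vars"
curl -s "${prod_url}/env-vars"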
Looking at the above images, we can clearly see that each of the Fargate services’ tasks sourced the common environment variable from the task definition, as well as the environment variables from the deployment-environment-specific environment files. That completes the environment files demo; let’s move on to the updated integration with AWS Secrets Manager.
AWS Secrets Manager integration
Related containers-roadmap issue: https://github.com/aws/containers-roadmap/issues/636
It’s inevitable that an application running in production will need to communicate with another service or data store using credentials that must be kept secret. It’s a good practice to avoid hardcoding these secrets into your application, or storing them as plaintext environment variables in the task definition. AWS Secrets Manager is a service that helps you protect secrets needed to access your applications, services, and other resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
Amazon ECS already integrated with AWS Secrets Manager; however, based on customer feedback, there was an opportunity for deeper integration between the two services. One of the requested features was enabling tasks to reference specific versions of a secret from AWS Secrets Manager. A key benefit of using AWS Secrets Manager is the rotation functionality built into the service. Whether it’s a native integration like Amazon RDS, or a custom AWS Lambda function that rotates the secret, it eliminates the need for humans to intervene when working with secrets. Some teams may be comfortable pointing to the latest version of a secret, while others may prefer to pin to a specific version and upgrade to a newer version at their own pace. This is now easy to achieve with Amazon ECS, regardless of which compute type is used (EC2 or Fargate). For more information on how to enable this integration, check out the documentation.
A common practice when storing secrets in AWS Secrets Manager is to store JSON objects that contain multiple key value pairs for an environment. This led to another feature requested by customers: enabling task definitions to point an environment variable to a specific key within the JSON stored in AWS Secrets Manager. This eliminates the need to write custom code to parse the JSON.
Let’s demonstrate how this works in our application.
Tutorial:
In the demo below, the application shows which secrets are being exposed to the task. We are doing this intentionally to explain the different ways that you can integrate ECS tasks with Secrets Manager. Each secret is exposed to the container as an environment variable; however, we want to avoid storing secrets as plaintext in the task definition. The integration is enabled by simply using the secrets parameter, which references the ARN of the secret in Secrets Manager. Below is a snippet from the container definition, which is part of the task definition. The configuration shows the key that will be exposed as the environment variable name, and the value that will be translated from an ARN to the actual secret value at runtime (the region, account ID, and suffix in the ARNs below are placeholders).
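"secrets": [
  {
    "name": "SECRETS_MANAGER_SECRET_ALL_PREVIOUS_VERSION",
    "valueFrom": "arn:aws:secretsmanager:us-west-2:111122223333:secret:ecs-demo-AbCdEf::AWSPREVIOUS:"
  },
  {
    "name": "SECRETS_MANAGER_SECRET_ALL_CURRENT_VERSION",
    "valueFrom": "arn:aws:secretsmanager:us-west-2:111122223333:secret:ecs-demo-AbCdEf"
  },
  {
    "name": "SECRETS_MANAGER_SECRET_JSON_KEY",
    "valueFrom": "arn:aws:secretsmanager:us-west-2:111122223333:secret:ecs-demo-AbCdEf:SECRET_KEY_2::"
  }
]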
As shown above, each secret reference is pointing to the same secret, but exposing the value in different ways. Let’s break it down:
- SECRETS_MANAGER_SECRET_ALL_CURRENT_VERSION: Since the valueFrom points directly to the ARN of the secret, this will always pull the latest version of the secret.
- SECRETS_MANAGER_SECRET_ALL_PREVIOUS_VERSION: You may notice that the ARN has AWSPREVIOUS in the identifier. This tells ECS to resolve the previous version of the secret.
- SECRETS_MANAGER_SECRET_JSON_KEY: To put it simply, we are referencing a specific key (SECRET_KEY_2) from the JSON stored in Secrets Manager.
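You can observe the same versioning behavior outside of ECS with the AWS CLI, by retrieving the secret with and without the AWSPREVIOUS staging label:
# Retrieve the current version of the secret (AWSCURRENT is the default staging label)
aws secretsmanager get-secret-value --secret-id ecs-demo
# Retrieve the previous version of the secret via the AWSPREVIOUS staging label
aws secretsmanager get-secret-value --secret-id ecs-demo --version-stage AWSPREVIOUS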
For more information on how to reference secrets, check out the documentation.
Now that we explained how we are getting the secrets, let’s go back to the application and see how the secrets are exposed within it.
Using the test_url value we stored earlier, let’s open the /secrets route of the test application and see the result. While one wouldn’t want to expose secrets in a real application, the demo provides an effective way to visualize the different ways that secrets from Secrets Manager can be referenced in ECS.
We can see that our Fargate task has the secrets exposed exactly how we requested. That wraps up the secrets section. Let’s move on to the task metadata updates.
Task metadata updates:
When running containers in Amazon ECS, there may be instances where the application or a monitoring service needs to gather data about the task, or gather statistics about the container itself, at runtime. To learn more about what is available from the task metadata endpoint, check out the documentation. With the latest update to the metadata service, tasks running on AWS Fargate for Amazon ECS can now access more detailed network metrics from the container, specifically bytes transmitted and received per second (Tx and Rx). In addition, there are now three more fields available from the task metadata: launch type, container ARN, and details about the log driver being used.
The addition of the transmit and receive network metrics in the metadata endpoint increases visibility into the network performance of your Fargate tasks. With this latest update, those task-level network metrics can also be visualized in CloudWatch Container Insights. This means that customers can derive meaningful network metrics from the tasks running in Fargate and take appropriate action as needed, whether that’s autoscaling on network traffic patterns or simply better understanding the application’s traffic.
For our demo application, we call the metadata endpoint, whose URI is made available to the container as an environment variable, and parse the response to visualize the new fields. While this demo application most likely wouldn’t represent a production workload, it gives an example of how to access the data via the metadata endpoint. For example, the network metrics would be a good fit for a sidecar monitoring container, or for calculating the result of a container health check.
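For reference, the same data can be queried directly from a shell inside the running container; here is a quick sketch (it assumes curl and jq are available in the image):
# Task-level metadata: launch type, container ARN, and log driver
curl -s "${ECS_CONTAINER_METADATA_URI_V4}/task" | jq '.LaunchType, .Containers[0].ContainerARN, .Containers[0].LogDriver'
# Container-level stats, including the new network rate metrics
curl -s "${ECS_CONTAINER_METADATA_URI_V4}/stats" | jq '.network_rate_stats'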
Finally, the addition of these network metrics to CloudWatch Container Insights gives you more visibility into the health of your containers, whether they run on Fargate or as EC2-backed ECS tasks. In the image below, we can visualize our Fargate services running in our ECS cluster, now including the network metrics.
Wrapping up
In this blog post, we demoed and highlighted the newest feature additions to AWS Fargate for Amazon ECS. To get started, use platform version 1.4 for your Fargate tasks; for EC2-backed tasks, update your ECS agent to version 1.43.0 by following the documentation.