AWS Compute Blog

Centralized Container Logs with Amazon ECS and Amazon CloudWatch Logs

September 8, 2021: Amazon Elasticsearch Service has been renamed to Amazon OpenSearch Service.

Containers make it easy to package and share applications, but they often run on a shared cluster. So how do you access your application logs for debugging? Fortunately, Docker provides a log driver that lets you send container logs to a central log service, such as Splunk or Amazon CloudWatch Logs.

Centralized logging has multiple benefits: your Amazon EC2 instances' disk space isn't consumed by logs, and log services often include additional capabilities that are useful for operations. For example, CloudWatch Logs lets you create metric filters that can alarm when there are too many errors, and it integrates with Amazon Elasticsearch Service and Kibana so that you can perform powerful queries and analysis. This post shows how to configure Amazon ECS and CloudWatch Logs.
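
As a rough sketch of the metric filter idea, the following AWS CLI command counts log events that contain the word "error" and publishes them as a custom metric you could alarm on. The filter name, filter pattern, metric name, and namespace here are illustrative placeholders, not values from this walkthrough:

# Count log events containing "error" and publish them as a custom metric
# (filter name, pattern, metric name, and namespace are illustrative placeholders)
aws logs put-metric-filter \
    --log-group-name awslogs-test \
    --filter-name error-count \
    --filter-pattern "error" \
    --metric-transformations metricName=ErrorCount,metricNamespace=MyApp,metricValue=1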

Step 1: Create a CloudWatch Log group

Navigate to the CloudWatch console and choose Logs. On the Actions menu, choose Create log group.
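
If you prefer the AWS CLI, the equivalent command is shown below. The log group name awslogs-test matches the sample task definition later in this post; substitute your own name if you use something different:

# Create the log group that the awslogs driver will write to
aws logs create-log-group --log-group-name awslogs-test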

Step 2: Create an ECS task definition

The following steps assume you already have an ECS cluster created. If you do not, go through the ECS first run wizard.

A task definition defines the containers you are running and the log driver options. Navigate to the ECS console, choose Task Definitions, and then choose Create new Task Definition. Set the task definition Name and choose Add container. Set the container name, image, memory, and CPU values. In the Storage and Logging section, choose the awslogs log driver. Set awslogs-group to the log group name that you created in step 1, set awslogs-region to the region in which your task will run, and set awslogs-stream-prefix to a custom prefix that identifies the set of logs you are streaming, such as your application's name.

The awslogs-stream-prefix option was recently added to give you the ability to associate a log stream with the ECS task ID and container name. Previously, the log stream was named with the Docker container ID, which made it hard to associate with the task: if there was an error in a log, there was no direct way to find which container was having the problem. Now, the CloudWatch log stream name includes your custom prefix, the container name, and the task ID, which makes it simple to associate logs with a task's containers.
Here is a sample task definition JSON for a simple web server that displays a welcome message:

{
    "networkMode": "bridge",
    "taskRoleArn": null,
    "containerDefinitions": [
        {
            "memory": 300,
            "portMappings": [
                {
                    "hostPort": 80,
                    "containerPort": 80,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "entryPoint": [
                "sh",
                "-c"
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "awslogs-test",
                    "awslogs-region": "us-west-2",
                    "awslogs-stream-prefix": "nginx"
                }
            },
            "name": "simple-app",
            "image": "httpd:2.4",
            "command": [
                "/bin/sh -c \"echo 'Congratulations! Your application is now running on a container in Amazon ECS.'  > /usr/local/apache2/htdocs/index.html && httpd-foreground\""
            ],
            "cpu": 10
        }
    ],
    "family": "cw-logs-example"
}
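
If you save this JSON to a file, you can also register the task definition with the AWS CLI instead of the console. The file name below is just an example:

# Register the task definition from a local JSON file (file name is an example)
aws ecs register-task-definition --cli-input-json file://cw-logs-example.json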

Step 3: Run the task

In the ECS console, choose Clusters and select your cluster, then choose the Tasks tab. Choose Run new task, and in the Task definition list, select the task definition that you created in step 2. Choose Run Task.
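
You can also launch the task from the AWS CLI. The sketch below assumes your cluster is named "default"; substitute your own cluster name if it differs:

# Launch one copy of the task on the cluster (cluster name assumed to be "default")
aws ecs run-task --cluster default --task-definition cw-logs-example --count 1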

You will see your task in the PENDING state. Select the task to open the detail view, and refresh it until the task reaches the RUNNING state.
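
To watch the state transition from the command line instead, you can poll the task until its lastStatus reports RUNNING. Replace the task ID below (it reuses the example ID from later in this post) with the one returned by run-task:

# Check the task status; repeat until lastStatus is RUNNING
# (replace the task ID with the one returned by run-task)
aws ecs describe-tasks --cluster default --tasks 600e016a-9301-4f81-90b2-6bfd0ad2d975 \
    --query 'tasks[0].lastStatus'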

Step 4: Generate logs

If you’re using the sample task definition, the web server will have already sent an initialization message to the log stream. You can also connect to the web server to generate additional log messages.
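
For example, you could send a few requests to the container instance to produce additional log entries. The host name below is a placeholder; use your container instance's public address and the port you mapped in the task definition:

# Generate a few more log entries (replace the host with your instance's address)
curl http://<container-instance-public-dns>/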

Step 5: View the log

The task view now includes a link to the log stream. Select the link and navigate to the CloudWatch console. The log stream name includes the prefix that you specified in the task definition, the container name, and the ECS task ID (nginx/simple-app/600e016a-9301-4f81-90b2-6bfd0ad2d975). This makes it easy to find the log stream from the ECS task and find the task from the log stream.
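
You can also read the stream from the AWS CLI. The stream name below reuses the example task ID shown above; substitute the stream name for your own task:

# Fetch the events from the log stream created by the awslogs driver
# (the stream name reuses the example task ID shown above)
aws logs get-log-events \
    --log-group-name awslogs-test \
    --log-stream-name nginx/simple-app/600e016a-9301-4f81-90b2-6bfd0ad2d975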

Cleanup

When you are done, you can stop the task in the ECS console and remove the log stream in the CloudWatch console.
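
The equivalent CLI cleanup, assuming the same cluster name, task ID, and stream name used in the examples above, would look roughly like this:

# Stop the running task (cluster name and task ID are the examples used above)
aws ecs stop-task --cluster default --task 600e016a-9301-4f81-90b2-6bfd0ad2d975

# Remove the log stream; delete the whole log group instead if you no longer need it
aws logs delete-log-stream \
    --log-group-name awslogs-test \
    --log-stream-name nginx/simple-app/600e016a-9301-4f81-90b2-6bfd0ad2d975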

Conclusion

We hope you find these improvements useful. You can also use CloudWatch Logs for ECS agent and Docker logs. For more information, see the ECS documentation. If you have suggestions or questions, please comment below.