Containers

NEW – Using Amazon ECS Exec to access your containers on AWS Fargate and Amazon EC2

Today, we are announcing the ability for all Amazon ECS users, including developers and operators, to “exec” into a container running inside a task deployed on either Amazon EC2 or AWS Fargate. This new functionality, dubbed ECS Exec, allows users to either run an interactive shell or a single command against a container. This was one of the most requested features on the AWS Containers Roadmap and we are happy to announce its general availability.

It’s a well-known industry security best practice that users should not “ssh” into individual containers and that proper observability mechanisms should be put in place for monitoring, debugging, and log analysis. This announcement doesn’t change that best practice; rather, it helps improve your application’s security posture. There are situations, especially in the early phases of the development cycle of an application, where a quick feedback loop is required. For example, if you are developing and testing locally and you are leveraging docker exec, this new ECS feature will resonate with you. This feature is also useful for getting “break-glass” access to containers to debug high-severity issues encountered in production. To this point, it’s important to note that only tools and utilities that are installed inside the container can be used when “exec-ing” into it. In other words, if the netstat or heapdump utilities are not installed in the base image of the container, you won’t be able to use them.

Before the announcement of this feature, ECS users deploying tasks on EC2 would need to do the following to troubleshoot issues:

  • be granted ssh access to the EC2 instances
    • This alone is a big effort because it requires opening ports, distributing keys or passwords, etc.
  • locate the specific EC2 instance in the cluster where the task that needs attention was deployed
  • ssh into the EC2 instance
  • docker exec into the container to troubleshoot

This is a lot of work (and against security best practices) to simply exec into a container (running on an EC2 instance).

Furthermore, ECS users deploying tasks on Fargate did not even have this option because with Fargate there are no EC2 instances you can ssh into. With ECS on Fargate, it was simply not possible to exec into a container. The options customers had were to redeploy the task on EC2 to be able to exec into its container(s), or to use Cloud Debugging from their IDE.

Please note that ECS Exec is supported via the AWS SDKs, the AWS CLI, as well as AWS Copilot. In the future, we will enable this capability in the AWS Console. Also, this feature only supports Linux containers (Windows container support for ECS Exec is not part of this announcement).

In the next part of this post, we’ll dive deeper into some of the core aspects of this feature. These include an overview of how ECS Exec works, prerequisites, security considerations, and more. The last section of the post will walk through an example that demonstrates how to get direct shell access to an nginx container, covering the aspects above.

In the walkthrough, we will focus on the AWS CLI experience. Refer to this documentation for how to leverage this capability in the context of AWS Copilot.

How ECS Exec works

ECS Exec leverages AWS Systems Manager (SSM), and specifically SSM Session Manager, to create a secure channel between the device you use to initiate the “exec” command and the target container. The engineering team has shared some details about how this works in this design proposal on GitHub. Long story short, we bind-mount the necessary SSM agent binaries into the container(s). In addition, the ECS agent (or Fargate agent) is responsible for starting the SSM core agent inside the container(s) alongside your application code. It’s important to understand that this behavior is fully managed by AWS and completely transparent to the user. That is, the user does not even need to know about this plumbing that involves SSM binaries being bind-mounted and started in the container. The user only needs to care about their application process as defined in the Dockerfile.

In the first release, ECS Exec allows users to initiate an interactive session with a container (the equivalent of a docker exec -it), whether in a shell or via a single command. In the near future, we will enable ECS Exec to also support sending non-interactive commands to the container (the equivalent of a docker exec -t).

Because this feature requires SSM capabilities on both ends, there are a few things that the user will need to set up as a prerequisite depending on their deployment and configuration options (e.g. EC2 vs. Fargate). Which brings us to the next section: prerequisites.

The prerequisites for ECS Exec

As we said, this feature leverages components from AWS SSM. As such, the SSM bits need to be in the right place for this capability to work. This is true for both the initiating side (e.g. your laptop) as well as the endpoint (e.g. the EC2 instance or the Fargate infrastructure where the container is running).

Client-side requirements

If you are using the AWS CLI to initiate the exec command, the only package you need to install is the SSM Session Manager plugin for the AWS CLI. Depending on the platform you are using (Linux, Mac, Windows) you need to set up the proper binaries per the instructions. Today, the AWS CLI v1 has been updated to include this logic. The AWS CLI v2 will be updated in the coming weeks. Remember also to upgrade the AWS CLI v1 to the latest version available. This version includes the additional ECS Exec logic and the ability to hook the Session Manager plugin to initiate the secure connection into the container.
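A quick way to confirm both client-side pieces are in place is to check them from your terminal; when invoked with no arguments, the plugin simply prints a confirmation message (the exact AWS CLI version string will vary):

aws --version
session-manager-plugin

If the second command returns “The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.”, you are good to go.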

Server-side requirements (Amazon EC2)

As described in the design proposal, this capability expects that the SSM components required are available on the host where the container you need to exec into is running (so that these binaries can be bind-mounted into the container as previously mentioned).

If you are using the Amazon-vetted ECS optimized AMI, the latest version includes the SSM prerequisites already, so there is nothing that you need to do. For this initial release, we will not have a way for customers to bake the prerequisites of this new feature into their own AMI. We plan to add this flexibility after launch.
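If you want to double-check what your existing container instances are running, one way to do it (a sketch, assuming you have a cluster with registered instances, jq installed, and the placeholders replaced with your own values) is to inspect the versions reported by the ECS agent:

CONTAINER_INSTANCES=$(aws ecs list-container-instances --cluster <your_cluster_name> --region <your_region> --query 'containerInstanceArns' --output text)
aws ecs describe-container-instances --cluster <your_cluster_name> --container-instances $CONTAINER_INSTANCES --region <your_region> \
    | jq '.containerInstances[] | {ec2InstanceId, agentVersion: .versionInfo.agentVersion}'

Instances launched from the latest ECS optimized AMI will report a recent agent version with the ECS Exec prerequisites already in place.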

Server-side requirements (AWS Fargate)

If the ECS task and its container(s) are running on Fargate, there is nothing you need to do because Fargate already includes all the infrastructure software requirements to enable this ECS capability. Because the Fargate software stack is managed through so-called “Platform Versions” (read this blog if you want an AWS Fargate Platform Versions primer), you only need to make sure that you are using platform version 1.4.0 (which is the most recent version and ships with the ECS Exec prerequisites).
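If you already have tasks running on Fargate and want to confirm which platform version they are on, a quick check (a sketch, assuming jq is installed and the placeholders are replaced with your own values) could be:

aws ecs describe-tasks --cluster <your_cluster_name> --tasks <your_task_id> --region <your_region> \
    | jq -r '.tasks[].platformVersion'

Tasks reporting 1.4.0 already meet the Fargate-side requirement; tasks pinned to older platform versions will need to be redeployed on 1.4.0.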

Configuring the infrastructure for ECS Exec

Now that we have discussed the prerequisites, let’s move on to discuss how the infrastructure needs to be configured for this capability to be invoked and leveraged.

Configuring the logging options (optional)

In addition to logging the session to an interactive terminal (e.g. your laptop, AWS CloudShell or AWS Cloud9), ECS Exec supports logging the commands and their output to either or both of the following destinations:

  • a CloudWatch log group
  • an S3 bucket (optionally with a key prefix)

This, along with logging the commands themselves in AWS CloudTrail, is typically done for archiving and auditing purposes.

Please note that, if your command invokes a shell (e.g. “/bin/bash”), you gain interactive access to the container. In that case, all commands and their outputs inside the shell session will be logged to S3 and/or CloudWatch. The shell invocation command, along with the user that invoked it, will be logged in AWS CloudTrail (for auditing purposes) as part of the ECS ExecuteCommand API call.

However, if you invoke a single command (e.g. “pwd”), only the output of the command will be logged to S3 and/or CloudWatch, and the command itself will be logged in AWS CloudTrail as part of the ECS ExecuteCommand API call. In the case of an audit, extra steps will be required to correlate entries in the logs with the corresponding API calls in AWS CloudTrail. We intend to simplify this operation in the future.

It’s also important to note that the container image requires script (part of util-linux) and cat (part of coreutils) to be installed in order for command logs to be uploaded correctly to S3 and/or CloudWatch. In the walkthrough at the end of this blog, we will use the nginx container image, which happens to have this support already installed. Make sure your image has it installed.
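If you are unsure whether your own image satisfies this requirement, one quick local check (a sketch, assuming Docker is available on your machine and the image ships a POSIX shell) is:

docker run --rm --entrypoint sh <your_image> -c 'command -v script && command -v cat'

If either utility is missing, the command exits with a non-zero status and you will want to add the corresponding package (util-linux and/or coreutils on most distributions) to your image.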

These logging options are configured at the ECS cluster level. The new AWS CLI supports a new (optional) --configuration flag for the create-cluster and update-cluster commands that allows you to specify this configuration. In the walkthrough at the end of this post, we will have an example of a create-cluster command but, for background, this is how the syntax of the new executeCommandConfiguration option looks.

executeCommandConfiguration={kmsKeyId=string,\
                            logging=string,\
                            logConfiguration={cloudWatchLogGroupName=string,\
                                            cloudWatchEncryptionEnabled=boolean,\
                                            s3BucketName=string,\
                                            s3EncryptionEnabled=boolean,\
                                            s3KeyPrefix=string}}

The logging variable determines the behavior of the ECS Exec logging capability:

  • NONE: logging is disabled
  • DEFAULT: log to the configured awslogs driver (if the driver is not configured then no log will be saved)
  • OVERRIDE: log to the provided CloudWatch LogGroup and/or S3 bucket

Please refer to the AWS CLI documentation for a detailed explanation of this new flag.
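For example, to turn on the default logging behavior for an existing cluster without touching anything else, a minimal sketch (with a hypothetical cluster name) would be:

aws ecs update-cluster \
    --cluster <your_cluster_name> \
    --region <your_region> \
    --configuration executeCommandConfiguration="{logging=DEFAULT}"

The walkthrough below uses the OVERRIDE option instead, so that the exec output lands in a dedicated CloudWatch log group and S3 bucket.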

Keep in mind that we are talking about logging the output of the exec session. This has nothing to do with the logging of your application. The application is typically configured to emit logs to stdout or to a log file and this logging is different from the exec command logging we are discussing in this post.

Configuring the task role with the proper IAM policy

The container runs the SSM core agent (alongside the application). This agent, when invoked, calls the SSM service to create the secure channel. Because of this, the ECS task needs to have the proper IAM privileges for the SSM core agent to call the SSM service. This is done by making sure the ECS task role includes a set of IAM permissions that allow it to do so.

To be clear, the SSM agent does not run as a separate container sidecar. The SSM agent runs as an additional process inside the application container. The design proposal in this GitHub issue has more details about this.

In addition, the task role will need to have IAM permissions to log the output to S3 and/or CloudWatch if the cluster is configured for these options. If these options are not configured then these IAM permissions are not required.

The practical walkthrough at the end of this post has an example of this.

Configuring security and audit controls for ECS Exec

So far we have explored the prerequisites and the infrastructure configurations. We’ll now talk about the security controls and compliance support around the new ECS Exec feature.

IAM security controls

As you would expect, security is natively integrated and configured via IAM policies associated with principals (IAM users, IAM groups, and IAM roles) that can invoke a command execution.

This control is managed by the new ecs:ExecuteCommand IAM action. The user permissions can be scoped from the cluster level all the way down to a single container inside a specific ECS task. Due to the highly dynamic nature of task deployments, users can’t rely only on policies that point to specific tasks. Instead, we suggest tagging tasks and creating IAM policies that specify the proper conditions on those tags. Note that both the ecs:ResourceTag/tag-key and aws:ResourceTag/tag-key condition keys are supported. An example of a scoped-down policy to restrict access could look like the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecs:ExecuteCommand"
            ],
            "Condition": {
                "StringEquals": {
                    "ecs:container-name": "<container_name>"
                }
            },
            "Resource": "arn:aws:ecs:<region>:<aws_account_id>:cluster/<cluster_name>"
        }
    ]
}

Note that this policy would scope down an IAM principal to be able to exec only into containers with a specific name and in a specific cluster. Additionally, you could have used a policy condition on tags, as mentioned above.
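For reference, a tag-based variant of this policy (a sketch that assumes tasks are tagged with environment=production, as we will do in the walkthrough below, and whose resource scoping you should adapt to your own setup) could look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecs:ExecuteCommand"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/environment": "production"
                }
            },
            "Resource": "arn:aws:ecs:<region>:<aws_account_id>:task/<cluster_name>/*"
        }
    ]
}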

When we launch non-interactive commands support in the future, we will also provide a control to limit the type of interactivity allowed (e.g. a user may only be allowed to execute “non-interactive” commands, whereas another user may be allowed to execute both “interactive” and “non-interactive” commands).

Security and auditing

As we said at the beginning, allowing users to ssh into individual tasks is often considered an anti-pattern and something that would create concerns, especially in highly regulated environments. This is why, in addition to strict IAM controls, all ECS Exec requests are logged to AWS CloudTrail for auditing purposes.

It is important to understand that only AWS API calls get logged (along with the command invoked). For example, if you open an interactive shell session, only the /bin/bash command is logged in CloudTrail, but not all the other commands run inside the shell. However, these shell commands along with their output would be logged to CloudWatch and/or S3 if the cluster was configured to do so.

The walkthrough below has an example of this scenario.

Data channel encryption

The communication between your client and the container to which you are connecting is encrypted by default using TLS 1.2. It is, however, possible to use your own AWS Key Management Service (KMS) keys to encrypt this data channel. The ECS cluster configuration override supports configuring a customer key as an optional parameter. When specified, the encryption is done using that key. Ultimately, ECS Exec leverages the core SSM capabilities described in the SSM documentation.

ECS Exec in action via the AWS CLI workflow

We have covered the theory so far. Let’s now dive into a practical example. In the following walkthrough, we will demonstrate how you can get an interactive shell in an nginx container that is part of a running task on Fargate. This example isn’t meant to mirror a real-life troubleshooting scenario; rather, it focuses on the feature itself. We are sure there is no shortage of opportunities and scenarios you can think of to apply these core troubleshooting features 🙂

First and foremost, make sure you have the “Client-side requirements” discussed above. That is, the latest AWS CLI version available as well as the SSM Session Manager plugin for the AWS CLI.

The next steps are aimed at deploying the task from scratch. If you are an experienced Amazon ECS user, you may apply the specific ECS Exec configurations below to your own existing tasks and IAM roles. If you are an AWS Copilot CLI user and are not interested in an AWS CLI walkthrough, please refer instead to the Copilot documentation. As a reminder, this feature will also be available via Amazon ECS in the AWS Management Console at a later time.

Our AWS CLI is currently configured with reasonably powerful credentials to be able to execute the next steps successfully.

Let’s start by creating a new empty folder and moving into it. We also declare some variables that we will use later. These include setting the region, the default VPC, and two public subnets in the default VPC. Also note that bucket names need to be unique, so make sure that you set a random bucket name in the export below (in my example, I have used ecs-exec-demo-output-3637495736).

export AWS_REGION=<xxxxxxx>
export VPC_ID=<vpc-xxxx>
export PUBLIC_SUBNET1=<subnet-xxxxx>
export PUBLIC_SUBNET2=<subnet-xxxxx>
export ECS_EXEC_BUCKET_NAME=ecs-exec-demo-output-xxxxxxxxxx
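If you prefer to populate the VPC and subnet variables programmatically, the following sketch does it for the default VPC (it assumes you want to use the default VPC and relies on its subnets being public, which is the default behavior; review the values it returns before moving on):

export AWS_REGION=<xxxxxxx>
export VPC_ID=$(aws ec2 describe-vpcs --filters Name=isDefault,Values=true \
    --query 'Vpcs[0].VpcId' --output text --region $AWS_REGION)
export PUBLIC_SUBNET1=$(aws ec2 describe-subnets --filters Name=vpc-id,Values=$VPC_ID \
    --query 'Subnets[0].SubnetId' --output text --region $AWS_REGION)
export PUBLIC_SUBNET2=$(aws ec2 describe-subnets --filters Name=vpc-id,Values=$VPC_ID \
    --query 'Subnets[1].SubnetId' --output text --region $AWS_REGION)
export ECS_EXEC_BUCKET_NAME=ecs-exec-demo-output-$RANDOM$RANDOM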

As a prerequisite to defining the ECS task role and ECS task execution role, we need to create an IAM trust policy document. Create a file called ecs-tasks-trust-policy.json and add the following content.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ecs-tasks.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Now, we can start creating AWS resources. These are prerequisites to later define and ultimately start the ECS task. These resources are:

  • KMS key to encrypt the ECS Exec data channel
  • ECS cluster
  • CloudWatch log group
    • this log group will contain two streams: one for the container stdout and one for the logging output of the new ECS Exec feature
  • S3 bucket (with an optional prefix) for the logging output of the new ECS Exec feature
  • Security group that we will use to allow traffic on port 80 to hit the nginx container
  • Two IAM roles that we will use to define the ECS task role and the ECS task execution role

These are the AWS CLI commands that create the resources mentioned above, in the same order. Please pay close attention to the new --configuration executeCommandConfiguration option in the ecs create-cluster command.

KMS_KEY=$(aws kms create-key --region $AWS_REGION)
KMS_KEY_ARN=$(echo $KMS_KEY | jq --raw-output .KeyMetadata.Arn)
aws kms create-alias --alias-name alias/ecs-exec-demo-kms-key --target-key-id $KMS_KEY_ARN --region $AWS_REGION
echo "The KMS Key ARN is: "$KMS_KEY_ARN 

aws ecs create-cluster \
    --cluster-name ecs-exec-demo-cluster \
    --region $AWS_REGION \
    --configuration executeCommandConfiguration="{logging=OVERRIDE,\
                                                kmsKeyId=$KMS_KEY_ARN,\
                                                logConfiguration={cloudWatchLogGroupName="/aws/ecs/ecs-exec-demo",\
                                                                s3BucketName=$ECS_EXEC_BUCKET_NAME,\
                                                                s3KeyPrefix=exec-output}}"

aws logs create-log-group --log-group-name /aws/ecs/ecs-exec-demo --region $AWS_REGION

aws s3api create-bucket --bucket $ECS_EXEC_BUCKET_NAME --region $AWS_REGION --create-bucket-configuration LocationConstraint=$AWS_REGION 

ECS_EXEC_DEMO_SG=$(aws ec2 create-security-group --group-name ecs-exec-demo-SG --description "ECS exec demo SG" --vpc-id $VPC_ID --region $AWS_REGION) 
ECS_EXEC_DEMO_SG_ID=$(echo $ECS_EXEC_DEMO_SG | jq --raw-output .GroupId)
aws ec2 authorize-security-group-ingress --group-id $ECS_EXEC_DEMO_SG_ID --protocol tcp --port 80 --cidr 0.0.0.0/0 --region $AWS_REGION 
  
aws iam create-role --role-name ecs-exec-demo-task-execution-role --assume-role-policy-document file://ecs-tasks-trust-policy.json --region $AWS_REGION
aws iam create-role --role-name ecs-exec-demo-task-role --assume-role-policy-document file://ecs-tasks-trust-policy.json --region $AWS_REGION

Note that the two IAM roles do not yet have any policy assigned. This is what we will do:

  • For the ECS task execution role, we will simply attach the existing standard AWS managed policy (AmazonECSTaskExecutionRolePolicy)
  • For the ECS task role, we need to craft a policy that allows the container to open the secure channel session via SSM and log the ECS Exec output to both CloudWatch and S3 (to the LogStream and to the bucket created above)

Create a file called ecs-exec-demo-task-role-policy.json and add the following content. Please make sure you fix:

  • <AWS_REGION>
  • <ACCOUNT_ID>
  • <ECS_EXEC_BUCKET_NAME> (whose value is in the ECS_EXEC_BUCKET_NAME variable)
  • <KMS_KEY_ARN> created above (whose value is in the KMS_KEY_ARN variable)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogStreams",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:<AWS_REGION>:<ACCOUNT_ID>:log-group:/aws/ecs/ecs-exec-demo:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::<ECS_EXEC_BUCKET_NAME>/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetEncryptionConfiguration"
            ],
            "Resource": "arn:aws:s3:::<ECS_EXEC_BUCKET_NAME>"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt"
            ],
            "Resource": "<KMS_KEY_ARN>"
        }
    ]
}

Please note that these IAM permissions need to be set at the ECS task role level (not at the ECS task execution role level). This is because the SSM core agent runs alongside your application in the same container. It’s the container itself that needs to be granted the IAM permission to perform those actions against other AWS services.

It’s also important to remember that the IAM policy above needs to exist along with any other IAM policy that the actual application requires to function. For example, if your task is running a container whose application reads data from Amazon DynamoDB, your ECS task role needs to have an IAM policy that allows reading the DynamoDB table in addition to the IAM policy that allows ECS Exec to work properly.

Now we can execute the AWS CLI commands to bind the policies to the IAM roles.

aws iam attach-role-policy \
    --role-name ecs-exec-demo-task-execution-role \
    --policy-arn "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
    
aws iam put-role-policy \
    --role-name ecs-exec-demo-task-role \
    --policy-name ecs-exec-demo-task-role-policy \
    --policy-document file://ecs-exec-demo-task-role-policy.json

We are ready to register our ECS task definition. Create a file called ecs-exec-demo.json with the following content. Make sure you fix:

  • <ACCOUNT_ID>
  • <AWS_REGION>
{"family": "ecs-exec-demo",
    "networkMode": "awsvpc",
    "executionRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/ecs-exec-demo-task-execution-role",
    "taskRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/ecs-exec-demo-task-role",
    "containerDefinitions": [
        {"name": "nginx",
            "image": "nginx",
            "linuxParameters": {
                "initProcessEnabled": true
            },            
            "logConfiguration": {
                "logDriver": "awslogs",
                    "options": {
                       "awslogs-group": "/aws/ecs/ecs-exec-demo",
                       "awslogs-region": "<AWS_REGION>",
                       "awslogs-stream-prefix": "container-stdout"
                    }
            }
        }
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "256",
    "memory": "512"
}

Note how the task definition does not include any reference or configuration requirement about the new ECS Exec feature, thus allowing you to continue to use your existing definitions with no need to patch them. As a best practice, we suggest setting the initProcessEnabled parameter to true to avoid SSM agent child processes becoming orphaned. However, this is not a requirement.

The following command registers the task definition that we created in the file above.

aws ecs register-task-definition \
    --cli-input-json file://ecs-exec-demo.json \
    --region $AWS_REGION

Let’s launch the Fargate task now! We are going to use some of the environment variables we set above in the previous commands. Make sure they are properly populated. Also note that, in the run-task command, we have to explicitly opt in to the new feature via the --enable-execute-command option. This will instruct the ECS and Fargate agents to bind-mount the SSM binaries and launch them alongside the application. Similarly, you can enable the feature at the ECS service level by using the same --enable-execute-command flag with the create-service command. If a task is deployed or a service is created without the --enable-execute-command flag, you will need to redeploy the task (with run-task) or update the service (with update-service) with these opt-in settings to be able to exec into the container.

aws ecs run-task \
    --cluster ecs-exec-demo-cluster  \
    --task-definition ecs-exec-demo \
    --network-configuration awsvpcConfiguration="{subnets=[$PUBLIC_SUBNET1, $PUBLIC_SUBNET2],securityGroups=[$ECS_EXEC_DEMO_SG_ID],assignPublicIp=ENABLED}" \
    --enable-execute-command \
    --launch-type FARGATE \
    --tags key=environment,value=production \
    --platform-version '1.4.0' \
    --region $AWS_REGION

The run-task command should return the full task details and you can find the task id from there. Search for the taskArn output. The task id represents the last part of the ARN.

"taskArn": "arn:aws:ecs:AWS_REGION:ACCOUNT_ID:task/ecs-exec-demo-cluster/*ef6260ed8aab49cf926667ab0c52c313*"

Note that we have also tagged the task with a particular key-value pair. In this example, we will not leverage it but, as a reminder, you can use tags to create IAM control conditions if you want.

Query the task by using the task id until the task is successfully transitioned into RUNNING (make sure you use the task id gathered from the run-task command).

aws ecs describe-tasks \
    --cluster ecs-exec-demo-cluster \
    --region $AWS_REGION \
    --tasks ef6260ed8aab49cf926667ab0c52c313

Confirm that the "ExecuteCommandAgent" in the task status is also RUNNING and that "enableExecuteCommand" is set to true.
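A compact way to check both fields, assuming jq is installed, is to filter the describe-tasks output:

aws ecs describe-tasks \
    --cluster ecs-exec-demo-cluster \
    --region $AWS_REGION \
    --tasks ef6260ed8aab49cf926667ab0c52c313 \
    | jq '.tasks[0] | {lastStatus, enableExecuteCommand, managedAgents: [.containers[].managedAgents[]]}'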

With the feature enabled and appropriate permissions in place, we are ready to exec into one of its containers.

For the purpose of this walkthrough, we will continue to use the IAM principal with the administrative permissions we have used so far. However, remember that “exec-ing” into a container is governed by the new ecs:ExecuteCommand IAM action and that this action supports conditions on tags.

Let’s execute a command to invoke a shell.

aws ecs execute-command  \
    --region $AWS_REGION \
    --cluster ecs-exec-demo-cluster \
    --task ef6260ed8aab49cf926667ab0c52c313 \
    --container nginx \
    --command "/bin/bash" \
    --interactive

The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.


Starting session with SessionId: ecs-execute-command-0122b68a67f39258e
This session is encrypted using AWS KMS.
root@ip-172-31-32-237:/# hostname
ip-172-31-32-237.ap-southeast-1.compute.internal
root@ip-172-31-32-237:/#
root@ip-172-31-32-237:/# ls
bin   dev                  docker-entrypoint.sh  home  lib64           media  opt   root  sbin  sys  usr
boot  docker-entrypoint.d  etc                   lib   managed-agents  mnt    proc  run   srv   tmp  var
root@ip-172-31-32-237:/#
root@ip-172-31-32-237:/# echo "This page has been created with ECS Exec" > /usr/share/nginx/html/index.html
root@ip-172-31-32-237:/#
root@ip-172-31-32-237:/# exit
exit


Exiting session with sessionId: ecs-execute-command-0122b68a67f39258e.

Note the command above includes the --container parameter. For tasks with a single container this flag is optional. However, for tasks with multiple containers it is required.

As you can see above, we were able to obtain a shell to a container running on Fargate and interact with it. Note that, other than invoking a few commands such as hostname and ls, we have also rewritten the nginx homepage (the index.html file) with the string “This page has been created with ECS Exec.” This task has been configured with a public IP address and, if we curl it, we can see that the page has indeed been changed.

$ curl http://13.212.126.134/
This page has been created with ECS Exec
$

As a reminder, only tools and utilities that are installed and available inside the container can be used with ECS Exec.

We could also simply invoke a single command in interactive mode instead of obtaining a shell, as the following example demonstrates. In this case, I am just listing the content of the container root directory using ls.

aws ecs execute-command  \
    --region $AWS_REGION \
    --cluster ecs-exec-demo-cluster \
    --task 1234567890123456789 \
    --container nginx \
    --command "ls" \
    --interactive

The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.


Starting session with SessionId: ecs-execute-command-00167f6ecbc18ee7e
bin   docker-entrypoint.d   home   managed-agents  opt   run   sys  var
boot  docker-entrypoint.sh  lib    media           proc  sbin  tmp
dev   etc                   lib64  mnt             root  srv   usr


Exiting session with sessionId: ecs-execute-command-00167f6ecbc18ee7e.

The ls command is part of the payload of the ExecuteCommand API call as logged in AWS CloudTrail. Note the sessionId and the command in this extract of the CloudTrail log content. The sessionId and the various timestamps will help correlate the events.

     "requestParameters": {
        "cluster": "ecs-exec-demo-cluster",
        "container": "nginx",
        "command": "ls",
        "interactive": true,
        "task": "3b3b695a6d104ef5ae31fdb596f27429"
    },
    "responseElements": {
        "clusterArn": "arn:aws:ecs:ap-southeast-1:123456789012:cluster/ecs-exec-demo-cluster",
        "containerArn": "arn:aws:ecs:ap-southeast-1:123456789012:container/6c5790cb-7b68-4bab-9b12-aa6e880e00fa",
        "containerName": "nginx",
        "interactive": true,
        "session": {
            "sessionId": "ecs-execute-command-00167f6ecbc18ee7e",
            "streamUrl": "wss://ssmmessages.ap-southeast-1.amazonaws.com/v1/data-channel/ecs-execute-command-00167f6ecbc18ee7e?role=publish_subscribe",
            "tokenValue": "HIDDEN_DUE_TO_SECURITY_REASONS"
        },
        "taskArn": "arn:aws:ecs:ap-southeast-1:123456789012:task/ecs-exec-demo-cluster/3b3b695a6d104ef5ae31fdb596f27429"
    },
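If you prefer to pull these events from the CLI rather than browsing the CloudTrail console, a sketch using CloudTrail's lookup-events would be (keep in mind that events typically take several minutes to show up):

aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=ExecuteCommand \
    --region $AWS_REGION \
    --query 'Events[].CloudTrailEvent' \
    --output text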

The output of the same ls command is also logged to the S3 bucket (under the exec-output prefix) and to the CloudWatch log stream in the /aws/ecs/ecs-exec-demo log group that we configured at the cluster level.

Hint: if something goes wrong with logging the output of your commands to S3 and/or CloudWatch, it is possible that you have misconfigured your IAM policies. In general, a good way to troubleshoot these problems is to investigate the content of the file /var/log/amazon/ssm/amazon-ssm-agent.log inside the container.
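Since that log file lives inside the container, you can read it with ECS Exec itself. For example, reusing the task from this walkthrough:

aws ecs execute-command  \
    --region $AWS_REGION \
    --cluster ecs-exec-demo-cluster \
    --task ef6260ed8aab49cf926667ab0c52c313 \
    --container nginx \
    --command "cat /var/log/amazon/ssm/amazon-ssm-agent.log" \
    --interactive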

This concludes the walkthrough, which demonstrated how to execute a command in a running container, how to audit which user accessed the container using CloudTrail, and how to log each command and its output to S3 or CloudWatch Logs.

[Update] If you experience any issues using ECS Exec, we have released a script that checks whether your configuration satisfies the prerequisites. You can download the script here.

Tearing down the environment

Run the following commands to tear down the resources we created during the walkthrough. Make sure that the variables resolve properly and that you use the correct ECS task id.


aws ecs stop-task --cluster ecs-exec-demo-cluster --region $AWS_REGION --task <your task id> 
aws ecs delete-cluster --cluster ecs-exec-demo-cluster --region $AWS_REGION

aws logs delete-log-group --log-group-name /aws/ecs/ecs-exec-demo --region $AWS_REGION

# Be careful running this command. This will delete the bucket we previously created
aws s3 rm s3://$ECS_EXEC_BUCKET_NAME --recursive
aws s3api delete-bucket --bucket $ECS_EXEC_BUCKET_NAME

aws iam detach-role-policy --role-name ecs-exec-demo-task-execution-role --policy-arn "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
aws iam delete-role --role-name ecs-exec-demo-task-execution-role

aws iam delete-role-policy --role-name ecs-exec-demo-task-role --policy-name ecs-exec-demo-task-role-policy
aws iam delete-role --role-name ecs-exec-demo-task-role 

aws kms schedule-key-deletion --key-id $KMS_KEY_ARN --region $AWS_REGION

aws ec2 delete-security-group --group-id $ECS_EXEC_DEMO_SG_ID --region $AWS_REGION

Conclusions

In this post, we have discussed the release of ECS Exec, a feature that allows ECS users to more easily interact with and debug containers deployed on either Amazon EC2 or AWS Fargate. We are eager for you to try it out and tell us what you think about it, and how this is making it easier for you to debug containers on AWS and specifically on Amazon ECS.

Our partners are also excited about this announcement and some of them have already integrated support for this feature into their products. Customers may require monitoring, alerting, and reporting capabilities to ensure that their security posture is not impacted when ECS Exec is leveraged by their developers and operators. For more information please refer to the following posts from our partners:

Aqua: Aqua Supports New Amazon ECS exec Troubleshooting Capability
Datadog: Datadog monitors ECS Exec requests and detects anomalous user activity
SysDig: Running commands securely in containers with Amazon ECS Exec and Sysdig
ThreatStack: Making debugging easier on Fargate
TrendMicro: Cloud One – Conformity Rules Support Amazon ECS Exec

Please keep a close eye on the official documentation to remain up to date with the enhancements we are planning for ECS Exec. This feature is available starting today in all public regions including Commercial, China, and AWS GovCloud via API, SDKs, AWS CLI, AWS Copilot CLI, and AWS CloudFormation.

Massimo Re Ferre

Massimo is a Senior Principal Technologist at AWS. He has been working on containers since 2014 and is now part of the DECS (Developers, Events, Containers, Serverless) organization at AWS. Massimo has a blog at https://it20.info and his Twitter handle is @mreferre.

Saloni Sonpal

Saloni is a Product Manager in the AWS Containers Services team. She focuses on all things AWS Fargate. With her launches at Fargate and EC2, she has continually improved the compute experiences for AWS customers. Prior to that, she has had years of experience as a Program Manager and Developer at Azure Database services and Microsoft SQL Server. She is a creative problem solver and loves taking on new challenges.