Connecting to an interactive shell on your containers running in AWS Fargate using AWS Copilot

Since AWS Fargate launched in 2017, many developers have adopted the serverless compute model for containers. Instead of managing EC2 instances to run their containers, these developers are able to think of scaling in terms of container size and container count. Over time, AWS Fargate has gained more and more features that make it capable of running all the same workloads you can run directly on EC2. One of those new features is “ECS exec”. With ECS exec you can open an interactive shell to one of your containers. You can learn more about ECS exec on the official launch blog.

One of our core goals with ECS is to provide a powerful API with lots of underlying capabilities that can serve customer needs at any scale. We also build user-friendly tools on top of that underlying API to make it easier to consume. AWS Copilot is one of those tools. It provides a developer-friendly command line experience for ECS + Fargate. AWS Copilot guides developers through the process of building, releasing, and operating their containerized applications. AWS Copilot now supports “ECS exec” as well. With Copilot and ECS exec you can easily open an interactive shell from your local machine, connected to one of your remote containers in AWS Fargate.

Opening an interactive shell is hard

Securely establishing a connection to and controlling a remote machine is challenging. First of all, any protocol that allows remote control of another machine is something that attackers on the public internet will attempt to abuse, so the protocol must be hardened to prevent unintended use.

Additionally, a security-focused organization likely wants to limit access for its own employees as well. It may allow developers to access some services in some environments, but other services that hold sensitive customer information, like passwords, should probably be off-limits. When engineers are allowed to access part of the system, there should be an auditable log of the commands that were run, to guard against internal misuse as well.

If you can open an interactive shell, then it is possible to make mutations, such as installing a new package or modifying a system setting. Ideally, engineers would only be able to access the ephemeral containers themselves. After connecting to a container and running commands within it, you could discard that container completely and launch a fresh replacement that has not been touched yet. This way there are no persistent mutations to the system.

In summary, there are a few problems that need to be solved. We want to protect your infrastructure from attackers on the public internet. We want to reduce the access surface area for your own engineers and create an audit log of commands that were run. And we want to prevent any long-term drift from persistent mutations.

Introducing ECS exec

ECS exec solves these hard problems for you. It is built on top of AWS Systems Manager (SSM). When you launch an ECS exec enabled task, the SSM agent is automatically injected into your container as the task starts up. If the task’s IAM role is configured to allow ECS exec, then this agent is able to open a connection back to SSM. When you want to connect to your container, you run the SSM session manager plugin on your local machine to connect to the SSM service, and through that service to the destination container.
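If you want to wire this up manually with the AWS CLI rather than through Copilot, the flow looks roughly like the sketch below. The cluster, service, and container names are placeholders, and the task role still needs permission to open that connection back to SSM.

# Turn on ECS exec for an existing service and roll out new tasks
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --enable-execute-command \
  --force-new-deployment

# Once a new task is running, open an interactive shell in one of its containers
aws ecs execute-command \
  --cluster my-cluster \
  --task <task-id> \
  --container my-container \
  --interactive \
  --command "/bin/sh"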

This neatly solves the issue of access control. There is no direct access from the public internet to your tasks, and your EC2 host or Fargate task does not have to accept any inbound connections at all. Instead you connect from your local machine to AWS SSM and then SSM is able to use the connection that your application container itself has opened back to SSM.

You can easily narrow the surface area for your own engineers by controlling which tasks are SSM enabled and have an IAM role that allows them to open a connection to SSM. You can also control which engineers have access to open connections from their local development machines to SSM. This all makes it very easy to grant certain engineers access to connect to certain tasks, but not all tasks. All the commands that they run are added to the ECS task logs so that you can see what commands were run, and even their outputs.
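The server side of that access control comes down to the task’s IAM role being allowed to open those SSM channels. As a rough sketch, the policy attached to the task role looks something like this (the role and policy names are just examples; the launch blog covers the exact requirements):

# Grant the task role the SSM messaging permissions that ECS exec relies on
# (role and policy names below are examples)
aws iam put-role-policy \
  --role-name my-task-role \
  --policy-name allow-ecs-exec \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    }]
  }'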

The issue of persistent mutations is solved by ensuring that the connection is to an ephemeral container, not to the underlying host running the container. Any commands that you run using the interactive shell are running inside the container. When you are done the container can be discarded to erase any changes that were made.

If you’d like to learn more about the underlying technology that makes ECS exec work, please read the public container roadmap proposal for this feature, or the launch blog that goes into more depth on how to configure this feature manually using the ECS API.

AWS Copilot makes ECS exec easy

There are a number of components and IAM roles that go into making ECS exec secure. There are client-side requirements on your local dev machine, as well as server-side requirements in the task definition. All of these things must be configured properly. We know that this can be challenging, especially when you just want an easy way to open an interactive shell in your container. For this reason, we are launching ECS exec with day-one support in AWS Copilot.

AWS Copilot automatically configures your applications for secure access via ECS exec. You can then connect to your application containers with a simple one step command. Let’s take a look at the developer experience.

First I am going to start with a simple NGINX test application.
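The application and service names below are just the ones I use in this walkthrough; with a Dockerfile for the NGINX image in the current directory, setting it up with Copilot looks roughly like this:

# Create a Copilot application with a load balanced web service and deploy it
copilot init \
  --app nginx-demo \
  --name nginx \
  --type "Load Balanced Web Service" \
  --dockerfile ./Dockerfile \
  --deploy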

Once the service is deployed, Copilot gives me an address for it. I can point my browser at this address and see the default NGINX page for this sample Copilot application.
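If you lose track of that address, Copilot can print it again; for example, using the service name from this walkthrough:

# Show the service configuration, including the URL it is reachable at
copilot svc show --name nginx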

Now let’s use ECS exec to get a shell inside the container and change the content of this page.

All I have to type is copilot svc exec and Copilot will automatically connect me to this container in AWS Fargate.
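In full, with the application, environment, and service names from this walkthrough (Copilot prompts for any of these that you leave out), the command looks something like:

# Open an interactive shell in a running task of the nginx service
copilot svc exec --app nginx-demo --env test --name nginx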

If necessary, AWS Copilot can automatically install the SSM session manager plugin for me. A couple of seconds later I have a shell inside the container running on AWS Fargate. I can type ls to list the contents of the filesystem and start exploring. I can take a look at the NGINX configuration and find the HTML file that is being served by the container.
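With the stock NGINX image, that exploration looks something like the following (these paths are the ones used by the official image):

# Look at the default site configuration; its root directive points at the HTML directory
cat /etc/nginx/conf.d/default.conf

# The page being served is the index.html in that directory
ls /usr/share/nginx/html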

I want to modify the HTML page that NGINX is serving. To make things easier I can run a few commands to install a text editor inside the container:

apt-get update
apt-get install vim
vim /usr/share/nginx/html/index.html

After making my changes to the page, I can refresh my browser and see the new text that I added to the HTML.

Of course, this change is only temporary. The Vim package that I installed and the HTML modification that I made only exist in the ephemeral filesystem of the running container. If I stop the running container and launch a replacement, then the service will go back to its original state as defined in the container image. There will be no Vim installed and I will see the original HTML content again. So this isn’t the ideal way to change the HTML content of a webpage. But ECS exec is a great way to run debug commands and do other tasks that may require you to inspect what is going on inside a running container.

There is one other feature that I really appreciate. If I run copilot svc logs, I can see a log of everything that happened in the container: my NGINX access logs as well as the commands that I ran via ECS exec, and their output.
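This is the same command I would use to tail normal application logs; for example:

# Fetch recent logs for the service, or keep following them live
copilot svc logs --name nginx --follow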

This provides a clear, auditable history of the commands that I ran inside the live container on AWS Fargate, as well as their output.

Conclusion

We have designed ECS exec to help you securely open a connection and get a shell inside your running containers, on both ECS + EC2 and ECS + Fargate. With AWS Copilot we have made the experience as simple as possible. With a single command you can access your Copilot-launched containers.

For further reading on this please refer to: