AWS Copilot: an application-first CLI for containers on AWS

On July 9, 2020, we introduced AWS Copilot, a new command line interface (CLI) to build, release, and operate production-ready containerized applications on Amazon Elastic Container Service (Amazon ECS) and AWS Fargate. In this post, I walk you through the design tenets of the CLI, why we chose them, how they map to Copilot features, and our vision for how the CLI will evolve.

1. Users think in terms of architecture, not of infrastructure.

Developers creating a new microservice shouldn’t have to specify virtual private clouds, load balancer settings, or complex pipeline configuration. They may not know anything about other AWS services. They should be able to specify what “kind” of service it is and how it fits into their overall architecture; the infrastructure should be generated from that.

In the blog post “Containers and infrastructure as code, like peanut butter and jelly,” Clare talked about how we think the next phase of infrastructure as code (IaC) is architecture as code. Instead of defining individual AWS resources, with Copilot you declare what type of containerized service you want to run. The building blocks, such as an Amazon Virtual Private Cloud (Amazon VPC), an Amazon Elastic Container Registry (Amazon ECR) repository, or a load balancer, are then generated to satisfy that architecture.

When running copilot init or copilot svc init, Copilot prompts for the “service type” that you want to create. The result is a declarative manifest file that can be version controlled and holds the most common parameters to configure your service’s architecture. For example, the “Load Balanced Web Service” pattern abstracts an Application Load Balancer’s primitives such as the listener rule and the target group behind two simple fields: http.path and http.healthcheck.
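
For example, the same choices can be passed as flags to copilot init instead of answering the prompts; the application and service names below are illustrative:

    # Initialize a "vote" application with a load balanced web service and deploy it.
    copilot init --app vote \
      --name vote-site \
      --type "Load Balanced Web Service" \
      --dockerfile ./Dockerfile \
      --deploy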

Another way Copilot makes getting started with containers on AWS delightful is by taking a Dockerfile as input. Copilot parses the file for instructions such as HEALTHCHECK or EXPOSE to automatically fill in the corresponding values in the manifest. During deployments, the manifest is translated into an AWS CloudFormation template, giving you full transparency into the resources being created.
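
To make that concrete, here is a minimal sketch of what a “Load Balanced Web Service” manifest can look like; the values are illustrative, and fields such as the port are the kind of detail Copilot can infer from an EXPOSE instruction:

    # copilot/vote-site/manifest.yml (illustrative values)
    name: vote-site
    type: Load Balanced Web Service

    image:
      build: ./Dockerfile   # build the container image from this Dockerfile
      port: 8080            # container port, e.g. inferred from EXPOSE

    http:
      path: '/'                      # route requests for this path to the service
      healthcheck: '/_healthcheck'   # target group health check path

    cpu: 256      # CPU units for each Fargate task
    memory: 512   # memory in MiB
    count: 1      # number of tasks to run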

2. Create modern applications by default.

Modern applications can consist of one or more services, jobs, and supporting resources like databases. Copilot ensures that all parts of an application are wired together securely and sensibly.

In Copilot, an application is just a namespace for a collection of related services. To achieve the architecture of your application, Copilot provides deep integrations between services in a deployment environment.

For example, multiple “Load Balanced Web Services” in an environment share the same Application Load Balancer but listen on different paths. A service discovery namespace is always created for private inter-service communication. And finally, Copilot injects the necessary environment variables so that your code can leverage these endpoints.

In the screenshot above, the “vote-site” service is deployed in two environments, “test” and “prod-pdx.” The service is reachable at vote-site.$COPILOT_SERVICE_DISCOVERY_ENDPOINT via service discovery, or you can send requests to $COPILOT_LB_DNS to reach it through the load balancer.
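
As a sketch, a container in another service of the same environment could reach “vote-site” using nothing but those injected variables; the port and paths are illustrative:

    # Private service-to-service call via the service discovery namespace;
    # 8080 stands in for whatever container port vote-site listens on.
    curl "http://vote-site.${COPILOT_SERVICE_DISCOVERY_ENDPOINT}:8080/api/results"

    # Public call through the shared Application Load Balancer.
    curl "http://${COPILOT_LB_DNS}/api/results"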

To set up supporting resources such as an Amazon DynamoDB table, you can run copilot storage init. Copilot adds the minimal permissions to your ECS task role so that it can access the table and injects environment variables that your service code can use to connect to it.
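
As a rough sketch, the command below attaches a DynamoDB table to a hypothetical “api” service; the table name, key schema, and flag spellings should be checked against copilot storage init --help for your version:

    # Add a DynamoDB table as a supporting resource for the "api" service.
    copilot storage init \
      --name votes \
      --storage-type DynamoDB \
      --workload api \
      --partition-key voterID:S \
      --sort-key electionID:S

    # The table is created (and its name injected as an environment variable)
    # the next time the service is deployed.
    copilot svc deploy --name api --env test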

3. Deliver applications continuously.

While AWS Copilot can be used to deploy changes to an application manually, we steer customers toward CI/CD by helping them set up and manage release pipelines.

The final piece for provisioning services with best practices is creating a Continuous Delivery (CD) pipeline. We know that CD increases software throughput and quality, so we added pipeline tooling directly into Copilot.

With Copilot, you can create multiple deployment environments by running copilot env init. The CLI asks you to choose a named profile for the account and region of each environment, enabling you to spread environments across accounts and regions. Copilot creates an Amazon ECR repository for each service in each unique environment region of your application. A separate repository per region per service removes cross-region data transfer costs and minimizes the blast radius.
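
For example, a test and a production environment can live in different accounts and regions simply by pointing at different named profiles; the profile names here are illustrative:

    # Test environment in the account/region of the "dev" profile.
    copilot env init --name test --profile dev --default-config

    # Production environment in a separate account/region via the "prod" profile.
    copilot env init --name prod-pdx --profile prod --default-config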

Once you have environments set up, copilot pipeline init will create an AWS CodePipeline pipeline for your application. From then on, you can simply push your changes to an upstream git repository and the pipeline will trigger. During the “Build” stage, the container image is built from your Dockerfile and pushed to the Amazon ECR repositories, guaranteeing that the same image is used across environments.
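
A sketch of that flow, assuming your copilot/ directory is committed alongside your code (the subcommand that creates the pipeline stack, copilot pipeline update here, may be named differently in your version):

    # Generate the pipeline manifest and buildspec under the copilot/ directory.
    copilot pipeline init

    # Commit the generated files so CodePipeline can find them, then push.
    git add copilot/ && git commit -m "Add pipeline configuration" && git push

    # Create (or update) the CodePipeline stack, and watch a release go through it.
    copilot pipeline update
    copilot pipeline status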

The screenshot above shows a CodePipeline pipeline, created with Copilot, deploying three services (“vote-admin,” “vote-api,” and “vote-site”) to two environments (“test” and “prod-pdx”).

4. Operations is part of the workflow.

Modeling, provisioning, and deploying applications are only part of the application lifecycle for the developer. The CLI must support workflows around troubleshooting and debugging to help when things go wrong.

To close the loop on application development, Copilot helps you monitor the health of your application and troubleshoot during incidents. Copilot provides copilot svc status and copilot svc logs as operational commands to view the health of your service and its logs.
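
For example, during an incident in the “prod-pdx” environment from the screenshots above, you might run:

    # Summarize the health of the service and its running tasks.
    copilot svc status --name vote-site --env prod-pdx

    # Tail the last hour of logs and follow new entries as they arrive.
    copilot svc logs --name vote-site --env prod-pdx --since 1h --follow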

What’s coming?

Copilot is in its preview phase right now. We’re continuing to work on a number of enhancements before we make it generally available. We have a public roadmap, but I want to give you a sense in this post of where we’re heading.

You can expect more architecture patterns to be added to Copilot. For example, we’re currently designing a scheduled task pattern. What excites me the most, by far, is how we can make the integrations between these patterns easier. Take an asynchronous service that consumes messages from a queue. Traditionally, this requires us to define an Amazon Simple Queue Service (Amazon SQS) queue subscribed to an Amazon Simple Notification Service (Amazon SNS) topic, with the right permissions on each. With Copilot, we can shift the developer experience to simply declaring “subscribe my service A to events from service B about topic T,” and the infrastructure is generated from that.

While the manifest files are meant to cover the most common use cases for a service type, we recognize that there will always be scenarios where some fields are missing. We’re planning on adding task definition overrides to the manifests as a “break glass” mechanism. We’re also working on importing existing infrastructure, such as an existing VPC, into your environments.

We’re only at the beginning with our operational commands. There is so much to explore around tracing, service meshes, and automatically creating metrics, alarms, and dashboards for your services. We’d love to hear your thoughts on how we can make debugging your applications easier.

Get involved!

Copilot is fully open source! Check out our GitHub repository for discussion via issues and contributions via pull requests.
You can learn more about how to install and get started with Copilot from our documentation.

Efe Karakus

Efe Karakus is a Senior Software Development Engineer working on the developer experience for containers on AWS.