Containers

Developing an application based on multiple microservices using AWS Copilot and AWS Fargate

Introduction

On July 9, 2020, we introduced AWS Copilot, a new command line interface (CLI) to build, release, and operate production-ready containerized applications on Amazon Elastic Container Service (Amazon ECS) and AWS Fargate. In this post, we walk you through how to communicate between microservices with service discovery using AWS Copilot.

You can also refer to an earlier post, Introducing AWS Copilot, in which Nathan Peck explained some of the fundamental concepts of AWS Copilot.

To illustrate, we will build a fully functional application named “emoji-race” that displays the most popular emojis on Twitter. The application is composed of a “tracker” Backend Service, which subscribes to a stream of Twitter emojis and stores the counts in memory, and an “api” frontend service (a Load Balanced Web Service), which accepts requests from the internet and queries the “tracker” service to send the top results back to users.

Here is a screenshot of the “emoji-race” application, displaying the most popular emojis on Twitter.

 


Prerequisites

In order to implement the instructions laid out in this post, you will need the following:

  • An AWS account with permissions to create the resources used in this post (AWS Cloud9, Amazon ECR, Amazon ECS, AWS Fargate, and the associated networking resources).

Architecture

AWS Copilot provides a simple declarative set of commands, including examples and guided experiences built in to help customers deploy quickly. AWS Copilot automates each step in the deployment lifecycle including pushing to a registry, creating a task definition, and creating a cluster.

Figure 1 – Architecture

Steps

Here are the steps we’ll follow to implement the above architecture:

  • Create and configure the AWS Cloud9 environment
    1. Install AWS Copilot
    2. Configure AWS CLI
  • Building the application
    1. Clone the GitHub repository
    2. Create the “emoji-race” application
    3. Create the Backend Service
    4. Create the Load Balanced Web Service

Create and configure the AWS Cloud9 environment

1. Install AWS Copilot:

Log in to the AWS Management Console and search for Cloud9 in the search bar. Click Cloud9 and create an AWS Cloud9 environment in the us-east-1 region based on Amazon Linux. Installing the AWS Copilot CLI currently requires you to download the binary manually from the GitHub releases page. To find the latest release of AWS Copilot, visit https://github.com/aws/copilot-cli/releases. At the time of writing this blog post, the current release is v0.3.0.

Launch the AWS Cloud9 IDE. In a new terminal session, copy and paste the following commands:

sudo curl -Lo /usr/local/bin/copilot https://github.com/aws/copilot-cli/releases/download/v0.3.0/copilot-linux-v0.3.0
sudo chmod +x /usr/local/bin/copilot 
copilot --help

2. Configure AWS CLI:

We recommend configuring a default profile using the aws configure command, as described in the AWS CLI documentation.
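
As a quick sketch, aws configure interactively prompts for your credentials, default region, and output format; the values below are placeholders, not real keys:

aws configure
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: us-east-1
Default output format [None]: json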

Building the application

1. Clone the GitHub repository:

git clone https://github.com/efekarakus/emoji-race
cd emoji-race
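
Inside the cloned repository you should see, among other files, a directory for each of the two services we will deploy, each containing the service source code and a Dockerfile:

ls
api  tracker  ...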

2. Create the “emoji-race” application:

An application is just a namespace for a collection of related services. In our situation, we’ll create the “emoji-race” app.

Run copilot app init and enter emoji-race when prompted for the application name:

copilot app init
What would you like to name your application? [? for help] emoji-race

Application name: emoji-race
✔ Created the infrastructure to manage services under application emoji-race.
✔ The directory copilot will hold service manifests for application emoji-race.

Recommended follow-up actions:
Run copilot init to add a new service to your application.
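
As an optional sanity check, copilot app ls lists the Copilot applications in your account and region and should now include the new application:

copilot app ls
emoji-race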

3. Create the Backend Service:

While initializing services, Copilot asks us what type of microservice we want to create. A “Load Balanced Web Service” is an internet-facing service that sits behind an Application Load Balancer and is orchestrated by Amazon ECS on AWS Fargate. A “Backend Service”, on the other hand, is a private service that cannot be reached from the internet, but can be reached via service discovery from within your Virtual Private Cloud (VPC).

We’ll first start by building our “tracker” service, which is a Node.js server subscribed to the Emojitracker stream to collect real-time Twitter emoji counts in memory. You can find the source code and the Dockerfile for the service here: https://github.com/efekarakus/emoji-race/tree/mainline/tracker.

We run copilot init and choose “Backend Service” as the type since we don’t want anybody from the internet to access this data.

Figure 2 – copilot init – create a service

What do you want to name this Backend Service? [? for help] tracker

Type in tracker and choose tracker/Dockerfile as the Dockerfile as shown in Figure 3.

Figure 3 – copilot init – choosing a Dockerfile for your service

Copilot will create a manifest file under the copilot/tracker folder, as indicated in Figure 4.

Figure 4 – copilot manifest file
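
The exact contents of the generated manifest depend on your Copilot version, but as an illustrative sketch, a Backend Service manifest for “tracker” might look roughly like this (the port matches the 3000 that the “api” service queries later in this post):

# copilot/tracker/manifest.yml (illustrative sketch; your generated file may differ)
name: tracker
type: Backend Service

image:
  # Build the container from the Dockerfile in the tracker/ directory.
  build: tracker/Dockerfile
  # Port the "tracker" server listens on.
  port: 3000

# Default resource settings; adjust as needed.
cpu: 256
memory: 512
count: 1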

Copilot will now prompt us to create a “test” environment. Enter “y” to create the test environment and deploy the service to it.

Would you like to deploy a test environment? [? for help] (y/N) y

Figure 5 – copilot test environment

Copilot will create a “test” environment using CloudFormation as shown in Figure 5. This environment will contain all the shared infrastructure between services such as a VPC, an ECS Cluster, load balancers, and a Service Discovery Namespace for private service-to-service communication.
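
Depending on your Copilot version, you can list the environments registered to the application with copilot env ls, which should now include the new environment:

copilot env ls
test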

Figure 6 – copilot tracker service Docker image

Once the “test” environment is created, Copilot will build a Docker image for the “tracker” microservice and push it to Amazon ECR.

Figure 7 – copilot service stack resources

Finally, the service deployment will kick off to create the ECS service.

Each new service created with Copilot gets its own endpoint under the service discovery namespace, following the naming pattern {service name}.{application name}.local. For example, tracker.emoji-race.local resolves to a private IP address of the “tracker” service within the VPC.
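
As an illustration, from a container running inside the same VPC (for example, another service in this environment), the “tracker” service could be reached with a request like the one below; this will not work from your laptop or anywhere outside the VPC:

# Only resolvable from within the environment's VPC; 3000 is the port the tracker listens on.
curl http://tracker.emoji-race.local:3000/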

Once the deployment is finished, we can run copilot svc show --name tracker to see the configuration information for the deployed service as shown in Figure 8.

Figure 8 – copilot svc show

4. Create the Load Balanced Web Service:

Next, we need to create our “api” service to accept requests from the internet. Since the information is stored in the “tracker” service, our new “api” service needs to communicate with it. We can leverage the service discovery endpoint that Copilot injects as an environment variable in our code:

// in api/main.go
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "os"
)

// The server struct and its router are defined elsewhere in api/main.go.

func (s *server) routes() {
    s.router.HandleFunc("/", s.handleEmojis())
}

func (s *server) handleEmojis() http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        // Query "tracker" service via service discovery.
        endpoint := fmt.Sprintf("http://tracker.%s:3000/", os.Getenv("COPILOT_SERVICE_DISCOVERY_ENDPOINT"))
        resp, err := http.Get(endpoint)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        w.WriteHeader(http.StatusOK)
        w.Write(body)
    }
}
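
Given the naming pattern described earlier, the COPILOT_SERVICE_DISCOVERY_ENDPOINT environment variable that Copilot injects should resolve to emoji-race.local in this application, so the URL constructed above becomes http://tracker.emoji-race.local:3000/, the private endpoint of the “tracker” service discussed earlier.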

Now that we know how the two services communicate in code, we'll run copilot init again to deploy our “api” service.

Since we already created an application earlier, Copilot will recognize it and print the message:
"Your workspace is registered to application emoji-race."

Figure 9 – copilot – create a frontend microservice

Using the up and down arrows on your keyboard, choose “Load Balanced Web Service” to create the Load Balanced Web Service.

Copilot will prompt you for a name for this microservice. Type api and hit enter.

What do you want to name this Load Balanced Web Service? [? for help] api

Copilot will then prompt you to pick the relevant Dockerfile. Choose "api/Dockerfile" from the list and hit enter.

Which Dockerfile would you like to use for api? [Use arrows to move, type to filter, ? for more help]
> api/Dockerfile
tracker/Dockerfile

Copilot will create a manifest file at "copilot/api/manifest.yml". At this point, Copilot will give you the option of adding this microservice to the "test" environment. Choose "y."

Within a few minutes, Copilot will build a Docker image for the "api" microservice, push it to Amazon ECR, and deploy it to the "test" environment.

Figure 10 – copilot api service deployment

Once the deployment is finished, you can view the complete application by copying the Application Load Balancer URL of the deployed "api" service, shown in Figure 10, and pasting it into your browser.

We can also access the application from the internet by using curl as shown below.

Figure 11 – Access the application using curl
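
For example, here is a sketch of the curl call; substitute the DNS name of your own load balancer, which is shown in Figure 10 and in the output of copilot svc show --name api:

# Replace the placeholder with your Application Load Balancer DNS name.
curl http://<your-load-balancer-dns-name>/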

Conclusion

In this post we showed how we can use AWS Copilot to:

  • Manually deploy multiple containerized microservices on Amazon ECS and AWS Fargate.
  • Use the service discovery that Copilot enables by default to communicate between services.

Copilot also provides operational commands such as copilot svc logs and copilot svc status to show the health of your services. Finally, with Copilot, you also get an easy way of creating a continuous delivery pipeline to release your microservices to multiple AWS accounts and regions.
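
For instance, a quick sketch of how these commands might be invoked for the “api” service (exact flags can vary between Copilot versions):

# Tail the logs of the "api" service in the "test" environment.
copilot svc logs --name api --env test --follow

# Show the health and status of the "api" service.
copilot svc status --name api --env test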

To learn more, check out this blog post: Automatically deploying your container application with AWS Copilot.

Irshad Buchh

Irshad A Buchh is a Principal Solutions Architect at Amazon Web Services (AWS), specializing in driving the widespread adoption of Amazon's cloud computing platform. He collaborates closely with AWS Global Strategic ISV and SI partners to craft and execute effective cloud strategies, enabling them to fully leverage the advantages of cloud technology. By working alongside CIOs, CTOs, and architects, Irshad assists in transforming their cloud visions into reality, providing architectural guidance and expertise throughout the implementation of strategic cloud solutions.

Efe Karakus

Efe Karakus is a Senior Software Development Engineer working on the developer experience for containers on AWS.