In this module, you will build the container image for your monolithic Node.js application and push it to Amazon Elastic Container Registry (Amazon ECR).

Deployment to Amazon ECR

Containers allow you to package an application's code, configurations, and dependencies into easy-to-use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version control. Containers help ensure that applications deploy quickly, reliably, and consistently, regardless of the deployment environment.

(Architecture overview diagram)

Speed
Launching a container with a new release of code can be done without significant deployment overhead. Operational speed is improved, because code built in a container on a developer’s local machine can be easily moved to a test server by simply moving the container. At build time, this container can be linked to other containers required to run the application stack.

Dependency Control & Improved Pipeline
A Docker container image is a point-in-time capture of an application's code and dependencies. This allows an engineering organization to create a standard pipeline for the application life cycle. For example:

  1. Developers build and run the container locally.
  2. A continuous integration server runs the same container and executes integration tests against it to make sure it passes expectations.
  3. The same container is shipped to a staging environment where its runtime behavior can be checked using load tests or manual QA.
  4. The same container is shipped to production.

Being able to build, test, ship, and run the exact same container through all stages of the integration and deployment pipeline makes delivering a high quality, reliable application considerably easier.

Density & Resource Efficiency
Containers facilitate enhanced resource efficiency by allowing multiple heterogeneous processes to run on a single system. Resource efficiency is a natural result of the isolation and allocation techniques that containers use. Containers can be restricted to consume certain amounts of a host's CPU and memory. By understanding what resources a container needs and what resources are available from the underlying host server, you can right-size the compute resources you use with smaller hosts or increase the density of processes running on a single large host, increasing availability and optimizing resource consumption.
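The right-sizing described above is largely arithmetic. The sketch below uses a hypothetical 8 GiB host and a hypothetical 256 MiB per-container memory limit (the kind of cap you might pass to `docker run --memory=256m --cpus=0.5`) to estimate how densely containers could be packed:

```shell
# Back-of-the-envelope density math; the host size and per-container
# limit below are hypothetical values, not AWS or Docker defaults.
HOST_MEM_MIB=$((8 * 1024))    # an 8 GiB host
PER_CONTAINER_MIB=256         # memory cap per container
MAX_CONTAINERS=$((HOST_MEM_MIB / PER_CONTAINER_MIB))
echo "$MAX_CONTAINERS"        # 32
```

In practice you would also budget for the host OS and container runtime overhead, so the usable number is somewhat lower.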

Flexibility
The flexibility of Docker containers is based on their portability, ease of deployment, and small size. In contrast to the installation and configuration required on a VM, packaging services inside of containers allows them to be easily moved between hosts, isolated from failure of other adjacent services, and protected from errant patches or software upgrades on the host system. 

Time to Complete: 20 minutes

Services Used: Amazon Elastic Container Registry (Amazon ECR)


For the first part of this tutorial, you will build the Docker container image for your monolithic Node.js application and push it to Amazon Elastic Container Registry (Amazon ECR). Select each step number to expand the section.

  • Step 1. Get Set Up

    In the next few steps, you will use Docker, GitHub, Amazon Elastic Container Service (Amazon ECS), and Amazon ECR to deploy code into containers. To complete these steps, ensure you have the following tools.

    1. Have an AWS account: If you don't already have an account with AWS, you can sign up here. All the exercises in this tutorial are designed to be covered under the AWS Free Tier.
      ⚐ Note: Some of the services you will be using may require your account to be active for more than 12 hours. If you experience difficulty with any services and have a newly created account, please wait a few hours and try again.
    2. Install Docker: You will use Docker to build the image files that will run in your containers. Docker is an open source project. You can download it for Mac or for Windows.
      After Docker is installed, you can verify it is running by entering docker --version in the terminal. The version number should display, for example: Docker version 19.03.5, build 633a0ea.
    3. Install the AWS CLI:
      • You will use the AWS Command Line Interface (AWS CLI) to push the images to Amazon ECR. You can learn about and download AWS CLI here.
      • After AWS CLI is installed, verify it is running by entering aws --version in the terminal. The version number should display, for example: aws-cli/1.16.217 Python/2.7.16 Darwin/18.7.0 botocore/1.12.207.
      • If you already have AWS CLI installed, run the following command in the terminal to ensure it is updated to the latest version: pip install awscli --upgrade --user
      • If you have never used AWS CLI before, you may need to configure your credentials.
    4. Have a text editor: If you don't already have a text editor for coding, install one to your local environment. Atom is a simple, open-source text editor from GitHub that is popular with developers.
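Because Step 4 later branches on your AWS CLI major version, it can help to know how to read it out of the `aws --version` output. The sketch below parses a sample string (the version shown is the example from above; your actual output will differ):

```shell
# Hedged sketch: extract the AWS CLI major version from a sample
# `aws --version` string. SAMPLE is hard-coded here for illustration;
# on your machine you would use: SAMPLE="$(aws --version 2>&1)"
SAMPLE="aws-cli/1.16.217 Python/2.7.16 Darwin/18.7.0 botocore/1.12.207"
MAJOR="${SAMPLE#aws-cli/}"   # strip the "aws-cli/" prefix
MAJOR="${MAJOR%%.*}"         # keep everything before the first dot
echo "$MAJOR"                # 1
```

A major version of 1 means you will use the `aws ecr get-login` form in Step 4; version 2 means the `get-login-password` form.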
  • Step 2. Download & Open the Project

    Download the code from GitHub: Navigate to https://github.com/awslabs/amazon-ecs-nodejs-microservices and select Clone or Download to download the GitHub repository to your local environment. You can also use GitHub Desktop or Git to clone the repository.

    Open the project files: Start Atom, select Add Project Folder, and select the folder where you saved the repository amazon-ecs-nodejs-microservices. This will add the entire project into Atom so you can easily work with it.

    In your project folder, you should see folders for infrastructure and services. The infrastructure folder holds the AWS CloudFormation infrastructure configuration code you will use in the next step. The services folder contains the code that forms the Node.js application.

    Take a few minutes to review the files and familiarize yourself with the different aspects of the application, including the database db.json, the server server.js, the package.json file, and the application Dockerfile.

    (Screenshot: microservices project open in the editor)
  • Step 3. Provision a Repository

    Create the repository:

    • Navigate to the Amazon ECR console.
    • On the Repositories page, select Create Repository.
    • On the Create repository page, enter the following name for your repository: api.
      ⚐ Note: Under Tag immutability, leave the default settings.
    • Select Create repository.

    After the repository is created, you receive a confirmation message and the repository address is listed under URI. The repository address is in the following format: [account-ID].dkr.ecr.[region].amazonaws.com/[repo-name]. The [account-ID], [region], and [repo-name] will be specific to your setup.
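The URI format above is just a string built from three components. As a sketch (using a hypothetical account ID and region; substitute your own values), you could compose it in the shell and keep it in a variable for the later steps:

```shell
# Hypothetical placeholder values; replace with your own account ID,
# region, and repository name.
ACCOUNT_ID=123456789012
REGION=us-west-2
REPO_NAME=api
REPO_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO_NAME}"
echo "$REPO_URI"   # 123456789012.dkr.ecr.us-west-2.amazonaws.com/api
```

Holding the address in a variable like REPO_URI avoids retyping it in the tag and push commands in Step 4.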

    ⚐ Note: You will need the repository address throughout this tutorial.

  • Step 4. Build & Push the Docker Image

    Access your terminal and navigate to the following directory: ~/amazon-ecs-nodejs-microservices/2-containerized/services/api.

    Use the terminal to authenticate your Docker client to the registry:

    1. Run one of the following commands, depending on which version of AWS CLI you have (To identify the version, run aws --version. If needed, configure your credentials.):
      • If you have AWS CLI version 1.x, then run:
        $(aws ecr get-login --no-include-email --region [your-region])
        Replace [your-region], for example: $(aws ecr get-login --no-include-email --region us-west-2)
      • If you have AWS CLI version 2.x, then run:
        aws ecr get-login-password --region [your-region] | docker login --username AWS --password-stdin [your-AWS-account-ID].dkr.ecr.[your-region].amazonaws.com
        Replace [your-region] and [your-AWS-account-ID], for example: aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com
        If authentication is successful, you will receive the confirmation message: Login Succeeded.
    2. To build the image, run the following command in the terminal: docker build -t api .
      ⚐ Note: The period (.) after api is needed.
    3. After the build completes, tag the image so you can push it to the repository: docker tag api:latest [account-ID].dkr.ecr.[region].amazonaws.com/api:v1 
      ⚐ Note: Replace the [account-ID] and [region] placeholders with your specific information.
      ⚐ Pro tip: The :v1 represents the image build version. Every time you build the image, you should increment this version number. If you were using a script, you could use an automated number, such as a time stamp to tag the image. This is a best practice that allows you to easily revert to a previous container image build in the future.
    4. Push the image to Amazon ECR by running: docker push [account-ID].dkr.ecr.[region].amazonaws.com/api:v1
      ⚐ Note: Replace the [account-ID] and [region] placeholders with your specific information.
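The timestamp tagging mentioned in the pro tip above can be sketched as follows. The account ID and region are hypothetical placeholders, and the docker commands are left commented out because they require Docker and an authenticated ECR session:

```shell
# Sketch of timestamp-based image tagging; substitute your own
# account ID and region for the placeholder values below.
ACCOUNT_ID=123456789012
REGION=us-west-2
TAG="v-$(date +%Y%m%d%H%M%S)"   # e.g. v-20240115103045
IMAGE="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/api:${TAG}"
echo "$IMAGE"
# Then, with Docker installed and an authenticated session:
# docker tag api:latest "$IMAGE"
# docker push "$IMAGE"
```

Because every build gets a unique tag, rolling back is just redeploying an earlier tag rather than rebuilding an old revision.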

    If you navigate to your Amazon ECR repository, you should see your image tagged v1.

    (Screenshot: Amazon ECR repository showing the image tagged v1)