In this module, you will deploy your Node.js application as a set of interconnected services behind an Application Load Balancer (ALB). Then, you will use the ALB to seamlessly shift traffic from the monolith to the microservices.

This is the process that you will follow to deploy the microservices and safely transition the application's traffic away from the monolith.

Architecture overview:
  1. Deployed Monolith
    This is the starting configuration: the monolithic Node.js app is running in a container on Amazon ECS.
  2. Start Microservices
    Using the three container images you built and pushed to Amazon ECR in the previous module, you will start up three microservices on your existing Amazon ECS cluster.
  3. Configure the Target Groups
    Like in Module 2, you will add a target group for each service and update the ALB Rules to connect the new microservices.
  4. Shut Down the Monolith
    By changing one rule in the ALB, you will start routing traffic to the running microservices. After verifying that traffic has been rerouted, shut down the monolith.

Follow the step-by-step instructions below to deploy the microservices. Select each step number to expand the section.

  • Step 1. Write Task Definitions for your Services

    You will deploy three new services to the cluster that you launched in Module 2. As in Module 2, you will write a task definition for each service.

    ⚐ NOTE: It is possible to add multiple containers to a single task definition. This means you could run all three microservices as different containers from a single service. However, this approach is still monolithic since each container would need to scale linearly with the service. Your goal is to have three independent services. Each service requires its own task definition running a container with the image for that respective service.

    You can either create these task definitions from the Amazon ECS console, or speed things up by writing them as JSON. To write the task definition as a JSON file, follow these steps:

    1. From the Amazon Elastic Container Service console, under Amazon ECS, select Task definitions.
    2. In the Task Definitions page, select the Create new Task Definition button.
    3. In the Select launch type compatibility page, select the EC2 option and then select Next step.
    4. In the Configure task and container definitions page, scroll to the Volumes section and select the Configure via JSON button.
    5. Copy and paste the following code snippet into the JSON field, replacing the existing code.
      Remember to replace the [service-name], [account-ID], [region], and [tag] placeholders.
    6. Select Create.

    ⚐ Note: The following parameters are used for the task definition:

    • Name = [service-name: posts, threads, and users] 
    • Image = [Amazon ECR repository image URL]:latest 
    • cpu = 256 
    • memory = 256 
    • Container Port = 3000 
    • Host Port = 0
    {
        "containerDefinitions": [
            {
                "name": "[service-name]",
                "image": "[account-id].dkr.ecr.[region].amazonaws.com/[service-name]:[tag]",
                "memoryReservation": 256,
                "cpu": 256,
                "essential": true,
                "portMappings": [
                    {
                        "hostPort": 0,
                        "containerPort": 3000,
                        "protocol": "tcp"
                    }
                ]
            }
        ],
        "volumes": [],
        "networkMode": "bridge",
        "placementConstraints": [],
        "family": "[service-name]"
    }

    ♻ Repeat the steps to create a task definition for each service:

    • posts
    • threads
    • users
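    If you prefer the command line, the same task definitions can be registered with the AWS CLI. The following is a dry-run sketch that prints one register-task-definition command per service; it assumes you saved each filled-in JSON task definition from the snippet above as [service-name].json in the current directory. Remove the echo (or pipe the output to a shell) to actually run the commands.

    ```shell
    # Dry-run: print the AWS CLI command that registers each task definition.
    # Assumes each filled-in task definition was saved as [service-name].json.
    register_cmds() {
      for service in posts threads users; do
        echo "aws ecs register-task-definition --cli-input-json file://${service}.json"
      done
    }
    register_cmds
    ```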
  • Step 2. Configure the Application Load Balancer: Target Groups

    As in Module 2, configure a target group for each service (posts, threads, and users). A target group allows traffic to correctly reach a specified service. You will configure the target groups using the AWS CLI. Before proceeding, ensure you have the ID of the VPC being used for this tutorial:

    • Navigate to the Load Balancer section of the EC2 Console.
    • Select the checkbox next to the appropriate load balancer, select the Description tab, and locate the VPC attribute (in this format: vpc-xxxxxxxxxxxxxxxxx).
      ⚐ Note: You will need the VPC attribute when you configure the target groups.

    Configure the Target Groups

    In your terminal, enter the following command to create a target group for each service (posts, threads, and users). In addition, you will create a target group (drop-traffic) to keep traffic from reaching your monolith after your microservices are fully running. Remember to replace the following placeholders: [region], [service-name], and [vpc-attribute].

    Service names: posts, threads, users, and drop-traffic

    aws elbv2 create-target-group --region [region] --name [service-name] --protocol HTTP --port 80 --vpc-id [vpc-attribute] --healthy-threshold-count 2 --unhealthy-threshold-count 2 --health-check-timeout-seconds 5 --health-check-interval-seconds 6
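    Since the same command runs four times with only the target group name changing, you can loop over the names. This is a dry-run sketch that prints each command; REGION and VPC_ID are placeholders you must replace with your own values before piping the output to a shell.

    ```shell
    # Dry-run: print one create-target-group command per service,
    # plus drop-traffic for the monolith. REGION and VPC_ID are placeholders.
    REGION="[region]"
    VPC_ID="[vpc-attribute]"
    tg_cmds() {
      for name in posts threads users drop-traffic; do
        echo "aws elbv2 create-target-group --region ${REGION} --name ${name}" \
             "--protocol HTTP --port 80 --vpc-id ${VPC_ID}" \
             "--healthy-threshold-count 2 --unhealthy-threshold-count 2" \
             "--health-check-timeout-seconds 5 --health-check-interval-seconds 6"
      done
    }
    tg_cmds
    ```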
  • Step 3. Configure Listener Rules

    The listener checks for incoming connection requests to your ALB in order to route traffic appropriately.

    Right now, all four of your services (monolith and your three microservices) are running behind the same load balancer. To make the transition from monolith to microservices, you will start routing traffic to your microservices and stop routing traffic to your monolith.

    Access the listener rules

    • Navigate to the Load Balancers section of the EC2 Console.
    • Select the checkbox next to the appropriate load balancer, then select the Listeners tab.

    Update the listener rules

    There should only be one listener listed in this tab. Take the following steps to edit the listener rules:

    • Under the Rules column, select View/edit rules.
    • On the Rules page, select the plus (+) button.
      The option to Insert Rule appears on the page. 
    • Use the following rule template to insert the necessary rules which include one to maintain traffic to the monolith and one for each microservice:
      • IF Path = /api/[service-name]* THEN Forward to [service-name]
        For example: IF Path = /api/posts* THEN Forward to posts
      • Insert the rules in the following order:
        • api: /api* forwards to api
        • users: /api/users* forwards to users
        • threads: /api/threads* forwards to threads
        • posts: /api/posts* forwards to posts
    • Select Save.
    • Select the back arrow at the top left corner of the page to return to the load balancer console.
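    The same rules can also be created from the command line with aws elbv2 create-rule, which takes the listener ARN, a priority, a path-pattern condition, and a forward action. The following dry-run sketch prints one command per rule in the order above; the listener ARN and target group ARNs are placeholders you must fill in.

    ```shell
    # Dry-run: print one create-rule command per path, in priority order.
    # LISTENER_ARN and the target-group ARNs are placeholders.
    LISTENER_ARN="[listener-arn]"
    rule_cmds() {
      priority=1
      for name in api users threads posts; do
        # The monolith rule matches /api*; each microservice matches /api/[name]*.
        if [ "$name" = "api" ]; then pattern="/api*"; else pattern="/api/${name}*"; fi
        echo "aws elbv2 create-rule --listener-arn ${LISTENER_ARN}" \
             "--priority ${priority}" \
             "--conditions Field=path-pattern,Values=${pattern}" \
             "--actions Type=forward,TargetGroupArn=[${name}-target-group-arn]"
        priority=$((priority + 1))
      done
    }
    rule_cmds
    ```

    Note that because the /api* rule sits at the highest priority, all traffic continues to reach the monolith until you remove that rule in Step 5.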
  • Step 4. Deploy your Microservices

    Deploy the three microservices (posts, threads, and users) to your cluster. Repeat the following steps for each of your three microservices:

    • Navigate to the Amazon ECS console and select Clusters from the left menu bar.
    • Select the cluster BreakTheMonolith-Demo, select the Services tab then select Create.
    • On the Configure service page, edit the following parameters (and keep the default values for parameters not listed below):
      • For the Launch type, select EC2.
      • For the Task Definition, select the Enter a value button to automatically select the highest revision value.
        For example: api:1 
      • For the Service name, enter a service name (posts, threads, or users).
      • For the Number of tasks, enter 1
    • Select Next step.
    • On the Configure network page, Load balancing section, do the following:
      • For the Load balancer type, select Application Load Balancer.
      • For the Service IAM role, select BreakTheMonolith-Demo-ECSServiceRole.
      • For the Load balancer name, verify that the appropriate load balancer is selected.
      • In the Container to load balance section, select the Add to load balancer button and make the following edits:
        • For the Production listener port, set to 80:HTTP.
        • For the Target group name, select the appropriate group: (posts, threads, or users)
      • In the Service discovery section, clear the Enable service discovery integration checkbox. This option should not be enabled.
    • Select Next step.
    • On the Set Auto Scaling page, select Next step.
    • On the Review page, select Create Service.
    • Select View Service.

    It should only take a few seconds for all your services to start. Double check that all services and tasks are running and active before you proceed.
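    The console steps above map onto the aws ecs create-service command. This dry-run sketch prints one command per microservice; the target group ARNs are placeholders, and the cluster and role names mirror those used in the console steps.

    ```shell
    # Dry-run: print one create-service command per microservice.
    # Target-group ARNs are placeholders; cluster and role names match the console steps.
    svc_cmds() {
      for name in posts threads users; do
        echo "aws ecs create-service --cluster BreakTheMonolith-Demo" \
             "--service-name ${name} --task-definition ${name}" \
             "--desired-count 1 --launch-type EC2" \
             "--role BreakTheMonolith-Demo-ECSServiceRole" \
             "--load-balancers targetGroupArn=[${name}-target-group-arn],containerName=${name},containerPort=3000"
      done
    }
    svc_cmds
    ```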

  • Step 5. Switch Over Traffic to your Microservices

    Your microservices are now running, but all traffic is still flowing to your monolith service. To reroute traffic to the microservices, take the following steps to update the listener rules:

    • Navigate to the Load Balancers section of the EC2 Console.
    • Select the checkbox next to the appropriate load balancer to see the Load Balancer details.
    • Select the Listeners tab.
      There should only be one listener listed.
    • Under the Rules column, select View/edit rules.
    • On the Rules page, select the minus (-) button from the top menu.
    • Delete the first rule (/api* forwards to api) by selecting the checkbox next to the rule.
    • Select Delete.
    • Update the default rule to forward to drop-traffic:
      • Select the edit (pencil) button from the top menu.
      • Select the edit (pencil) icon next to the default rule (HTTP 80: default action).
      • Select the edit (pencil) icon in the THEN column to edit the Forward to.
      • In the Target group field, select drop-traffic.
      • Select the Update button.

    See the following screenshot for an example of the updated rules.


    Disable the monolith: With traffic now flowing to your microservices, you can disable the monolith service.

    • Navigate back to the Amazon ECS cluster BreakTheMonolith-Demo-ECSCluster.
    • In the Services tab, select the checkbox next to api and select Update.
    • On the Configure service page, locate Number of tasks and enter 0.
    • Select Skip to review.
    • Select Update Service.

    Amazon ECS will now drain connections from the containers that the service deployed on the cluster and then stop the containers. If you refresh the Deployments or Tasks lists after about 30 seconds, you will see that the number of tasks drops to 0. The service is still active, so if you needed to roll back for any reason, you could simply update it to deploy more tasks.
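    The same scale-down can be done from the command line. This dry-run sketch prints the command; it assumes the cluster is named BreakTheMonolith-Demo, as in the earlier deployment steps.

    ```shell
    # Dry-run: print the command that scales the monolith (api) service to zero tasks.
    # Assumes the cluster name used in the earlier steps.
    scale_down_cmd() {
      echo "aws ecs update-service --cluster BreakTheMonolith-Demo" \
           "--service api --desired-count 0"
    }
    scale_down_cmd
    ```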

    Optionally, you can delete the api service. In the Services tab, select the checkbox next to api, select Delete, and confirm the deletion.

    You have now fully transitioned your Node.js application from the monolith to microservices, without any downtime!

  • Step 6. Validate your Deployment

    Find your service URL: This is the same URL that you used in Module 2 of this tutorial.

    First, verify that you have at least one registered target running in the Application Load Balancer's target group:

    • Navigate to the Target Groups section of the EC2 console.
    • Select the checkbox next to one of your microservice target groups (posts, threads, or users) to see its details.
    • Select the Targets tab and verify that at least one registered target is active and the Status shows healthy.

    Then obtain the DNS name:

    • Navigate to the Load Balancers section of the EC2 console.
    • Select the checkbox next to the appropriate load balancer to see the Load Balancer details.
    • In the Description tab, locate the DNS name and select the copy icon at the end of the URL. 
    • Paste the DNS name into a new browser tab or window.

    You should see a message 'Ready to receive requests'.

    See the values for each microservice: Your ALB routes traffic based on the request URL. To see each service, simply add the service name to the end of your DNS name:

    • http://[DNS name]/api/users
    • http://[DNS name]/api/threads
    • http://[DNS name]/api/posts

    ⚐ NOTE: These URLs behave exactly as they did when the monolith was deployed. This is very important because any APIs or consumers that expect to connect to this app are not affected by the changes you made. Going from monolith to microservices required no changes to other parts of your infrastructure.

    You can also use tools such as Postman for testing your APIs.
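    You can also check the endpoints from a terminal. This dry-run sketch prints a curl command for each microservice; DNS_NAME is a placeholder for your load balancer's DNS name.

    ```shell
    # Dry-run: print a curl command for each microservice endpoint behind the ALB.
    # DNS_NAME is a placeholder -- substitute your load balancer's DNS name.
    DNS_NAME="[DNS name]"
    check_cmds() {
      for name in users threads posts; do
        echo "curl -s http://${DNS_NAME}/api/${name}"
      done
    }
    check_cmds
    ```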