AWS Compute Blog

Using Load Balancers on Amazon Lightsail

This post was written by Robert Zhu, Principal Developer Advocate at AWS. 

As your web application grows, it needs to scale to maintain performance and improve availability, and a load balancer is an important tool in any developer’s tool belt. In this post, I show how to load balance a simple Node.js web application using the Amazon Lightsail load balancer. I chose Node.js as an example, but the concepts and techniques I discuss apply to any other web application stack, such as PHP, Django, .NET, or even plain old HTML. To demonstrate these concepts, I deploy a load balancer that splits traffic between two Amazon Lightsail instances using round-robin load balancing. Compared to other load balancing solutions on AWS, such as our Application Load Balancer and Network Load Balancer, the Lightsail load balancer is simpler, easier to use, and has a fixed cost of $18/month, regardless of how much bandwidth you use or how many connections are open.

Launch the Cluster

For the demo, first create a cluster consisting of two Lightsail instances. To do so, navigate to the Lightsail console, and launch two micro Lightsail instances using the Ubuntu 18.04 image.

After you select your Region and OS, select an instance plan and identify your instance. Make sure that your instances have enough memory and compute to run your application. For this demo, the smallest instance size suffices.

Instance plan and name

 

Once your instances are ready, open an SSH session to each instance and run the following commands:

 

sudo apt-get update
sudo apt-get -y install nodejs npm
git clone https://github.com/robzhu/colornode
cd colornode && npm install

 

The above commands update the APT package index and install Node.js and npm, which are needed to run the application. During installation, select “yes” when you see the following prompt:

npm prompt

Once installation is complete, run “node -v” and “npm -v” to confirm that everything works. While Ubuntu’s package repository ships an older version of Node.js, it is fine for this demo. Remember to run these steps on both instances.

The application

Let’s take a tour of the application. Here’s the main.js source file:

 

const color = require("randomcolor")(); // generate a random color on startup
const name = require("./randomName");   // generate a random name on startup
const app = require("express")();

// Serve an HTML page showing this instance's name and color.
app.get("/", (_, res) => {
  res.send(`
    <html>
      <body bgcolor="${color}">
        <h1>${name}</h1>
      </body>
    </html>
  `);
});

// Health check endpoint for the load balancer.
app.get("/health", (_, res) => {
  res.status(200).send("I'm OK");
});

app.listen(80, () => console.log(`${name} started on port 80`));

 

Upon startup, the application generates a random name and color. It then uses Express to serve responses at two routes: “/” and “/health”. When a client requests the “/” route, the application responds with an HTML snippet containing its name and color. When a client requests the “/health” route, it responds with HTTP status code 200. Later, I use the health route to let Lightsail determine the health of a node and decide whether the load balancer should continue routing requests to it.

 

Start the application by running “sudo node main.js” (sudo is needed to bind to port 80). Do this on both instances, and you should see two distinct webpages at the instances’ IP addresses:

images of two instance urls

 

Launch the Load Balancer

To create a load balancer, open the Network tab in the Lightsail console and click Create load balancer.

Networking tab of Lightsail console

On the next page, name your load balancer and click Create load balancer. Next, a page appears where you can add target instances to the load balancer:

target instances

From the dropdown menu under Target instances, select each instance and attach it to the load balancer, like so:

 

target instances 2

image of first loadbalancer

Once both instances are attached, copy the DNS name for your load balancer and paste it into a new browser session. When the load balancer receives the browser request, it routes the request to one of the two instances. As you refresh the browser, you should see the returned response alternate between the two instances. This should clearly illustrate the functionality of the load balancer.
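Round-robin routing is simple to reason about: the load balancer cycles through the attached instances in order, so successive requests alternate between them. The following is a minimal sketch of that algorithm for illustration only (the instance names are hypothetical placeholders, and this is not Lightsail’s actual implementation):

```javascript
// A tiny round-robin picker over a list of backend instances.
function roundRobin(instances) {
  let next = 0;
  return () => {
    const target = instances[next];
    next = (next + 1) % instances.length; // wrap around to the first instance
    return target;
  };
}

const pick = roundRobin(["instance-a", "instance-b"]);

// Six requests alternate evenly between the two instances,
// just like refreshing the browser against the load balancer.
const sequence = [];
for (let i = 0; i < 6; i++) sequence.push(pick());
console.log(sequence.join(" "));
// instance-a instance-b instance-a instance-b instance-a instance-b
```

Each refresh in the browser corresponds to one call to `pick()` here, which is why the returned page alternates between the two instances.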

Health Checks

Over time, your instances will inevitably exhibit faulty behavior due to application errors, network partitions, reboots, and so on. A load balancer needs to detect faulty nodes in order to avoid routing traffic to them. The simplest way to implement a load balancer health check is to expose an HTTP endpoint on your nodes. By default, the Lightsail load balancer performs health checks by issuing an HTTP GET request directly against the node’s IP address. However, recall that the application serves a simple HTTP 200 response at the “/health” route. You can configure the load balancer to use this route to perform health checks on your nodes instead.
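The bookkeeping behind a health check is straightforward: probe each node periodically and take it out of rotation after a run of failed probes. Here is a hedged sketch of that logic; the threshold of two consecutive failures is an arbitrary illustration, not Lightsail’s actual setting:

```javascript
// Track health per node: a node is considered unhealthy after
// `threshold` consecutive failed checks, and healthy again
// as soon as a check succeeds.
function healthTracker(threshold) {
  const failures = new Map(); // node -> consecutive failure count
  return {
    report(node, ok) {
      failures.set(node, ok ? 0 : (failures.get(node) || 0) + 1);
    },
    isHealthy(node) {
      return (failures.get(node) || 0) < threshold;
    },
  };
}

const tracker = healthTracker(2);
tracker.report("instance-a", false); // one failure: still in rotation
console.log(tracker.isHealthy("instance-a")); // true
tracker.report("instance-a", false); // second consecutive failure
console.log(tracker.isHealthy("instance-a")); // false
tracker.report("instance-a", true);  // a successful check resets the count
console.log(tracker.isHealthy("instance-a")); // true
```

Requiring consecutive failures before removing a node avoids flapping on a single dropped request, while a single success is enough to bring the node back.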

 

Within your Lightsail console, open your load balancer, click Customize health checking, and enter health for the route:

health check screenshot

 

Now, the Lightsail load balancer uses http://{instanceIP}/health to perform the health check. You can further customize your load balancer by using a custom domain and adding SSL termination.

 

The custom health check feature lets you implement deeper health check logic. For example, if you want to make sure that the entire application stack is running, you can write a test value to the database and read it back. That gives a much higher assurance of application health than simply returning status code 200 from an HTTP endpoint.
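As a hedged sketch of that idea, the function below round-trips a value through a database before reporting healthy. The `db` object here is an in-memory stand-in with a hypothetical API; in a real application you would substitute your actual database client and wire this function into the “/health” route handler:

```javascript
// In-memory stand-in for a real database client (hypothetical API).
const db = {
  store: new Map(),
  async set(key, value) { this.store.set(key, value); },
  async get(key) { return this.store.get(key); },
};

// A deeper health check: write a test value and read it back,
// so a passing check implies the database path works end to end.
async function deepHealthCheck() {
  try {
    const probe = `probe-${Date.now()}`;
    await db.set("healthcheck", probe);
    const readBack = await db.get("healthcheck");
    return readBack === probe ? 200 : 503;
  } catch (err) {
    return 503; // any database error means the stack is not fully healthy
  }
}

deepHealthCheck().then((status) => console.log(status)); // 200 with the stand-in
```

In the route handler, you would return the resulting status code from “/health”, so the load balancer automatically pulls a node whose database connection has failed.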

 

Conclusion

In summary, you created a Lightsail load balancer and used it to route traffic between two Lightsail instances serving the demo application. If one of these instances crashes or becomes unreachable, the load balancer automatically routes traffic to the remaining healthy nodes in the cluster, based on a customizable HTTP health check. Compared to other AWS load balancing solutions, the Lightsail load balancer is simpler, easier to use, and uses a fixed-pricing model. Please leave feedback in the comments or reach out to me on Twitter or email!

 

In a follow-up post, I walk through some more advanced concepts:

  • SSL termination
  • Custom domains
  • Deep health checks
  • WebSocket connections

 

About the Author

Robert Zhu is a principal developer advocate at AWS.

Email: robzhu@amazon.com

Twitter: @rbzhu