Using Load Balancers on Amazon Lightsail
This post was written by Robert Zhu, Principal Developer Advocate at AWS.
As your web application grows, it needs to scale to maintain performance and improve availability. A load balancer is an important tool in any developer’s tool belt. In this post, I show how to load balance a simple Node.js web application using an Amazon Lightsail load balancer. I chose Node.js as an example, but the concepts and techniques I discuss work with any other web application stack, such as PHP, Django, .NET, or even plain old HTML. To demonstrate these concepts, I deploy a load balancer that splits traffic between two Amazon Lightsail instances using round-robin load balancing. Compared to other load balancing solutions on AWS, such as the Application Load Balancer and Network Load Balancer, the Lightsail load balancer is simpler, easier to use, and has a fixed cost of $18/month, regardless of how much bandwidth you use or how many connections are open.
Launch the Cluster
From the Lightsail console, create two instances. After you select your Region and OS, choose an instance plan and name each instance. Make sure that your instances have enough memory and compute to run your application; for this demo, the smallest instance size suffices.
Once your instances are ready, open an SSH session to each instance and run the following commands:
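The exact commands aren’t reproduced here; assuming an Ubuntu-based Lightsail instance, installing Node.js and npm from the distribution’s default packages looks something like this:

```shell
# Refresh the package index, then install Node.js and npm from
# Ubuntu's default repositories (an older but stable version).
sudo apt-get update
sudo apt-get install -y nodejs npm
```

At this point you would also copy or clone the demo application onto each instance.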
Once installation is complete, you can run “node -v” and “npm -v” to make sure everything works. While this installs an older version of Node.js, it works fine for this demo. Remember to run these steps on both instances.
Let’s take a tour of the application’s main.js source file.
Upon startup, the application generates a random name and color. It then uses express-js to serve responses to requests at two routes: “/” and “/health”. When a client requests the “/” route, the application responds with an HTML snippet that contains its name and color. When a client requests the “/health” route, it responds with HTTP status code 200. Later, I use the health route to let Lightsail determine the health of a node and whether the load balancer should continue routing requests to that node.
You can start the application by running “sudo node main.js”. Do this on both instances, and you should see two distinct webpages at each instance’s IP address:
Launch the Load Balancer
To create a load balancer, open the Network tab in the Lightsail console and click Create load balancer.
On the next page, name your load balancer and click Create load balancer. You are then taken to a page where you can add target instances to the load balancer:
From the dropdown menu under Target instances, select each instance and attach it to the load balancer, like so:
Once both instances are attached, copy the DNS name for your load balancer and paste it into a new browser session. When the load balancer receives the browser request, it routes the request to one of the two instances. As you refresh the browser, you should see the returned response alternate between the two instances. This should clearly illustrate the functionality of the load balancer.
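You can also watch the alternation from the command line. Assuming your load balancer’s DNS name (the value below is a placeholder; substitute the name from your Lightsail console), repeated requests should bounce between the two instances:

```shell
# Placeholder: substitute the DNS name shown in your Lightsail console.
LB_DNS="your-load-balancer-dns-name"

# Each request should alternate between the two instance identities.
for i in 1 2 3 4; do
  curl -s "http://$LB_DNS/"
  echo
done
```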
Over time, your instances inevitably exhibit faulty behavior due to application errors, network partitions, reboots, etc. Load balancers need to detect faulty nodes in order to avoid routing traffic to them. The simplest way to implement a load balancer health check is to expose an HTTP endpoint on your nodes. By default, the Lightsail load balancer performs health checks by issuing an HTTP GET request against the node IP address directly. However, recall that the application serves a simple HTTP 200 response at the “/health” route. You can configure the load balancer to use this route to perform health checks on your nodes instead.
Within your Lightsail console, open your load balancer, click Customize health checking, and enter health for the route:
The custom health check feature lets you implement deeper health check logic. For example, to make sure that the entire application stack is running, you could write a test value to the database and read it back. That gives much higher assurance that the application is functional than simply returning status code 200 from an HTTP endpoint.
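A sketch of that idea follows; the db object here is an in-memory stand-in, and in practice you would replace its methods with calls to your real data store:

```javascript
// Deep health check sketch: prove the full read/write path works
// by writing a token and reading it back. "db" is an in-memory
// stand-in; swap in your real database client.
const store = new Map();
const db = {
  async set(key, value) { store.set(key, value); },
  async get(key) { return store.get(key); },
};

async function deepHealthCheck() {
  const token = `health-${Date.now()}`;
  await db.set("healthcheck", token);
  const readBack = await db.get("healthcheck");
  // Only report healthy if the round trip succeeded.
  return readBack === token;
}
```

A “/health” handler built on this would return status 200 only when deepHealthCheck() resolves to true, and an error status otherwise.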
In summary, you created a Lightsail load balancer and used it to route traffic between two Lightsail instances serving the demo application. If one of those instances crashes or becomes unreachable, the load balancer automatically routes traffic to the remaining healthy nodes in the cluster, based on a customizable HTTP health check. Compared to other AWS load balancing solutions, the Lightsail load balancer is simpler, easier to use, and has a fixed-pricing model. Please leave feedback in the comments or reach out to me on Twitter or email!
In a follow-up post, I walk through some more advanced concepts:
- SSL termination
- Custom domains
- Deep health checks
- WebSocket connections
About the Author
Robert Zhu is a principal developer advocate at AWS.