Getting started with AWS App Mesh and Amazon EKS
NOTICE: October 04, 2024 – This post no longer reflects the best guidance for configuring a service mesh with Amazon EKS and its examples no longer work as shown. Please refer to newer content on Amazon VPC Lattice.
——–
In this blog post we explain service mesh usage in containerized microservices and walk you through a concrete example of how to get started with AWS App Mesh with Amazon EKS.
Increasingly, AWS customers adopt microservices to build scalable and resilient applications, reducing time-to-market. When moving from a monolithic to a microservices architecture, you break an app into a smaller set of microservices that are easier to develop and operate. You can use Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS) to make it easier to run, upgrade, and monitor containerized microservices at scale.
Services meshes such as AWS App Mesh help you to connect services, monitor your application’s network, and control the traffic flow. When an application is running within a service mesh, the application services are run alongside proxies which form the data plane of the mesh. The microservice process executes the business logic and the proxy is responsible for service discovery, observability, network encryption, automatic retries, and traffic shaping. App Mesh standardizes the way your services communicate, giving you consistent visibility and network traffic controls for all your containerized microservices. It has two core components: a fully managed control plane that configures the proxies and a data plane consisting of Envoy proxies, running as sidecar containers.
Using App Mesh with EKS
Amazon EKS is a managed service that makes it easy for you to run Kubernetes on AWS without needing to operate your own Kubernetes cluster. You can use App Mesh to implement a service mesh for applications running on EKS. We make using App Mesh with EKS straightforward with the AWS App Mesh Controller For K8s, an open source project that lets you manage App Mesh resources using the Kubernetes API. That is, you use, for example, `kubectl` to configure App Mesh, as we will show you in the hands-on part below.
You can use App Mesh to connect services running in EKS with those running on ECS, EC2, and even in your datacenter using AWS Outposts. In this article, we’ll focus on working with App Mesh and EKS. First, let’s review some App Mesh features that can enhance microservices running in Kubernetes.
Network controls
App Mesh allows you to control the flow of traffic between services, which can help you experiment with new features. You can use this capability to divert a portion of traffic to a different version of your service. Kubernetes doesn’t allow you to define how requests are split between multiple Deployments. With App Mesh, you can create rules to distribute traffic between different versions of a service using simple ratios.
App Mesh traffic controls can also make version rollouts significantly safer by enabling canary deployments. In this strategy, you create a new Kubernetes deployment with fewer pods alongside your old deployment and divert a small share of traffic to the new deployment. If the new version performs well, you gradually increase traffic to the new deployment until it ends up serving all the traffic.
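With the App Mesh Controller for K8s (which we install later in this post), such a weighted split is expressed as a route on a Virtual Router. The following is a minimal sketch, assuming hypothetical Virtual Nodes `my-service-v1` and `my-service-v2` in a `demo` namespace:

```yaml
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  name: my-service-router        # hypothetical name
  namespace: demo
spec:
  listeners:
    - portMapping:
        port: 80
        protocol: http
  routes:
    - name: canary-split
      httpRoute:
        match:
          prefix: /
        action:
          weightedTargets:
            # 90% of requests go to the stable version...
            - virtualNodeRef:
                name: my-service-v1
              weight: 90
            # ...and 10% to the canary
            - virtualNodeRef:
                name: my-service-v2
              weight: 10
```

The weights are relative ratios, so shifting traffic during a canary rollout is a matter of editing the two numbers and re-applying the manifest.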
You can also use App Mesh to improve application resiliency by implementing a connection timeout policy or configuring automatic retries in the proxy.
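Both of these settings also live on the route. A sketch with illustrative values, as a fragment of a Virtual Router spec:

```yaml
routes:
  - name: resilient-route        # fragment of a VirtualRouter spec; names illustrative
    httpRoute:
      match:
        prefix: /
      retryPolicy:
        maxRetries: 3
        perRetryTimeout:
          unit: ms
          value: 2000
        httpRetryEvents:
          - server-error          # retry on HTTP 5xx responses
          - gateway-error         # retry on 502/503/504 responses
      timeout:
        perRequest:
          unit: s
          value: 15               # fail requests that take longer than 15s
      action:
        weightedTargets:
          - virtualNodeRef:
              name: my-service
            weight: 100
```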
Observability
Observability is a property of a system that determines how well its state can be inferred from its external outputs. In the microservices context, these external outputs are service metrics, traces, and logs. Metrics show the behavior of a system over time. Logs make it easier to troubleshoot by providing context for potential errors. Distributed traces help us debug, identify problematic components in the application by providing details for a specific point in time, and understand the application workflow within and among microservices.
You can measure the health of your application by configuring App Mesh to generate metrics (such as total requests), access logs, and traces. As service traffic passes through Envoy, Envoy inspects it, generates statistics, creates access logs, and adds HTTP headers to outbound requests that can be used to generate traces. Metrics and traces can be forwarded to aggregation services such as Prometheus and the X-Ray daemon, which can then be used to analyze the system’s behavior. Since App Mesh uses Envoy, it is also compatible with a wide range of AWS partner and open source tools for monitoring microservices.
Encryption in transit
Microservices communicate with each other over the network, which means they may pass sensitive data over the network. Many customers want to encrypt traffic between services. App Mesh can help you with that: it can encrypt traffic between services using TLS certificates, and you don’t need to handle TLS negotiation and termination in your application code.
You can use your own certificates to encrypt the traffic, or you can use AWS Certificate Manager (ACM). If you choose the latter, ACM automatically renews certificates that are nearing the end of their validity period, and App Mesh automatically distributes the renewed certificates.
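On the server side, TLS is configured on the Virtual Node’s listener. A sketch using an ACM-issued certificate (the names, namespace, and certificate ARN below are placeholders):

```yaml
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: my-service               # hypothetical name
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: my-service
  listeners:
    - portMapping:
        port: 80
        protocol: http
      tls:
        mode: STRICT             # require TLS for all inbound traffic
        certificate:
          acm:
            certificateArn: arn:aws:acm:us-west-2:111122223333:certificate/EXAMPLE
  serviceDiscovery:
    dns:
      hostname: my-service.demo.svc.cluster.local
```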
App Mesh Concepts
To use App Mesh, you will need to create a `Mesh`. A mesh acts as a logical boundary in which all the microservices will reside. You can think of it as a “neighborhood” that comprises your microservices:
The next component is the Virtual Service. Virtual Services act as virtual pointers to your applications and are the service names your applications use to reach the endpoints defined in your mesh. In a microservices architecture, each microservice will be a Virtual Service and will have a `virtualServiceName`. Note that an App Mesh Virtual Service is not the same as a Kubernetes Service.
A Virtual Service represents an application, but an application can also have multiple versions. For example, an application can have two different versions: an internal one and a public-facing one. Each version is represented by a Virtual Node. As shown in the image above, a Virtual Service can have just one Virtual Node, or multiple Virtual Nodes if the application has multiple versions. If a Virtual Service has multiple Virtual Nodes, you define how traffic is routed between them using a Virtual Router.
Virtual Routers handle traffic routing based on specific rules, called Virtual Routes. A Virtual Router needs to have at least one Virtual Route. The routing logic can be based on different criteria such as HTTP headers, URL paths, or gRPC service and method names. You can also use Virtual Routers to implement retry logic and error handling.
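For example, a route that only matches requests carrying a specific HTTP header (the header name and target node are made up for illustration) could look like this fragment of a Virtual Router spec:

```yaml
routes:
  - name: beta-testers           # hypothetical route name
    httpRoute:
      match:
        prefix: /
        headers:
          - name: x-beta-user    # hypothetical header
            match:
              exact: "true"
      action:
        weightedTargets:
          - virtualNodeRef:
              name: my-service-v2   # hypothetical Virtual Node
            weight: 100
```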
App Mesh With EKS In Action
In this tutorial, you will create AWS App Mesh components and deploy them using a sample application called Yelb. After placing the Yelb app into a service mesh, you will create a new version of the Yelb application server and use App Mesh Virtual Routes to shift traffic between the two versions of the app.
Yelb allows users to vote on a set of alternatives like restaurants and dynamically updates pie charts based on the votes. Additionally, Yelb keeps track of the number of page views and prints the hostname of the `yelb-appserver` instance serving the API request upon a vote or a page refresh. Yelb components include:
- A frontend called `yelb-ui`, responsible for vending the JS code to the browser.
- An application server named `yelb-appserver`, a Sinatra application that reads and writes to a cache server (`redis-server`) and a Postgres backend database (`yelb-db`).
- Redis stores the number of page views and Postgres stores the votes.
Yelb’s architecture looks like this:
NOTE: Yelb’s configuration uses ephemeral storage for all the containers. Running databases in this way is only done for demonstration purposes.
To follow along, you will need an environment with some tooling. We used an AWS Cloud9 instance to run this tutorial; if you want to create a Cloud9 instance in your account, follow the steps in the EKS Workshop from the chapter “Create a Workspace” through “Update IAM Settings for your Workspace”.
1. Set Up The Infrastructure
To run this tutorial, you need to install some specific tools:
Start by cloning the GitHub repository:
If you are using a Cloud9 instance, run the following commands to install the required tools mentioned above:
You will use a CloudFormation template to create the base infrastructure: a VPC with public and private subnets, a Security Group, an IAM policy, and two ECR repositories. The `baseline.sh` script deploys this CloudFormation stack. So, to kick things off, execute it:
Note that the above script takes around five minutes to complete.
2. Create The EKS Cluster
To create the EKS cluster, run the following command which will take some 15 minutes to finish:
Once completed, you can test the cluster connectivity like so:
3. Deploy A Demo App
To deploy our demo app, Yelb, execute the following:
To get the URL of the load balancer for carrying out testing in your browser, use the following command:
Note that the URL of the public load balancer is available via the `EXTERNAL-IP` field. You may have to wait a few minutes for DNS propagation. When you open said URL in your browser of choice, the result should look as follows:
4. Meshify The Demo App
To start creating the App Mesh resources and add the Yelb app into a mesh, the first thing you need to do is install the AWS App Mesh Controller. This controller allows you to configure App Mesh resources using `kubectl`. If you’d like, you can also use the App Mesh console for configuration; in this tutorial we will use `kubectl`. Once completed, the resulting setup looks as follows:
You will be using Helm to install the App Mesh Controller. Helm is an open source project that makes it easier to define, install, and upgrade applications in a Kubernetes cluster. First, add the Amazon EKS Helm chart repository to Helm:
Next, you create a namespace for the App Mesh controller, which looks after the custom resources:
And now it’s time for you to install the App Mesh controller with:
Confirm that the App Mesh controller is running by listing the pods in the `appmesh-system` namespace:
We deploy the Yelb application in the `yelb` namespace and use the same name for the mesh. You need to add two labels to the `yelb` namespace: `mesh` and `appmesh.k8s.aws/sidecarInjectorWebhook`. These labels instruct the controller to inject and configure the Envoy proxies in the pods:
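Assuming the labeling convention just described, the namespace manifest looks something like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: yelb
  labels:
    mesh: yelb                                      # ties the namespace to the yelb mesh
    appmesh.k8s.aws/sidecarInjectorWebhook: enabled # enables Envoy sidecar injection
```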
Great! Now we’re in a position to create the mesh, using:
NOTE: The `namespaceSelector` parameter matches Kubernetes namespaces with the label `mesh` and the value `yelb`.
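The applied Mesh manifest is along these lines:

```yaml
apiVersion: appmesh.k8s.aws/v1beta2
kind: Mesh
metadata:
  name: yelb
spec:
  namespaceSelector:
    matchLabels:
      mesh: yelb    # select namespaces labeled mesh=yelb
```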
If you want to, you can also use the AWS console to validate that the mesh was created properly:
After creating the service mesh, you have to create the App Mesh components for every Yelb component. You’ll be using the YAML files in the `infrastructure/appmesh_template` directory to create the Virtual Nodes, Virtual Routers, Routes, and Virtual Services. Apply these configurations using the following command:
The App Mesh controller is configured to inject Envoy sidecar containers, but it hasn’t done so yet, because sidecars are only injected when a pod is created. So, delete the existing pods using `kubectl -n yelb delete pods --all`. This triggers the creation of new pods with the Envoy sidecars. To validate that the controller has worked, check the number of containers running in each pod:
Notice that every pod in this namespace now has two containers. Let’s have a closer look at one of them:
You can now go back to Yelb’s web interface and make sure that you can access it. Any recorded votes are now lost since we deleted the pods in the previous step.
5. Traffic Shaping With A New App Version
Now that Yelb is meshified, go ahead and create a new version of the `yelb-appserver`. We will use App Mesh to send traffic to this new version of the application. To do so, create a new container image with the updated code and push it to an ECR repository using the following command:
Next, create a new Virtual Node that will represent this new app version:
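The Virtual Node manifest for the new version is roughly of this shape. The pod labels, port, and backend names below are assumptions based on Yelb’s defaults (the Sinatra app server listens on 4567):

```yaml
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: yelb-appserver-v2
  namespace: yelb
spec:
  podSelector:
    matchLabels:
      app: yelb-appserver
      version: v2              # assumed label distinguishing the new deployment
  listeners:
    - portMapping:
        port: 4567             # Sinatra's default port
        protocol: http
  backends:                    # services the app server calls (names assumed)
    - virtualService:
        virtualServiceRef:
          name: yelb-db
    - virtualService:
        virtualServiceRef:
          name: redis-server
  serviceDiscovery:
    dns:
      hostname: yelb-appserver.yelb.svc.cluster.local
```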
We also need to create a new deployment, using the manifest generated by `build-appserver-v2.sh`:
You should be able to see the new version of the `yelb-appserver` running by listing the pods in the `yelb` namespace:
Now you configure the App Mesh Virtual Route to send 50% of the traffic to version `v2` and 50% to the current one. Note that this is for demonstration purposes; for production use it is advisable to roll out new versions more gradually.
The architecture diagram below shows the environment with two versions of the `yelb-appserver` running at the same time:
To change the Virtual Route, run the following command:
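The updated route is effectively a fifty-fifty weighted split across the two Virtual Nodes. Assuming the Virtual Nodes are named `yelb-appserver` and `yelb-appserver-v2`, the relevant fragment looks like:

```yaml
httpRoute:
  match:
    prefix: /
  action:
    weightedTargets:
      - virtualNodeRef:
          name: yelb-appserver      # current version
        weight: 50
      - virtualNodeRef:
          name: yelb-appserver-v2   # new version
        weight: 50
```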
After modifying the Virtual Route, reload the Yelb page a couple of times: some of your requests are served by the old version of the `yelb-appserver`, while others are served by the new one. You can tell the versions apart by looking at the `App Server` field: the old version shows the hostname of the `yelb-appserver` container, while the new version shows `ApplicationVersion2`:
Finally, let’s change the Virtual Route to route all traffic to the newest version of the `yelb-appserver`:
You can see that after changing the Virtual Route again, the `yelb-appserver-v2` deployment handles all requests:
6. Cleaning Up
To clean up all the resources created during this tutorial, run the cleanup script with the following command:
Note that if you followed these steps using a Cloud9 instance, refer to the cleanup steps for the Cloud9 instance as described in the EKS Workshop.
Next Steps & Conclusion
You will find Weave Flagger helpful if you are interested in automating canary deployments. Flagger allows you to promote canary deployments using AWS App Mesh automatically. It uses Prometheus metrics to determine canary deployment success or failure and uses App Mesh’s routing controls to shift traffic between the current and canary deployment automatically.
Further, some useful links if you want to dive deeper into the topic:
- Check out the aws-app-mesh-examples repo on GitHub.
- The App Mesh Developer Guide in the docs contains more tips and tricks.
In this post we went through the fundamentals of App Mesh and showed how to place an existing Kubernetes application into a mesh using the open source App Mesh Controller for K8s. You also learned how you can try different deployment techniques by using Virtual Routes to split traffic between two versions of an application. In the next blog, we will show you how you can use App Mesh Virtual Gateways to provide connectivity inside and outside the mesh.
You can track upcoming features via the App Mesh roadmap and experiment with new features using the App Mesh preview channel. Last but not least: do check out appmeshworkshop.com to learn more about App Mesh in a hands-on fashion, and join us on the App Mesh Slack community to share experiences and discuss with the team and your peers.