AWS Compute Blog
Introducing AWS App Mesh – service mesh for microservices on AWS
AWS App Mesh is a service mesh that allows you to easily monitor and control communications across microservices applications on AWS. You can use App Mesh with microservices running on Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Container Service for Kubernetes (Amazon EKS), and Kubernetes running on Amazon EC2.
Today, App Mesh is available as a public preview. In the coming months, we plan to add new functionality and integrations.
Why App Mesh?
Many of our customers are building applications with microservices architectures, breaking applications into many separate, smaller pieces of software that are independently deployed and operated. Microservices help to increase the availability and scalability of an application by allowing each component to scale independently based on demand. Each microservice interacts with the other microservices through an API.
When you build more than a few microservices within an application, it becomes difficult to identify and isolate issues such as high latencies, elevated error rates, or unexpected error codes across the application. There is also no dynamic way to reroute network traffic when failures occur or when new containers need to be deployed.
You can address these problems by adding custom code and libraries into each microservice or by using open source tools that manage communications for each microservice. However, these solutions can be hard to install, difficult to update across teams, and complex to manage for availability and resiliency.
AWS App Mesh implements a new architectural pattern that helps solve many of these challenges and provides a consistent, dynamic way to manage the communications between microservices. With App Mesh, the logic for monitoring and controlling communications between microservices is implemented as a proxy that runs alongside each microservice, instead of being built into the microservice code. The proxy handles all of the network traffic into and out of the microservice and provides consistency for visibility, traffic control, and security capabilities to all of your microservices.
You use App Mesh to model how all of your microservices connect, and App Mesh automatically computes and sends the appropriate configuration information to each microservice proxy. This gives you standardized, easy-to-use visibility and traffic controls across your entire application. App Mesh uses Envoy, an open source proxy. This makes it compatible with a wide range of AWS partner and open source tools for monitoring microservices.
Using App Mesh, you can export observability data to multiple AWS and third-party tools, including Amazon CloudWatch, AWS X-Ray, and any third-party monitoring and tracing tool that integrates with Envoy. You can also configure new traffic routing controls to enable dynamic blue/green canary deployments for your services.
Getting started
Let’s look at a sample application with two services, where service A receives traffic from the internet and uses service B for some backend processing. We want to dynamically split traffic between service B and B’, a new version of B deployed to act as the canary.
First, you create a mesh, a namespace that groups the microservices that need to communicate with each other.
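As a rough sketch of what this looks like programmatically, the following uses the boto3 appmesh client to create a mesh. The mesh name myapp-mesh and the Region are placeholders for this walkthrough, and the exact request shape during the preview may differ slightly from what the current SDK exposes.

```python
import boto3

# Create an App Mesh client; the Region is an assumption for this example.
appmesh = boto3.client("appmesh", region_name="us-east-1")

# Create the mesh that groups the microservices in our sample application.
# "myapp-mesh" is a placeholder name.
response = appmesh.create_mesh(meshName="myapp-mesh")
print(response["mesh"]["metadata"]["arn"])
```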
Next, you create virtual nodes to represent services in the mesh. A virtual node represents a specific microservice version. In this example, services A and B participate in the mesh, and the traffic to service B is managed using App Mesh.
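For illustration, here is how the virtual nodes for service B and its canary B’ might be defined with boto3. The node names, DNS hostnames, and port are placeholders, and the spec follows the shape of the current appmesh API, which may differ from the preview.

```python
import boto3

appmesh = boto3.client("appmesh", region_name="us-east-1")

# One virtual node per service version. The DNS hostnames and port are
# placeholders for whatever service discovery names your services use.
for node_name, hostname in [
    ("serviceB", "serviceb.myapp.local"),
    ("serviceB-canary", "serviceb-canary.myapp.local"),
]:
    appmesh.create_virtual_node(
        meshName="myapp-mesh",
        virtualNodeName=node_name,
        spec={
            "listeners": [{"portMapping": {"port": 8080, "protocol": "http"}}],
            "serviceDiscovery": {"dns": {"hostname": hostname}},
        },
    )
```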
Now, you deploy the services with the required Envoy proxy as a sidecar, mapping each service to its virtual node in the mesh.
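The deployment details depend on your orchestrator. As a rough, ECS-flavored sketch, the fragment below shows an Envoy sidecar container definition that could be added to a task definition alongside the service B container; the image URI placeholder and the APPMESH_VIRTUAL_NODE_NAME value format are assumptions here, so check the App Mesh documentation for the exact values for your account and Region.

```python
# A hypothetical Envoy sidecar container definition for an ECS task.
# Replace ENVOY_IMAGE with the App Mesh Envoy image for your Region.
ENVOY_IMAGE = "<appmesh-envoy-image-uri>"

envoy_sidecar = {
    "name": "envoy",
    "image": ENVOY_IMAGE,
    "essential": True,
    "environment": [
        {
            # Tells the proxy which virtual node it represents in the mesh.
            "name": "APPMESH_VIRTUAL_NODE_NAME",
            "value": "mesh/myapp-mesh/virtualNode/serviceB",
        }
    ],
}

# This dict would be appended to the containerDefinitions list passed to
# ecs.register_task_definition(...) for the service B task.
```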
After you have defined the virtual nodes, you can define how the traffic flows between these nodes. To do this, you define a virtual router and routes.
A virtual router logically groups all the routes that define your communications traffic. After you create a virtual router, you create routes to direct traffic appropriately. Each route specifies which connection requests it should accept, the traffic definition, and the weighted amount of traffic to send to each target. All of these traffic adjustments between services are computed and sent dynamically to the appropriate proxies by App Mesh to execute your deployment.
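As an illustrative sketch, the following creates a virtual router and a weighted HTTP route that splits traffic 90/10 between the two virtual nodes. The names and weights are placeholders, and the request shape follows the current appmesh API rather than the preview specifically.

```python
import boto3

appmesh = boto3.client("appmesh", region_name="us-east-1")

# A virtual router groups the routes for service B's traffic.
appmesh.create_virtual_router(
    meshName="myapp-mesh",
    virtualRouterName="serviceB-router",
    spec={"listeners": [{"portMapping": {"port": 8080, "protocol": "http"}}]},
)

# A route that matches all HTTP requests and splits them by weight
# between the existing version and the canary.
appmesh.create_route(
    meshName="myapp-mesh",
    virtualRouterName="serviceB-router",
    routeName="serviceB-route",
    spec={
        "httpRoute": {
            "match": {"prefix": "/"},
            "action": {
                "weightedTargets": [
                    {"virtualNode": "serviceB", "weight": 90},
                    {"virtualNode": "serviceB-canary", "weight": 10},
                ]
            },
        }
    },
)
```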
You now have a virtual router set up that accepts all traffic from virtual node A, sends most of it to the existing version of service B, and routes some of it to the new version, B’.
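Once the canary proves healthy, you could shift the remaining traffic by updating the route weights. A minimal sketch, reusing the placeholder names from above:

```python
import boto3

appmesh = boto3.client("appmesh", region_name="us-east-1")

# Shift all traffic to the canary version once it proves healthy.
appmesh.update_route(
    meshName="myapp-mesh",
    virtualRouterName="serviceB-router",
    routeName="serviceB-route",
    spec={
        "httpRoute": {
            "match": {"prefix": "/"},
            "action": {
                "weightedTargets": [
                    {"virtualNode": "serviceB", "weight": 0},
                    {"virtualNode": "serviceB-canary", "weight": 100},
                ]
            },
        }
    },
)
```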
Exporting metrics, logs, and traces
One of the benefits of placing a proxy in front of every microservice is that you can automatically capture metrics, logs, and traces about the communication between your services. App Mesh enables you to easily collect and export this data to the tools of your choice. Envoy is already integrated with several tools, such as Prometheus and Datadog.
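Because each proxy is standard Envoy, you can also inspect its statistics directly. As a small illustration, the snippet below scrapes the Envoy admin endpoint in Prometheus exposition format; it assumes the admin interface of the local proxy is reachable on port 9901, which is a common convention rather than something guaranteed by App Mesh.

```python
import urllib.request

# Envoy exposes its statistics in Prometheus format on the admin interface.
# Port 9901 is an assumption; use whatever admin port your proxy exposes.
ENVOY_ADMIN = "http://localhost:9901"

with urllib.request.urlopen(f"{ENVOY_ADMIN}/stats/prometheus") as resp:
    stats = resp.read().decode("utf-8")

# Print the upstream request counters as a quick sanity check.
for line in stats.splitlines():
    if line.startswith("envoy_cluster_upstream_rq_total"):
        print(line)
```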
During the preview, we are adding support for AWS services such as Amazon CloudWatch and AWS X-Ray. We have many more integrations planned as well.
Available now
AWS App Mesh is available as a public preview, and you can start using it today in the N. Virginia, Ohio, Oregon, and Ireland AWS Regions. During the preview, we plan to add new features and want to hear your feedback. You can check out our GitHub repository for examples and our roadmap.
— Nate