Containers

Migrating from AWS App Mesh to Amazon VPC Lattice

After careful consideration, we have made the decision to discontinue AWS App Mesh, effective September 30th, 2026. Until this date, existing AWS App Mesh customers will be able to use the service as normal, including creating new resources and onboarding new accounts via the AWS CLI and AWS CloudFormation. Additionally, AWS will continue to provide critical security and availability updates to AWS App Mesh during this period. However, starting from September 24th, 2024, new customers will be unable to onboard to AWS App Mesh.

As the adoption of microservice architectures continues to grow, managing the complexity of modern distributed applications has become a challenge for many organizations. Amazon VPC Lattice is a fully managed service, announced at re:Invent 2022, that enables consistent connectivity, security, and monitoring of communications between various AWS services, such as Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Lambda, and Amazon Elastic Compute Cloud (Amazon EC2). While AWS App Mesh provides advanced traffic management and observability features for services within Kubernetes clusters, VPC Lattice simplifies application networking by eliminating the need to manage sidecar proxies.

In this blog post, we will explore how VPC Lattice can help simplify the management of complex, distributed applications and provide guidance for Amazon EKS customers on migrating from App Mesh to VPC Lattice.

For Amazon ECS customers using App Mesh, we recommend reading the post Migrating from AWS App Mesh to Amazon ECS Service Connect.

Comparing App Mesh to VPC Lattice

App Mesh and VPC Lattice both provide traffic management and application-aware networking, but they differ in their underlying concepts and architectures. Understanding these differences is crucial before undertaking a migration.

First, to provide a logical boundary of resources, VPC Lattice has a concept called a Service Network, comparable to the Service Mesh in App Mesh. Second, to represent a microservice within your application in VPC Lattice, you create a Service, equivalent to a Virtual Service in App Mesh. Target Groups in VPC Lattice align with Virtual Nodes in App Mesh, tying the service to a group of identical Kubernetes Pods. Finally, a Listener and Listener Rules in VPC Lattice are similar to a Virtual Router and Routes in App Mesh, defining how traffic is routed to the Target Groups within a Service. The diagram below shows the comparison between the different components of each service.

Translating App Mesh resources to VPC Lattice resources.

Architecturally, App Mesh relies on a self-managed Envoy proxy running as a sidecar container with each Pod for traffic routing. In contrast, VPC Lattice provides a managed control plane and data plane, eliminating the need for additional components within your Pods. For observability, App Mesh requires you to install the Amazon CloudWatch Agent with Prometheus Metrics Collection, which forwards the metrics to Amazon CloudWatch, while VPC Lattice provides built-in metrics in CloudWatch.

When running applications on Amazon EKS with App Mesh, it’s common to use Kubernetes Ingress, the AWS Load Balancer Controller, and App Mesh Virtual Gateways to expose applications outside the cluster. However, exposing applications across AWS Accounts or VPCs often requires additional networking resources like AWS Transit Gateways. VPC Lattice natively solves this problem by incorporating Load Balancing and AWS Resource Access Manager (RAM), allowing Kubernetes Services to be accessed from other AWS resources running in different AWS Accounts.

Furthermore, VPC Lattice integrates with the Kubernetes Gateway API through the AWS Gateway API Controller, which maps Kubernetes resources to VPC Lattice objects. Additionally, VPC Lattice supports AWS Identity and Access Management (IAM) authentication through Auth Policies, enabling coarse-grained authorization for your microservices.
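As a minimal sketch of what such an Auth Policy can look like when managed from Kubernetes, the manifest below uses the IAMAuthPolicy custom resource from the AWS Gateway API Controller; the policy name, Gateway name, namespace, and account ID are placeholders, not values from this walkthrough.

apiVersion: application-networking.k8s.aws/v1alpha1
kind: IAMAuthPolicy
metadata:
  name: require-authenticated-callers   # placeholder name
  namespace: prodcatalog-ns-lattice
spec:
  # Attach the policy to the Gateway that maps to the VPC Lattice Service Network
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: product-catalog                # illustrative Gateway name
  # Standard IAM policy document; here it only allows calls originating from one AWS account
  policy: |
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": "*",
          "Action": "vpc-lattice-svcs:Invoke",
          "Resource": "*",
          "Condition": {
            "StringEquals": {
              "vpc-lattice-svcs:SourceVpcOwnerAccount": "111122223333"
            }
          }
        }
      ]
    }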

With regard to pricing, App Mesh charges for the additional compute resources dedicated to the Envoy proxies. The cost of VPC Lattice, by contrast, is determined by the number of services provisioned, data processing charges for traffic to and from each service, and the number of HTTP requests (for HTTP/HTTPS listeners only) that each service receives. To learn more, visit the VPC Lattice pricing page.

Migration Strategies

When migrating applications from App Mesh to VPC Lattice, you have several strategies to choose from, including in-place, canary, and blue/green deployments. The appropriate strategy will depend on your application’s requirements, such as the need for zero downtime or the ability to schedule maintenance windows.

The in-place migration strategy involves replacing the existing Kubernetes Pods instrumented with App Mesh with new Pods. This approach is suitable for applications that can tolerate downtime during the migration process, as each pod is recycled to remove the Envoy sidecar container.

Alternatively, the blue/green deployment strategy involves deploying a second copy of the application in a new namespace, configured for VPC Lattice, while the original deployment remains operational with App Mesh. This approach allows you to gradually migrate traffic from App Mesh to VPC Lattice without downtime while both environments run simultaneously.

Migration Walkthrough

In this section, we will provide a high-level summary of the steps to migrate a sample application from App Mesh to Amazon VPC Lattice using an in-place migration strategy. Detailed step-by-step instructions are included later in the blog.

The application we’ll be using throughout this walkthrough is a multi-tiered demo application called the polyglot demo. This application is composed of three microservices:

  1. Frontend – A user interface for a product catalog.
  2. Product Catalog Backend – A REST API service that provides a list of items stored in the catalog.
  3. Catalog Detail Backend – A REST API service that provides additional information for each item, including the version number and vendor names.

The following diagram shows the Polyglot Demo frontend user interface, which includes an architecture diagram of the traffic flow between the different services. The Frontend service calls the Product Catalog service, which in turn makes calls to the Catalog Detail service.

The Polyglot Demo frontend user interface, showing the traffic flow between the services.

Migration Steps

In this section, we outline the key steps involved in migrating an existing application from App Mesh to VPC Lattice.

Step 1: Deploy the Amazon EKS Cluster and sample application

To get started, we create an EKS Cluster with eksctl. This will serve as the foundation for our demonstration. Once the cluster is created, we deploy the polyglot demo application to showcase App Mesh and VPC Lattice’s capabilities. Finally, we test the polyglot demo application to ensure everything is functioning as expected.
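As a rough sketch, an eksctl cluster configuration for this kind of demo could look like the following; the cluster name, Region, Kubernetes version, and node sizes are placeholders, not the values used in the walkthrough. It would be applied with eksctl create cluster -f cluster.yaml.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: appmesh-to-lattice-demo   # placeholder cluster name
  region: us-west-2               # placeholder Region
  version: "1.30"                 # placeholder Kubernetes version
iam:
  withOIDC: true                  # IAM OIDC provider, used later for the controllers' IAM roles (IRSA)
managedNodeGroups:
  - name: default
    instanceType: m5.large
    desiredCapacity: 3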

Step 2: Configure the Sample Application with App Mesh

We install the AWS App Mesh Controller and the relevant Kubernetes Custom Resource Definitions (CRDs). These components allow us to create App Mesh resources through the Kubernetes APIs. Next, we instrument the polyglot demo application with App Mesh, creating Virtual Services and Virtual Nodes. To highlight a real-world scenario, we deploy two versions of the Catalog Detail (proddetail) service and demonstrate an active canary rollout during the VPC Lattice migration. Finally, we test the application to confirm the configuration is correct. This environment is now ready to start a migration.
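To illustrate the kind of App Mesh resources involved, here is a minimal sketch of a Virtual Node and Virtual Service for the Catalog Detail service, expressed with the App Mesh Controller CRDs; the names, namespace, and port are assumptions based on the demo rather than the exact manifests from the walkthrough.

apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: proddetail-v1
  namespace: prodcatalog-ns            # assumed App Mesh namespace for the demo
spec:
  podSelector:
    matchLabels:
      app: proddetail
      version: v1
  listeners:
    - portMapping:
        port: 3000
        protocol: http
  serviceDiscovery:
    dns:
      hostname: proddetail.prodcatalog-ns.svc.cluster.local
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: proddetail
  namespace: prodcatalog-ns
spec:
  awsName: proddetail.prodcatalog-ns.svc.cluster.local
  provider:
    virtualNode:
      virtualNodeRef:
        name: proddetail-v1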

Step 3: Create the foundational VPC Lattice resources

We will install the AWS Gateway API Controller and relevant Kubernetes CRDs. Similar to the App Mesh controller, this will allow us to create VPC Lattice resources through the Kubernetes API. Following that, we will create the core VPC Lattice components required for the migration. These include a VPC Lattice GatewayClass and Gateway in the cluster, which map to a VPC Lattice Service Network, as well as TargetGroupPolicies and HTTPRoutes to create traffic rules for our sample application.
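The manifests below sketch what these core resources can look like with the AWS Gateway API Controller; the Gateway and policy names, namespace, and health check path are illustrative assumptions rather than the exact values used in the walkthrough.

apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: amazon-vpc-lattice
spec:
  controllerName: application-networking.k8s.aws/gateway-api-controller
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: product-catalog               # maps to a VPC Lattice Service Network (illustrative name)
  namespace: prodcatalog-ns-lattice
spec:
  gatewayClassName: amazon-vpc-lattice
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
  name: proddetail-target-group-policy
  namespace: prodcatalog-ns-lattice
spec:
  targetRef:
    group: ""
    kind: Service
    name: proddetail
  protocol: HTTP
  protocolVersion: HTTP1
  healthCheck:
    enabled: true
    path: /ping                       # assumed health check path
    port: 3000
    protocol: HTTP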

Step 4: Migrating the sample application to VPC Lattice

For an in-place migration, we need to remove the existing App Mesh components from the Pods, such as the Envoy Proxy, and then configure the application with the new VPC Lattice endpoints. To do so, first we annotate the Kubernetes namespace to prevent the App Mesh controller from manipulating our Pods. Next, we redeploy the polyglot demo application with the VPC Lattice endpoints. Finally, we will test the polyglot demo application to verify that it functions correctly through VPC Lattice.
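One common way to stop sidecar injection, sketched below, is to flip the App Mesh injection label on the namespace; the namespace name is an assumption, and the walkthrough may use a slightly different mechanism.

apiVersion: v1
kind: Namespace
metadata:
  name: prodcatalog-ns
  labels:
    # Previously set to "enabled" so the App Mesh controller injected Envoy sidecars;
    # "disabled" means newly created Pods come up without the sidecar.
    appmesh.k8s.aws/sidecarInjectorWebhook: disabled

After the label change, the Deployments are rolled (for example with kubectl rollout restart) so that the replacement Pods start without the Envoy container.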

Step 5: Exposing the application and implementing canary deployment

VPC Lattice does not provide an internet-facing endpoint, so you use Elastic Load Balancing (ELB) to allow external traffic into the application. We will use a Network Load Balancer to expose our sample application, so in this step we remove the App Mesh Virtual Gateway and create a new Network Load Balancer with the AWS Load Balancer Controller. Finally, we redeploy the second version of the Catalog Detail service (proddetail2) and, through weighted routing, distribute traffic between both sets of Pods with VPC Lattice HTTPRoutes.
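A minimal sketch of such a Service, assuming the AWS Load Balancer Controller is installed, is shown below; the Service name, selector, and ports are placeholders rather than the demo's exact values.

apiVersion: v1
kind: Service
metadata:
  name: frontend-nlb                   # placeholder name
  namespace: prodcatalog-ns-lattice
  annotations:
    # Ask the AWS Load Balancer Controller for an internet-facing NLB with IP targets
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: frontend                      # placeholder selector for the frontend Pods
  ports:
    - port: 80
      targetPort: 9000                 # placeholder container port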

Reviewing VPC Lattice Resources

After migrating from App Mesh to VPC Lattice, a number of VPC Lattice resources have been provisioned in your account; these can be viewed in the VPC Lattice console.

  1. We created a VPC Lattice Service Network as the logical boundary of resources.
Screenshot showing product-catalog service network in AWS management console

  2. We created three VPC Lattice Services, one for each tier of the application, with Kubernetes HTTPRoutes.

A screenshot of the VPC Lattice services in the AWS console

  3. We created three VPC Lattice Target Groups, one attached to each VPC Lattice Service. The routing rules and Health Checks for each Target Group were configured with the TargetGroupPolicy resources in Kubernetes.

A screenshot of the VPC Lattice target groups in the AWS console

  4. Finally, using VPC Lattice, we distributed the traffic between two versions of the Catalog Detail (proddetail) microservice by updating the HTTPRoute for the service. The backend rules in the YAML snippet below show the weighted routing for the application.

rules:
    - backendRefs:
        - name: proddetail
          namespace: prodcatalog-ns-lattice
          kind: Service
          port: 3000
          weight: 50
        - name: proddetail2
          namespace: prodcatalog-ns-lattice
          kind: Service
          port: 3000
          weight: 50
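
For context, these rules sit inside an HTTPRoute; a minimal sketch of the full resource, assuming the Gateway is named product-catalog as in the earlier examples, would look like this:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: proddetail
  namespace: prodcatalog-ns-lattice
spec:
  parentRefs:
    - name: product-catalog        # Gateway mapped to the VPC Lattice Service Network (illustrative name)
      sectionName: http
  rules:
    - backendRefs:
        - name: proddetail
          namespace: prodcatalog-ns-lattice
          kind: Service
          port: 3000
          weight: 50
        - name: proddetail2
          namespace: prodcatalog-ns-lattice
          kind: Service
          port: 3000
          weight: 50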

The following diagram shows the architecture of the application after it has been migrated from App Mesh to VPC Lattice.

The polyglot demo application instrumented with VPC Lattice

Hands-on instructions

To replicate this example migration, please find the step-by-step instructions in this GitHub Repository.

Conclusion

In this post, we explored VPC Lattice, an application networking service for distributed applications. We compared its features and resources with App Mesh and discussed migration strategies for existing App Mesh deployments. VPC Lattice offers several advantages over App Mesh, including multi-account networking, simplified configuration, improved observability, and seamless integration with other AWS services.

For more information, refer to the following VPC Lattice resources and blogs:

User guide

API reference

FAQs

Pricing

Quotas

Implement AWS IAM authentication with Amazon VPC Lattice and Amazon EKS

Build secure multi-account multi-VPC connectivity for your applications with Amazon VPC Lattice

Secure Cross-Cluster Communication in EKS with VPC Lattice and Pod Identity IAM Session Tags

Piyush Shukla

Piyush Shukla is a Technical Account Manager at AWS with over 14 years of industry experience. He is a Technical Field Community (TFC) member in Containers and focuses on container adoption across industry verticals. His expertise also spans cloud operations for AWS users.

Hardeep Singh Tiwana

Hardeep Singh Tiwana is a Senior Technical Account Manager with AWS. He is a seasoned IT professional with over two decades of experience in the industry. His expertise spans a wide range of technologies, from pre-cloud era systems to cutting-edge containers and cloud computing platforms. His skills encompass system analysis, design, testing, implementation, and troubleshooting across various environments. Hardeep has more than 7 years of experience with containers and Kubernetes. He is dedicated to advancing container adoption across diverse industry verticals, enabling businesses to operate efficiently and scale with agility.