Cross Amazon EKS cluster App Mesh using AWS Cloud Map
NOTICE: October 04, 2024 – This post no longer reflects the best guidance for configuring a service mesh with Amazon EKS and its examples no longer work as shown. Please refer to newer content on Amazon VPC Lattice.
Overview
In this article, we are going to explore how to use AWS App Mesh across Amazon Elastic Kubernetes Service (Amazon EKS) clusters. App Mesh is a service mesh that lets you control and monitor services; here, those services span two clusters deployed in the same VPC. We’ll demonstrate this by using two EKS clusters within a VPC and an App Mesh that spans the clusters using AWS Cloud Map. This example shows how EKS deployments can use AWS Cloud Map for service discovery when using App Mesh.
We will use two EKS clusters in a single VPC to explain the concept of a cross-cluster mesh using AWS Cloud Map. The diagram below illustrates the big picture. This is intentionally a simple example for clarity, but in the real world an App Mesh can span multiple different container clusters, such as ECS, Fargate, and Kubernetes on EC2.
In this example, there are two EKS clusters within a VPC and a mesh spanning both clusters. The setup has AWS Cloud Map services and three EKS Deployments as described below. The front container will be deployed in cluster 1 and the color containers in cluster 2. The goal is to have a single mesh across the clusters using AWS Cloud Map-based service discovery.
Clusters
We will spin up two EKS clusters in the same VPC for simplicity and configure a mesh as we deploy the clusters’ components.
Deployments
There are two deployments of colorapp: blue and red. Pods of both deployments are registered behind the virtual service colorapp.appmesh-demo.pvt.aws.local. Blue pods are registered with the mesh as the colorapp-blue virtual node and red pods as the colorapp-red virtual node. These virtual nodes are configured to use AWS Cloud Map for service discovery, so the IP addresses of these pods are registered with the AWS Cloud Map service with corresponding attributes. Additionally, a colorapp virtual service is defined that routes traffic to the blue and red virtual nodes.
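To make this concrete, a virtual node that uses AWS Cloud Map service discovery looks roughly like the sketch below when expressed through the controller’s CRDs. This is a minimal sketch against the v1beta1 CRD style used at the time; treat the exact field layout (in particular the attributes block) as illustrative, and see the walkthrough repository for the actual manifests.

kubectl apply -f - <<EOF
apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
  name: colorapp-blue
  namespace: appmesh-demo
spec:
  meshName: global
  listeners:
    - portMapping:
        port: 8080
        protocol: http
  serviceDiscovery:
    # Register pod IPs with the pre-created Cloud Map namespace; the
    # attribute distinguishes blue pods from red ones.
    cloudMap:
      namespaceName: appmesh-demo.pvt.aws.local
      serviceName: colorapp
      attributes:
        - key: version
          value: blue
EOF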
The front app acts as a gateway that makes remote calls to colorapp. It has a single deployment with pods registered with the mesh as the front virtual node. This virtual node uses the colorapp virtual service as a backend, which configures the Envoy proxy injected into the front pod to use App Mesh’s Endpoint Discovery Service (EDS) to discover the colorapp endpoints.
Mesh
App Mesh components will be deployed from one of the two clusters; it does not matter which one. The mesh will consist of a virtual node per service and a virtual service, whose router (the provider) has routes that split traffic equally between red and blue. We will use custom CRDs along with mesh controller and mesh inject components, so mesh creation is handled with standard kubectl and proxy sidecars are automatically injected on pod creation.
Note: You can use native App Mesh API calls to deploy the App Mesh components instead, if you prefer.
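For illustration, the colorapp virtual service with an equal-weight route between the two virtual nodes could be declared roughly like this (again a minimal sketch in the v1beta1 CRD style; the actual manifests live in the walkthrough repository):

kubectl apply -f - <<EOF
apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualService
metadata:
  name: colorapp.appmesh-demo.pvt.aws.local
  namespace: appmesh-demo
spec:
  meshName: global
  virtualRouter:
    name: colorapp-router
  routes:
    - name: color-route
      http:
        match:
          prefix: /
        action:
          # Equal weights split traffic 50/50 between blue and red.
          weightedTargets:
            - virtualNodeName: colorapp-blue
              weight: 1
            - virtualNodeName: colorapp-red
              weight: 1
EOF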
AWS Cloud Map
As we create the mesh, we will use service discovery attributes, which will automatically create the DNS records in the namespace that we have pre-created. The front application in the first cluster will use this DNS entry in AWS Cloud Map to talk to the colorapp in the second cluster.
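For reference, pre-creating such a namespace takes a single AWS CLI call; the namespace name below matches this walkthrough, while the VPC ID is a placeholder.

# Create the private DNS namespace that the mesh's service discovery
# attributes will populate with records.
aws servicediscovery create-private-dns-namespace \
    --name appmesh-demo.pvt.aws.local \
    --vpc <your-vpc-id> \
    --region us-east-1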
So, let’s get started…
Prerequisites
In order to successfully carry out the base deployment:
- Make sure to have the newest AWS CLI installed, that is, version 1.16.268 or above.
- Make sure to have kubectl installed, at least version 1.11 or above.
- Make sure to have jq installed.
- Make sure to have aws-iam-authenticator installed, required for eksctl.
- Install eksctl, for example, on macOS with brew tap weaveworks/tap and brew install weaveworks/tap/eksctl, and make sure it’s on at least version 0.1.26.
Note that this walkthrough assumes throughout that you operate in the us-east-1 Region.
Cluster provisioning
Create an EKS cluster with eksctl using the following command:
eksctl create cluster --name=eksc2 --nodes=3 --alb-ingress-access \
    --region=us-east-1 --ssh-access --asg-access --full-ecr-access \
    --external-dns-access --appmesh-access --vpc-cidr 172.16.0.0/16 \
    --auto-kubeconfig
#[✔] EKS cluster "eksc2" in "us-east-1" region is ready
Once cluster creation is complete, open another tab and create another EKS cluster with eksctl using the following command:
Note: Use the public and private subnets created as part of cluster 2 (eksc2) in this command. See the eksctl documentation for more details.
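To find those subnet IDs, one option is to query cluster 2’s VPC configuration, as sketched below; this lists all of the cluster’s subnets without separating public from private, which you can tell apart from their eksctl name tags.

# List the subnet IDs attached to the eksc2 cluster.
aws eks describe-cluster --name eksc2 --region us-east-1 \
    --query 'cluster.resourcesVpcConfig.subnetIds' --output text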
eksctl create cluster --name=eksc1 --nodes=2 --alb-ingress-access \
    --region=us-east-1 --ssh-access --asg-access --full-ecr-access \
    --external-dns-access --appmesh-access --auto-kubeconfig \
    --vpc-private-subnets=<comma separated private subnets> \
    --vpc-public-subnets=<comma separated public subnets>
#[✔] EKS cluster "eksc1" in "us-east-1" region is ready
When completed, update the KUBECONFIG environment variable in each tab according to the eksctl output, running the following command in the respective tab:
export KUBECONFIG=~/.kube/eksctl/clusters/eksc1
export KUBECONFIG=~/.kube/eksctl/clusters/eksc2
You have now set up the two clusters, with kubectl pointing at the respective cluster in each tab. Congratulations!
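As a quick sanity check, you can list the nodes in each tab and confirm the counts match what we requested (2 for eksc1, 3 for eksc2):

# In the eksc1 tab: expect 2 nodes.
kubectl get nodes
# In the eksc2 tab: expect 3 nodes.
kubectl get nodes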
Deploy App Mesh custom components
In order to automatically inject App Mesh components and proxies on pod creation, we need to create some custom resources on the clusters. We will use helm for that: install tiller on both clusters, then use helm to run the following commands on both clusters.
Code base
>> git clone https://github.com/aws/aws-app-mesh-examples.git
>> cd aws-app-mesh-examples/walkthroughs/howto-k8s-cross-cluster
Install Helm
>>brew install kubernetes-helm
Install tiller
Using helm requires a server-side component called tiller to be installed on the cluster. Follow the instructions in the Helm documentation to install tiller on both clusters.
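For Helm v2, the documented steps boil down to roughly the following sketch; run it against each cluster.

# Create a service account for tiller and grant it cluster-admin.
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller-cluster-rule \
    --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
# Install tiller into the cluster using that service account.
helm init --service-account tiller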
Verify tiller install
>>kubectl get po -n kube-system | grep -i tiller
tiller-deploy-6d65d78679-whwzn 1/1 Running 0 5h35m
Install App Mesh Components
Run the following set of commands to install the App Mesh controller and Injector components.
helm repo add eks https://aws.github.io/eks-charts
kubectl create ns appmesh-system
kubectl apply -f https://raw.githubusercontent.com/aws/eks-charts/master/stable/appmesh-controller/crds/crds.yaml
helm upgrade -i appmesh-controller eks/appmesh-controller --namespace appmesh-system
helm upgrade -i appmesh-inject eks/appmesh-inject --namespace appmesh-system --set mesh.create=true --set mesh.name=global
Optionally, add X-Ray tracing:
helm upgrade -i appmesh-inject eks/appmesh-inject --namespace appmesh-system --set tracing.enabled=true --set tracing.provider=x-ray
We are now ready to deploy our front and colorapp applications to respective clusters along with the App Mesh, which will span both clusters.
Deploy services and mesh constructs
1. You should be in the walkthroughs/howto-k8s-cross-cluster folder; all commands will be run from this location.
2. Your account ID:
export AWS_ACCOUNT_ID=<your_account_id>
3. Region, e.g., us-east-1
export AWS_DEFAULT_REGION=us-east-1
4. The ENVOY_IMAGE environment variable is set to the App Mesh Envoy image; see Envoy Image under Resources.
export ENVOY_IMAGE=...
5. The VPC_ID environment variable is set to the VPC where the Kubernetes pods are launched. The VPC will be used to set up a private DNS namespace in AWS using the create-private-dns-namespace API. To find the VPC of an EKS cluster, you can use aws eks describe-cluster (see the snippet after this list). See below for why an AWS Cloud Map PrivateDnsNamespace is required.
export VPC_ID=...
6. The CLUSTER environment variables are set to the cluster names used in the kube configuration:
export CLUSTER1=eksc1
export CLUSTER2=eksc2
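For step 5, both clusters share one VPC, so you can pull the VPC ID from either cluster, for example:

# Query the cluster's VPC ID via the EKS API.
export VPC_ID=$(aws eks describe-cluster --name eksc1 \
    --query 'cluster.resourcesVpcConfig.vpcId' --output text)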
Deploy
./deploy.sh
Verify deployment
On Cluster 1
>>kubectl get all -n appmesh-demo
NAME READY STATUS RESTARTS AGE
pod/front-7d7bc9458f-g2hnx 3/3 Running 0 5h23m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/front LoadBalancer 10.100.145.29 af3c595c8fb3b11e987a30ab4de89fc8-1707174071.us-east-1.elb.amazonaws.com 80:31646/TCP 5h23m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/front 1/1 1 1 5h23m
NAME DESIRED CURRENT READY AGE
replicaset.apps/front-7d7bc9458f 1 1 1 5h23m
NAME AGE
mesh.appmesh.k8s.aws/global 5h
>>kubectl get all -n appmesh-system
NAME READY STATUS RESTARTS AGE
pod/appmesh-controller-84d46946b9-5lj7f 1/1 Running 0 5h27m
pod/appmesh-inject-5d8b86846-67fc6 1/1 Running 0 5h26m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/appmesh-inject ClusterIP 10.100.75.167 <none> 443/TCP 5h27m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/appmesh-controller 1/1 1 1 5h27m
deployment.apps/appmesh-inject 1/1 1 1 5h27m
NAME DESIRED CURRENT READY AGE
replicaset.apps/appmesh-controller-84d46946b9 1 1 1 5h27m
replicaset.apps/appmesh-inject-5d8b86846 1 1 1 5h26m
replicaset.apps/appmesh-inject-7bb9f6d7b8 0 0 0 5h27m
NAME AGE
mesh.appmesh.k8s.aws/global 5h
On Cluster 2
>>kubectl get all -n appmesh-demo
NAME READY STATUS RESTARTS AGE
pod/colorapp-blue-7b6dbc5c97-wrsp9 3/3 Running 0 13h
pod/colorapp-red-59b577f5bc-mnh9q 3/3 Running 0 13h
pod/curler-5bd7c8d767-kcw55 1/1 Running 1 9h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/colorapp-blue 1/1 1 1 13h
deployment.apps/colorapp-red 1/1 1 1 13h
deployment.apps/curler 1/1 1 1 9h
NAME DESIRED CURRENT READY AGE
replicaset.apps/colorapp-blue-7b6dbc5c97 1 1 1 13h
replicaset.apps/colorapp-red-59b577f5bc 1 1 1 13h
replicaset.apps/curler-5bd7c8d767 1 1 1 9h
NAME AGE
mesh.appmesh.k8s.aws/appmesh-demo 13h
mesh.appmesh.k8s.aws/global 1d
NAME AGE
virtualnode.appmesh.k8s.aws/colorapp-blue 13h
virtualnode.appmesh.k8s.aws/colorapp-red 13h
virtualnode.appmesh.k8s.aws/front 13h
NAME AGE
virtualservice.appmesh.k8s.aws/colorapp.appmesh-demo.pvt.aws.local 13h
>>kubectl get all -n appmesh-system
NAME READY STATUS RESTARTS AGE
pod/appmesh-controller-84d46946b9-8k8bj 1/1 Running 0 27h
pod/appmesh-inject-5d8b86846-zqrn6 1/1 Running 0 27h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/appmesh-inject ClusterIP 10.100.123.159 <none> 443/TCP 27h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/appmesh-controller 1/1 1 1 27h
deployment.apps/appmesh-inject 1/1 1 1 27h
NAME DESIRED CURRENT READY AGE
replicaset.apps/appmesh-controller-84d46946b9 1 1 1 27h
replicaset.apps/appmesh-inject-5d8b86846 1 1 1 27h
replicaset.apps/appmesh-inject-7bb9f6d7b8 0 0 0 27h
NAME AGE
mesh.appmesh.k8s.aws/appmesh-demo 13h
mesh.appmesh.k8s.aws/global 1d
Note that 3/3 on the application pods indicates that the sidecar containers have been injected.
Note also that the mesh components, such as the virtual service, the virtual router with its routes, and the virtual nodes, have been created as well. You may verify this by going to the App Mesh console.
Verify AWS Cloud Map
As part of the deploy command, we pushed the images to ECR, created a namespace in AWS Cloud Map, and created the mesh and the DNS entries by virtue of adding the service discovery attributes.
You may verify this, with the following command:
aws servicediscovery discover-instances --namespace-name appmesh-demo.pvt.aws.local \
    --service-name colorapp
This should resolve to the backend service.
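The output should have roughly the following shape (values here are illustrative; pod IPs come from the 172.16.0.0/16 VPC CIDR used above):

{
    "Instances": [
        {
            "InstanceId": "172.16.x.x",
            "NamespaceName": "appmesh-demo.pvt.aws.local",
            "ServiceName": "colorapp",
            "HealthStatus": "UNKNOWN",
            "Attributes": {
                "AWS_INSTANCE_IPV4": "172.16.x.x"
            }
        }
    ]
}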
Test the application
The front service in cluster1 has been exposed as a load balancer and can be used directly.
>>kubectl get svc -n appmesh-demo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
front LoadBalancer 10.100.145.29 af3c595c8fb3b11e987a30ab4de89fc8-1707174071.us-east-1.elb.amazonaws.com 80:31646/TCP 5h47m
>>curl af3c595c8fb3b11e987a30ab4de89fc8-1707174071.us-east-1.elb.amazonaws.com/color
blue
You can also test it using a simple curler pod, like so:
>>kubectl -n appmesh-demo run -it curler --image=tutum/curl /bin/bash
root@curler-5bd7c8d767-x657t:/#curl front/color
blue
Note: For this to work, you need to open port 8080 on the security group applied to the cluster 2 node group, allowing inbound traffic from cluster 1’s security group. See the screenshots provided below.
[Screenshots: cluster 2 node group security group, with an inbound rule allowing port 8080 from cluster 1’s security group]
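Equivalently, the rule can be added with the AWS CLI; both security group IDs below are placeholders.

# Allow cluster 1's nodes to reach port 8080 on cluster 2's nodes.
aws ec2 authorize-security-group-ingress \
    --group-id <cluster2-nodegroup-sg-id> \
    --protocol tcp --port 8080 \
    --source-group <cluster1-nodegroup-sg-id>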
Great! You have successfully tested the service communication across clusters using the App Mesh and AWS Cloud Map.
Let’s make a few requests and check that our X-Ray sidecar is indeed capturing traces.
Verify X-Ray console
Run the following command from a curler pod within cluster 1 (curl’s -s flag keeps its progress meter out of the output):
$ for ((n=0;n<200;n++)); do echo "$n: $(curl -s front/color)"; done
1: red
2: blue
......
......
196: blue
197: blue
198: red
199: blue
Let’s take a look at our X-Ray console:
[Screenshots: X-Ray console traces and the service map]
You will notice that the requests are intercepted by the Envoy proxy. The proxy is essentially a sidecar container deployed alongside the application containers.
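You can confirm the injected containers on any application pod; for example (the pod name is a placeholder):

# List the container names in a front pod; expect the application
# container plus the injected envoy and, with tracing enabled, the
# X-Ray daemon.
kubectl -n appmesh-demo get pod <front-pod-name> \
    -o jsonpath='{.spec.containers[*].name}'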
Summary
In this walkthrough, we created two EKS clusters within a VPC, deployed the frontend service in one cluster and the backend services in the other. We created an AWS App Mesh that spans both clusters and leveraged AWS Cloud Map to discover services so they could communicate. This can be expanded to multiple other clusters, not necessarily EKS, but a mix of ECS, EKS, Fargate, EC2, and so on.
Resources
AWS App Mesh Documentation
AWS CLI
AWS Cloud Map
Currently available AWS Regions for App Mesh
Envoy Image
Envoy documentation
EKS