AWS Compute Blog

Application Tracing on Kubernetes with AWS X-Ray

This post was contributed by Christoph Kassen, AWS Solutions Architect

With the emergence of microservices architectures, the number of services that make up a web application has increased significantly. It’s no longer unusual to build and operate hundreds of separate microservices, all as part of the same application.

Think of a typical e-commerce application that displays products, recommends related items, provides search and faceting capabilities, and maintains a shopping cart. Behind the scenes, many more services are involved, such as clickstream tracking, ad display, targeting, and logging. Many of these microservices take part in handling a single user request, and understanding, analyzing, and debugging this landscape is becoming increasingly complex.

AWS X-Ray provides application-tracing functionality, giving deep insights into all deployed microservices. With X-Ray, every request can be traced as it flows through the involved microservices. This gives your DevOps teams the insights they need to understand how your services interact with their peers and enables them to analyze and debug issues much faster.

With microservices architectures, every service should be self-contained and use the technologies best suited for the problem domain. Depending on how the service is built, it is deployed and hosted differently.

One of the most popular choices for packaging and deploying microservices at the moment is containers. The application and its dependencies are clearly defined, the container can be built on CI infrastructure, and the deployment is simplified greatly. Container schedulers, such as Kubernetes and Amazon Elastic Container Service (Amazon ECS), greatly simplify deploying and running containers at scale.

Running X-Ray on Kubernetes

Kubernetes is an open-source container management platform that automates deployment, scaling, and management of containerized applications.

This post shows you how to run X-Ray on top of Kubernetes to provide application tracing capabilities to services hosted on a Kubernetes cluster. X-Ray also works for applications hosted on Amazon ECS, AWS Elastic Beanstalk, Amazon EC2, and even when building services with AWS Lambda functions. This flexibility helps you pick the technology you need while still being able to trace requests through all of the services running within your AWS environment.

The complete code, including a simple Node.js-based demo application, is available in the corresponding aws-xray-kubernetes GitHub repository, so you can get started with X-Ray quickly.

The sample application within the repository consists of two simple microservices, Service-A and Service-B. The following architecture diagram shows how each service is deployed with two Pods on the Kubernetes cluster:

  1. Clients send requests to Service-A.
  2. Service-A then contacts Service-B.
  3. Service-B services the requests.
  4. Service-B adds a random delay to each request to show different response times in X-Ray.

To test the sample applications on your own Kubernetes cluster, use the Dockerfiles provided in the GitHub repository to build the two containers, push them to a container registry, and apply the YAML configuration to your cluster with kubectl, as sketched below.
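
A minimal sketch of these steps, assuming the Dockerfiles live in service-a/ and service-b/ and the manifest is named demo-app.yaml (adjust the paths, names, and registry URI to match the repository and your environment):

docker build -t service-a:latest service-a/
docker build -t service-b:latest service-b/
docker tag service-a:latest 000000000000.dkr.ecr.eu-west-1.amazonaws.com/service-a:latest
docker push 000000000000.dkr.ecr.eu-west-1.amazonaws.com/service-a:latest
docker tag service-b:latest 000000000000.dkr.ecr.eu-west-1.amazonaws.com/service-b:latest
docker push 000000000000.dkr.ecr.eu-west-1.amazonaws.com/service-b:latest
kubectl apply -f demo-app.yaml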

Prerequisites

If you currently do not have a cluster running within your AWS environment, take a look at Amazon Elastic Container Service for Kubernetes (Amazon EKS), or use the instructions from the Manage Kubernetes Clusters on AWS Using Kops blog post to spin up a self-managed Kubernetes cluster.

Security Setup for X-Ray

The nodes in the Kubernetes cluster hosting web application Pods need IAM permissions so that the Pods hosting the X-Ray daemon can send traces to the X-Ray service backend.

The easiest way is to set up a new IAM policy allowing all worker nodes within your Kubernetes cluster to write data to X-Ray. In the IAM console or AWS CLI, create a new policy like the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "xray:PutTraceSegments",
                "xray:PutTelemetryRecords"
            ],
            "Resource": "*"
        }
    ]
}

Because these two X-Ray actions do not support resource-level permissions, the resource must be set to "*". Give the policy a descriptive name, such as k8s-nodes-XrayWriteAccess.
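
If you prefer the AWS CLI, you can create the policy like this (assuming the JSON document above is saved as xray-write-policy.json):

aws iam create-policy --policy-name k8s-nodes-XrayWriteAccess --policy-document file://xray-write-policy.json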

Next, attach the policy to the instance profile for the Kubernetes worker nodes. To do so, select the IAM role assigned to your worker instances (check the EC2 console if you are unsure) and attach the IAM policy created earlier to it. Alternatively, you can attach the IAM policy directly from the command line:

aws iam attach-role-policy --role-name k8s-nodes --policy-arn arn:aws:iam::000000000000:policy/k8s-nodes-XrayWriteAccess
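
To verify that the policy is attached (the role name k8s-nodes matches the example above):

aws iam list-attached-role-policies --role-name k8s-nodes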

Build the X-Ray daemon Docker image

The X-Ray daemon is available as a single, statically compiled binary that can be downloaded directly from the AWS website.

The first step is to create a Docker container hosting the X-Ray daemon binary and exposing port 2000 via UDP. The daemon is configured either via command line parameters or a configuration file. The most important option is binding the listener to an address that can accept tracing requests from application Pods, not just from localhost.

To build your own Docker image containing the X-Ray daemon, use the Dockerfile shown below.

# Use Amazon Linux Version 1
FROM amazonlinux:1

# Download latest 2.x release of X-Ray daemon
RUN yum install -y unzip && \
    cd /tmp/ && \
    curl https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-2.x.zip > aws-xray-daemon-linux-2.x.zip && \
    unzip aws-xray-daemon-linux-2.x.zip && \
    cp xray /usr/bin/xray && \
    rm aws-xray-daemon-linux-2.x.zip && \
    rm cfg.yaml

# Expose port 2000 on udp
EXPOSE 2000/udp

ENTRYPOINT ["/usr/bin/xray"]

# No command line parameters, use the default configuration
CMD [""]

This container image is based on Amazon Linux, which results in a small container image. To build and tag it, run the following command from the directory containing the Dockerfile:

docker build -t xray-daemon:latest .

Create an Amazon ECR repository

Create a repository in Amazon Elastic Container Registry (Amazon ECR) to hold your X-Ray Docker image. Your Kubernetes cluster pulls the image from this repository when deploying the X-Ray Pods.

Use the following CLI command to create your repository, or create it in the AWS Management Console:

aws ecr create-repository --repository-name xray-daemon

This creates a repository named xray-daemon for you and prints the repository URI used when pushing the image.

After the container build completes, push the image to the ECR repository that you just created:

docker tag xray-daemon:latest 000000000000.dkr.ecr.eu-west-1.amazonaws.com/xray-daemon:latest
docker push 000000000000.dkr.ecr.eu-west-1.amazonaws.com/xray-daemon:latest
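
If the push is rejected with an authentication error, log in to the registry first. A sketch using AWS CLI v2 (the account ID and region match the example above; adjust them to your environment):

aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 000000000000.dkr.ecr.eu-west-1.amazonaws.com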

The resulting repository should look similar to the one shown in the following screenshot.

You can automate this process using AWS CodeBuild and AWS CodePipeline. For instructions on how to build Docker containers and push them to ECR automatically, see Docker Sample for AWS CodeBuild.

Deploy the X-Ray daemon to Kubernetes

Make sure that kubectl is configured properly for your cluster so that you can deploy the X-Ray Pods. After the X-Ray Pods are deployed to your Kubernetes cluster, applications can send tracing information to the X-Ray daemon on their host. The biggest advantage is that you do not need to provide X-Ray as a sidecar container alongside your application. This simplifies the configuration and deployment of your applications and saves resources on your cluster overall.
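
You can quickly confirm that kubectl points at the intended cluster:

kubectl config current-context
kubectl get nodes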

To deploy the X-Ray daemon as Pods onto your Kubernetes cluster, run the following from the cloned GitHub repository:

kubectl apply -f xray-k8s-daemonset.yaml

This deploys and maintains an X-Ray Pod on each worker node, which accepts tracing data from your microservices and relays it to the X-Ray service backend. When deploying the container using a DaemonSet, the X-Ray port is also exposed directly on the host. This way, clients can connect directly to the daemon on their node, which avoids unnecessary network traffic going across your cluster.
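
For reference, here is a minimal sketch of what such a manifest can look like; the xray-k8s-daemonset.yaml file in the repository is authoritative, and the image URI below is an assumption:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: xray-daemon
spec:
  selector:
    matchLabels:
      app: xray-daemon
  template:
    metadata:
      labels:
        app: xray-daemon
    spec:
      containers:
      - name: xray-daemon
        image: 000000000000.dkr.ecr.eu-west-1.amazonaws.com/xray-daemon:latest
        ports:
        - containerPort: 2000
          hostPort: 2000    # expose the daemon directly on each node
          protocol: UDP
---
apiVersion: v1
kind: Service
metadata:
  name: xray-service
spec:
  selector:
    app: xray-daemon
  ports:
  - port: 2000
    protocol: UDP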

Connecting to the X-Ray daemon

To integrate application tracing with your applications, use the X-Ray SDK for one of the supported programming languages:

  • Java
  • Node.js
  • .NET (Framework and Core)
  • Go
  • Python

The SDKs provide classes and methods for generating and sending trace data to the X-Ray daemon. Trace data includes information about incoming HTTP requests served by the application, and calls that the application makes to downstream services using the AWS SDK or HTTP clients.

By default, the X-Ray SDK expects the daemon to be available at 127.0.0.1:2000. This needs to be changed in this setup, because the daemon does not run inside each application Pod but in its own Pods.

The deployed X-Ray DaemonSet exposes its Pods via Kubernetes service discovery, so applications can use this endpoint to discover the X-Ray daemon. If you deployed to the default namespace, the service endpoint is:

xray-service.default
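
You can confirm that the service exists and that the daemon Pods are registered as its endpoints:

kubectl get service xray-service
kubectl get endpoints xray-service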

Applications now need to set the daemon address either with the AWS_XRAY_DAEMON_ADDRESS environment variable (preferred) or directly within the SDK setup code:

AWSXRay.setDaemonAddress('xray-service.default:2000');

To set the environment variable, include the following in your Kubernetes deployment YAML. This exposes the X-Ray service address via the environment variable, where the SDK picks it up automatically.

env:
- name: AWS_XRAY_DAEMON_ADDRESS
  value: xray-service.default:2000
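
In context, the variable belongs in the container spec of your application deployment. A sketch with assumed names (service-a and its image URI are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a
    spec:
      containers:
      - name: service-a
        image: 000000000000.dkr.ecr.eu-west-1.amazonaws.com/service-a:latest
        env:
        - name: AWS_XRAY_DAEMON_ADDRESS
          value: xray-service.default:2000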


Sending tracing information to AWS X-Ray

Sending tracing information from your application is straightforward with the X-Ray SDKs. The example code below serves as a starting point to instrument your application with traces. Take a look at the two sample applications in the GitHub repository to see how to send traces from Service-A to Service-B. The diagram below visualizes the flow of requests between the services.

Because your application is running within containers, enable both the EC2Plugin and ECSPlugin, which give you information about the Kubernetes node hosting the Pod, as well as the container name. Despite its name, the ECSPlugin also provides this additional container information when your application runs on Kubernetes.

var express = require('express');
var AWSXRay = require('aws-xray-sdk');

// Enable plugins that record EC2 instance and container metadata
AWSXRay.config([AWSXRay.plugins.EC2Plugin, AWSXRay.plugins.ECSPlugin]);

var app = express();

//...

app.use(AWSXRay.express.openSegment('defaultName'));  //required at the start of your routes

app.get('/', function (req, res) {
  res.render('index');
});

app.use(AWSXRay.express.closeSegment());  //required at the end of your routes / first in error handling routes

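To see the call from Service-A to Service-B in your traces, outgoing HTTP requests must be instrumented as well. The sample code in the repository is authoritative; as a minimal sketch, the Node.js SDK can wrap the http module so that calls to Service-B show up as subsegments (the /call-b route and the service-b.default:8080 endpoint are assumptions for illustration):

// Wrap the http client so outgoing calls are recorded as subsegments
var http = AWSXRay.captureHTTPs(require('http'));

// Define alongside the other routes, between openSegment and closeSegment
app.get('/call-b', function (req, res) {
  http.get('http://service-b.default:8080/', function (response) {
    var body = '';
    response.on('data', function (chunk) { body += chunk; });
    response.on('end', function () { res.send(body); });
  });
});
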
For more information about all the options and ways to instrument your application code, see the X-Ray documentation page for the corresponding SDK.

The picture below shows the resulting service map, which provides insights into the flow of requests through the microservice landscape.

From the service map, you can drill down into individual requests and see where they originated from and how much time was spent in each service processing the request.

You can also click any individual segment of the trace to display more details about it.

On the Resources tab, you see the Kubernetes Pod that handled the request, as picked up by the ECSPlugin, as well as the instance that the Pod was running on.

Summary

In this post, I shared how to deploy and run X-Ray on an existing Kubernetes cluster. Tracing gives you deep insights into your applications, making analysis easier and helping you spot potential problems early. With X-Ray, you get these insights for all of your applications running on AWS, whether they are hosted on Amazon ECS, AWS Lambda, or a Kubernetes cluster.