Tracing an AWS App Runner service using AWS X-Ray with OpenTelemetry
Introduction
AWS App Runner is a fully managed service that developers can use to quickly deploy containerized web applications and APIs at scale with little to no infrastructure experience. You can start with source code or a container image. App Runner will fully manage all infrastructure, including servers, networking, and load balancing, for your application. App Runner provides a service URL that receives HTTPS requests to your application. App Runner can also configure a continuous deployment (CD) pipeline.
Many App Runner customers build distributed applications using a microservices architecture. With distributed architectures, customers face a variety of operational issues, such as gaining proper insight into the underlying services and identifying the root cause of performance issues and errors. In addition to insight into the health of an individual service, developers need complete visibility of their overall application. Typically, customers accomplish this by gathering observability data, such as metrics, logs, and traces, to gain the necessary clarity.
App Runner provides comprehensive visibility into services with detailed build, deployment, and runtime logs. In addition, App Runner provides an array of predefined metrics by integrating with Amazon CloudWatch.
Tracing with AWS X-Ray
App Runner now supports tracing as part of its observability suite. With this release, you can add tracing to your App Runner services without having to configure and set up sidecars or agents. You can trace your containerized applications in AWS X-Ray by instrumenting them with the AWS Distro for OpenTelemetry (ADOT).
ADOT is a secure, production-ready, AWS-supported distribution of the OpenTelemetry project. Part of the Cloud Native Computing Foundation (CNCF), OpenTelemetry provides open-source APIs, libraries, and agents to collect distributed traces and metrics for application monitoring.
If your application is already instrumented for tracing with OpenTelemetry, you only need to enable tracing. You can do this by turning on the option when you create your App Runner service, or at any time afterward, and then monitor service activity in X-Ray or CloudWatch.
If you haven’t instrumented your application yet, enabling tracing is still a one-time step: use the OpenTelemetry SDK to configure your application for auto-instrumentation so that it collects traces. This is the quickest approach and doesn’t require changes to your application code.
For any traced request to the App Runner service, you can view detailed information about the request and response. You can also view details about calls that the application makes to downstream AWS resources, microservices, databases, and HTTP web APIs. X-Ray provides a complete view of requests as they travel through the application and a map of the application’s underlying components. Use X-Ray to analyze applications in both development and production, from simple three-tier applications to complex microservices applications that consist of thousands of services. To correlate application performance data with underlying infrastructure data, ADOT also collects metadata from AWS resources and managed services. This correlation reduces the time that it takes to solve a problem.
To enable tracing, App Runner includes a top-level resource called an observability configuration. Use this resource when you create or update a service to enable or disable X-Ray tracing for that service. For more information about observability configurations, see Configuring observability for your service in the AWS App Runner Developer Guide.
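For example, here is a minimal AWS CLI sketch of creating an observability configuration and attaching it to an existing service. The configuration name is arbitrary, the ARNs are placeholders, and you should check the AWS App Runner API Reference for the exact parameter shapes.
# Create an observability configuration that enables X-Ray tracing
aws apprunner create-observability-configuration --observability-configuration-name movies-app-tracing --trace-configuration Vendor=AWSXRAY
# Link it to a service (placeholders for the service and configuration ARNs)
aws apprunner update-service --service-arn <service-arn> --observability-configuration ObservabilityEnabled=true,ObservabilityConfigurationArn=<observability-configuration-arn>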
See how this works in practice with a web application that interacts with a database.
Getting started with tracing
This example provides instructions for creating an App Runner service with tracing enabled. The example shows a Python web application that interacts with an Amazon DynamoDB table.
Step 1: Instrument and build your application
Before creating an App Runner service for your application, instrument your code and build an image.
- Ensure that you meet the following prerequisites:
- You can access the AWS Management Console.
- AWS Command Line Interface (AWS CLI) and Docker are installed in your development environment.
- Instrument your application using OpenTelemetry. Depending on the specific ADOT SDK that you use, ADOT supports two approaches: auto-instrumentation and manual instrumentation. For instructions, see Tracing with the AWS Distro for OpenTelemetry Python Auto-Instrumentation and X-Ray or Tracing with the AWS Distro for OpenTelemetry Python Manual-Instrumentation and X-Ray. The auto-instrumentation approach requires only a simple configuration setup, without any changes to your Python application code, so I’ll use it here. App Runner supports tracing for both a source container image and a source code repository. For more information about instrumenting both types of service sources, see Instrument your application for tracing in the AWS App Runner Developer Guide. This example uses an existing container image and updates only two files, requirements.txt and Dockerfile (a sketch of these changes follows this list).
- Build the container image, and get an Amazon ECR Public image URI. For more information about how to build an ECR Public image, see Amazon ECR Public images in the Amazon ECR User Guide.
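The following is a rough, shell-form sketch of what the instrumentation changes and the image build typically involve. The OpenTelemetry package names, the app.py entry point, and the ECR Public registry alias are assumptions for this example; adapt them to your own application and pin versions that match it.
# Python dependencies to add to requirements.txt for ADOT auto-instrumentation
pip install opentelemetry-distro opentelemetry-exporter-otlp opentelemetry-sdk-extension-aws opentelemetry-propagator-aws-xray
# In the Dockerfile, start the application under the auto-instrumentation wrapper
# and select the X-Ray propagator and trace ID generator (app.py is assumed)
export OTEL_PROPAGATORS=xray
export OTEL_PYTHON_ID_GENERATOR=xray
opentelemetry-instrument python app.py
# Build the image and push it to Amazon ECR Public (replace <registry-alias>)
docker build -t movies-app .
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
docker tag movies-app:latest public.ecr.aws/<registry-alias>/movies-app:latest
docker push public.ecr.aws/<registry-alias>/movies-app:latest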
Step 2: Set up an IAM role
App Runner uses an AWS Identity and Access Management (IAM) role to authorize access to other AWS services. In this case, App Runner uses the role to allow the application to send tracing data to X-Ray.
- Create an IAM role called movies-app-instance-role. For the role trust policy, follow the instructions for instance roles in How App Runner works with IAM in the App Runner Developer Guide.
apprunner-role.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "tasks.apprunner.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
aws iam create-role --role-name movies-app-instance-role --assume-role-policy-document file://apprunner-role.json
- Attach a policy that allows App Runner to send tracing data to X-Ray.
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSXRayDaemonWriteAccess --role-name movies-app-instance-role
- Attach a policy that allows App Runner to integrate with DynamoDB. The example uses a DynamoDB table called movies-app.
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess --role-name movies-app-instance-role
Alternatively, you can use the AWS Management Console to create your role.
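This walkthrough assumes that the movies-app table already exists. If you still need to create it, a minimal sketch follows; the key schema (a string partition key named id) is an assumption for this example and should match whatever your application expects.
# Create the example table with an assumed partition key and on-demand billing
aws dynamodb create-table --table-name movies-app --attribute-definitions AttributeName=id,AttributeType=S --key-schema AttributeName=id,KeyType=HASH --billing-mode PAY_PER_REQUEST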
Step 3: Create and configure your App Runner service
Now that your code is instrumented and you’ve created the necessary role, create your App Runner service, and configure it to enable tracing.
- Sign in to the AWS Management Console.
- Navigate to the App Runner Services page.
- Choose Create service.
- For repository type, choose Container registry.
- For Provider, choose Amazon ECR Public and enter the URI to the public image that you built in Step 1.
- For Deployment settings, keep the default setting, Manual.
- Choose Next.
- In the Service settings section, for Service name, enter movies_app.
- In the Security section, for Instance role, choose the service role that was created earlier, movies-app-instance-role.
- For AWS KMS key, choose Use an AWS-owned key.
- In the Observability section, enable Tracing with AWS X-Ray. You can also use the AWS CLI to create an observability configuration and link it to your service to enable or disable tracing (see the sketch after this procedure). For more information, see ObservabilityConfiguration in the AWS App Runner API Reference.
- Keep all the other default values.
- Choose Next.
- Review your configuration, and then choose Create & deploy.
- On the service dashboard page, in the Service overview section, monitor the service status. The service is live and ready to process requests when the status changes to Running. To verify that tracing is enabled, choose the Observability tab. You should see that Tracing is set to On.
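If you prefer the AWS CLI to the console, creating the service with tracing enabled looks roughly like the following. The image URI, port, ARNs, and other values are placeholders for this example; see the AWS App Runner API Reference for the complete parameter definitions.
# Create the service from the ECR Public image, attach the instance role,
# and link the observability configuration created earlier
aws apprunner create-service --service-name movies_app \
    --source-configuration '{"ImageRepository": {"ImageIdentifier": "public.ecr.aws/<registry-alias>/movies-app:latest", "ImageRepositoryType": "ECR_PUBLIC", "ImageConfiguration": {"Port": "8080"}}, "AutoDeploymentsEnabled": false}' \
    --instance-configuration '{"InstanceRoleArn": "arn:aws:iam::<account-id>:role/movies-app-instance-role"}' \
    --observability-configuration '{"ObservabilityEnabled": true, "ObservabilityConfigurationArn": "<observability-configuration-arn>"}'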
Step 4: Send requests to your application
To access your service, choose the Default domain URL that you see on the service dashboard. In this example, it’s a movie database API application. Use it to perform GET, POST, PUT, and DELETE operations.
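For example, assuming the application exposes a /movies resource (the exact paths and request payloads depend on the example application), a few requests might look like this:
# Placeholder default domain; copy yours from the service dashboard
export APP_URL=https://<your-service-id>.<region>.awsapprunner.com
curl -s "$APP_URL/movies"
curl -s -X POST "$APP_URL/movies" -H "Content-Type: application/json" -d '{"id": "1", "title": "Example Movie"}'
curl -s -X DELETE "$APP_URL/movies/1"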
Step 5: View tracing data in X-Ray
- On the Observability tab, choose View service map. You are redirected to the CloudWatch console, where you can see that HTTP requests and AWS SDK requests are instrumented. The X-Ray service map shows the traces collected from the example application.
- Choose the App Runner Service node on the map. The CloudWatch console shows service metrics, including Latency, Requests, and Faults.
- Choose View traces. The console shows the CloudWatch Traces page.
- To view the service map, analytics, and X-Ray insights, choose the latest trace ID. This information can help you to detect issues.
Suppose that you’ve made a few API calls, and some of them returned 500 or 4xx error responses. To determine which group of requests caused those responses, query the X-Ray traces and filter by HTTP status code. To troubleshoot the cause of the errors, on the Traces list, narrow the results to the group of bad requests by HTTP method, and then review the details of each trace segment.
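For example, a trace-summary query filtered by status code with the AWS CLI might look like the following; the time window values are placeholders, and the filter expressions are illustrative.
# Find traces with error responses in a given time window
aws xray get-trace-summaries --start-time <start-epoch-seconds> --end-time <end-epoch-seconds> --filter-expression 'http.status >= 400'
# Narrow the results further by HTTP method
aws xray get-trace-summaries --start-time <start-epoch-seconds> --end-time <end-epoch-seconds> --filter-expression 'http.status >= 400 AND http.method = "POST"'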
Availability and pricing
The App Runner tracing feature is available in all AWS Regions where App Runner is offered. For more information, see the Regional Services List. No additional cost is incurred for using this feature. You pay the standard price for compute resources that App Runner provisions for you and for X-Ray use. You can set up X-Ray tracing with the AWS Management Console, AWS CLI, AWS SDKs, or AWS CloudFormation. For more information, see Tracing for your App Runner application with X-Ray in the AWS App Runner Developer Guide.
Conclusion
I’m excited to bring you the observability configuration capability that enables App Runner application tracing. This tracing provides more visibility into how your services run and integrate with other systems and has been one of the most customer-requested features for App Runner. I look forward to receiving your feedback on the App Runner roadmap on GitHub.
Follow me on Twitter @pymhq