Tracing an AWS App Runner service using AWS X-Ray with OpenTelemetry


AWS App Runner is a fully managed service that developers can use to quickly deploy containerized web applications and APIs at scale with little to no infrastructure experience. You can start with source code or a container image. App Runner will fully manage all infrastructure, including servers, networking, and load balancing, for your application. App Runner provides a service URL that receives HTTPS requests to your application. App Runner can also configure a continuous deployment (CD) pipeline.

Many App Runner customers build distributed applications using a microservices architecture. With distributed architectures, customers face a variety of operational issues, such as having proper insight into the underlying services and identifying the root cause of performance issues and errors. In addition to requiring insight into the health of an active service, developers require complete visibility of their overall application. Typically customers accomplish this by gathering observability data, such as metrics, logs, and traces, to gain necessary clarity.

App Runner provides comprehensive visibility into services with detailed build, deployment, and runtime logs. In addition, App Runner provides an array of predefined metrics by integrating with Amazon CloudWatch.

Tracing with AWS X-Ray

App Runner now supports tracing as part of its observability suite. With this release, you can add tracing to App Runner services without having to configure and set up the required sidecars or agents. You can trace your containerized applications in AWS X-Ray by instrumenting applications with the AWS Distro for OpenTelemetry (ADOT).

ADOT is a secure, production-ready, AWS-supported distribution of the OpenTelemetry project. Part of the Cloud Native Computing Foundation (CNCF), OpenTelemetry provides open-source APIs, libraries, and agents to collect distributed traces and metrics for application monitoring.

If your application is already instrumented for tracing with OpenTelemetry, you only need to enable tracing. You can do this by turning on the option when you create your App Runner service, or at any time later, and then monitor service activity in X-Ray or CloudWatch.

If you haven’t instrumented your application, you can still enable tracing with a one-time setup using the OpenTelemetry SDK. Use the SDK to configure your application for auto-instrumentation to collect traces. This is the quickest approach because it doesn’t require changing your application code.

Diagram of an App Runner service showing how the OpenTelemetry SDK handles requests

For any traced request to the App Runner service, you can view detailed information about the request and response. You can also view details about calls that the application makes to downstream AWS resources, microservices, databases, and HTTP web APIs. X-Ray provides a complete view of requests as they travel through the application and a map of the application’s underlying components. Use X-Ray to analyze applications in both development and production. Applications can range from simple three-tier applications to complex microservices applications that consist of thousands of services. To correlate application performance data with underlying infrastructure data, ADOT also collects metadata from AWS resources and managed services. This correlation reduces the time that it takes to solve a problem.

To enable tracing, App Runner includes a top-level resource called observability configuration. Use this observability configuration resource to create or update a service and enable or disable X-Ray tracing for services. For more information about observability configuration, see Configuring observability for your service in the AWS App Runner Developer Guide.
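For example, enabling tracing from the AWS CLI involves creating the observability configuration and then attaching it to a service. A sketch of the two calls (the ARNs below are placeholders):

```shell
# Create an observability configuration with X-Ray tracing enabled
aws apprunner create-observability-configuration \
    --observability-configuration-name xray-tracing \
    --trace-configuration Vendor=AWSXRAY

# Attach it to an existing service (both ARNs are placeholders)
aws apprunner update-service \
    --service-arn <service-arn> \
    --observability-configuration ObservabilityEnabled=true,ObservabilityConfigurationArn=<observability-configuration-arn>
```

These commands require AWS credentials and an existing service, so treat them as a configuration sketch rather than something to run verbatim.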

See how this works in practice with a web application that interacts with a database.

Getting started with tracing

This example provides instructions for creating an App Runner service with tracing enabled. The example shows a Python web application that interacts with an Amazon DynamoDB database.

Step 1: Instrument and build your application

Before creating an App Runner service for your application, instrument your code and build an image.

  1. Ensure that you meet the following prerequisites:
    1. You can access the AWS Management Console.
    2. AWS Command Line Interface (AWS CLI) and Docker are installed in your development environment.
  2. Instrument your application using OpenTelemetry.

    Depending on the specific ADOT SDK that you use, ADOT supports two approaches: auto-instrumentation and manual instrumentation. For instructions, see Tracing with the AWS Distro for OpenTelemetry Python Auto-Instrumentation and X-Ray or Tracing with the AWS Distro for OpenTelemetry Python Manual-Instrumentation and X-Ray. The auto-instrumentation approach requires only a simplified configuration file setup, without changes to your Python application code, so this example uses it.

    App Runner supports tracing for both a source container image and a source code repository. For more information about instrumenting both types of service sources, see Instrument your application for tracing in the AWS App Runner Developer Guide. This example uses an existing container image and updates only two files: requirements.txt and Dockerfile:



    # Base image and pip bootstrap URL are assumptions; adjust for your application
    FROM amazonlinux
    RUN yum install python3.7 -y && curl -O https://bootstrap.pypa.io/get-pip.py && python3 get-pip.py && yum update -y
    COPY . /app
    WORKDIR /app
    RUN pip3 install -r requirements.txt
    RUN opentelemetry-bootstrap --action=install
    # The entry point file name (app.py) is an assumption for this example
    CMD OTEL_PROPAGATORS=xray OTEL_PYTHON_ID_GENERATOR=xray OTEL_RESOURCE_ATTRIBUTES=service.name=movies_app opentelemetry-instrument python3 app.py
    EXPOSE 8080
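
    The Dockerfile installs dependencies from requirements.txt. A minimal version for this example might look like the following; the exact package list is an assumption based on the ADOT Python auto-instrumentation setup, plus whatever your application itself needs:

```text
flask
boto3
opentelemetry-distro
opentelemetry-sdk-extension-aws
opentelemetry-propagator-aws-xray
opentelemetry-exporter-otlp-proto-grpc
```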

    Auto-instrumentation uses the opentelemetry-instrument executable as a wrapper that automatically initializes the instrumentors and starts the provided application. The opentelemetry-bootstrap command detects and installs the instrumentation packages that the application needs.

    OTEL_PYTHON_ID_GENERATOR=xray ensures that spans use an ID format compatible with the X-Ray backend.

    OTEL_PROPAGATORS=xray allows the span context to propagate downstream when the application makes calls to external services.

    OTEL_RESOURCE_ATTRIBUTES=service.name=movies_app sets the service name to movies_app. You can see the effect of this on the X-Ray service map.
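
    To illustrate what OTEL_PYTHON_ID_GENERATOR=xray changes: X-Ray expects the first 8 hex digits of a trace ID to encode the trace start time, while the default OpenTelemetry generator produces fully random IDs. The following sketch (not the ADOT implementation, just the format) builds an X-Ray-style trace ID:

```python
import random
import time

def xray_trace_id() -> str:
    """Build a trace ID in the X-Ray format: a version number (1),
    the trace start time as 8 hex digits of the Unix epoch, and
    96 random bits. The default OpenTelemetry generator instead
    emits 128 fully random bits, which X-Ray rejects because it
    cannot read a timestamp from them."""
    epoch = int(time.time())          # seconds since the Unix epoch
    unique = random.getrandbits(96)   # 96-bit random identifier
    return f"1-{epoch:08x}-{unique:024x}"

print(xray_trace_id())  # e.g. 1-<8 hex digits>-<24 hex digits>
```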

  3. Build the container image, and get an Amazon ECR Public image URI. For more information about how to build an ECR Public image, see Amazon ECR Public images in the Amazon ECR User Guide.
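
    As a sketch of that step, building and pushing to ECR Public might look like this (the registry alias is a placeholder; ECR Public authentication always uses the us-east-1 Region):

```shell
# Authenticate Docker to the ECR Public registry
aws ecr-public get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin public.ecr.aws

# Build, tag, and push the instrumented image
docker build -t movies-app .
docker tag movies-app:latest public.ecr.aws/<registry-alias>/movies-app:latest
docker push public.ecr.aws/<registry-alias>/movies-app:latest
```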

Step 2: Set up an IAM role

App Runner uses an AWS Identity and Access Management (IAM) role to authorize access to other AWS services. In this case, App Runner uses the role to allow the application to send tracing data to X-Ray.

  1. Create an IAM role called movies-app-instance-role. For the role trust policy, follow the instructions for instance roles in How App Runner works with IAM in the App Runner Developer Guide.


    Save the following trust policy as apprunner-role.json, and then create the role:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "tasks.apprunner.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    aws iam create-role --role-name movies-app-instance-role --assume-role-policy-document file://apprunner-role.json
  2. Attach a policy that allows App Runner to send tracing data to X-Ray.
    aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSXRayDaemonWriteAccess --role-name movies-app-instance-role 
  3. Attach a policy that allows App Runner to integrate with DynamoDB. The example uses a DynamoDB database called movies-app.
    aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess --role-name movies-app-instance-role 

    Alternatively, you can use the AWS Management Console to create your role.

    The movies-app-instance-role summary page in the IAM console
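
    AmazonDynamoDBFullAccess is convenient for a demo, but a production service should use a least-privilege policy instead. A sketch scoped to the movies-app table (Region and account ID are placeholders) might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Query",
        "dynamodb:Scan"
      ],
      "Resource": "arn:aws:dynamodb:<region>:<account-id>:table/movies-app"
    }
  ]
}
```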

Step 3: Create and configure your App Runner service

Now that your code is instrumented and you’ve created the necessary role, create your App Runner service, and configure it to enable tracing.

  1. Sign in to the AWS Management Console.
  2. Navigate to the App Runner Services page.
  3. Choose Create service.
  4. For repository type, choose Container registry.
  5. For Provider, choose Amazon ECR Public and enter the URI to the public image that you built in Step 1.
  6. For Deployment settings, keep the default setting, Manual.
  7. Choose Next.
  8. In the Service settings section, for Service name, enter movies_app.
  9. In the Security section, for Instance role, choose the service role that was created earlier, movies-app-instance-role.
  10. For AWS KMS key, choose Use an AWS-owned key.
  11. In the Observability section, enable Tracing with AWS X-Ray.

    You can also use the AWS CLI to create an observability configuration and link it to your service to enable or disable tracing. For more information, see ObservabilityConfiguration in the AWS App Runner API Reference.
  12. Keep all the other default values.
  13. Choose Next.
  14. Review your configuration, and then choose Create & deploy.
  15. On the service dashboard page, in the Service overview section, monitor the service status. The service is live and ready to process requests when the status changes to Running.

    To verify that tracing is enabled, choose the Observability tab. You should see Tracing On.

Step 4: Send requests to your application

To access your service, choose the Default domain URL that you see on the service dashboard. In this example, it’s a movie database API application. Use it to perform GET, POST, PUT, and DELETE operations.
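
For example, assuming the application exposes a /movies resource (the route names here are hypothetical; use whatever routes your application defines), you could exercise it with curl:

```shell
# Replace with the default domain from your service dashboard
APP_URL=https://<random-id>.<region>.awsapprunner.com

curl "$APP_URL/movies"                       # GET: list movies
curl -X POST "$APP_URL/movies" \
     -H "Content-Type: application/json" \
     -d '{"title": "Example Movie", "year": 2022}'
curl -X DELETE "$APP_URL/movies/example-id"  # DELETE: remove a movie
```

Each request that reaches the service generates a trace that you can inspect in the next step.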

Step 5: View tracing data in X-Ray

  1. On the Observability tab, choose View service map.
    You are redirected to the CloudWatch console. You can see that HTTP requests and AWS SDK requests are instrumented. The following image shows an X-Ray service map with traces collected from the example application: a client calling movies_app, which calls DynamoDB.
  2. Choose the App Runner Service node on the map.
    The CloudWatch console shows service metrics, including Latency, Requests, and Faults.
  3. Choose View traces.
    The console shows the CloudWatch Traces page.
  4. To view the service map, analytics, and X-Ray insights, choose the latest trace ID. This information can help you to detect issues.

    Suppose that you’ve made a few API calls and some of them returned 4xx or 500 responses. To determine which group of requests caused the errors, query the X-Ray traces and filter by HTTP status code. To troubleshoot the cause of an error, on the Traces list, narrow the results to the group of bad requests by HTTP method, and then review the details of each trace segment.
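
    X-Ray filter expressions make this kind of query direct. For example, in the trace search bar you can narrow results with expressions such as:

```text
http.status = 500
http.status >= 400 AND http.status <= 499
http.method = "POST" AND http.status >= 400
```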

Availability and pricing

The App Runner tracing feature is available in all AWS Regions where App Runner is offered. For more information, see the Regional Services List. No additional cost is incurred for using this feature. You pay the standard price for compute resources that App Runner provisions for you and for X-Ray use. You can set up X-Ray tracing with the AWS Management Console, AWS CLI, AWS SDKs, or AWS CloudFormation. For more information, see Tracing for your App Runner application with X-Ray in the AWS App Runner Developer Guide.


I’m excited to bring you the observability configuration capability that enables App Runner application tracing. This tracing provides more visibility into how your services run and integrate with other systems and has been one of the most customer-requested features for App Runner. I look forward to receiving your feedback on the App Runner roadmap on GitHub.

Follow me on Twitter @pymhq

Yiming Peng


Yiming Peng is a Senior Software Development Engineer at AWS Containers. He is a founding engineer of AWS App Runner and now the tech lead of its data plane, focusing on providing customers seamless auto scaling, request routing, load balancing, networking, and observability capabilities. Yiming works closely on App Runner and ECS Fargate. He is passionate about cloud native, container compute, serverless, and open source. You can find him on GitHub and Twitter @pymhq.