Networking & Content Delivery

Amazon CloudFront introduces Server Timing headers

Introduction

Amazon CloudFront has recently announced a new feature, Server Timing headers, which provides detailed performance information, such as whether content was served from cache when a request was received, how the request was routed to the CloudFront edge location, and how much time elapsed during each stage of the connection and response process.

Server Timing headers provide additional metadata in the form of HTTP headers in viewer responses and can be inspected or consumed by client-side application code. You can use Server Timing headers to gain more granular insights when troubleshooting CloudFront performance, to inspect CloudFront behavior, and to collect and aggregate metrics across user-requested transactions, such as cache misses, first-byte latency, and last-byte latency.
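
For example, in a browser you can read these metrics through the Resource Timing API, which exposes Server Timing entries on resource timing entries. The following TypeScript sketch is a minimal illustration, assuming the responses are same-origin or include a Timing-Allow-Origin header so that the serverTiming entries are visible to scripts:

// Minimal sketch: collect CloudFront Server Timing metrics in the browser.
// Assumes serverTiming entries are exposed (same-origin responses, or a
// Timing-Allow-Origin header on cross-origin responses).
interface CdnTimingSample {
  url: string;
  cacheHit: boolean;
  hitLayer?: string;      // e.g. "EDGE", "REC", "Origin Shield"
  upstreamFblMs?: number; // only present on cache misses
}

function collectCdnTimings(): CdnTimingSample[] {
  const entries = performance.getEntriesByType('resource') as PerformanceResourceTiming[];
  return entries.map((entry) => {
    const byName = new Map(entry.serverTiming.map((t) => [t.name, t] as const));
    return {
      url: entry.name,
      cacheHit: byName.has('cdn-cache-hit'),
      hitLayer: byName.get('cdn-hit-layer')?.description,
      upstreamFblMs: byName.get('cdn-upstream-fbl')?.duration,
    };
  });
}

// Example: log the samples, or ship them to your own analytics endpoint.
console.log(collectCdnTimings());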

In this blog post, you will learn how to enable Server Timing headers and explore common use cases where this feature can help improve your user experience.

Enabling Amazon CloudFront Server Timing headers

In this section, you will learn how to enable Server Timing headers on a CloudFront distribution via the AWS Management Console. First, create a new response headers policy, or update an existing one, under CloudFront > Policies > Response headers. In the response headers policy, you can enable the Server Timing headers feature. The two figures below show an example of enabling the feature and how your policy will look once the feature is enabled.

After the response headers policy has been created, you will need to attach it to the relevant cache behaviors in your CloudFront distribution.

Figure 1: Response headers policy section to enable Server Timing headers

Figure 2: Response headers policy configuration showing Server Timing headers enabled
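
If you prefer to configure this programmatically rather than in the console, the same policy can be created with the AWS SDK. The following TypeScript sketch uses the AWS SDK for JavaScript v3; the policy name is hypothetical, and the exact shape of ServerTimingHeadersConfig (including the sampling-rate field) should be confirmed against the current @aws-sdk/client-cloudfront documentation:

// Sketch: create a response headers policy with Server Timing headers enabled.
import {
  CloudFrontClient,
  CreateResponseHeadersPolicyCommand,
} from '@aws-sdk/client-cloudfront';

const client = new CloudFrontClient({ region: 'us-east-1' });

async function createServerTimingPolicy(): Promise<void> {
  const response = await client.send(
    new CreateResponseHeadersPolicyCommand({
      ResponseHeadersPolicyConfig: {
        Name: 'server-timing-enabled', // hypothetical policy name
        Comment: 'Adds Server-Timing headers to viewer responses',
        ServerTimingHeadersConfig: {
          Enabled: true,
          SamplingRate: 100, // assumed field: percentage of requests that receive the header
        },
      },
    }),
  );
  // Attach the returned policy ID to the desired cache behavior(s).
  console.log(response.ResponseHeadersPolicy?.Id);
}

createServerTimingPolicy().catch(console.error);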

Server Timing headers allow you to include valuable metrics that provide additional performance information about requests going through CloudFront. You can use the information contained in these headers in a variety of ways; below are some of the use cases you may wish to consider:

  • Understanding the server-side influence on request/response performance for dynamic requests
    • For dynamic requests, CloudFront must communicate with the origin. Server Timing headers provide information about how long this communication takes and how it influences overall performance.
    • Server-side communication metrics include DNS resolution time, TCP connection time, and first-byte latency (FBL). You can use these metrics to take actions that improve your workload’s performance.
    • If DNS resolution is taking too long, you can increase the TTL (time to live) of your origin’s DNS records to reduce subsequent delays.
    • If TCP connections are slow, you can increase keep-alive timeouts to help reduce round-trip times.
    • If FBL is high, the origin might require additional resources to serve requests faster.
  • Knowing where requests for cached content were served within CloudFront’s infrastructure
    • With this new feature, you get visibility into the specific layer that served a request, whether that was an edge location (EDGE), a Regional edge cache (REC), or Origin Shield.
    • These metrics are particularly useful for understanding whether adding Origin Shield would help, deciding how to better serve content that is getting refresh hits, and getting an overall picture of how requests are being served; the sketch after this list shows one way to aggregate them.
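
As a simple illustration of the second use case, the hypothetical TypeScript sketch below tallies cdn-hit-layer values observed across a set of responses; a skewed distribution can help you judge whether longer edge TTLs or adding Origin Shield is worth evaluating:

// Sketch: summarize which CloudFront layer served cached responses.
// Assumes hit-layer values ("EDGE", "REC", "Origin Shield") have already been
// extracted from the cdn-hit-layer metric, e.g. with the browser snippet above.
function summarizeHitLayers(hitLayers: string[]): Record<string, number> {
  return hitLayers.reduce<Record<string, number>>((counts, layer) => {
    counts[layer] = (counts[layer] ?? 0) + 1;
    return counts;
  }, {});
}

// Example: many "REC" hits may mean content is being evicted from edge caches.
console.log(summarizeHitLayers(['EDGE', 'EDGE', 'REC', 'Origin Shield']));
// => { EDGE: 2, REC: 1, 'Origin Shield': 1 }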

Anatomy of Server Timing headers

HTTP requests served by CloudFront travel upstream through various CloudFront layers, such as edge locations, Regional edge caches, and Origin Shield, and on to the origin if the request is a cache miss. The response is then sent downstream to the viewer through the same layers. For more details about CloudFront layers, watch this talk from the AWS New York Summit in 2019.

Server Timing headers capture information and metrics from this upstream flow and return them in a dedicated response header. The example below shows a Server Timing header returned in a response:

server-timing: cdn-cache-hit,cdn-pop;desc="IAD89-C1",cdn-rid;desc="qx0z2Nquy2s3jjH3leHZI7k10X9sN9t5ZmIsjNCqvnJ2uOCjZmyFbQ==",cdn-hit-layer;desc="EDGE"

From this information, you can conclude that the request was a cache hit (cdn-cache-hit) served from the first layer of CloudFront caching (cdn-hit-layer;desc="EDGE"), the edge location at the IAD89-C1 PoP. The cdn-hit-layer metric can have three possible values: EDGE, referring to the edge location where the request landed; REC, referring to the Regional edge cache behind that edge location; and Origin Shield, referring to the Origin Shield layer if that option is enabled in the CloudFront distribution.

Let’s now consider a request that resulted in a cache miss and went to the origin:

server-timing: cdn-upstream-layer;desc="REC",cdn-upstream-dns;dur=0,cdn-upstream-connect;dur=195,cdn-upstream-fbl;dur=366,cdn-cache-miss,cdn-pop;desc="IAD89-C3",cdn-rid;desc="bjEUzYyv7e3FyYoK93Tw0MNYhNV2zVTMbjFO8g-Tr5aEW108VkzM-w=="

Notice that you now have new metrics with the cdn-upstream prefix that explain which CloudFront layer connected to the origin (cdn-upstream-layer;desc="REC") and how long various parts of the request took:

  • cdn-upstream-dns;dur=0 Indicates the duration (in milliseconds) CloudFront spent resolving the origin’s domain name. A value of zero for this metric implies that the DNS result was cached or an existing connection was reused.
  • cdn-upstream-connect;dur=195 Indicates the duration (in milliseconds) it took to establish a TCP connection and a TLS session with the origin. A value of zero for this metric may imply that an existing connection was reused.
  • cdn-upstream-fbl;dur=366 Indicates the duration (in milliseconds) between the completion of the origin request and the arrival of the first byte of the origin response. “FBL” stands for “first byte latency”.

For a complete list of Server Timing metrics, refer to the documentation here.
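
If you are consuming the header outside the browser, for example from client-side logs or a monitoring agent, you can parse the raw header value yourself. The following TypeScript sketch is a minimal parser for the format shown above; the helper name is hypothetical, and it assumes metric descriptions do not contain commas, which holds for the CloudFront metrics discussed here:

// Sketch: parse a raw server-timing header value into structured metrics.
interface ServerTimingMetric {
  name: string;          // e.g. "cdn-cache-miss", "cdn-upstream-fbl"
  description?: string;  // from ;desc=
  durationMs?: number;   // from ;dur=
}

function parseServerTiming(headerValue: string): ServerTimingMetric[] {
  return headerValue.split(',').map((entry) => {
    const [name, ...params] = entry.trim().split(';');
    const metric: ServerTimingMetric = { name: name.trim() };
    for (const param of params) {
      const eq = param.indexOf('=');
      if (eq === -1) continue;
      const key = param.slice(0, eq).trim();
      const value = param.slice(eq + 1).trim().replace(/^"|"$/g, '');
      if (key === 'desc') metric.description = value;
      if (key === 'dur') metric.durationMs = Number(value);
    }
    return metric;
  });
}

// Example with the cache-miss header shown above:
const metrics = parseServerTiming(
  'cdn-upstream-layer;desc="REC",cdn-upstream-dns;dur=0,cdn-upstream-connect;dur=195,cdn-upstream-fbl;dur=366,cdn-cache-miss,cdn-pop;desc="IAD89-C3"',
);
console.log(metrics.find((m) => m.name === 'cdn-upstream-fbl')?.durationMs); // 366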

Conclusion

In this post, we introduced Server Timing headers, an observability feature that helps you diagnose and maintain the availability and performance of your CloudFront resources. Server Timing headers are an addition to the suite of tools CloudFront already provides for monitoring your resources and investigating potential issues, including standard and real-time logs, alarms, and console reports. See the CloudFront Developer Guide for details.

Server Timing headers are available for immediate use in all CloudFront distributions. You can enable Server Timing headers through the CloudFront console or the AWS SDK. There is no additional fee for using CloudFront Server Timing headers.