Improve your website performance with Amazon CloudFront
For consumer-facing websites, the speed at which the site loads directly impacts the user’s browsing experience and the success of your business. If your website takes a long time to load, users might abandon it before completing their transaction, affecting your revenue. You can use a content delivery network (CDN) like Amazon CloudFront to improve the performance of your website by securely delivering data, videos, applications, and APIs to customers globally with low latency and high transfer speeds.
To improve performance, you can configure your website’s traffic to be delivered over CloudFront’s globally distributed edge network by setting up a CloudFront distribution. CloudFront also offers a variety of optimization options, and in this post you will learn how to use them to fine-tune your website’s performance even further.
Benefit from CloudFront’s native acceleration
Amazon CloudFront has a global network of edge locations and intelligent software to deliver content to users across the world from the location that is closest to them or has the best performance. Parts of your website like HTML, images, stylesheets, and JavaScript files are served from cached copies stored in CloudFront edge locations and regional edge caches. For the parts that are not cached locally, such as newly updated content or dynamic API requests, CloudFront fetches them from your origin servers over an optimized path, using persistent connections across the privately managed AWS global network. With its support for all HTTP methods, including GET, PUT, and POST, CloudFront can accelerate your entire website.
CloudFront continuously implements new technologies in internet communication protocols and makes them available to you. Features such as TLS session resumption, TCP fast open, OCSP stapling, S2N, and request collapsing are enabled by default without requiring any configuration from you. In addition, there are optimizations that you can choose to activate for your specific CloudFront distribution, such as:
- Enable HTTP/2 to use the same domain and the same TCP connection to download all of your website’s components. By enabling this, you don’t need to implement domain sharding, a legacy way to improve download performance, and you reduce the number of required DNS resolutions for your users.
- Enable HTTP to HTTPS redirection at the edge to save a round trip to the origin.
- Fine-tune origin timeouts. CloudFront lets you configure two timeouts to optimize the connection to your origin. The read timeout specifies how long CloudFront waits for a response from your custom origin. The default is 30 seconds, but if certain actions on your website take longer to process on the backend, such as payment processing for an order, configure a higher read timeout so that your users can complete their transaction in a single request. The keep-alive idle timeout specifies the maximum amount of time that CloudFront maintains an idle connection with your origin server before closing it. The default is 5 seconds, but you can configure a value of up to 60 seconds if your origin servers also support it. This is particularly useful when serving dynamic content over CloudFront: even though every request is forwarded to the origin, you avoid creating a new connection each time. A configuration sketch follows this list.
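
If you manage your distribution programmatically, the two timeouts can be adjusted with a few lines of boto3. This is only a minimal sketch; the distribution ID and the 60-second values are assumptions for the example, not recommendations.

```python
import boto3

cloudfront = boto3.client("cloudfront")
DISTRIBUTION_ID = "E1EXAMPLE"  # hypothetical distribution ID

# Fetch the current configuration together with its ETag (required for updates).
response = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
config = response["DistributionConfig"]
etag = response["ETag"]

# Raise the timeouts on every custom (non-S3) origin.
for origin in config["Origins"]["Items"]:
    custom = origin.get("CustomOriginConfig")
    if custom:
        custom["OriginReadTimeout"] = 60       # allow slower backend actions (default is 30)
        custom["OriginKeepaliveTimeout"] = 60  # keep idle connections open longer (default is 5)

cloudfront.update_distribution(
    Id=DISTRIBUTION_ID,
    IfMatch=etag,
    DistributionConfig=config,
)
```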
Define your caching strategy
For web applications, you can leverage a combination of CloudFront and your users’ browsers to cache content closer to your users. The standard way of controlling these caches is the Cache-Control HTTP header sent by your origin, where you define how long an object is cached by setting a time to live (TTL).
CloudFront honors Cache-Control headers. If an object’s response includes a Cache-Control header, CloudFront caches the object at the edge location for the duration specified in the Cache-Control max-age directive. If there is no Cache-Control header, CloudFront caches the object for the Default TTL specified in the applicable cache behavior. In addition, you can bound the minimum and maximum amount of time any object is cached by CloudFront using the Minimum TTL and Maximum TTL fields of the cache behavior, which protects against unintentionally short or long TTLs.
Our recommendation is to use the Cache-Control header as the primary method to define the desired caching behavior for the various objects that make up your website. The header interacts with your TTL settings as follows (summarized in the sketch after this list):
- If the value of max-age in the Cache-Control header falls between the minimum TTL and maximum TTL you define, CloudFront caches the object for the time specified in max-age.
- If the value of max-age is less than the minimum TTL you define, CloudFront caches the object for the value of the minimum TTL.
- If the value of max-age is greater than the maximum TTL you define, CloudFront caches the object for the value of the maximum TTL.
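
These three rules amount to clamping max-age between the two TTL bounds. The short Python sketch below summarizes them; the function name and sample values are illustrative and not part of any CloudFront API.

```python
def effective_cloudfront_ttl(max_age: int, minimum_ttl: int, maximum_ttl: int) -> int:
    """Clamp the origin's Cache-Control max-age between the cache behavior's
    Minimum TTL and Maximum TTL, mirroring the three rules above."""
    return min(max(max_age, minimum_ttl), maximum_ttl)

# Illustrative values, in seconds:
print(effective_cloudfront_ttl(3600, 0, 86400))     # 3600  -> max-age is within the bounds
print(effective_cloudfront_ttl(10, 60, 86400))      # 60    -> Minimum TTL applies
print(effective_cloudfront_ttl(604800, 0, 86400))   # 86400 -> Maximum TTL applies
```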
Let’s look at how you can define an optimal caching strategy for your website using the example of an e-commerce website.
Typically, a single page of a shopping website has a combination of static and dynamic components that fall into one of four main categories, each requiring a caching policy that matches its unique characteristics:
- Private cacheable content, such as the signed-in user’s name. This data is only relevant to one user and rarely changes, which makes it ideal for caching in the user’s browser but not in CloudFront. For example, if you want to cache this information on the user’s browser for an hour, you can send a Cache-Control: private, max-age=3600 header in the HTTP response. The private directive here indicates that the response is intended for a single user and should not be cached by any shared cache such as a CDN like Amazon CloudFront.
- Private dynamic content, such as the number of items in the user’s cart. This data is also relevant to a single user, but it can change during a browsing session, which makes it a poor candidate for caching. You can disable caching for this information on both the browser and CloudFront by sending a Cache-Control: no-store header.
- Shared static content, such as a product image on the homepage. This image will be served to many users and it does not change frequently, which makes it ideal for caching on both CloudFront and the browser for long durations. For example, if you want to cache this image for a week, you can send a Cache-Control: public, max-age=604800 header in the response. If your origin does not return Cache-Control headers, you can create a cache behavior on CloudFront matching the URL path of such images and configure a default TTL of 604800 seconds (7 days). When you want to display a different product image, we recommend versioning URLs. For example, if the current image is referenced by the path /products/product-x-v1.jpg, create the new image with the new path /products/product-x-v2.jpg and reference it in your homepage HTML. Because the path is new, CloudFront fetches the new image from the origin instead of serving the previously cached one.
- Shared mutable content, such as the HTML of the homepage. This content is requested by many users and can change over time, for example when your developers release a new feature or you change your product listing. Depending on your needs, you can use different TTLs for the browser cache and CloudFront to balance how quickly users see updated content against how much traffic you offload from your origin. For example, you can use a TTL of 0 for the browser cache, forcing the user to get the latest content from CloudFront, and a short TTL, for example 10 minutes, to cache the latest version at edge locations. You have two techniques for differentiating TTLs between CloudFront and the browser. The first is sending Cache-Control headers with directives that target each cache separately; for example, Cache-Control: max-age=0, s-maxage=600 tells the browser not to cache the object, but instructs CloudFront to cache it for 10 minutes. The second is sending Cache-Control: max-age=0 and setting your desired TTL in the Minimum TTL field of your CloudFront distribution’s applicable cache behavior. For short TTLs on CloudFront, you can add the ETag header to the response to identify a version of a resource using a hash of its contents. When CloudFront gets a request for a resource that is present in the cache but expired, it sends a request to the origin with the If-None-Match header containing the cached object’s ETag value. If the resource has not changed on the origin, the origin can reply with a 304 Not Modified response, telling CloudFront that the existing content is still fresh and can be served from cache. This avoids fetching the full resource from the origin when it has not changed. For the homepage HTML example, this gives you the benefits of caching along with the responsiveness of automatically detecting changes at a designated interval. Note that if you use Amazon S3 as your origin, the ETag header is added automatically to S3’s responses. A sketch mapping these four categories to Cache-Control headers follows this list.
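
To make the four policies concrete, here is a minimal origin-side sketch in Python that picks a Cache-Control value by URL path. The paths and matching rules are hypothetical assumptions; only the header values come from the categories above.

```python
# Sketch: choosing a Cache-Control value per content category at the origin.
CACHE_POLICIES = {
    "private_cacheable": "private, max-age=3600",   # e.g. the signed-in user's name
    "private_dynamic": "no-store",                   # e.g. the number of items in the cart
    "shared_static": "public, max-age=604800",       # e.g. versioned product images
    "shared_mutable": "max-age=0, s-maxage=600",     # e.g. the homepage HTML
}

def cache_control_for(path: str) -> str:
    """Pick a Cache-Control value by URL path (illustrative rules only)."""
    if path.startswith("/products/") and path.endswith(".jpg"):
        return CACHE_POLICIES["shared_static"]
    if path.startswith("/api/cart"):
        return CACHE_POLICIES["private_dynamic"]
    if path.startswith("/api/profile"):
        return CACHE_POLICIES["private_cacheable"]
    return CACHE_POLICIES["shared_mutable"]

print(cache_control_for("/products/product-x-v2.jpg"))  # public, max-age=604800
print(cache_control_for("/index.html"))                 # max-age=0, s-maxage=600
```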
By using techniques such as object versioning, Cache-Control, and ETag, you can control how content is evicted from CloudFront’s cache. You can also use invalidations to remove cached objects from edge caches directly. Keep in mind, however, that invalidating files on CloudFront does not purge them from your users’ browser caches.
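
As a quick illustration, the following boto3 sketch issues an invalidation for a path pattern; the distribution ID and the path are assumptions for the example.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1EXAMPLE",  # hypothetical distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/products/*"]},
        # A unique value so the same invalidation request is not applied twice.
        "CallerReference": str(time.time()),
    },
)
```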
Your caching strategy should also account for how HTTP 4xx and 5xx errors from your origin are cached. Despite best efforts to make your website highly available, errors sometimes occur at your origin, and CloudFront gives you options for graceful error handling. By default, when your origin returns a 4xx or 5xx status code, CloudFront caches the error response for five minutes and, after that interval, forwards the next request for the object to your origin to see whether the problem has been resolved and the requested object is available again. However, CloudFront allows you to configure custom TTLs to cache specific errors for different lengths of time. For example, you can configure CloudFront to cache certain 5xx errors for 10 seconds if you do not want your users to be served a cached error response for the default period of 5 minutes. Note that when CloudFront has a cached object that is expired and its attempt to fetch a fresh copy from the origin returns a 5xx error, CloudFront serves the stale (cached but expired) object instead of the error.
If a user navigates to a URL on your website that they are not authorized to access, or that does not exist, you can serve them a custom error page instead of the standard 403 or 404 error pages. For example, you can return a catalog of the most popular items on your shopping website (or cute puppies) with a 200 OK response code, keeping the experience positive and customized to your brand rather than relying on the browser’s default error handling. This page can be hosted on Amazon S3, for example, so that it can be served to the user even if the origin is unable to respond, providing simple last-resort redundancy and improved availability.
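
Both behaviors, shortening the error-caching TTL and substituting a custom error page, are configured through the CustomErrorResponses section of the distribution configuration. Below is a minimal sketch of that section in the shape used by the CloudFront API and boto3; the TTL values and the error page path are assumptions.

```python
# Sketch: a CustomErrorResponses section for a CloudFront distribution config.
custom_error_responses = {
    "Quantity": 2,
    "Items": [
        {
            # Cache 503 responses for only 10 seconds instead of the 5-minute default.
            "ErrorCode": 503,
            "ErrorCachingMinTTL": 10,
        },
        {
            # Replace 404 responses with a friendly page hosted on Amazon S3,
            # returned to the viewer with a 200 OK status.
            "ErrorCode": 404,
            "ErrorCachingMinTTL": 60,
            "ResponsePagePath": "/errors/popular-items.html",
            "ResponseCode": "200",
        },
    ],
}
```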
Improve your cache hit ratio
When optimizing your website for performance, your aim should be to serve as many requests as possible from the CloudFront cache, for as long as possible. The proportion of requests served from the cache is called your cache hit ratio. To understand how to improve it, let’s first look at how CloudFront identifies and stores cached objects, and how you can use this to your advantage with a few simple changes to your CloudFront configuration and your application.
By default, CloudFront creates a unique cache key for an object based on the URL path and the Accept-Encoding header in the viewer’s request. When you configure a CloudFront cache behavior to forward request metadata such as a header, a cookie, or a query string, this additional data becomes part of the cache key. This means that CloudFront creates a different cache key, and caches a separate object, for every variation of the forwarded header, cookie, or query string, even if your origin serves the same content for some of those values. Knowing this, you can improve the caching of your website’s content on CloudFront by following these recommendations:
- If specific URLs on your website return responses that vary based on header, cookie, or query string values, create a dedicated cache behavior in CloudFront that matches those URLs and configure the forwarded values there, so that you minimize what is forwarded in the default cache behavior.
- Forward only a whitelist of the specific headers, cookies, or query strings that your origin actually uses to serve different content, instead of forwarding all headers, cookies, or query strings.
- Leverage the headers that CloudFront sends to your origin. For example, if you need to know your user’s device type to serve customized content, you might be determining it by forwarding the User-Agent header. However, there are thousands of variations of user-agent strings, and including them in CloudFront’s requests to your origin creates a distinct cache key for each variation and reduces your global cache hit ratio. Instead, you can choose not to forward the User-Agent header and use the device-detection headers provided by CloudFront, such as CloudFront-Is-Mobile-Viewer, to identify the device type.
- When you need to forward a value that has many variations, consider normalizing those variations using Lambda@Edge to reduce the number of unique cache keys. Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to your users, which reduces latency, improves performance, and allows you to programmatically modify the requests and responses that CloudFront handles. A normalization sketch follows this list.
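
As an illustration, here is a minimal Lambda@Edge viewer-request handler in Python that normalizes the Accept-Language header to a small set of values before it is forwarded (and therefore cached). The list of supported languages is an assumption for the example.

```python
SUPPORTED_LANGUAGES = ("de", "fr", "en")
DEFAULT_LANGUAGE = "en"

def handler(event, context):
    # CloudFront passes the viewer request in the Lambda@Edge event structure.
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    raw = headers.get("accept-language", [{"value": ""}])[0]["value"].lower()
    normalized = next((lang for lang in SUPPORTED_LANGUAGES if raw.startswith(lang)), DEFAULT_LANGUAGE)

    # Collapse the thousands of possible Accept-Language strings into three values,
    # so CloudFront stores at most three variants of each object.
    headers["accept-language"] = [{"key": "Accept-Language", "value": normalized}]
    return request
```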
Conclusion and next steps
We have seen several ways to optimize the performance of your website using CloudFront. There is even more that you can do at the application level to improve page load speeds further. For example, front-end optimization techniques include compression, minification, image optimization, and removing render-blocking JavaScript. Another option is using Lambda@Edge to run some of your application logic at the edge, reducing latency for your users.
To evaluate the impact of these changes on your website’s performance, it is important to have data points. Ideally, use real user monitoring to measure performance as experienced by your own users. Additionally, you can analyze CloudFront access logs using Amazon Athena, or another HTTP server log analysis tool of your choice, to gain more insight into your website’s performance in near real time and make further adjustments as needed.
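
For example, a simple Athena query over the access logs can show how requests split between cache hits, misses, and errors. The sketch below assumes a cloudfront_logs table created from the sample DDL in the AWS documentation; the database name and S3 output location are placeholders to replace with your own.

```python
import boto3

athena = boto3.client("athena")

# Count requests by cache result type (Hit, Miss, Error, ...).
QUERY = """
SELECT result_type, COUNT(*) AS requests
FROM cloudfront_logs
GROUP BY result_type
ORDER BY requests DESC
"""

athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},                       # assumed database name
    ResultConfiguration={"OutputLocation": "s3://my-athena-query-results/"},  # placeholder bucket
)
```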