Serving compressed WebGL websites using Amazon CloudFront, Amazon S3 and AWS Lambda

In this post, you will learn how to deliver compressed WebGL websites to your end users. When requested webpage objects are compressed, the transfer size is reduced, leading to faster downloads, lower cloud storage fees, and lower data transfer fees. Improved load times also directly influence the viewer experience and retention, which will help you improve website conversion and discoverability.

Using WebGL, you can make your website more immersive while still being accessible via a browser URL. Example use cases include virtual reality applications (such as online education, 3D product visualization, and customized e-commerce), virtual twins (an online replica of a piece of equipment or a facility), and many more.

Following this solution, you will learn how to build WebGL websites that serve compressed static content (HTML/CSS/JS) at the edge using Amazon CloudFront. You will also see how to compress 3D models at CloudFront’s origin (this blog post example uses Amazon Simple Storage Service (Amazon S3) to host the website content), using AWS Lambda to automatically compress the objects uploaded to S3.

Background on compression and WebGL

HTTP compression is a capability that can be built into web servers and web clients to improve transfer speed and bandwidth utilization. This capability is negotiated between the server and the client using an HTTP header which may indicate that a resource being transferred, cached, or otherwise referenced is compressed. On the server side, Amazon CloudFront supports the Content-Encoding header.

On the client side, most browsers today advertise brotli and gzip support through the Accept-Encoding request header (Accept-Encoding: deflate, br, gzip) and honor the Content-Encoding header in server responses. This means browsers automatically download and decompress content from a web server on the client side before rendering webpages to the viewer.
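For illustration, the following minimal Python sketch performs the same negotiation a browser does. The URL is a placeholder, and urllib is used because it does not decompress responses automatically, which makes the mechanism explicit:

# Minimal sketch of HTTP compression negotiation; the URL is a placeholder.
import gzip
import urllib.request

req = urllib.request.Request(
    "https://example.com/",                      # placeholder endpoint
    headers={"Accept-Encoding": "gzip"},         # client advertises supported encodings
)

with urllib.request.urlopen(req) as resp:
    body = resp.read()
    # The server indicates the encoding it applied via Content-Encoding.
    if resp.headers.get("Content-Encoding") == "gzip":
        body = gzip.decompress(body)             # a browser performs this step automatically

print(len(body), "bytes after decompression")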

WebGL supports certain 3D asset formats, examples include:

  • glTF: a standard file format for three-dimensional scenes and models. A glTF file uses one of two possible file extensions, .gltf or .glb. A .gltf file may be self-contained or may reference external binary and texture resources, while a .glb file is entirely self-contained.
  • Babylon.js files: Babylon.js uses a JSON file format for describing scenes.
  • Wavefront .obj: OBJ is a geometry definition file format first developed by Wavefront Technologies for its Advanced Visualizer animation package. The file format is open and has been adopted by other 3D graphics application vendors.

Overview of solution

You are provided a deployable demo for this solution and a walkthrough of the configuration. The example application uses Amazon S3, a cloud object storage service; AWS Lambda, a serverless compute service that runs your code based on the incoming request or event; and Amazon CloudFront, a fast content delivery network (CDN) that works seamlessly with any AWS origin, such as Amazon S3. This is the solution architecture:

Figure 1: Website and 3D assets compression using CloudFront and Lambda

High-level steps:

  1. You upload objects to an S3 bucket that is used for website hosting.
  2. When an object is uploaded to the source S3 bucket, S3 triggers a Lambda function.
  3. Lambda compresses the 3D model file and re-uploads it to the bucket under a different folder.
  4. CloudFront serves the website from the S3 bucket origin.
  5. Client browsers are served with the compressed website (HTML, CSS, JS and 3D assets). Browsers automatically decompress content at the client side.

Demo using AWS Serverless Application Model (SAM)

View source code

Prerequisites

To deploy the example application, you need:

  • AWS credentials that provide the necessary permissions to create the resources. This example uses admin credentials.
  • The AWS SAM CLI installed.
  • The GitHub repository cloned locally.

Deployment steps

  • Navigate to the cloned repo directory. Alternatively, use the sam init command and paste the repo URL:

AWS SAM init prompt

  • Build the AWS SAM application:

sam build

  • Deploy the AWS SAM application:

sam deploy --guided

Note: The SAM guided deployment asks for a globally unique name for the S3 bucket to be created; make sure you use lowercase letters and numbers only.

  • After the deployment succeeds, note the CloudFront URL in the SAM deployment outputs section.

Note: The deployment takes about 15 minutes.

Testing the solution

  1. In the AWS Management Console, go to your newly created S3 bucket.
  2. From the cloned repo, upload all files under website/ to the root of the S3 bucket (a boto3 sketch of this step follows the list).
  3. Open your browser, enable developer tools and visit the CloudFront URL.
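If you prefer to script step 2 instead of using the console, a minimal boto3 sketch could look like the following. The bucket name is a placeholder for the bucket created by the SAM deployment, and setting ContentType is an optional nicety rather than a requirement of the demo:

# Upload everything under website/ to the root of the S3 bucket.
import mimetypes
import os

import boto3

s3 = boto3.client("s3")
bucket = "my-webgl-site-bucket"  # placeholder: use the bucket created by the deployment

for root, _, files in os.walk("website"):
    for name in files:
        path = os.path.join(root, name)
        key = os.path.relpath(path, "website").replace(os.sep, "/")
        content_type, _ = mimetypes.guess_type(path)
        extra_args = {"ContentType": content_type} if content_type else {}
        s3.upload_file(path, bucket, key, ExtraArgs=extra_args)
        print("uploaded", key)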

Figure 2: Simple Babylon.js page showing glb model with 46% compression. “Grace” from Amazon Sumerian Hosts is licensed under CC BY-SA 4.0.

Putting it all together, you can see that most of the website content (3D assets and static content) is served to the client in compressed form (brotli and gzip).

Walkthrough

This stack creates and configures the following resources:

CloudFront distribution

A CloudFront distribution is created with the S3 bucket as its origin. In this solution, CloudFront gives you the following benefits (a configuration sketch follows the list):

  1. Delivers the website over HTTPS with the default CloudFront certificate. HTTPS secures your website and is required by browsers if you want to enable audio support (for example, an online meeting or a conversational VR host).
  2. Enables automatic object compression at the edge for static website files (e.g., .html, .js, and .css) that are less than 10 MB in size.
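The demo's SAM template configures these settings for you. For illustration only, a hedged boto3 sketch of a cache policy that supports compressed variants at the edge might look like the following; the policy name is a placeholder, not the demo's exact configuration:

# Create a cache policy that caches separate gzip/brotli variants per encoding.
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_cache_policy(
    CachePolicyConfig={
        "Name": "webgl-compression-policy",   # placeholder name
        "MinTTL": 0,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            # Normalize Accept-Encoding so compressed objects are cached per encoding
            "EnableAcceptEncodingGzip": True,
            "EnableAcceptEncodingBrotli": True,
            "HeadersConfig": {"HeaderBehavior": "none"},
            "CookiesConfig": {"CookieBehavior": "none"},
            "QueryStringsConfig": {"QueryStringBehavior": "none"},
        },
    }
)
# The distribution's default cache behavior would reference this policy, set
# "Compress": True (automatic edge compression of eligible static files) and
# "ViewerProtocolPolicy": "redirect-to-https" (serve the site over HTTPS).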

S3 bucket

An S3 bucket whose name is configurable at deployment time, with the following configuration:

  • Private access
  • Bucket policy with access restricted to the CloudFront origin access identity (OAI); see the illustrative policy statement below this list
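For illustration, a bucket policy of this kind restricts object reads to the OAI. The bucket name and OAI ID are placeholders; the demo's SAM/CloudFormation template creates the real policy:

# Restrict object reads to the CloudFront origin access identity (OAI).
import json

import boto3

bucket = "my-webgl-site-bucket"   # placeholder
oai_id = "E1EXAMPLEOAIID"         # placeholder OAI ID

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))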

Lambda function

The function uses the Python runtime with the following code logic (a minimal handler sketch follows the list):

  • Uses Lambda’s /tmp/ directory as a temporary read/write working space. This space has a fixed size of 512 MB.
  • Compresses 3D asset files using gzip at the default compression level.

Note: you can experiment with different compression levels; however, higher compression levels bring diminishing returns in file size reduction and increase browser decompression time. Decompression adds overhead on the client, but the compression benefits typically outweigh it.

Browser finish time = Asset Download time + client browser decompression time of the asset

  • Adds metadata (Content-Encoding: gzip) to the file.
  • Maintains the original filename extension of the asset (i.e., it does not add a .gz suffix). This is important so that compression stays transparent to your 3D engine JavaScript code, which keeps referencing the original file names.

Figure 3: an example of the Babylon.js Append function using the original file extension

  • Uploads the compressed file(s) back to S3 and deletes the original uncompressed file(s).
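Putting those points together, here is a minimal sketch of such a handler. The handler name, the destination prefix (compressed/), and the loop over event records are illustrative assumptions rather than the exact code in the repository:

# Minimal sketch of the compression function (assumed names and prefix).
import gzip
import os
import shutil
import urllib.parse

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Skip objects that are already outputs, to avoid re-triggering loops.
        if key.startswith("compressed/"):
            continue

        # Work inside /tmp (the 512 MB scratch space mentioned above).
        local_src = os.path.join("/tmp", os.path.basename(key))
        local_dst = local_src + ".gz.tmp"
        s3.download_file(bucket, key, local_src)

        # gzip at the default compression level; the original extension is kept.
        with open(local_src, "rb") as f_in, gzip.open(local_dst, "wb") as f_out:
            shutil.copyfileobj(f_in, f_out)

        # Re-upload under a different folder with Content-Encoding metadata so the
        # header is returned to the browser, then delete the uncompressed original.
        dest_key = "compressed/" + key               # assumed prefix for illustration
        s3.upload_file(
            local_dst,
            bucket,
            dest_key,
            ExtraArgs={"ContentEncoding": "gzip"},
        )
        s3.delete_object(Bucket=bucket, Key=key)

    return {"status": "done"}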

Lambda is configured with S3 triggers that automatically invoke your function upon object upload. Triggers are added for all 3D asset types in scope; in this example, suffix filters cover three file types (.gltf, .glb, and .babylon). This configuration is important to avoid triggering the Lambda function for every file in your project. An equivalent configuration expressed with boto3 is sketched after Figure 4.

Figure 4: Lambda function S3 triggers configuration
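In the demo, these triggers are generated by the SAM template. For illustration, an equivalent configuration expressed with boto3 could look like this; the bucket name and function ARN are placeholders:

# Invoke the function on object creation, filtered by 3D asset suffixes.
import boto3

s3 = boto3.client("s3")

suffixes = [".gltf", ".glb", ".babylon"]   # 3D asset types in scope

s3.put_bucket_notification_configuration(
    Bucket="my-webgl-site-bucket",          # placeholder
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:region:111122223333:function:CompressAssets",  # placeholder
                "Events": ["s3:ObjectCreated:*"],
                # One suffix filter per configuration keeps the function from
                # firing for every file in the project.
                "Filter": {"Key": {"FilterRules": [{"Name": "suffix", "Value": suffix}]}},
            }
            for suffix in suffixes
        ]
    },
)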

The function’s memory is set to 512 MB and its timeout to 60 seconds. With this configuration, it takes an average of 2 seconds to compress a 13 MB asset. Configure these parameters according to your needs. The necessary permissions are added for the S3 bucket to trigger Lambda, as well as for Lambda to fetch and save objects in the configured S3 bucket.
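If you want to tune those limits after deployment, a minimal boto3 sketch might look like this (the function name is a placeholder):

# Adjust memory and timeout to match your asset sizes.
import boto3

boto3.client("lambda").update_function_configuration(
    FunctionName="CompressAssets",   # placeholder function name
    MemorySize=512,                  # MB; raise for larger 3D assets
    Timeout=60,                      # seconds
)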

Project example

For a larger scale project, such as my VR Child Monitor (you can also navigate the actual 3D scene), analysis shows a 57.4% improvement in the website loading ‘Finish’ time. The website is entirely loaded in 27 seconds (with compressed assets) compared to 56 seconds (with uncompressed assets). Data transfer out (in MB) from AWS to viewers is reduced by 54.3%.

Figure 5: raw/uncompressed assets served to end user

Figure 6: compressed assets served to end user

Note: Images (.png, .jpg, etc.) do not benefit much from compression because their formats are already compressed. Adding gzip on top yields minimal improvements of 3-4%.

Cleaning up

To avoid incurring future charges, delete the resources as follows:

  • Delete all objects in the S3 bucket.
  • Delete the CloudFormation stack.

Conclusion

This post shows how to deploy a compression pipeline for WebGL-based websites in the AWS Cloud. The pipeline is decoupled and serverless, using Amazon S3, AWS Lambda, and Amazon CloudFront.

The demo shows how content compression helps serve your website faster to end users, which becomes especially important with larger scenes. It also saves storage and data transfer costs. No JavaScript code changes are required, and the pipeline is fully automated.

Check out my 3D projects using Raspberry Pi, S3, Amazon Lex, Amazon Polly, and AWS IoT Core: the Kid Monitor demo and the Air Pollution demo.

About the author

Ahmed ElHaw is a Sr. Solutions Architect at Amazon Web Services (AWS) with a background in telecom, web development and design, spatial computing, and AWS serverless technologies. He enjoys providing technical guidance to customers, helping them architect and build solutions that make the best use of AWS. Outside of work he enjoys coding, spending time with his kids, and playing video games.