Serving compressed WebGL websites using Amazon CloudFront, Amazon S3 and AWS Lambda
In this post, you will learn how to deliver compressed WebGL websites to your end users. When requested webpage objects are compressed, the transfer size is reduced, leading to faster downloads, lower cloud storage fees, and lower data transfer fees. Improved load times also directly influence the viewer experience and retention, which will help you improve website conversion and discoverability.
Using WebGL, you can make your website more immersive while still being accessible via a browser URL. Example use cases include virtual reality applications (such as online education, 3D product visualization, and customized e-commerce), virtual twins (making an online replica of equipment or a facility), and many more.
This solution shows how to build WebGL websites that serve compressed static content (HTML/CSS/JS) at the edge using Amazon CloudFront, how to compress 3D models at CloudFront's origin (this blog post example uses Amazon Simple Storage Service (Amazon S3) to host the website content), and how to use AWS Lambda to automatically compress the objects uploaded to S3.
Background on compression and WebGL
HTTP compression is a capability built into web servers and web clients to improve transfer speed and bandwidth utilization. It is negotiated between the server and the client using HTTP headers, which indicate that a resource being transferred, cached, or otherwise referenced is compressed. On the server side, Amazon CloudFront supports the Content-Encoding header.
On the client side, most browsers today support brotli and gzip compression, advertising it through the Accept-Encoding request header (for example, Accept-Encoding: deflate, br, gzip), and handle the server's response headers accordingly. This means browsers automatically download compressed content from a web server and decompress it on the client side before rendering webpages to the viewer.
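The exchange described above can be sketched in a few lines of Python (gzip only here for brevity; real servers and browsers also negotiate brotli):

```python
# Minimal sketch of HTTP compression negotiation: the client advertises the
# encodings it accepts, the server compresses the body and labels it with
# Content-Encoding, and the client transparently decompresses before use.
import gzip

def server_respond(body: bytes, accept_encoding: str) -> tuple[dict, bytes]:
    """Compress the response body if the client accepts gzip."""
    if "gzip" in accept_encoding:
        return {"Content-Encoding": "gzip"}, gzip.compress(body)
    return {}, body

def client_receive(headers: dict, payload: bytes) -> bytes:
    """Decompress the payload if the server marked it as gzip-encoded."""
    if headers.get("Content-Encoding") == "gzip":
        return gzip.decompress(payload)
    return payload

html = b"<html>" + b"x" * 10_000 + b"</html>"
headers, payload = server_respond(html, "deflate, br, gzip")
assert client_receive(headers, payload) == html
assert len(payload) < len(html)  # transfer size is reduced
```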
WebGL supports certain 3D asset formats, examples include:
- glTF: a standard file format for three-dimensional scenes and models. A glTF asset uses one of two possible file extensions, .gltf or .glb. A .gltf file may be self-contained or may reference external binary and texture resources, while a .glb file is entirely self-contained.
- Babylon.js files: Babylon.js uses a JSON file format for describing scenes.
- Wavefront .obj: OBJ is a geometry definition file format first developed by Wavefront Technologies for its Advanced Visualizer animation package. The file format is open and has been adopted by other 3D graphics application vendors.
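As a small illustration of how these formats differ on disk, binary glTF (.glb) files begin with the 4-byte magic `glTF`, while .gltf and .babylon files are plain JSON text. A sketch of a format sniffer:

```python
# Distinguish the asset formats above by file signature. A GLB file starts
# with the magic bytes "glTF" followed by a uint32 version and uint32 total
# length; .gltf and .babylon scene files are ordinary JSON documents.
import json
import struct

def sniff_asset(data: bytes) -> str:
    if data[:4] == b"glTF":
        version, length = struct.unpack_from("<II", data, 4)
        return f"glb (binary glTF, version {version}, {length} bytes)"
    try:
        json.loads(data)
        return "JSON-based asset (.gltf or .babylon scene)"
    except ValueError:
        return "unknown"

glb_header = b"glTF" + struct.pack("<II", 2, 1024)
print(sniff_asset(glb_header))        # glb (binary glTF, version 2, 1024 bytes)
print(sniff_asset(b'{"scenes": []}')) # JSON-based asset (.gltf or .babylon scene)
```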
Overview of solution
You are provided a deployable demo for this solution and a walkthrough of the configuration. The example application uses Amazon S3, a cloud object storage service, AWS Lambda, a serverless compute service that runs your code based on the incoming request or event, and Amazon CloudFront, a fast content delivery network (CDN) that works seamlessly with any AWS origin, such as Amazon S3. This is the solution architecture:
- You upload objects to an S3 bucket that is used for website hosting.
- When an object is uploaded to the source S3 bucket, S3 triggers a Lambda function.
- Lambda compresses the 3D model file and re-uploads it to the bucket under a different folder.
- CloudFront serves the website from the S3 bucket origin.
- Client browsers are served with the compressed website (HTML, CSS, JS and 3D assets). Browsers automatically decompress content at the client side.
Demo using AWS Serverless Application Model (SAM)
To deploy the example application, you need:
- AWS credentials that provide the necessary permissions to create the resources. This example uses admin credentials.
- The AWS SAM CLI installed.
- Clone the GitHub repository.
- Navigate to the cloned repo directory. Alternatively, use the
sam init command and paste the repo URL:
- Build the AWS SAM application:
sam build
- Deploy the AWS SAM application:
sam deploy --guided
Note: The SAM guided deployment asks for a globally unique name for the S3 bucket it creates. Make sure you use lowercase characters and numbers only.
- After the deployment succeeds, note the CloudFront URL from the SAM deployment outputs section.
Note: The deployment takes about 15 minutes.
Testing the solution
- In the AWS Management console, go to your newly created S3 bucket.
- From the cloned repo, upload all files under website/ to the root of the S3 bucket.
- Open your browser, enable developer tools and visit the CloudFront URL.
Putting it all together, you can see most of the website content (3D assets and static content) is served to the client in compressed form (brotli and gzip).
This stack creates and configures the following resources:
A CloudFront distribution with the following configuration:
- S3 REST regional endpoint as the origin, with access restricted by an Origin Access Identity.
- A managed cache policy = CachingOptimized
- Compress Objects Automatically = True
In this solution, CloudFront will give you the following benefits:
- Delivers the website over HTTPS with the default CloudFront certificate. HTTPS secures your website and is required by browsers if you want to enable audio support (e.g., an online meeting or a conversational VR host).
- Enables automatic object compression at the edge for static website files (e.g., .html, .js, and .css) that are less than 10 MB in size.
An S3 bucket with a configurable bucket name through the deployment, with the following configuration:
- Private access
- Bucket policy with access restricted to CloudFront OAI
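For illustration, a bucket policy restricting reads to the distribution's OAI has the following general shape (the OAI ID and bucket name are placeholders, not the values generated by this stack):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEOAIID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```

With this policy and public access blocked, objects are readable only through the CloudFront distribution, not directly from S3.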
The Lambda function uses the Python runtime with the following code logic:
- Uses Lambda’s ‘/tmp/’ directory as temporary read/write working space. This space provides 512 MB of storage.
- Compresses 3D asset files using gzip at default compression level.
Note: You can experiment with different compression levels; however, increasing the compression level brings diminishing returns in file size reduction while increasing browser decompression time. Decompression adds overhead on the client, but the download-time savings from compression typically outweigh it.
Browser finish time = Asset Download time + client browser decompression time of the asset
- Adds metadata (Content-Encoding: gzip) to the file.
- Uploads the compressed file(s) back to S3 and deletes the original uncompressed file(s).
Lambda is configured with S3 triggers to automatically invoke your function upon object upload. Triggers are added for all 3D asset types in scope; in this example, the suffix filter covers three file types (.gltf, .glb, and .babylon). This configuration is important to avoid triggering the Lambda function for all files in your project.
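In a SAM template, this trigger configuration might look like the following sketch (resource names and the runtime version are illustrative, not the exact template from the repo):

```yaml
# One S3 event per 3D asset suffix, so the function is not invoked
# for unrelated uploads such as .html or .css files.
CompressionFunction:
  Type: AWS::Serverless::Function
  Properties:
    Runtime: python3.12
    Handler: app.lambda_handler
    MemorySize: 512
    Timeout: 60
    Events:
      GlbUpload:
        Type: S3
        Properties:
          Bucket: !Ref WebsiteBucket
          Events: s3:ObjectCreated:*
          Filter:
            S3Key:
              Rules:
                - Name: suffix
                  Value: .glb
      # Repeat with Value: .gltf and Value: .babylon for the other suffixes.
```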
The function’s memory is set to 512 MB, with a timeout of 60 seconds. With this configuration, it takes an average of 2 seconds to compress a 13 MB asset. Configure these parameters according to your needs. The necessary permissions are added for the S3 bucket to trigger Lambda, as well as for Lambda to fetch objects from and save objects to the configured S3 bucket.
For a larger scale project, such as my VR Child Monitor (you can also navigate the actual 3D scene), analysis shows a 57.4% improvement in website loading ‘Finish’ time: the website is entirely loaded in 27 seconds with compressed assets, compared to 56 seconds with uncompressed assets. The data transfer out (in MB) from AWS to viewers is reduced by 54.3%.
Note: Images (.png, .jpg, etc.) do not benefit from this compression because their formats are already compressed. Adding gzip compression on top yields minimal improvements of 3-4%.
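You can demonstrate this quickly in Python. Random bytes stand in for PNG/JPEG payloads here, since already-compressed image data is statistically close to random:

```python
# gzip shrinks repetitive text-like data dramatically, but cannot shrink
# data that is already compressed (approximated here with random bytes).
import gzip
import os

text_like = b"<svg>" + b"coordinates " * 2000 + b"</svg>"
image_like = os.urandom(len(text_like))

print(len(gzip.compress(text_like)) / len(text_like))    # large reduction
print(len(gzip.compress(image_like)) / len(image_like))  # ~1.0, no gain
```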
To avoid incurring future charges, delete the resources as follows:
- Delete all objects in the S3 bucket.
- Delete the CloudFormation stack.
About the author
Ahmed ElHaw is a Sr. Solutions architect at Amazon Web Services (AWS) with background in telecom, web development and design, spatial computing and AWS serverless technologies. He enjoys providing technical guidance to customers, helping them architect and build solutions that make the best use of AWS. Outside of work he enjoys coding, spending time with his kids, and playing video games.