AWS Architecture Blog
Dynamic Request Routing in Multi-tenant Systems with Amazon CloudFront
In this blog post, we share how OutSystems designed a globally distributed, serverless request routing service for their multi-tenant architecture, and how you can benefit from a managed solution that is scalable and requires low operational effort. Specifically, we explain how to select the origin serving an HTTP/S request using Lambda@Edge, including the capability to call an external API to make the selection dynamically.
Lambda@Edge is an extension of AWS Lambda that lets you run functions to customize the content that Amazon CloudFront delivers. Lambda@Edge scales automatically, from a few requests per day to many thousands of requests per second. Processing requests at AWS locations closer to the viewer instead of on origin servers significantly reduces latency and improves the user experience. You can run Lambda functions when the following CloudFront events occur:
- When CloudFront receives a request from a viewer (viewer request)
- Before CloudFront forwards a request to the origin (origin request)
- When CloudFront receives a response from the origin (origin response)
- Before CloudFront returns the response to the viewer (viewer response)

Figure 1. CloudFront events that can trigger Lambda@Edge functions
There are many uses for Lambda@Edge processing; you can find more details in the service documentation.
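To make the event model concrete, the minimal Python sketch below (not taken from OutSystems' implementation) shows the shape of the event a Lambda@Edge function receives at any of these trigger points and returns the request unchanged:

```python
# Minimal Lambda@Edge handler sketch (Python). The same event shape is used
# for all four trigger points; the 'cf' record carries the request and, for
# response events, the response.
def handler(event, context):
    record = event['Records'][0]['cf']
    request = record['request']

    # config.eventType identifies the trigger, for example 'viewer-request'
    # or 'origin-request'.
    event_type = record['config']['eventType']
    print(f"{event_type}: {request['method']} {request['uri']}")

    # Returning the request unchanged lets CloudFront continue processing it.
    return request
```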
Challenges in OutSystems’ multi-tenant architecture
To support some changes in how OutSystems handles multi-tenancy, they need to be able to dynamically select the origin for each request based on its host header and path.
OutSystems has a global presence and handles an increasing volume of requests every day. Deploying a custom reverse proxy solution globally would add development and operational overhead.
OutSystems' use case required a system for advanced routing. The system had to be secure, high-performing, and easy to operate. Finally, it needed to integrate quickly with OutSystems' existing deployment and orchestration tooling.
Architecture for the serverless dynamic origin selection solution
In CloudFront and Lambda@Edge, OutSystems found a fully managed serverless solution that will scale with their business and allow them to focus on their customers’ needs.
Natively, CloudFront can route requests to different origins based on path patterns, which can be configured in cache behaviors. For more advanced routing, customers can implement their own logic using Lambda@Edge.
Let’s take a look at the architecture OutSystems designed.

Figure 2. Example architecture for a Dynamic Request Routing solution
In this configuration, end users, regardless of their location, send requests to a common CloudFront distribution. Once a request arrives at CloudFront, it is evaluated against the two configured cache behaviors:
- The first behavior serves static objects from Amazon Simple Storage Service (Amazon S3) that are common to all tenants. This cache behavior is optimized for performance and caches static resources.
- The second behavior forwards requests to the backend service. On this behavior, a Lambda@Edge function is configured on the origin request event to implement the origin selection logic; both behaviors are sketched below.
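As an illustration, here is an abbreviated sketch, expressed as a boto3-style DistributionConfig fragment, of how these two behaviors and the Lambda@Edge association could be declared. The origin IDs, path pattern, and function ARN are placeholders, and many required fields are omitted for brevity:

```python
# Abbreviated CloudFront DistributionConfig fragment (boto3 shape).
# Origin IDs, the path pattern, and the function ARN are placeholders;
# origins, TTLs, the viewer certificate, and other required fields are omitted.
distribution_config_excerpt = {
    # Behavior 1: static assets shared by all tenants, served from S3 and
    # cached aggressively with the managed CachingOptimized policy.
    'CacheBehaviors': {
        'Quantity': 1,
        'Items': [{
            'PathPattern': '/static/*',
            'TargetOriginId': 'shared-static-assets-s3',
            'ViewerProtocolPolicy': 'redirect-to-https',
            'CachePolicyId': '658327ea-f89d-4fab-a63d-7e88639e58f6',  # CachingOptimized
        }],
    },
    # Behavior 2 (default): everything else goes to the backend service; a
    # Lambda@Edge function on the origin-request event selects the origin.
    'DefaultCacheBehavior': {
        'TargetOriginId': 'default-backend',
        'ViewerProtocolPolicy': 'redirect-to-https',
        'LambdaFunctionAssociations': {
            'Quantity': 1,
            'Items': [{
                # Lambda@Edge requires a published function version, not $LATEST.
                'LambdaFunctionARN': 'arn:aws:lambda:us-east-1:123456789012:function:origin-selector:1',
                'EventType': 'origin-request',
            }],
        },
    },
}
```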
Several multi-tenant clusters run in different AWS Regions to serve these requests. To route users' requests properly, a Lambda@Edge function evaluates the request's host header and/or path and, based on that, chooses the corresponding origin cluster. The request is then forwarded upstream by CloudFront.
Choosing the right origin is based on an API call made to an Amazon DynamoDB table where OutSystems stores the mappings between their customers and different backend clusters. To improve performance, OutSystems implemented some of the best practices mentioned in the Leveraging external data in Lambda@Edge blog post:
- Lambda@Edge temporarily caches the API call results, avoiding the need to make an API call for every request.
- Additionally, the DynamoDB global tables feature is used, and Lambda@Edge makes the API call to the nearest Region to reduce latency.
The following code snippet can be used as guidance.
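The snippet below is an illustrative sketch rather than OutSystems' production code. It assumes a Python Lambda@Edge function attached to the origin-request event and a hypothetical DynamoDB table named tenant-origin-mapping, keyed by the host header, whose originDomain attribute holds the backend domain for each tenant:

```python
import boto3

# In-memory cache defined outside the handler so it is reused across
# invocations while the execution environment stays warm; most requests
# therefore avoid a DynamoDB round trip. A real implementation would also
# expire entries after a short TTL.
origin_cache = {}

# Hypothetical table name and key schema. With DynamoDB global tables, the
# function would query the replica closest to the Region it executes in
# (exposed in the AWS_REGION environment variable); a fixed Region is used
# here to keep the sketch short.
table = boto3.resource('dynamodb', region_name='us-east-1').Table('tenant-origin-mapping')


def lookup_origin(host):
    """Return the origin domain name mapped to a tenant's host header."""
    if host in origin_cache:
        return origin_cache[host]
    # Error handling for unknown tenants is omitted for brevity.
    item = table.get_item(Key={'host': host})['Item']
    origin_cache[host] = item['originDomain']
    return item['originDomain']


def handler(event, context):
    request = event['Records'][0]['cf']['request']
    host = request['headers']['host'][0]['value']

    # The real routing logic may also take the request path (request['uri'])
    # into account; this sketch keys the lookup on the host header only.
    origin_domain = lookup_origin(host)

    # Rewrite the request's origin so CloudFront forwards it to the selected
    # cluster, and keep the Host header consistent with the new origin.
    request['origin'] = {
        'custom': {
            'domainName': origin_domain,
            'port': 443,
            'protocol': 'https',
            'path': '',
            'sslProtocols': ['TLSv1.2'],
            'readTimeout': 30,
            'keepaliveTimeout': 5,
            'customHeaders': {},
        }
    }
    request['headers']['host'] = [{'key': 'Host', 'value': origin_domain}]
    return request
```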
After this step, the origin domain name is selected and the request is forwarded upstream.
Overall, with this architecture OutSystems met their initial requirements:
- Security
  - Stronger distributed denial of service (DDoS) protection by using the global presence of CloudFront
- Performance
  - Transport layer security (TLS) termination on CloudFront
  - Cache optimization for multiple tenants
- Operation
  - Low maintenance
  - Avoid logic replication
- Reliability
  - Global presence (200+ Points of Presence across the globe)
Conclusion
By using CloudFront and Lambda@Edge together with AWS services like DynamoDB, you can build high-performing distributed web applications for your use cases. In this blog post, we shared how OutSystems was able to dynamically route requests to their multi-tenant application, while achieving global distribution, service availability, and the agility to operate at scale.
About OutSystems
OutSystems is an AWS Advanced Tier Technology Partner that helps developers build applications quickly and efficiently. They provide a visual, model-driven development environment with AI-based assistance to ensure applications are built efficiently. Their platform services, also enhanced with AI, automate and improve the application lifecycle so that applications can be deployed and managed easily.