AWS Startups Blog

Build a Hybrid Architecture for Local Compliance and Global Scalability

By Saud Albazei, Alexandru Costescu, and Anshuman Nanda

Startups at any stage face regulatory challenges when expanding to new markets or complying with data residency regulations in their home market. This puts them at a disadvantage compared to established enterprises, which can afford the upfront capital investment of operating their own physical facilities with AWS Outposts in countries where an AWS Region is not yet available. In this post, we explore an alternative: running workloads across multiple infrastructures in a hybrid approach to comply with local data residency requirements, while using AWS Regions for global scalability.

Understanding the regulation is fundamental to implementing a compliant technical solution. Different countries impose different regulatory requirements, and some regulate individual industries differently, including logistics, FinTech, HealthTech, and EdTech. In most cases, the regulated data is a subset of the entire workload: for example, Personally Identifiable Information (PII), payment or financial transactions, or Short Message Service (SMS) traffic. Once your local regulations have been clearly identified, you can use a few AWS services to deploy, operate, and monitor all of your infrastructures from a centralized location.
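Because the regulated data is usually only a subset of the workload, it helps to make the classification explicit in code. The sketch below shows one minimal way to partition a request payload into regulated and non-regulated parts; the field names and categories are illustrative assumptions, not taken from any specific regulation.

```python
# Illustrative mapping of fields to regulatory categories. In a real
# system this would be derived from your own compliance review.
REGULATED_FIELDS = {
    "national_id": "PII",
    "phone_number": "PII",
    "card_number": "payment",
    "transaction_amount": "financial",
}

def partition_payload(payload: dict) -> tuple[dict, dict]:
    """Split a request payload into (regulated, unregulated) parts."""
    regulated = {k: v for k, v in payload.items() if k in REGULATED_FIELDS}
    unregulated = {k: v for k, v in payload.items() if k not in REGULATED_FIELDS}
    return regulated, unregulated
```

A split like this lets the regulated part stay in the local deployment while the rest flows to the cloud.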

Hybrid architecture with a single local data center.

This hybrid architecture gives startups the freedom to provision any infrastructure, as long as the servers run a supported operating system such as Amazon Linux 2, Bottlerocket, Ubuntu, RHEL, SUSE, Debian, CentOS, or Fedora. This flexibility lets you run regulated workloads anywhere, more quickly and cost-effectively.

Primary Approaches for Hybrid Architectures

There are two primary scenarios in which this architecture can be implemented. The first is operating in a single country with a single entry point, such as a local API gateway that routes requests based on data classification. The major benefit of this approach is that clients continue to communicate with the backend over a single endpoint and API.

Hybrid architecture with a single local data center where non-regulated and dynamic API requests are routed to the Cloud using a local API gateway.
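The routing rule behind this first scenario can be sketched as a small function: requests whose paths touch regulated data stay on the local deployment, and everything else is forwarded to the cloud Region. The path prefixes and upstream addresses below are made-up assumptions for the example.

```python
# Hypothetical classification of API paths: these prefixes are assumed
# to carry regulated data and must be served locally.
REGULATED_PREFIXES = ("/payments", "/users/pii")

LOCAL_UPSTREAM = "http://local-backend.internal:8080"   # local data center
CLOUD_UPSTREAM = "https://api.cloud.example.com"        # AWS Region

def select_upstream(path: str) -> str:
    """Decide where the local API gateway should forward a request."""
    if path.startswith(REGULATED_PREFIXES):
        return LOCAL_UPSTREAM
    return CLOUD_UPSTREAM
```

In practice this logic would live in the gateway's routing configuration rather than application code, but the decision it makes is the same.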

The second scenario is operating in multiple countries, with multiple entry points and multiple API gateways. In this approach, the client must contain the routing logic needed to communicate with the services in the user's respective country. Consider, for example, a FinTech startup that has to store users' financial transactions in the country where each user resides. Each local deployment has a dedicated endpoint, and the client communicates with the endpoint that matches the location set in the user's profile.

Hybrid architecture with multiple local data centers.
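Client-side endpoint selection for the multi-country scenario can be as simple as a lookup keyed by the country in the user's profile, with a global endpoint as the fallback for countries without residency requirements. The country codes and hostnames here are placeholder assumptions.

```python
# Placeholder endpoints: one per country with a data residency
# requirement, plus a global fallback hosted in an AWS Region.
COUNTRY_ENDPOINTS = {
    "SA": "https://api.sa.example.com",
    "AE": "https://api.ae.example.com",
}
GLOBAL_ENDPOINT = "https://api.example.com"

def endpoint_for(user_country: str) -> str:
    """Return the API endpoint for the country in the user's profile."""
    return COUNTRY_ENDPOINTS.get(user_country, GLOBAL_ENDPOINT)
```

Keeping this table small and data-driven makes it easy to add a new country's deployment without changing client logic.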

All of these local and cloud-based deployments are managed from a centralized location. In the following demonstration, you will learn how to deploy a sample containerized service locally, focusing on the following tools:

- Amazon ECS Anywhere
- AWS Systems Manager
- Amazon CloudWatch

The first step is to provision the local servers with the desired capacity and a supported OS. Once they are up and running, you can launch your first Amazon ECS Anywhere cluster. ECS Anywhere is a feature of Amazon ECS that lets you run and manage containerized workloads on your own infrastructure.

Amazon ECS Console. Launching a new cluster.

Once the ECS cluster is created, you can register local servers to it as ECS instances, adding the provisioned servers to the cluster as hosts. The control plane resides in the cloud and continues to be fully managed by Amazon ECS. During registration, the console gives you a command that executes a shell script, which installs all required software and registers the instance with AWS Systems Manager and the ECS cluster. The local servers host the data plane and an ECS agent. The control plane connects to the hosts over a secure TLS connection, and no data is transferred between the control plane and a host other than deployment-related metrics and the health status of the services.
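The registration command generated by the console follows the pattern below: create a Systems Manager activation, then run the ECS Anywhere install script on each local server with the returned activation ID and code. The IAM role name, Region, cluster name, and activation values are placeholders; use the exact command the console generates for your cluster.

```shell
# Create an AWS Systems Manager activation for the external instances
# (run from a machine with AWS credentials; role name is a placeholder).
aws ssm create-activation \
  --iam-role ecsAnywhereRole \
  --registration-limit 5

# On each local server, download and run the ECS Anywhere install script,
# passing the ActivationId and ActivationCode returned above.
curl --proto "https" -o "/tmp/ecs-anywhere-install.sh" \
  "https://amazon-ecs-agent.s3.amazonaws.com/ecs-anywhere-install-latest.sh"
sudo bash /tmp/ecs-anywhere-install.sh \
  --region eu-west-1 \
  --cluster my-hybrid-cluster \
  --activation-id <activation-id> \
  --activation-code <activation-code>
```

The script installs the SSM Agent, the ECS agent, and Docker, then registers the host with both services.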

Amazon ECS Console. Registering external instances.

Local instance SSH session to install the required software and register the instance with the ECS Cluster.

Amazon ECS Console showing a launched cluster and a registered instance.

You now have an ECS cluster with registered local instances, ready for a deployment. Containerized deployments in ECS are packaged as tasks, which share many characteristics with Kubernetes pods. You can deploy a single task, or multiple tasks grouped into services distributed across multiple instances for high availability and managed scalability. To deploy the local service, you first need to create a task definition for it. Navigate to Task Definitions and create a new definition of launch type external. Fill in the necessary information and add the containers and their configuration; the container images can be hosted in Amazon Elastic Container Registry (Amazon ECR) or any other container registry service, such as Docker Hub.
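An external task definition might look like the JSON fragment below. Note that `requiresCompatibilities` is set to `EXTERNAL`, and the network mode is `bridge` (the `awsvpc` mode is not available on external instances). The family name, image URI, and port numbers are placeholders for this example.

```json
{
  "family": "local-orders-service",
  "requiresCompatibilities": ["EXTERNAL"],
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "orders",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/orders:latest",
      "memory": 512,
      "essential": true,
      "portMappings": [{ "containerPort": 8080, "hostPort": 8080 }]
    }
  ]
}
```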

Amazon ECS Console. Creating a new Task Definition.

Your ECS cluster is now ready for the deployment. Create the service with launch type 'external,' then set the service name, desired number of tasks, deployment type, and placement strategy. Once the service is created, the Amazon ECS control plane works with the ECS agents deployed on each local server to download the container images and run them, which includes checking their health and reporting their state in the AWS console.
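The same service can be created from the CLI instead of the console; the command template below assumes the placeholder cluster, service, and task definition names used in this walkthrough.

```shell
# Create an ECS service of launch type EXTERNAL on the hybrid cluster
# (cluster, service, and task definition names are placeholders).
aws ecs create-service \
  --cluster my-hybrid-cluster \
  --service-name orders \
  --task-definition local-orders-service \
  --desired-count 2 \
  --launch-type EXTERNAL
```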

Amazon ECS Console. Run a task.

Local instance showing deployed and running containers.

You now have your containerized application running on your locally provisioned servers. The application is ready to accept requests through its exposed ports from a frontend API gateway, reverse proxy, or load balancer such as NGINX or KrakenD. Additionally, given the ephemeral nature of containers, the application will need to persist data such as databases or media objects. Amazon S3 can be used for object storage if the media is not regulated, or with client-side encryption applied before uploading, if the regulation allows it. For the database, Amazon RDS on VMware can be used locally, or the database can be managed by the customer on a dedicated server that is managed by AWS Systems Manager and monitored by Amazon CloudWatch. Together, these two services help manage the server fleet, automate operations, and offer a centralized observability solution for all workloads.
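Fronting the local tasks with NGINX could look like the configuration fragment below, which forwards API traffic to the host port published by the task. The hostname, path, and port are placeholder assumptions; TLS certificate directives are omitted for brevity.

```nginx
# Illustrative NGINX front end for the local deployment.
upstream orders_backend {
    server 127.0.0.1:8080;   # host port published by the local ECS task
}

server {
    listen 443 ssl;
    server_name api.local.example.com;   # placeholder hostname

    location /orders/ {
        proxy_pass http://orders_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```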

Observability for Hybrid Deployments

All registered servers, whether registered with the ECS cluster or with Systems Manager, can be monitored and managed through the AWS console. This includes observing performance metrics for server health and connecting to the servers over SSH. The CloudWatch agent also lets you collect system-level metrics and logs from your servers and display them in CloudWatch dashboards. The agent is highly configurable, so you can choose which data is uploaded to the cloud and avoid uploading regulated data. In addition, CloudWatch metrics can trigger Lambda functions designed to respond to events such as failover, scaling the local infrastructure, and more.
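A CloudWatch agent configuration for the local servers might look like the fragment below: it collects a few system-level metrics and ships one application log file, and by listing only specific files you can keep regulated logs off the cloud. The file path and log group name are placeholders for this example.

```json
{
  "metrics": {
    "metrics_collected": {
      "mem": { "measurement": ["mem_used_percent"] },
      "disk": { "measurement": ["used_percent"], "resources": ["/"] }
    }
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/orders/app.log",
            "log_group_name": "local-orders-service"
          }
        ]
      }
    }
  }
}
```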

Conclusion

This hybrid approach lets you create the necessary services on a combination of AWS-managed and customer-managed infrastructure while maintaining a single observability location and using the same AWS services and tools to deploy and manage the workloads. Segregating regulated microservices from unregulated ones is key to a successful hybrid architecture that is scalable and highly available. Startups can expand quickly and cost-effectively with this approach while continuing to benefit from the continuously growing global AWS infrastructure.

Saud Albazei Saud Albazei is a Solutions Architect at Amazon Web Services. He advises startups from early stages through helping them go public. He leverages his experience to help startups innovate and bring ideas to life. He has a passion for building distributed and scalable systems using serverless technologies.
Alexandru Costescu Alexandru Costescu is a Startup Solutions Architect based out of Bucharest, Romania. He is passionate about all things tech and about designing systems that can take full advantage of the cloud and embrace the DevOps culture.
Anshuman Nanda Anshuman Nanda is a Sr. Startup Solutions Architect based out of Dubai, UAE. He enjoys learning new technologies and helping customers solve complex technical problems by providing solutions using AWS products and services.