Building a serverless web application architecture for the AWS Secure Environment Accelerator (ASEA)
Government departments work hard to meet required security framework controls for cloud service configuration and deployment, and obtaining an Authority to Operate (ATO) can sometimes take up to 18 months. To assist with this process, Amazon Web Services (AWS) developed the open-source AWS Secure Environment Accelerator (ASEA), a tool designed to help deploy and operate secure multi-account AWS environments.
Designed in consultation with the Canadian Centre for Cybersecurity (CCCS) and the Treasury Board of the Government of Canada, ASEA automates the configuration of AWS services to help meet the CCCS Medium Cloud Security Profile controls. ASEA automation has saved customers three months of effort on average—a key advantage for government teams facing time and labor resource constraints. This post describes how government departments can more simply deploy a web application consisting of a single-page application (SPA), backend API, and database within the ASEA.
The AWS ASEA architecture
Figure 1. An architecture for an application consisting of a single-page application, backend API, and database deployed within the ASEA.
Each ASEA workload AWS account comes with preconfigured AWS networking resources that teams must use. As pictured in Figure 1, these resources are the following:
- Virtual Private Cloud (VPC)/Subnets: App and Data are pairs of subnets, deployed in different Availability Zones, that teams can use for AWS network resources. The App subnet is designed for application tier resources and the Data subnet is designed for database resources.
- Security Groups: App_sg and Data_sg are security groups that exist in each ASEA workload AWS account with preconfigured “Allow Inbound” rules. For example, Data_sg allows common database ports and only accepts traffic from App_sg. These governed security groups cannot be modified; they are protected by Service Control Policies (SCPs). However, teams can create additional security groups that they can manage.
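As a sketch of that last point, a team-managed security group can be created alongside the governed ones with AWS CDK in TypeScript (the language the IaC project discussed later uses). The VPC name, the port, and the lookup by the `App_sg` name are assumptions to illustrate the pattern; confirm the actual names in your workload account:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import { Construct } from 'constructs';

export class WorkloadSecurityStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Look up the preconfigured ASEA workload VPC.
    // 'Dev' is a placeholder; use your account's actual VPC name.
    const vpc = ec2.Vpc.fromLookup(this, 'WorkloadVpc', { vpcName: 'Dev' });

    // Reference (not modify) the governed App_sg, which is protected by SCPs.
    const appSg = ec2.SecurityGroup.fromLookupByName(this, 'AppSg', 'App_sg', vpc);

    // An additional, team-managed security group layered on top of App_sg.
    const teamAppSg = new ec2.SecurityGroup(this, 'TeamAppSg', {
      vpc,
      description: 'Team-managed rules supplementing the governed App_sg',
      allowAllOutbound: true,
    });
    teamAppSg.addIngressRule(appSg, ec2.Port.tcp(8080),
      'Example app port, only from App_sg members');
  }
}
```

Because the governed groups cannot be edited, layering extra team-managed groups like this is the supported way to open application-specific ports.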
VPCs have a connection to an AWS Transit Gateway, where network traffic is routed to a central VPC for egress internet traffic. By default, this traffic flows through next generation firewalls (NGFW), where it is subject to any firewall rules configured on those devices. If the workload API establishes a connection to an external API resource, the network traffic follows this path, as ASEA workload AWS accounts do not have their own internet gateway.
Figure 2 is a high-level reference architecture for ASEA, with red boxes added to highlight the flow of traffic from an ASEA workload account to the internet.
Figure 2. Egress traffic flow originating from the Amazon VPC in the Workload account, traversing through the AWS Transit Gateway in the shared network account, through the next generation firewalls in the perimeter account, and finally out through the internet gateway in the perimeter account.
ASEA’s serverless application architecture main components
Building applications within a secure AWS environment that has guardrails, restrictive networking, and reduced permissions is challenging. The serverless application architecture, as described in Figure 1, accelerates how teams can build with the ASEA. Let’s break down that architecture into three main components: web front-end, backend API, and database.
When deploying a backend HTTPS API (ingress traffic), Amazon API Gateway makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. It can proxy requests to backend HTTP/HTTPS resources running in your Amazon VPC through private integrations using VPC links. In this proposed architecture (Figure 1), HTTPS API requests are sent through Amazon CloudFront to an Amazon API Gateway and proxied to serverless AWS Fargate containers running on Amazon Elastic Container Service (Amazon ECS) in a private VPC. The API Gateway can restrict requests so that only CloudFront can initiate them. This is configured with AWS WAF, using a rule that accepts only requests carrying a matching header and value. This practice is explained in detail in the blog post, “How to enhance Amazon CloudFront origin security with AWS WAF and AWS Secrets Manager.”
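The shape of that WAF rule can be sketched in CDK as a regional web ACL attached to the API Gateway stage. The header name and secret value below are placeholders; as the referenced blog post describes, the real value should be generated and rotated with AWS Secrets Manager rather than hardcoded:

```typescript
import * as wafv2 from 'aws-cdk-lib/aws-wafv2';
import { Construct } from 'constructs';

// A regional web ACL that blocks everything except requests carrying a
// shared-secret header, which CloudFront injects as an origin custom header.
export function buildOriginVerifyAcl(scope: Construct): wafv2.CfnWebACL {
  return new wafv2.CfnWebACL(scope, 'OriginVerifyAcl', {
    scope: 'REGIONAL', // REGIONAL: attach to the API Gateway stage
    defaultAction: { block: {} }, // block requests that did not come via CloudFront
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: 'OriginVerifyAcl',
      sampledRequestsEnabled: true,
    },
    rules: [{
      name: 'AllowCloudFrontOnly',
      priority: 0,
      action: { allow: {} },
      statement: {
        byteMatchStatement: {
          // Placeholder header name; the value should come from Secrets Manager.
          fieldToMatch: { singleHeader: { Name: 'x-origin-verify' } },
          positionalConstraint: 'EXACTLY',
          searchString: 'example-shared-secret-value',
          textTransformations: [{ priority: 0, type: 'NONE' }],
        },
      },
      visibilityConfig: {
        cloudWatchMetricsEnabled: true,
        metricName: 'AllowCloudFrontOnly',
        sampledRequestsEnabled: true,
      },
    }],
  });
}
```

The matching half of the pattern is a CloudFront origin custom header with the same name and value, so direct requests to the API Gateway endpoint are blocked while CloudFront-originated requests pass.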
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. When deploying in the ASEA, an Amazon RDS subnet group should be created that specifies the existing Data subnets. This controls which subnets and Availability Zones the Amazon RDS database will be deployed in. The Data_sg VPC security group can also be used to restrict connections to the database.
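In CDK, this looks roughly like the following. The VPC name, the `Data` subnet group name, and the PostgreSQL engine choice are assumptions for illustration; the key points are pinning the subnet group to the existing Data subnets and attaching the governed Data_sg:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';
import { Construct } from 'constructs';

export class DatabaseStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Look up the preconfigured workload VPC ('Dev' is a placeholder name).
    const vpc = ec2.Vpc.fromLookup(this, 'WorkloadVpc', { vpcName: 'Dev' });

    // Reference the governed Data_sg so only App_sg traffic reaches the DB.
    const dataSg = ec2.SecurityGroup.fromLookupByName(this, 'DataSg', 'Data_sg', vpc);

    // Subnet group pinned to the existing Data subnets (assumed subnet name).
    const subnetGroup = new rds.SubnetGroup(this, 'DataSubnetGroup', {
      vpc,
      description: 'ASEA Data subnets for the workload database',
      vpcSubnets: { subnetGroupName: 'Data' },
    });

    new rds.DatabaseInstance(this, 'AppDatabase', {
      engine: rds.DatabaseInstanceEngine.postgres({
        version: rds.PostgresEngineVersion.VER_15,
      }),
      vpc,
      subnetGroup,
      securityGroups: [dataSg],
      multiAz: true,            // one instance per Data subnet AZ
      publiclyAccessible: false, // no public endpoint in the ASEA
    });
  }
}
```

Using the existing subnets and security group keeps the database inside the boundaries the ASEA has already assessed, rather than creating parallel network paths.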
Once the Amazon RDS database is provisioned, how do development teams connect to it? Here are two options:
- On-premises connectivity: Teams can connect through the Transit Gateway if AWS Site-to-Site VPN or AWS Direct Connect has been established in the AWS environment. A VPC security group would need to be added to the Amazon RDS database to allow connections from an on-premises IP range.
- Via Amazon Elastic Compute Cloud (Amazon EC2): This approach is similar to a bastion host (or jump box), but the instance is not publicly accessible. In the ASEA, preconfigured Amazon EC2 roles and AWS Systems Manager Session Manager are ready for teams to use. A current Amazon Machine Image (AMI), such as Amazon Linux 2, has the SSM Agent preinstalled. By selecting the appropriate AWS Identity and Access Management (IAM) role (or letting the ASEA assign one automatically if none is selected), teams can use Session Manager to securely connect to private Amazon EC2 instances. This method can be combined with SSH port forwarding to proxy local database connections through the SSH connection to an Amazon RDS endpoint. Details on Session Manager and SSH connections are available in the AWS Systems Manager documentation.
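The second option can be sketched in CDK as a small private instance reachable only through Session Manager. The subnet name and instance sizing are assumptions; the important parts are the SSM-enabled instance role and an AMI with the SSM Agent preinstalled:

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as iam from 'aws-cdk-lib/aws-iam';

// Inside a Stack, with `vpc` being the looked-up workload VPC:

// Instance role trusting EC2, with the AWS-managed policy Session Manager needs.
const role = new iam.Role(this, 'SsmInstanceRole', {
  assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
  managedPolicies: [
    iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonSSMManagedInstanceCore'),
  ],
});

new ec2.Instance(this, 'DbAdminInstance', {
  vpc,
  vpcSubnets: { subnetGroupName: 'App' }, // ASEA App subnets (assumed name)
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MICRO),
  machineImage: ec2.MachineImage.latestAmazonLinux2(), // SSM Agent preinstalled
  role,
});
```

Once deployed, teams connect through Session Manager (no inbound ports open, no public IP) and tunnel database traffic over SSH port forwarding to the Amazon RDS endpoint as described above.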
Putting the pieces together
To help customers accelerate the adoption of this architecture, the infrastructure as code (IaC) can be found in this GitHub repository. The project uses the AWS Cloud Development Kit (AWS CDK) to define the AWS resources in TypeScript. Development teams can quickly adopt the project by updating its configuration to point to the location of an SPA build folder (for example, an Angular dist build output) and to an API code project containing a Dockerfile. The CDK deployment creates all of the AWS resources described above, uploads the SPA static files, builds and pushes the Docker container, and configures Amazon ECS to run the container.
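As an illustration of what those two deployment steps look like in CDK (this is a generic sketch, not the repository's actual code; the `../frontend/dist` and `../api` paths are placeholders for your SPA build output and Dockerfile project):

```typescript
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment';
import * as ecs from 'aws-cdk-lib/aws-ecs';

// Inside a Stack:

// 1. Upload the SPA static files to a private bucket fronted by CloudFront.
const siteBucket = new s3.Bucket(this, 'SpaBucket', {
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
});
new s3deploy.BucketDeployment(this, 'DeploySpa', {
  sources: [s3deploy.Source.asset('../frontend/dist')], // placeholder path
  destinationBucket: siteBucket,
});

// 2. Build the API container from a local Dockerfile, push it to Amazon ECR,
//    and define the Fargate task that Amazon ECS will run in the App subnets.
const taskDef = new ecs.FargateTaskDefinition(this, 'ApiTask', {
  cpu: 256,
  memoryLimitMiB: 512,
});
taskDef.addContainer('Api', {
  image: ecs.ContainerImage.fromAsset('../api'), // folder containing a Dockerfile
  portMappings: [{ containerPort: 8080 }],
});
```

Expressing both steps as CDK assets means a single `cdk deploy` handles the file upload and the image build and push, with no separate CI scripting required.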
With ASEA, there is growing momentum behind getting the Government of Canada’s PROTECTED B / Medium Integrity / Medium Availability (PBMM) workloads onto AWS—with 30 deployments to date. Public sector customers in the UK and Australia are also adopting ASEA. Each ATO achieved around the world delivers new lessons that are incorporated into ASEA and the sample evidence package where feasible.
For government teams who want to accelerate their launch of PBMM workloads in the cloud, the AWS ASEA is a reliable, secure, open-source starting point. For more information on meeting your compliance needs on AWS, visit our compliance page, or contact your AWS account team.