Important: This Guidance requires the use of AWS CodeCommit, which is no longer available to new customers. Existing customers of AWS CodeCommit can continue using and deploying this Guidance as normal.
This Guidance demonstrates how to configure a proxy in a virtual private cloud (VPC) to connect external services to your Amazon VPC Lattice service network, enabling public, hybrid, or cross-Region access. While VPC Lattice simplifies service-to-service consumption within an AWS Region, if your applications reside outside that Region, you'll need to create and manage a proxy solution. By following this Guidance to build an ingress VPC and configure appropriate DNS resolution, you can establish connectivity from your external consumers to your VPC Lattice service network.
Architecture Diagram
Overview
This architecture diagram shows how to configure a proxy in a virtual private cloud (VPC) to connect external services to Amazon VPC Lattice. There are three ways to use Amazon VPC Lattice: public, hybrid, or cross-Region access. Each is outlined in the corresponding tab below.
Step 1
This Guidance deploys a virtual private cloud (VPC) across multiple Availability Zones (AZs), with both public and private subnets containing internal and external Network Load Balancers.
Step 2
AWS PrivateLink VPC endpoints (interface and gateway) are created to reach AWS services privately.
Step 3
AWS CodePipeline orchestrates the build and delivery of this Guidance. The code is pulled from GitHub into an AWS CodeCommit repository.
Step 4
AWS CodeBuild builds containers that run an open-source version of NGINX. The container image is stored in Amazon Elastic Container Registry (Amazon ECR).
Step 5
The deployment stage in the pipeline uses AWS CloudFormation to build an Amazon Elastic Container Service (Amazon ECS) cluster, task definition, and service, using AWS Fargate as the capacity provider.
Step 6
Four target groups pass traffic to the backend compute solution. Each Network Load Balancer configures two TCP listeners for ports 80 (HTTP) and 443 (HTTPS), so the Amazon ECS tasks serve both internal and external traffic. A configuration sketch follows these steps.
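To make Step 6 concrete, the following is a minimal, hedged boto3 sketch of the listener and target group setup for one of the two Network Load Balancers: two TCP listeners (ports 80 and 443), each forwarding to an IP-type target group that the Fargate proxy tasks register with. All names, IDs, and ARNs are placeholders; in this Guidance, CloudFormation creates the equivalent resources as part of the pipeline.

```python
# Sketch only: the Guidance provisions these resources through CloudFormation.
# All names, IDs, and the load balancer ARN below are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

ingress_vpc_id = "vpc-0123456789abcdef0"  # placeholder ingress VPC ID
nlb_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/proxy/abc123"  # placeholder

for name, port in [("proxy-http", 80), ("proxy-https", 443)]:
    # IP-type target groups let the Fargate tasks register directly by task IP.
    tg_arn = elbv2.create_target_group(
        Name=name,
        Protocol="TCP",
        Port=port,
        VpcId=ingress_vpc_id,
        TargetType="ip",
    )["TargetGroups"][0]["TargetGroupArn"]

    # One TCP listener per port, forwarding to the matching target group.
    elbv2.create_listener(
        LoadBalancerArn=nlb_arn,
        Protocol="TCP",
        Port=port,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```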
Public access
This architecture diagram shows how placing a proxy solution in an associated VPC enables external consumption of VPC Lattice services by adjusting the DNS resolution.
Step 1
The consumer application located outside AWS tries to resolve service1’s domain name publicly. An Amazon Route 53 public hosted zone resolves to the Network Load Balancer domain name.
Step 2
Traffic is sent to the Network Load Balancer public IPs (obtained after the DNS resolution), and the request is forwarded to the Fargate proxy fleet.
Step 3
Inside the ingress VPC, the proxy fleet resolves service1’s domain name by using the VPC DNS resolver. A Route 53 private hosted zone maps the custom domain name to the domain name generated by Amazon VPC Lattice (sketched after these steps).
Step 4
The DNS resolution returns VPC Lattice link-local addresses. Traffic is sent using the VPC Lattice VPC association.
Step 5
A service auth policy allows traffic between the AWS service network account and the AWS provider account if the VPC Lattice service is associated with the VPC Lattice service network. The service network can then be associated with the ingress VPC.
Step 6
The request is redirected to an AWS Lambda function.
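Step 3 relies on a record in the private hosted zone that maps the custom domain name to the VPC Lattice-generated domain name. The following is a hedged boto3 sketch of that mapping; the hosted zone ID, custom domain name, and Lattice-generated DNS name are placeholders.

```python
# Sketch only: map the custom service domain to the domain name VPC Lattice
# generates for the service. Zone ID and domain names are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # private hosted zone associated with the ingress VPC
    ChangeBatch={
        "Comment": "Resolve the custom domain to the VPC Lattice-generated domain",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "service1.example.corp",  # hypothetical custom domain name
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [
                        # Placeholder for the DNS name generated by VPC Lattice
                        {"Value": "service1-0123456789abcdef.1a2b3c.vpc-lattice-svcs.us-east-1.on.aws"}
                    ],
                },
            }
        ],
    },
)
```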
Hybrid access
This architecture diagram shows how placing a proxy solution in an associated VPC enables on-premises applications to consume VPC Lattice services by adjusting the hybrid DNS resolution.
Step 1
The on-premises consumer application tries to resolve service1’s domain name locally. The on-premises DNS server forwards the DNS request to a Route 53 Resolver inbound endpoint located on AWS (sketched after these steps). You can use any hybrid connectivity solution with AWS.
Step 2
The Route 53 Resolver inbound endpoint queries a Route 53 private hosted zone to resolve the Network Load Balancer domain name.
Step 3
A hybrid connectivity solution can be used for the connectivity between on-premises applications and AWS. Traffic is sent to the Network Load Balancer private IPs (obtained after the DNS resolution), and the request is forwarded to the Fargate proxy fleet.
Step 4
Inside the ingress VPC, the proxy fleet resolves service1’s domain name by using the VPC DNS resolver. A Route 53 private hosted zone can be used to map the custom domain name to the domain name generated by VPC Lattice.
Step 5
The DNS resolution returns VPC Lattice link-local addresses. Traffic is sent using the VPC Lattice VPC association.
Step 6
A service auth policy allows traffic between the AWS service network account and the AWS provider account if the VPC Lattice service is associated with the VPC Lattice service network. The service network can then be associated with the ingress VPC.
Step 7
The request is redirected to a Lambda function.
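Steps 1 and 2 depend on a Route 53 Resolver inbound endpoint in the ingress VPC that the on-premises DNS server can forward queries to. Below is a hedged boto3 sketch; the subnet IDs, security group, and endpoint name are placeholders, and the security group must allow DNS (TCP and UDP port 53) from the on-premises network.

```python
# Sketch only: an inbound Resolver endpoint the on-premises DNS server forwards to.
# Subnet IDs, security group, and name are placeholders.
import uuid
import boto3

resolver = boto3.client("route53resolver")

endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId=str(uuid.uuid4()),          # idempotency token
    Name="ingress-vpc-inbound",                  # hypothetical name
    SecurityGroupIds=["sg-0123456789abcdef0"],   # must allow TCP/UDP 53 from on premises
    Direction="INBOUND",
    IpAddresses=[
        {"SubnetId": "subnet-0aaa1111bbb2222cc"},  # one IP address per AZ
        {"SubnetId": "subnet-0ddd3333eee4444ff"},
    ],
)["ResolverEndpoint"]

# The on-premises DNS server conditionally forwards the service domain
# (for example, service1.example.corp) to this endpoint's IP addresses.
print(endpoint["Id"], endpoint["Status"])
```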
Cross-Region access
This architecture diagram shows how placing a proxy solution in an associated VPC enables cross-Region consumption of VPC Lattice services by adjusting the DNS resolution.
Step 1
Consumer applications in the consumer VPC in AWS Region 1 use their local VPC DNS resolver to resolve service1’s domain name through a Route 53 private hosted zone.
Step 2
Configure the DNS resolution to point to the proxy solution in the ingress VPC in Region 2.
Step 3
Any inter-Region connectivity option* enables communication between the consumer VPC in Region 1 and the ingress VPC in Region 2.
*You can check the Amazon Virtual Private Cloud Connectivity Options whitepaper for more information about inter-Region connectivity options.
Step 4
Inside the ingress VPC, the Fargate proxy fleet resolves service1’s domain name by using the VPC DNS resolver. A Route 53 private hosted zone can be used to map the custom domain name to the domain name generated by VPC Lattice.
Step 5
The DNS resolution returns VPC Lattice link-local addresses. Traffic is sent using the VPC Lattice VPC association.
Step 6
A service auth policy allows traffic between the AWS service network account and the AWS provider account if the VPC Lattice service is associated with the VPC Lattice service network (an example policy is sketched after these steps). The service network can then be associated with the ingress VPC.
Step 7
This request is redirected to a Lambda function.
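The auth policy in Step 6 is an IAM-style resource policy attached to the VPC Lattice service network (or an individual service). The hedged sketch below allows authenticated requests from a consumer account; the account ID and service network identifier are placeholders, and the exact policy should follow your own least-privilege requirements.

```python
# Sketch only: attach an auth policy to the service network allowing the
# consumer account to invoke services. Identifiers are placeholders.
import json
import boto3

lattice = boto3.client("vpc-lattice")

auth_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # placeholder consumer account
            "Action": "vpc-lattice-svcs:Invoke",
            "Resource": "*",
        }
    ],
}

lattice.put_auth_policy(
    resourceIdentifier="sn-0123456789abcdef0",  # placeholder service network ID
    policy=json.dumps(auth_policy),
)
```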
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
CodePipeline provides a controlled, auditable release process for the NGINX proxy solution through an artifact repository, helping you avoid undesired updates. In addition, CloudFormation deploys all resources as infrastructure as code, giving you visibility and control over the created resources.
Security
VPC Lattice handles authentication and authorization by using optional auth policies, both in the VPC Lattice service network and services. AWS Identity and Access Management (IAM), which uses the Zero Trust on AWS security model, establishes secure authentication and authorization mechanisms for service-to-service communication. IAM security credentials generate AWS Signature Version 4 signatures, which are passed to VPC Lattice. Common network security measures for the VPC and application add a second layer of security control. For example, you can use security groups and network access control lists, and the NGINX configuration enables you to define an allowlist of the source IPs that can connect to the proxy targets.
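As an illustration of the Signature Version 4 flow described above, the hedged sketch below signs a request to a VPC Lattice service with botocore. The URL is a placeholder, the requests library is an assumed dependency, and "vpc-lattice-svcs" is the signing name VPC Lattice expects when an auth policy requires authenticated requests.

```python
# Sketch only: sign a request to a VPC Lattice service with SigV4.
# The target URL is a placeholder; the requests library is an assumed dependency.
import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

session = boto3.Session()
credentials = session.get_credentials()
region = "us-east-1"  # placeholder Region

request = AWSRequest(method="GET", url="https://service1.example.corp/health")  # placeholder URL
SigV4Auth(credentials, "vpc-lattice-svcs", region).add_auth(request)

response = requests.get(request.url, headers=dict(request.headers))
print(response.status_code)
```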
Reliability
This Guidance deploys resources across three AZs, providing high availability for your proxy solution, which consists of a Network Load Balancer and an Amazon ECS on Fargate fleet. Additionally, CodePipeline uses managed AWS services, including CodeCommit, CodeBuild, and CloudFormation, that are built to be highly available within a Region by default.
Note: This Guidance is built to scale out and in automatically, using the Amazon ECS service’s average CPU utilization as the scaling dimension. Load testing revealed that the Guidance becomes CPU-bound as load increases, based on the specifications of the chosen task sizes. You can adjust this Guidance to use a metric that better suits your application’s profile and load.
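The note above corresponds to a target tracking policy on the Amazon ECS service's average CPU utilization. The following is a hedged boto3 sketch of one way to express it with Application Auto Scaling; the cluster and service names, capacity bounds, and target value are placeholders that you would tune to your application's profile.

```python
# Sketch only: target tracking on average CPU for the ECS service.
# Cluster/service names, capacity limits, and target value are placeholders.
import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "service/ingress-proxy-cluster/ingress-proxy-service"  # placeholder

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=3,   # assumption: one task per AZ as a floor
    MaxCapacity=12,  # placeholder ceiling
)

autoscaling.put_scaling_policy(
    PolicyName="proxy-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,  # placeholder CPU target
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)
```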
Performance Efficiency
This Guidance uses a combination of AWS native services and customizable options. It uses a Network Load Balancer as the entry point because it provides high throughput, flexibility in protocol, and feature support when connecting to VPC Lattice. An Amazon ECS on Fargate proxy fleet provides flexibility in resolving the domain name generated by VPC Lattice to the link-local addresses (which might vary). The fleet uses Fargate to gain the scalability of serverless technologies and to simplify container management.
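Because the link-local addresses behind the VPC Lattice-generated domain name can change, the proxy fleet re-resolves that name rather than caching addresses indefinitely. The short sketch below shows the idea with Python's standard resolver; the domain name is a placeholder.

```python
# Sketch only: re-resolve the VPC Lattice-generated domain name to its current
# link-local addresses. The domain name below is a placeholder.
import socket

def resolve_lattice_addresses(domain):
    """Return the current set of IPv4 addresses for a VPC Lattice service domain."""
    infos = socket.getaddrinfo(domain, 443, socket.AF_INET, socket.SOCK_STREAM)
    return {sockaddr[0] for _, _, _, _, sockaddr in infos}

print(resolve_lattice_addresses(
    "service1-0123456789abcdef.1a2b3c.vpc-lattice-svcs.us-east-1.on.aws"
))
```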
Cost Optimization
This Guidance automatically scales the Amazon ECS on Fargate tasks as required, depending on CPU utilization. This automatic scalability of Amazon ECS means that the proxy solution only uses the required compute capacity, so you do not have to pay for unnecessary compute.
Sustainability
Amazon ECS on Fargate handles scaling automatically so that your proxy solution has an optimal compute footprint based on CPU load. Additionally, the tasks use a lightweight version of NGINX to minimize the computational load when sending requests to VPC Lattice. By using configured and tested workload elasticity, this Guidance helps you efficiently match your cloud resource utilization to demand and avoid overprovisioned capacity, ultimately lowering your carbon footprint.
Implementation Resources
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.