
Guidance for Deploying Rancher RKE2 at the Edge on AWS

Overview

This Guidance demonstrates how to deploy Rancher Kubernetes Engine (RKE2) at the edge with AWS services, enabling organizations to run mission-critical workloads in tactical edge environments with Denied, Disrupted, Intermittent, and Limited (DDIL) communications. It showcases an edge-to-cloud pattern for collecting sensor data in the field and forwarding it to the cloud, helping organizations overcome the challenges of deploying and managing mission-critical applications where connectivity is limited or intermittent.

How it works

Single-node cluster

This architecture diagram shows an edge-to-cloud pattern for deploying containerized workloads on a single-node cluster at the edge using RKE2 on any third-party hardware in DDIL environments.

Architecture diagram illustrating an AWS Rancher RKE2 edge single node setup. The diagram details the integration of sensors in fixed and mobile assets, edge location components, mission workloads, AWS IoT Greengrass, AWS Distro for Open Telemetry, Elemental OS, RKE2/K3s, and the interactions with AWS GovCloud US services such as Amazon EC2, Amazon S3, Amazon ECR, AWS IoT Core, Fleet Manager, ELB, and CloudWatch, highlighting local operator, cloud services, and connectivity paths.
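For a single-node deployment like the one shown above, RKE2 is typically configured through a server configuration file so the node both runs the control plane and schedules workloads. The sketch below is a minimal, hypothetical example; the hostname and labels are placeholders, not values from this Guidance.

```yaml
# /etc/rancher/rke2/config.yaml on the single edge node (sketch)
# write-kubeconfig-mode makes the kubeconfig readable by the local operator
write-kubeconfig-mode: "0644"
# extra SAN so the API server cert is valid for the edge node's hostname
tls-san:
  - edge-node-01.example.internal   # placeholder hostname
# label the node so Fleet or other tooling can target edge clusters
node-label:
  - "env=edge"
```

By default the RKE2 server node is not tainted, so on a single node the same machine serves the Kubernetes API and runs the mission workloads.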

Multi-node cluster

This architecture diagram shows two distinct edge-to-cloud patterns for managing applications in tactical edge scenarios, illustrating how mission-critical workloads can be deployed on RKE2 in DDIL environments. 

Architecture diagram illustrating the deployment of AWS Edge and Rancher RKE2 in a multi-node environment, detailing integration with sensors, mission workloads, SUSE Linux, AWS GovCloud, IoT Core, and key AWS services including EC2, ECR, S3, IoT Greengrass, and CloudWatch.
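In the multi-node pattern, additional server nodes join the first one to form a highly available control plane. The fragment below sketches this join configuration under assumed values: the shared token, the `rke2-api.example.internal` endpoint, and port 9345 (RKE2's supervisor port) are illustrative placeholders.

```yaml
# /etc/rancher/rke2/config.yaml on the FIRST server node (sketch)
token: <shared-cluster-token>            # placeholder; generate and protect this
tls-san:
  - rke2-api.example.internal            # placeholder load-balanced API endpoint

# /etc/rancher/rke2/config.yaml on ADDITIONAL server nodes (sketch)
server: https://rke2-api.example.internal:9345   # join via the supervisor port
token: <shared-cluster-token>
```

A stable, load-balanced address in `tls-san` lets edge nodes and operators keep reaching the API even when individual control-plane nodes are offline, which matters in DDIL conditions.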

Well-Architected Pillars

The architecture diagram above is an example of a solution created with Well-Architected best practices in mind. To be fully Well-Architected, follow as many Well-Architected best practices as possible.

For consistent and continuous deployment of edge applications through a DevOps pipeline, use Rancher Multi-Cluster Management (MCM) and Rancher Fleet to automate the deployment and updating of RKE2 container workloads at the edge. Gain visibility into the performance and status of your edge components by integrating AWS Distro for OpenTelemetry (ADOT) with Amazon CloudWatch, providing the insights you need to optimize your edge operations.
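Fleet drives GitOps-style deployment through a `GitRepo` custom resource on the Rancher management cluster. The sketch below assumes a hypothetical repository URL, path, and cluster label; substitute your own.

```yaml
# Fleet GitRepo resource (sketch): deploys manifests from Git to labeled edge clusters
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: edge-workloads
  namespace: fleet-default
spec:
  repo: https://git.example.com/org/edge-workloads   # placeholder repository
  branch: main
  paths:
    - manifests          # directory of Kubernetes manifests to deploy
  targets:
    - clusterSelector:
        matchLabels:
          env: edge      # matches clusters labeled env=edge in Rancher
```

Because Fleet agents pull desired state from the management cluster, edge clusters reconcile automatically when connectivity is restored after a DDIL outage.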

Read the Operational Excellence whitepaper

Secure your edge infrastructure by using Session Manager, a capability of AWS Systems Manager, to establish safe connections to your EC2 instances running Rancher MCM. AWS Identity and Access Management (IAM) roles and policies control access, and IAM instance roles enable your EC2 instances to interact with other AWS services. Enhance image security by using the Harbor registry to scan container images for vulnerabilities and enforce compliance policies before deployment. Protect your environment from distributed denial of service (DDoS) attacks with AWS Shield Standard.
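Granting Session Manager access requires attaching an instance profile whose role carries the `AmazonSSMManagedInstanceCore` managed policy. A minimal CloudFormation sketch (resource names are illustrative):

```yaml
# CloudFormation fragment (sketch): IAM role + instance profile for Session Manager access
Resources:
  RancherInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com   # allow EC2 to assume this role
            Action: sts:AssumeRole
      ManagedPolicyArns:
        # grants the SSM agent permissions needed for Session Manager
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
  RancherInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref RancherInstanceRole
```

With this in place, operators can open shells through Session Manager instead of exposing SSH ports on the Rancher MCM instances.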

Read the Security whitepaper

Amazon EC2 Auto Scaling helps ensure your edge applications are highly available and resilient by maintaining necessary capacity. ELB distributes traffic across multiple EC2 instances in different Availability Zones. Leverage multi-AZ and multi-Region configurations to enhance the reliability of your edge solution, providing you with a robust and fault-tolerant architecture.
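The multi-AZ pattern described above can be expressed as an Auto Scaling group spanning subnets in different Availability Zones and registered with a load balancer target group. This CloudFormation fragment is a sketch; the subnet IDs and referenced launch template and target group are placeholders.

```yaml
# CloudFormation fragment (sketch): multi-AZ Auto Scaling group behind a load balancer
Resources:
  RancherAsg:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "4"
      DesiredCapacity: "2"
      VPCZoneIdentifier:           # subnets in distinct AZs for fault tolerance
        - subnet-0aaaaaaaa         # placeholder, AZ a
        - subnet-0bbbbbbbb         # placeholder, AZ b
      LaunchTemplate:
        LaunchTemplateId: !Ref RancherLaunchTemplate        # defined elsewhere
        Version: !GetAtt RancherLaunchTemplate.LatestVersionNumber
      TargetGroupARNs:
        - !Ref RancherTargetGroup                           # defined elsewhere
```

Spreading instances across AZs means the loss of one zone leaves the group able to replace capacity in the remaining zones automatically.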

Read the Reliability whitepaper

Monitor the performance of your edge solution using CloudWatch metrics, and use Amazon EC2 Auto Scaling to optimize resource utilization. By analyzing performance data to identify and address bottlenecks, you can ensure your edge applications and Rancher MCM operate efficiently.
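One way to get edge metrics into CloudWatch is an ADOT Collector pipeline that receives OTLP metrics from workloads and exports them via the `awsemf` (Embedded Metric Format) exporter. The fragment below is a sketch; the namespace and region are assumptions, not values from this Guidance.

```yaml
# ADOT Collector configuration (sketch): OTLP metrics -> CloudWatch
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  awsemf:
    namespace: EdgeWorkloads    # hypothetical CloudWatch metrics namespace
    region: us-gov-west-1       # example AWS GovCloud (US) Region
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [awsemf]
```

Running the collector on the edge cluster lets it buffer telemetry locally and forward it when DDIL connectivity allows.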

Read the Performance Efficiency whitepaper

Amazon EC2 Auto Scaling automatically adjusts your capacity based on demand, expanding during peak periods and scaling down during slower times. AWS Trusted Advisor provides recommendations on cost optimization opportunities, while the open-source Harbor registry eliminates the need to purchase commercial licenses.

Read the Cost Optimization whitepaper

Reduce your environmental impact by leveraging the wide variety of EC2 instance types to choose the right-sized resources for your workloads. Combine this with Amazon EC2 Auto Scaling to automatically scale resources up and down, minimizing unused capacity and lowering your carbon footprint.

Read the Sustainability whitepaper

Disclaimer

The sample code, software libraries, command line tools, proofs of concept, templates, or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production-grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.