[SEO Subhead]
This Guidance demonstrates how to deploy Rancher Kubernetes Engine (RKE2) at the edge with AWS services, enabling organizations to run mission-critical workloads in tactical edge environments with Denied, Disrupted, Intermittent, and Limited (DDIL) communications. It showcases an edge-to-cloud pattern for collecting and forwarding sensor data from the field to the cloud. This helps organizations overcome the challenges of deploying and managing mission-critical applications in environments with limited or intermittent connectivity.
Note: [Disclaimer]
Architecture Diagram
[Architecture diagram description]
Step 1
Rancher Multi-Cluster Manager (MCM) is deployed in AWS GovCloud (US) on an RKE2 cluster. The RKE2 cluster runs on Amazon Elastic Compute Cloud (Amazon EC2) instances using a SUSE Linux Enterprise Server (SLES) AMI hardened to Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) security standards.
Step 2
Rancher MCM provides centralized administration for downstream RKE2 clusters on one or more edge devices through Elastic Load Balancing (ELB).
Step 3
Elemental is an MCM extension that provides full cloud-native OS management for edge devices. An endpoint is registered in Elemental, which creates a seed image and an initial registration configuration containing a registration URL.
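As a sketch of this step, an Elemental registration is typically declared as a MachineRegistration resource in the Rancher management cluster; the registration URL then appears in the resource status and is baked into the seed image. The names, labels, and install device below are illustrative assumptions, not values from this Guidance.

```yaml
# Hypothetical Elemental MachineRegistration (names and label values are
# placeholders). Creating this resource yields the registration URL that
# Elemental embeds in the initial registration config on the seed image.
apiVersion: elemental.cattle.io/v1beta1
kind: MachineRegistration
metadata:
  name: edge-device-registration
  namespace: fleet-default
spec:
  config:
    elemental:
      install:
        device: /dev/sda        # assumed install target on the edge device
        reboot: true
  machineInventoryLabels:
    site: tactical-edge         # example label used later to target Fleet deployments
```

After the seed image is written to a USB drive and booted on the device, the embedded registration URL lets the device enroll itself without manual configuration, which matters in DDIL environments where hands-on provisioning time is limited.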
Step 4
The Elemental-built SUSE Linux Enterprise Micro (SLE Micro) image is installed, along with the initial registration configuration, on the edge device through a USB drive.
Step 5
The device registers with Elemental in Rancher MCM.
Step 6
Fleet is a DevOps engine that polls container registries and Git repositories for declarative changes to infrastructure and applications.
Step 7
Fleet first deploys an RKE2 or K3s cluster, along with a Harbor registry, at the edge. K3s is recommended for lightweight workloads, while RKE2 is recommended for larger, more complex workloads.
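Fleet's GitOps flow is driven by GitRepo resources that point at a Git repository and select target clusters. The following is a minimal sketch; the repository URL, paths, and cluster labels are placeholder assumptions, not values from this Guidance.

```yaml
# Illustrative Fleet GitRepo: Fleet polls the repository and applies the
# manifests under the listed paths to every downstream cluster whose
# labels match the selector.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: edge-bootstrap
  namespace: fleet-default
spec:
  repo: https://git.example.com/mission/edge-deploy   # placeholder repo
  branch: main
  paths:
    - clusters/rke2      # cluster provisioning manifests
    - registry/harbor    # Harbor registry deployment
  targets:
    - clusterSelector:
        matchLabels:
          site: tactical-edge
```

Because Fleet pulls changes rather than having the cloud push them, downstream clusters simply reconcile on their next successful poll after a connectivity outage, which fits the DDIL pattern this Guidance targets.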
Step 8
Fleet then orchestrates replication of container images from Amazon Elastic Container Registry (Amazon ECR) to the Harbor registry.
Step 9
Once the RKE2 or K3s cluster is available, Fleet deploys mission workloads at the edge, pulling images from the Harbor registry. Fleet provides centralized deployment of initial workloads and supports Day 2 operations.
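A mission workload deployed at this step is an ordinary Kubernetes Deployment whose image reference points at the local Harbor mirror instead of Amazon ECR, so pulls succeed even when the cloud link is down. The hostname, project, and image names below are hypothetical.

```yaml
# Sketch of a mission workload pulling from the edge-local Harbor registry.
# "harbor.edge.local" and the image path are placeholder assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sensor-gateway
  template:
    metadata:
      labels:
        app: sensor-gateway
    spec:
      containers:
        - name: sensor-gateway
          # Local Harbor copy of the image replicated from Amazon ECR
          image: harbor.edge.local/mission/sensor-gateway:1.0.0
          ports:
            - containerPort: 8080
```

Keeping the image reference local is what makes the replication in the previous step worthwhile: the workload's lifecycle no longer depends on reachability of the upstream ECR registry.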
Step 10
Operators interact with mission applications through exposed mission web applications.
Step 11
Mission applications receive sensor data from the field.
Step 12
An AWS IoT Greengrass client running on the Elemental-managed edge device can also receive sensor data.
Step 13
The AWS IoT Greengrass client forwards data to AWS IoT Core in the cloud.
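In AWS IoT Greengrass v2, the forwarding behavior in Steps 12 and 13 would be packaged as a component described by a recipe. The component name, script, and topic filter below are illustrative assumptions; only the recipe structure and the `PublishToIoTCore` access-control operation come from the Greengrass component model.

```yaml
# Hypothetical Greengrass v2 component recipe. The access-control block
# authorizes the component to publish sensor readings to AWS IoT Core
# over the Greengrass IPC MQTT proxy.
RecipeFormatVersion: "2020-01-25"
ComponentName: com.example.SensorForwarder   # placeholder component name
ComponentVersion: "1.0.0"
ComponentDescription: Forwards field sensor data to AWS IoT Core.
ComponentConfiguration:
  DefaultConfiguration:
    accessControl:
      aws.greengrass.ipc.mqttproxy:
        com.example.SensorForwarder:mqttproxy:1:
          policyDescription: Allow publishing sensor data to IoT Core
          operations:
            - aws.greengrass#PublishToIoTCore
          resources:
            - sensors/field/#          # assumed topic filter
Manifests:
  - Platform:
      os: linux
    Lifecycle:
      Run: python3 -u {artifacts:path}/forwarder.py
```

Greengrass queues MQTT messages locally when the uplink is unavailable and drains them to AWS IoT Core once connectivity returns, which is the behavior that makes this path viable under DDIL conditions.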
Step 14
Mission workloads connect with upstream AWS services such as Amazon Simple Storage Service (Amazon S3) to transfer data from edge to the cloud.
Step 15
AWS Distro for OpenTelemetry processes telemetry data at the edge and forwards it to Amazon CloudWatch for performance monitoring of edge devices and mission applications.
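The AWS Distro for OpenTelemetry Collector is configured with a receiver/processor/exporter pipeline; the `awsemf` exporter sends metrics to CloudWatch as embedded-metric-format log events. The namespace and Region below are placeholder assumptions (a GovCloud Region is shown for consistency with Step 1).

```yaml
# Illustrative ADOT Collector pipeline: receive OTLP telemetry from edge
# workloads, batch it, and export metrics to Amazon CloudWatch via the
# awsemf exporter.
receivers:
  otlp:
    protocols:
      grpc:
      http:
processors:
  batch:
    timeout: 30s          # batch to reduce uplink chatter over DDIL links
exporters:
  awsemf:
    namespace: EdgeMission          # placeholder CloudWatch namespace
    region: us-gov-west-1           # assumed GovCloud (US) Region
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [awsemf]
```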
Get Started
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
-
Operational Excellence
For consistent and continuous deployment of edge applications through a DevOps pipeline, leverage Rancher MCM and Rancher Fleet to automate the deployment and updating of RKE2 container workloads at the edge. Gain visibility into the performance and status of your edge components by integrating AWS Distro for OpenTelemetry and CloudWatch, providing you with key insights to optimize your edge operations.
-
Security
Secure your edge infrastructure by using Session Manager, a capability of AWS Systems Manager, to establish safe connections to your EC2 instances running Rancher MCM. AWS Identity and Access Management (IAM) roles and policies control access, and IAM instance roles enable your EC2 instances to interact with other AWS services. Enhance image security by leveraging the Harbor registry to scan container images for vulnerabilities and enforce image compliance policies. Protect your environment from distributed denial of service (DDoS) attacks with AWS Shield Standard.
-
Reliability
Amazon EC2 Auto Scaling helps ensure your edge applications are highly available and resilient by maintaining necessary capacity. ELB distributes traffic across multiple EC2 instances in different Availability Zones. Leverage multi-AZ and multi-Region configurations to enhance the reliability of your edge solution, providing you with a robust and fault-tolerant architecture.
-
Performance Efficiency
Monitor the performance of your edge solution using CloudWatch metrics, and leverage Amazon EC2 Auto Scaling to optimize resource utilization. By analyzing performance data to identify and address any bottlenecks, you can ensure your edge applications and Rancher MCM operate efficiently.
-
Cost Optimization
Amazon EC2 Auto Scaling automatically adjusts your capacity based on demand, expanding during peak periods and scaling down during slower times. AWS Trusted Advisor provides recommendations on cost optimization opportunities, while the open-source Harbor registry eliminates the need for purchasing commercial licenses.
-
Sustainability
Reduce your environmental impact by leveraging the wide variety of EC2 instance types to choose the right-sized resources for your workloads. Combine this with Amazon EC2 Auto Scaling to automatically scale resources up and down, minimizing unused capacity and lowering your carbon footprint.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.