[SEO Subhead]
This Guidance shows two architectural patterns for deploying applications in tactical edge environments on AWS using third-party hardware devices and platforms. The term "edge" refers to compute, network, and storage capabilities that operate outside AWS Regions, often in scenarios where communication with the cloud may be limited by low bandwidth, intermittent connectivity, or extended periods of disconnection. In addition to establishing a foundational tactical edge architecture, this Guidance offers deployment patterns that use both native AWS Internet of Things (IoT) services and Kubernetes, an open-source container orchestration system. AWS customers can use this Guidance to reliably deploy mission-critical applications in tactical edge environments with limited or intermittent network connectivity, such as mobile command centers, tactical vehicles, and operating bases.
Please note: [Disclaimer]
Architecture Diagram
-
Deploy applications onto third-party hardware
-
Kubernetes-based deployment
-
Deploy applications onto third-party hardware
-
This architecture diagram shows how to deploy tactical edge applications from the cloud onto third-party edge hardware devices with AWS services.
Step 1
AWS IoT Greengrass core software runs on compatible edge hardware devices and operating systems.
Step 2
AWS IoT Core and AWS IoT Greengrass cloud services establish secure connections from edge devices to AWS using TLS and X.509 certificates, and orchestrate over-the-air (OTA) deployments.
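IoT Greengrass handles this connection itself. As an illustration of the same mutual TLS pattern, the following minimal sketch shows a standalone device client connecting with the AWS IoT Device SDK for Python v2; the endpoint, certificate paths, and client ID are placeholders for environment-specific values.

```python
# Minimal sketch: connect an edge device to AWS IoT Core over mutual TLS
# using the AWS IoT Device SDK for Python v2 (awsiotsdk). The endpoint,
# certificate/key paths, and client ID are placeholders.
from awsiot import mqtt_connection_builder

mqtt_connection = mqtt_connection_builder.mtls_from_path(
    endpoint="xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com",  # account-specific AWS IoT endpoint
    cert_filepath="/greengrass/v2/device.pem.crt",              # X.509 device certificate
    pri_key_filepath="/greengrass/v2/private.pem.key",          # device private key
    ca_filepath="/greengrass/v2/AmazonRootCA1.pem",             # Amazon root CA
    client_id="tactical-edge-device-01",
    clean_session=False,
    keep_alive_secs=30,
)

connect_future = mqtt_connection.connect()
connect_future.result()  # blocks until the TLS handshake and MQTT CONNECT complete
```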
Step 3
The AWS-provided components are used to manage edge applications. This includes a local MQTT 5 broker, stream manager for data streaming, secret manager, and AWS Systems Manager for managing local secrets, patching, and SSH tunnels. It also includes a shadow manager for managing device and application state.
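For example, a custom component running on the core device might retrieve a locally synced secret through the IoT Greengrass interprocess communication (IPC) interface. This is a minimal sketch that assumes the component's access control policy permits the GetSecretValue operation; the secret name is a placeholder.

```python
# Minimal sketch: a custom Greengrass component retrieving a secret that the
# AWS-provided secret manager component has synced to the edge. Assumes the
# component is authorized for aws.greengrass#GetSecretValue; the secret name
# "mission/api-credentials" is a placeholder.
import json
from awsiot.greengrasscoreipc.clientv2 import GreengrassCoreIPCClientV2

ipc_client = GreengrassCoreIPCClientV2()
response = ipc_client.get_secret_value(secret_id="mission/api-credentials")
credentials = json.loads(response.secret_value.secret_string)
ipc_client.close()
```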
Step 4
The mission application DevOps pipeline integrates with the cloud-to-edge deployment capabilities of IoT Core and IoT Greengrass, using the AWS Cloud Development Kit (AWS CDK) and/or the IoT Greengrass Development Kit Command-Line Interface (GDK CLI) to configure and trigger IoT Greengrass deployments.
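The following is a minimal AWS CDK (Python) sketch of how such a pipeline might define an IoT Greengrass deployment using the L1 CfnDeployment construct; the thing group ARN, component name, and version are placeholders for mission-specific values.

```python
# Minimal sketch: defining an IoT Greengrass deployment in an AWS CDK (Python)
# stack with the L1 CfnDeployment construct. The target thing group ARN and
# the component name/version are placeholders.
from aws_cdk import Stack, aws_greengrassv2 as greengrassv2
from constructs import Construct

class MissionEdgeDeploymentStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        greengrassv2.CfnDeployment(
            self,
            "MissionAppDeployment",
            target_arn="arn:aws:iot:us-east-1:111122223333:thinggroup/TacticalEdgeFleet",
            deployment_name="mission-app-deployment",
            components={
                "com.example.MissionApp": greengrassv2.CfnDeployment.ComponentDeploymentSpecificationProperty(
                    component_version="1.0.0"
                )
            },
        )
```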
Step 5
The edge application artifacts are built and staged in Amazon Simple Storage Service (Amazon S3) and/or a container registry. These artifacts are then deployed as IoT Greengrass components to the edge device through IoT Greengrass deployments.
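As a sketch of this step, a build pipeline could register an artifact staged in Amazon S3 as a Greengrass component version with boto3; the bucket, key, component name, and lifecycle command shown are placeholders.

```python
# Minimal sketch: registering an application artifact staged in Amazon S3 as a
# Greengrass component version with boto3. The bucket, key, component name,
# and lifecycle command are placeholders for a mission-specific build pipeline.
import json
import boto3

greengrassv2 = boto3.client("greengrassv2", region_name="us-east-1")

recipe = {
    "RecipeFormatVersion": "2020-01-25",
    "ComponentName": "com.example.MissionApp",
    "ComponentVersion": "1.0.0",
    "ComponentDescription": "Mission application deployed from the DevOps pipeline.",
    "ComponentPublisher": "Example Corp",
    "Manifests": [
        {
            "Platform": {"os": "linux"},
            "Lifecycle": {"run": "python3 {artifacts:path}/mission_app.py"},
            "Artifacts": [
                {"URI": "s3://example-mission-artifacts/mission_app/1.0.0/mission_app.py"}
            ],
        }
    ],
}

response = greengrassv2.create_component_version(
    inlineRecipe=json.dumps(recipe).encode("utf-8")
)
print(response["arn"], response["status"]["componentState"])
```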
Step 6
Mission-specific IoT Greengrass components and containers are used to execute the applications and machine learning (ML) models deployed from the mission application pipeline.
Step 7
The mission applications communicate with vehicles, sensors, and other fixed assets that are connected to the third-party hardware in the field. This communication can occur through either mission wireless networks or hardwired links to the connected assets.
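As a heavily simplified illustration, a connected asset could publish sensor readings to the local MQTT 5 broker component over mutual TLS using paho-mqtt (the 1.x constructor is shown). This sketch assumes the asset is registered as a Greengrass client device, the broker listens on port 8883, and the hostname, file paths, and topic are placeholders.

```python
# Minimal sketch: a connected asset (for example, a vehicle gateway) publishing
# sensor readings to the Greengrass MQTT 5 broker component over mutual TLS
# with paho-mqtt. Assumes the asset is registered as a client device; the
# hostname, certificate paths, and topic are placeholders.
import json
import ssl
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="vehicle-gateway-07", protocol=mqtt.MQTTv5)
client.tls_set(
    ca_certs="/secure/greengrass-broker-ca.pem",    # CA trusted by the local broker
    certfile="/secure/vehicle-gateway-07.pem.crt",  # client device certificate
    keyfile="/secure/vehicle-gateway-07.pem.key",
    tls_version=ssl.PROTOCOL_TLS_CLIENT,
)
client.connect("greengrass-core.local", 8883)  # hostname of the edge device
client.publish("mission/sensors/vehicle-07", json.dumps({"speed_kph": 42.5}), qos=1)
client.disconnect()
```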
Step 8
Network connectivity is used for the initial deployment and edge-to-cloud data capture, if available during mission operations. However, the edge applications are designed to continue running even if the network is disrupted or becomes unavailable.
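One way to support this disconnected operation is to buffer telemetry locally with the Greengrass stream manager and let it export the backlog automatically when connectivity returns. The following sketch uses the Greengrass Stream Manager SDK for Python; the stream name and Kinesis Data Streams name are placeholders.

```python
# Minimal sketch: buffering telemetry at the edge with the Greengrass Stream
# Manager SDK for Python. Messages are appended locally while disconnected and
# exported to Kinesis Data Streams when connectivity returns. Names are placeholders.
import json
from stream_manager import (
    ExportDefinition,
    KinesisConfig,
    MessageStreamDefinition,
    StrategyOnFull,
    StreamManagerClient,
)

client = StreamManagerClient()
client.create_message_stream(
    MessageStreamDefinition(
        name="MissionTelemetry",
        strategy_on_full=StrategyOnFull.OverwriteOldestData,  # keep newest data if the buffer fills
        export_definition=ExportDefinition(
            kinesis=[KinesisConfig(identifier="ToCloud", kinesis_stream_name="mission-telemetry")]
        ),
    )
)
client.append_message("MissionTelemetry", json.dumps({"lat": 0.0, "lon": 0.0}).encode("utf-8"))
client.close()
```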
Step 9
Operators interact with mission applications and the underlying system resources. Components can be deployed locally, if needed, through the IoT Greengrass CLI.
Step 10
Data and analytics pipelines are used to process and store the mission data in the cloud. Furthermore, machine learning models can be trained on this data using Amazon SageMaker, and then staged for deployment through the mission application pipeline.
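For example, captured mission data could be used to train a model with the SageMaker Python SDK, producing an artifact that the mission application pipeline can stage for edge deployment. This sketch assumes the built-in XGBoost algorithm; the bucket names and IAM role are placeholders.

```python
# Minimal sketch: training a model in the cloud on captured mission data with
# the SageMaker Python SDK so the resulting artifact can be staged for the
# mission application pipeline. Buckets, the IAM role, and the choice of the
# built-in XGBoost algorithm are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1")

estimator = Estimator(
    image_uri=image_uri,
    role="arn:aws:iam::111122223333:role/MissionSageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-mission-models/artifacts/",
    sagemaker_session=session,
)
estimator.fit({"train": TrainingInput("s3://example-mission-data/training/", content_type="text/csv")})
```
-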
Kubernetes-based deployment
-
This architecture diagram extends the third-party hardware deployment pattern to run a single-node Kubernetes cluster on the edge device.
Step 1
IoT Greengrass core software runs on compatible edge hardware devices and operating systems.
Step 2
IoT Core and IoT Greengrass cloud services establish secure connections from edge devices to AWS using TLS and X.509 certificates, and orchestrate over-the-air (OTA) deployments.
Step 3
Non-containerized, mission-specific applications and ML models are deployed to the edge device as IoT Greengrass components.
Step 4
The AWS-provided components are used to manage edge applications. This includes a local MQTT 5 broker, a stream manager for data streaming, a secret manager, and Systems Manager for managing local secrets, patching, and SSH tunnels.
Step 5
Custom Kubernetes components are responsible for configuring a single-node Kubernetes cluster on the edge device, and subsequently deploying containers to the cluster.
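As an illustration of what such a custom component might do, the following sketch applies a Deployment to the local single-node cluster with the official Kubernetes Python client; the kubeconfig path (typical of k3s), namespace, and container image are placeholders.

```python
# Minimal sketch of what a custom Kubernetes IoT Greengrass component might do:
# apply a Deployment to the local single-node cluster with the official
# kubernetes Python client. The kubeconfig path, namespace, and image are placeholders.
from kubernetes import client, config

config.load_kube_config(config_file="/etc/rancher/k3s/k3s.yaml")

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="mission-app"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "mission-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "mission-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="mission-app",
                        image="localhost:5000/mission-app:1.0.0",  # image staged in the local registry
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```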
Step 6
The Kubernetes cluster runs adjacent to the IoT Greengrass software on the operating system of the edge device. This Kubernetes cluster is responsible for running the mission containers, which are either deployed by the custom Kubernetes IoT Greengrass components or run locally by operators.
Step 7
The mission containers are built and staged in either a cloud-based container registry or Amazon S3. These containers are then deployed to the edge device through the custom Kubernetes components using IoT Greengrass deployments.
Step 8
A local container registry can be deployed and configured by the Kubernetes components. The Kubernetes components can stage container images to the registry from the cloud as part of IoT Greengrass deployments.
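A simplified sketch of this staging flow, using boto3 and the Docker SDK for Python, is shown below; the Amazon ECR registry, repository name, and local registry address are placeholders.

```python
# Minimal sketch: staging a container image from Amazon ECR into a local
# registry during an IoT Greengrass deployment, using boto3 and the Docker SDK
# for Python. Registry addresses and the repository name are placeholders.
import base64
import boto3
import docker

ecr = boto3.client("ecr", region_name="us-east-1")
token = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(token["authorizationToken"]).decode().split(":")
registry = token["proxyEndpoint"].replace("https://", "")

docker_client = docker.from_env()
docker_client.login(username=username, password=password, registry=registry)

# Pull from ECR, retag for the local registry, and push so the cluster can pull
# the image even when the cloud link is unavailable.
image = docker_client.images.pull(f"{registry}/mission-app", tag="1.0.0")
image.tag("localhost:5000/mission-app", tag="1.0.0")
docker_client.images.push("localhost:5000/mission-app", tag="1.0.0")
```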
Get Started
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
-
Operational Excellence
The IoT Greengrass, IoT Core, Systems Manager, and Amazon CloudWatch services facilitate the secure provisioning and onboarding of edge devices, as well as the deployment of edge applications. This is achieved through the over-the-air deployment capabilities provided by IoT Core and IoT Greengrass. Furthermore, these services enable proactive monitoring of edge device health and operational status using the monitoring and logging capabilities of IoT Greengrass and CloudWatch. Additionally, they enforce consistent configurations across the edge fleet with AWS IoT Core thing groups, IoT Greengrass deployments, and Systems Manager for operating system and package management.
-
Security
This Guidance uses unique X.509 certificates for secure device authentication and TLS-encrypted communication. Device permissions are scoped using the device’s IoT policies and AWS Identity and Access Management (IAM) roles. Certificates and keys are stored in secure hardware like hardware security modules (HSMs) and trusted platform modules (TPMs). AWS Secrets Manager and the IoT Greengrass secret manager component facilitate secure synchronization of credentials between the cloud and the edge. These services provide the foundational security capabilities for data protection and access control in the two architecture patterns.
-
Reliability
The IoT Greengrass service enables disconnected application management, facilitating offline operation and data processing to help ensure mission-critical capabilities remain functional even in disconnected environments. When the edge devices are connected to the cloud, they can receive regular software updates and patches using the capabilities of Systems Manager and the over-the-air deployment features of IoT Greengrass. This helps address vulnerabilities and maintain the overall system reliability. Furthermore, the IoT Greengrass service is designed to operate in environments where network connections may be intermittent or disconnected for extended periods of time or indefinitely.
-
Performance Efficiency
This Guidance uses IoT Greengrass to deploy edge applications that process data closer to the source, reducing latency and bandwidth requirements. AWS customers can deploy edge applications like ML and video analytics to filter, preprocess, and act on data at the edge, minimizing raw data transfer to the cloud. This Guidance also optimizes resource utilization by allowing AWS customers to tailor hardware and edge deployments according to their mission’s needs. It implements caching and buffering with IoT Greengrass to enable offline operation, and it uses Systems Manager monitoring to proactively optimize performance, enabling data to be processed at the edge without transferring it back to the cloud.
-
Cost Optimization
By using IoT Greengrass, AWS customers can configure edge environments tailored to their needs, optimizing resource utilization and cost-effectiveness. The flexible architecture of this offering enables deploying only the necessary components, minimizing unnecessary resource consumption. It optimizes data transfer costs by processing and analyzing data at the edge, reducing cloud transmission and using purpose-built AWS services for further storage and analysis. Additionally, IoT Greengrass enables AWS customers to right-size the edge hardware platform for their specific use case. This allows data to be processed locally at the edge so that only the necessary pre-processed data is transferred, especially over expensive network links such as satellite.
-
Sustainability
This Guidance allows AWS customers to choose hardware that is optimized for their specific mission requirements so that power and cooling are tailored accordingly. For example, IoT Greengrass and containers allow the software deployment footprint to be optimized and reduced, minimizing unnecessary resource consumption. This Guidance also optimizes data transfer resources by processing and analyzing data at the edge, which reduces the amount of data transmitted to the cloud and consequently minimizes power consumption due to lower network bandwidth needs. These services allow AWS customers to right-size their edge application deployment and compute needs, as well as process data locally close to the source without the need to transfer raw data back to the cloud over resource-intensive data links.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.