This Guidance shows two architectural patterns for deploying applications in tactical edge environments on AWS using third-party hardware devices and platforms. The term "edge" refers to compute, network, and storage capabilities that operate outside AWS Regions, often in scenarios where communication with the cloud may be limited by low bandwidth, intermittent connectivity, or extended periods of disconnection. In addition to establishing a foundational tactical edge architecture, this Guidance offers deployment patterns that use both native AWS Internet of Things (IoT) services and Kubernetes, an open-source container orchestration system. AWS customers can use this Guidance to reliably deploy mission-critical applications in tactical edge environments with limited or intermittent network connectivity, such as mobile command centers, tactical vehicles, and operating bases.

Please note: see the Disclaimer section below.

Architecture Diagram

Download the architecture diagram PDF 
  • Deploy applications onto third-party hardware
  • This architecture diagram shows how to deploy tactical edge applications from the cloud onto third-party edge hardware devices with AWS services.

  • Kubernetes-based deployment
  • This architecture extends the deployment onto third-party hardware to a single-node Kubernetes cluster on the third-party hardware.
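In the Kubernetes-based pattern, the edge application ultimately runs as an ordinary Kubernetes workload on the single-node cluster. As a minimal sketch (the application name, container image, and resource limits below are hypothetical placeholders, not part of this Guidance), a Deployment might look like:

```yaml
# Hypothetical manifest: runs one replica of an edge application on the
# single-node cluster. Names, image, and resource limits are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-analytics
spec:
  replicas: 1                      # single node, single replica
  selector:
    matchLabels:
      app: edge-analytics
  template:
    metadata:
      labels:
        app: edge-analytics
    spec:
      containers:
        - name: edge-analytics
          image: registry.example.com/edge-analytics:1.0.0
          resources:
            limits:                # right-size for constrained edge hardware
              cpu: "500m"
              memory: "256Mi"
```

Pinning `replicas: 1` and setting explicit resource limits reflects the single-node, resource-constrained nature of tactical edge hardware.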

Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

  • The IoT Greengrass, IoT Core, Systems Manager, and Amazon CloudWatch services facilitate the secure provisioning and onboarding of edge devices, as well as the deployment of edge applications. This is achieved through the over-the-air deployment capabilities provided by IoT Core and IoT Greengrass. Furthermore, these services enable proactive monitoring of edge device health and operational status using the monitoring and logging capabilities of IoT Greengrass and CloudWatch. Additionally, they enforce consistent configurations across the edge fleet with IoT Core groups, IoT Greengrass deployments, and Systems Manager for operating system and package management.

    Read the Operational Excellence whitepaper 
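The over-the-air deployments mentioned above are driven by the AWS IoT Greengrass v2 `CreateDeployment` API, which targets an IoT thing group and a map of component versions. The sketch below only assembles the request payload; the thing-group ARN, account ID, and component names are hypothetical placeholders, and actually creating the deployment would require AWS credentials.

```python
# Sketch: building an AWS IoT Greengrass v2 over-the-air deployment request.
# The ARN, account ID, and component names are hypothetical placeholders.

def build_deployment_request(target_arn, components, name):
    """Assemble the payload shape expected by greengrassv2 create_deployment."""
    return {
        "targetArn": target_arn,      # IoT thing group receiving the OTA update
        "deploymentName": name,
        "components": {
            comp: {"componentVersion": version}
            for comp, version in components.items()
        },
    }

request = build_deployment_request(
    "arn:aws:iot:us-east-1:123456789012:thinggroup/TacticalEdgeFleet",
    {"com.example.EdgeAnalytics": "1.0.0", "aws.greengrass.Nucleus": "2.12.0"},
    "tactical-edge-ota-rollout",
)
# With credentials configured, the deployment would be submitted with:
#   boto3.client("greengrassv2").create_deployment(**request)
```

Because the deployment targets the thing group rather than individual devices, every device in the fleet converges on the same component versions, which is how consistent configurations are enforced across the edge fleet.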
  • This Guidance uses unique X.509 certificates for secure device authentication and TLS-encrypted communication. Device permissions are scoped using the device’s IoT policies and AWS Identity and Access Management (IAM) roles. Certificates and keys are stored in secure hardware like hardware security modules (HSMs) and trusted platform modules (TPMs). Secrets Manager and IoT Greengrass secrets manager facilitate secure synchronization of credentials between the cloud and the edge. These services provide the foundational security capabilities for data protection and access control in the two architecture patterns.

    Read the Security whitepaper 
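Scoping device permissions with IoT policies, as described above, typically means restricting each device to its own MQTT client ID and topics. The sketch below builds such a least-privilege policy document; the topic layout and account ID are hypothetical examples, while `${iot:Connection.Thing.ThingName}` is a real AWS IoT policy variable that resolves to the connecting device's registered thing name.

```python
import json

# Sketch: a least-privilege AWS IoT policy scoping each device to its own
# client ID and telemetry topic. Topic names and account ID are hypothetical.

def device_policy(account_id, region="us-east-1"):
    base = f"arn:aws:iot:{region}:{account_id}"
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {   # A device may only connect using its registered thing name.
                "Effect": "Allow",
                "Action": "iot:Connect",
                "Resource": f"{base}:client/${{iot:Connection.Thing.ThingName}}",
            },
            {   # ...and may only publish to its own telemetry topic.
                "Effect": "Allow",
                "Action": "iot:Publish",
                "Resource": f"{base}:topic/edge/${{iot:Connection.Thing.ThingName}}/telemetry",
            },
        ],
    }, indent=2)

policy = device_policy("123456789012")
```

Because the policy variable resolves per connection, one policy document serves the whole fleet while still isolating each device's permissions.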
  • The IoT Greengrass service enables disconnected application management, facilitating offline operation and data processing to help ensure mission-critical capabilities remain functional even in disconnected environments. When the edge devices are connected to the cloud, they can receive regular software updates and patches using the capabilities of Systems Manager and the over-the-air deployment features of IoT Greengrass. This helps address vulnerabilities and maintain the overall system reliability. Furthermore, the IoT Greengrass service is designed to operate in environments where network connections may be intermittent or disconnected for extended periods of time or indefinitely.

    Read the Reliability whitepaper 
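The disconnected operation described above follows a store-and-forward pattern: telemetry is buffered locally while the link is down and drained when connectivity returns. The sketch below illustrates the pattern in plain Python; the class, buffer size, and callback are illustrative, not IoT Greengrass APIs.

```python
from collections import deque

# Sketch of the store-and-forward pattern behind disconnected operation:
# buffer telemetry locally, then flush when connectivity returns.
# This is an illustration, not an IoT Greengrass API.

class StoreAndForward:
    def __init__(self, publish, max_buffer=1000):
        self.publish = publish                  # sends one record to the cloud
        self.buffer = deque(maxlen=max_buffer)  # oldest records drop when full
        self.connected = False

    def send(self, record):
        if self.connected:
            self.publish(record)
        else:
            self.buffer.append(record)          # hold until the link returns

    def on_reconnect(self):
        self.connected = True
        while self.buffer:                      # drain backlog in arrival order
            self.publish(self.buffer.popleft())

sent = []
edge = StoreAndForward(sent.append)
edge.send({"temp": 21.5})    # link is down: buffered, not sent
edge.on_reconnect()          # link restored: backlog drains
edge.send({"temp": 22.0})    # now forwarded immediately
```

The bounded buffer is the key design choice for indefinite disconnection: when storage is exhausted, the oldest data is sacrificed so the device keeps operating rather than failing.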
  • This Guidance uses IoT Greengrass to deploy edge applications that process data closer to the source, reducing latency and bandwidth requirements. AWS customers can deploy edge applications like ML and video analytics to filter, preprocess, and act on data at the edge, minimizing raw data transfer to the cloud. This Guidance also optimizes resource utilization by allowing AWS customers to tailor hardware and edge deployments according to their mission’s needs. It implements caching and buffering to enable offline operation using IoT Greengrass; it uses Systems Manager for monitoring to proactively optimize performance, enabling edge data processing without the need to transfer data back to the cloud.

    Read the Performance Efficiency whitepaper 
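Filtering and preprocessing at the edge, as described above, usually means forwarding only anomalies verbatim and collapsing routine readings into a compact aggregate. The sketch below shows one such reduction; the record shape and threshold are hypothetical assumptions, not part of this Guidance.

```python
# Sketch: aggregate routine sensor readings at the edge so only a compact
# summary crosses the constrained link. Record shape and threshold are
# hypothetical.

def summarize(readings, threshold=50.0):
    """Keep anomalous readings verbatim; reduce the rest to one aggregate."""
    anomalies = [r for r in readings if r["value"] > threshold]
    normal = [r["value"] for r in readings if r["value"] <= threshold]
    summary = {
        "count": len(normal),
        "mean": sum(normal) / len(normal) if normal else None,
    }
    return {"anomalies": anomalies, "summary": summary}

payload = summarize([
    {"sensor": "s1", "value": 12.0},
    {"sensor": "s1", "value": 97.5},   # above threshold: forwarded in full
    {"sensor": "s2", "value": 30.0},
])
# Three raw readings collapse into one anomaly plus a two-reading aggregate.
```

Deployed as an edge component, logic like this is what reduces latency and bandwidth requirements: raw data never leaves the device, only the decisions and summaries derived from it.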
  • By using IoT Greengrass, AWS customers can configure edge environments tailored to their needs, optimizing resource utilization and cost-effectiveness. The flexible architecture of this offering enables deploying only the necessary components, minimizing unnecessary resource consumption. It optimizes data transfer costs by processing and analyzing data at the edge, reducing cloud transmission and using purpose-built AWS services for further storage and analysis. Additionally, IoT Greengrass enables AWS customers to right-size the edge hardware platform for their specific use case. This allows processing data locally at the edge before transferring only the necessary pre-processed data, especially over expensive network links like satellite.

    Read the Cost Optimization whitepaper 
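The data-transfer savings described above can be estimated with simple arithmetic. The sketch below compares daily transfer volume with and without edge preprocessing; the sample rate, record size, and 2% reduction factor are illustrative assumptions, not measured figures.

```python
# Sketch: back-of-the-envelope comparison of daily transfer volume with and
# without edge preprocessing. All rates and the reduction factor are
# illustrative assumptions.

def daily_transfer_mb(sample_rate_hz, bytes_per_sample, reduction=1.0):
    raw_bytes = sample_rate_hz * bytes_per_sample * 86_400  # bytes per day
    return raw_bytes * reduction / 1_000_000                # MB actually sent

raw_mb = daily_transfer_mb(10, 512)             # ship every raw reading
filtered_mb = daily_transfer_mb(10, 512, 0.02)  # ship ~2% after edge filtering
```

Over an expensive satellite link priced per megabyte, a reduction of this shape is where edge processing pays for the hardware it runs on.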
  • This Guidance lets AWS customers choose hardware that is optimized for their specific mission requirements so that power and cooling are tailored accordingly. For example, IoT Greengrass and containers allow for optimizing and reducing the software deployment footprint, minimizing unnecessary resource consumption. This Guidance also optimizes data transfer resources by processing and analyzing data at the edge, which reduces the amount of data transmitted to the cloud and consequently minimizes power consumption due to lower network bandwidth needs. These services allow AWS customers to right-size their edge application deployment and compute needs, as well as process data locally close to the source without the need to transfer raw data back to the cloud over resource-intensive data links.

    Read the Sustainability whitepaper 

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.

References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.
