This Guidance demonstrates how to build satellite communications (SATCOM) control plane analytics pipelines. Given bursts of time-series metrics, this Guidance helps SATCOM operators transform their data to identify trends that impact their business outcomes. It then renders results using business intelligence tools to display data-rate trends on a geo-map, and applies machine learning (ML) to flag unexpected signal-to-noise ratio (SNR) deviations. Moreover, this Guidance offers a low/no-code approach, enabling SATCOM operators to discover new insights into their data, achieve automatic scaling, lower costs, and reduce operational overhead.

Architecture Diagram

Download the architecture diagram PDF 

Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.

The architecture diagram above is an example of a solution created with Well-Architected best practices in mind. To be fully Well-Architected, follow as many of these best practices as possible.

  • There are two key ways you can respond to incidents and events when operating this Guidance. First, deviations in the data invoke a Lambda function, which filters events by severity and, if necessary, sends an Amazon SNS email or SMS text notification to your operations team. Second, feedback is incorporated into the SageMaker machine learning model as additional training data. This improves the quality of the model by continually adjusting the threshold for SATCOM data anomalies.

    Read the Operational Excellence whitepaper 
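The severity-filtering step described above can be sketched as a Lambda handler. This is a minimal sketch, not the Guidance's actual code: the deviation thresholds, event field names, and `OPS_TOPIC_ARN` environment variable are illustrative assumptions.

```python
import json
import os

# Illustrative severity thresholds (assumptions, not values from the Guidance).
WARN_DB = 2.0
CRIT_DB = 4.0

def classify_severity(observed_snr_db, expected_snr_db,
                      warn_db=WARN_DB, crit_db=CRIT_DB):
    """Map an SNR deviation to a severity level."""
    deviation = abs(observed_snr_db - expected_snr_db)
    if deviation >= crit_db:
        return "CRITICAL"
    if deviation >= warn_db:
        return "WARNING"
    return "OK"

def lambda_handler(event, context):
    """Filter an anomaly event by severity; notify operations via SNS if needed."""
    severity = classify_severity(event["observed_snr_db"],
                                 event["expected_snr_db"])
    if severity != "OK":
        import boto3  # deferred so the pure logic above is testable offline
        sns = boto3.client("sns")
        sns.publish(
            TopicArn=os.environ["OPS_TOPIC_ARN"],  # hypothetical env var
            Subject=f"SATCOM SNR anomaly: {severity}",
            Message=json.dumps(event),
        )
    return {"severity": severity}
```

Keeping the classification logic separate from the SNS publish call makes the thresholding easy to unit test and to retune as the SageMaker model's feedback loop shifts the anomaly boundary.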
  • A number of design decisions account for people and machine access when deploying this Guidance. For one, the principle of least privilege is applied, with all services using the minimal IAM role permissions needed to perform their function. Additionally, under the AWS Shared Responsibility Model, AWS is responsible for the confidentiality, integrity, and availability of its managed services. This means that those who use this Guidance are not responsible for protecting and maintaining the underlying compute and network for the services used (such as Lambda and QuickSight); AWS implements the appropriate controls and maintenance as outlined by its internal policies and the many regulatory, legal, and compliance frameworks it follows. Moreover, server-side encryption through AWS KMS is applied to the Kinesis Data Firehose records. Finally, the post-processed SATCOM analytics data lake leverages Amazon S3, which automatically encrypts all new objects added to buckets on the server side, using AES-256 by default.

    Read the Security whitepaper 
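Server-side encryption on a Kinesis Data Firehose delivery stream, as mentioned above, can be enabled with the `StartDeliveryStreamEncryption` API. The sketch below is an assumption about how you might wire it up, not code from the Guidance; the stream name is hypothetical.

```python
def firehose_encryption_config(kms_key_arn=None):
    """Build the DeliveryStreamEncryptionConfigurationInput structure.

    With no key ARN, Firehose encrypts with an AWS-owned key; passing a
    KMS key ARN switches to a customer-managed key.
    """
    if kms_key_arn is None:
        return {"KeyType": "AWS_OWNED_CMK"}
    return {"KeyType": "CUSTOMER_MANAGED_CMK", "KeyARN": kms_key_arn}

def enable_firehose_encryption(stream_name, kms_key_arn=None):
    """Turn on server-side encryption for an existing delivery stream."""
    import boto3  # deferred so the config builder is testable offline
    firehose = boto3.client("firehose")
    firehose.start_delivery_stream_encryption(
        DeliveryStreamName=stream_name,
        DeliveryStreamEncryptionConfigurationInput=firehose_encryption_config(
            kms_key_arn),
    )
```

For the S3 side of the data lake, no action is needed: as noted above, S3 applies AES-256 server-side encryption to all new objects by default.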
  • This architecture leverages serverless tooling that automatically scales up (or down) on demand, helping you implement a highly available network topology. In addition, Lambda runs functions in multiple Availability Zones to ensure that it is available to process events in the rare, but possible, case of a service interruption in a single zone. The Lambda functions are also stateless, enabling as many copies of a function as needed to scale to the rate of incoming events. Finally, the Amazon S3 Standard storage class is designed for 99.99% availability.

    Read the Reliability whitepaper 
  • The AWS services used throughout this Guidance were selected to provide a centralized and managed capability that ensures you can efficiently scale your implementation to any number of teleports and configurations. In addition, the data partitioning in Kinesis Data Firehose and the Amazon S3 data lake enables highly efficient queries in Athena downstream. Also, consider reviewing the associated blog, Creating satellite communications data analytics pipelines with AWS serverless technologies, which can help you get started through a code repository and AWS CloudFormation templates.

    Read the Performance Efficiency whitepaper 
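The partitioning mentioned above pays off because Athena can prune S3 partitions that a query's WHERE clause excludes. The sketch below illustrates one common Hive-style layout; the prefix, table, and column names are assumptions for illustration, not the Guidance's actual schema.

```python
from datetime import datetime, timezone

def partitioned_key(teleport_id, ts, prefix="satcom-metrics"):
    """Hive-style S3 key prefix so Athena can prune partitions."""
    return (f"{prefix}/teleport={teleport_id}/"
            f"year={ts.year:04d}/month={ts.month:02d}/day={ts.day:02d}/")

def athena_daily_query(teleport_id, ts, table="satcom_metrics"):
    """A query that scans only a single teleport-day partition."""
    return (f"SELECT AVG(data_rate_mbps) FROM {table} "
            f"WHERE teleport = '{teleport_id}' "
            f"AND year = {ts.year} AND month = {ts.month} AND day = {ts.day}")
```

Because Athena bills by data scanned, restricting a query to one partition like this reduces both latency and cost compared with scanning the whole data lake.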
  • This Guidance scales to continually match demand while using only the minimum resources required, in two primary ways. First, the control plane analytics pipeline leverages AWS serverless components, so costs are only incurred when jobs run. Second, the Guidance can scale up and down (to zero) as needed: as the number of satellite teleports and remote terminals grows, the architecture adapts to match demand, such as by auto scaling the number of workers for an AWS Glue job.

    Read the Cost Optimization whitepaper 
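Auto scaling Glue job workers, as described above, is enabled through the `--enable-auto-scaling` job argument; Glue then treats the worker count as a maximum and adds or removes workers with load. The helper below builds arguments for `glue.create_job`; the job name, worker type, and worker cap are illustrative assumptions.

```python
def glue_job_args(script_s3_path, role_arn, max_workers=10):
    """Arguments for boto3 glue.create_job with auto scaling enabled.

    With --enable-auto-scaling set, NumberOfWorkers is the *maximum*;
    AWS Glue scales workers up and down while the job runs.
    """
    return {
        "Name": "satcom-analytics-job",  # hypothetical job name
        "Role": role_arn,
        "GlueVersion": "4.0",
        "WorkerType": "G.1X",
        "NumberOfWorkers": max_workers,
        "Command": {
            "Name": "glueetl",
            "ScriptLocation": script_s3_path,
            "PythonVersion": "3",
        },
        "DefaultArguments": {"--enable-auto-scaling": "true"},
    }
```

Capping `max_workers` at a modest value keeps worst-case cost bounded while still letting light workloads run on fewer workers.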
  • This Guidance provides two benefits to help you meet your sustainability commitments. The first is that the IT infrastructure is elastic: it scales up and down based on usage and does not provision excess compute resources that create unintended emissions. You can track your CO2 emissions using the AWS sustainability tools. The second gain is through the agility brought to engineering teams, where technologies like AWS Glue and QuickSight can help you optimize your engineering operations by increasing efficiency and minimizing emissions.

    Read the Sustainability whitepaper 

Implementation Resources

A detailed guide is provided for you to experiment with and use within your AWS account. It covers each stage of working with the Guidance, including deployment, usage, and cleanup.

The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
