Reduce attack surface and vulnerabilities with a cybersecurity solution to protect your critical infrastructure
This Guidance demonstrates how to monitor connectivity between operational technology (OT), IT, and external networks to help ensure the security and integrity of critical industrial systems. It provides centralized monitoring and threat management for connected assets, so you can identify and prioritize risks to your OT environment through a holistic view of your assets and threat detection capabilities. With comprehensive monitoring and reporting capabilities, this Guidance also supports auditing and compliance: it generates detailed reports on asset inventory, vulnerabilities, security events, and timeline-based auditing of network communications, reducing your security risk while helping you meet compliance requirements.
Please note: [Disclaimer]
Architecture Diagram

[Architecture diagram description]
Step 1
In the Purdue model for industrial control system (ICS) security, Level 0 refers to the physical process: sensors and actuators, field devices, solenoid valves, and motors. For basic control, Level 1 consists of programmable logic controllers (PLC), distributed control systems (DCS), and safety instrumented systems (SIS) that interface with the electromechanical devices in Level 0.
Step 2
At Level 2, DCS, Supervisory Control and Data Acquisition (SCADA), and human-machine interfaces (HMIs) provide control and monitoring of the manufacturing process. One or more Dragos sensors collect data from the control system network through a mirror port on an existing network switch or a network traffic access point (TAP).
Step 3
Level 3 consists of historians, engineering workstations, and other systems that manage manufacturing operations. One or more Dragos sensors collect data from the supervisory network through a mirror port on an existing network switch or network TAP. Level 3.5 denotes the demilitarized zone (DMZ) that separates the corporate network from the industrial control systems (ICS) environment.
Step 4
The Dragos sensor converts network traffic into lightweight metadata, which is then forwarded to Dragos SiteStore for correlation and processing through an encrypted connection that supports TLS and Internet Protocol Security (IPSec) protocols.
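To make the idea of forwarding lightweight metadata over an encrypted channel concrete, here is a purely illustrative Python sketch that sends a JSON metadata record over a TLS connection. This is not the Dragos sensor's actual implementation; the hostname, port, certificate files, and payload shape are all assumptions.

```python
# Illustrative only: forward one JSON metadata record over a TLS-encrypted
# TCP connection. This is NOT how the Dragos sensor is implemented; the
# host, port, certificates, and record fields are placeholders.
import json
import socket
import ssl

SITESTORE_HOST = "sitestore.example.internal"  # assumed hostname
SITESTORE_PORT = 8443                          # assumed port

def forward_metadata(record: dict) -> None:
    """Send one metadata record over a TLS session with server verification."""
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    context.load_cert_chain(certfile="sensor.crt", keyfile="sensor.key")  # assumed client certificate
    with socket.create_connection((SITESTORE_HOST, SITESTORE_PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=SITESTORE_HOST) as tls_sock:
            tls_sock.sendall(json.dumps(record).encode("utf-8") + b"\n")

forward_metadata({"src": "10.0.1.15", "dst": "10.0.2.20", "protocol": "modbus", "bytes": 512})
```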
Step 5
Dragos SiteStore, hosted on Amazon Elastic Kubernetes Service (Amazon EKS), serves as the management and reporting console for the Dragos sensor data. To enable multi-site OT access, SiteStore can reside in a separate virtual private cloud (VPC). Dragos SiteStore can also be deployed on-premises, depending on your needs.
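As an illustration, the following sketch creates an Amazon EKS cluster in a dedicated VPC with boto3. It assumes the VPC, subnets, and the cluster IAM role already exist; the cluster name, IDs, Kubernetes version, and Region are placeholders, and Dragos SiteStore itself would be installed onto the cluster separately according to the vendor's instructions.

```python
# A minimal sketch: create an EKS cluster in a dedicated VPC for SiteStore.
# Names, IDs, version, and Region are assumptions.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

response = eks.create_cluster(
    name="dragos-sitestore",                                   # assumed cluster name
    version="1.29",                                            # assumed Kubernetes version
    roleArn="arn:aws:iam::111122223333:role/EksClusterRole",   # assumed cluster role
    resourcesVpcConfig={
        "subnetIds": ["subnet-0abc1234", "subnet-0def5678"],   # subnets in the SiteStore VPC
        "endpointPublicAccess": False,                         # keep the API endpoint private
        "endpointPrivateAccess": True,
    },
)
print(response["cluster"]["status"])  # typically CREATING
```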
Step 6
Dragos CentralStore, residing on Amazon EKS, provides enterprise-scale, multi-site OT visibility, detection, and response. Dragos CentralStore can also be deployed on-premises depending on your needs.
Step 7
The OT security event data from SiteStore and CentralStore is sent to a security information and event management (SIEM) system or a security operations center (SOC) that features AWS Security Hub, Amazon GuardDuty, Amazon Macie, and AWS CloudTrail, among other services. Considerations for the security operations center in the cloud provides more context for a SOC function when you operate in the cloud.
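As a hedged example of what centralizing OT events in Security Hub can look like, the sketch below imports a finding in the AWS Security Finding Format (ASFF) with boto3. The account ID, Region, finding ID, and asset identifier are placeholders; how the Dragos platform actually exports events depends on your SIEM or SOC integration.

```python
# Publish an example OT security event to AWS Security Hub in ASFF.
# Account ID, Region, and finding details are placeholders.
import boto3
from datetime import datetime, timezone

securityhub = boto3.client("securityhub", region_name="us-east-1")
account_id = "111122223333"   # assumed account
region = "us-east-1"
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

finding = {
    "SchemaVersion": "2018-10-08",
    "Id": "dragos-ot-event-0001",                      # assumed unique finding ID
    "ProductArn": f"arn:aws:securityhub:{region}:{account_id}:product/{account_id}/default",
    "GeneratorId": "dragos-sitestore",
    "AwsAccountId": account_id,
    "Types": ["TTPs/Initial Access"],
    "CreatedAt": now,
    "UpdatedAt": now,
    "Severity": {"Label": "HIGH"},
    "Title": "Unauthorized connection to PLC detected",
    "Description": "Example OT detection forwarded from the Dragos platform.",
    "Resources": [{"Type": "Other", "Id": "plc-line-3"}],  # assumed asset identifier
}

response = securityhub.batch_import_findings(Findings=[finding])
print(response["SuccessCount"], "finding(s) imported")
```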
Step 8
Syslog data associated with OT security events can be stored in an Amazon Simple Storage Service (Amazon S3) data lake. The data can be analyzed using Amazon Athena, Amazon OpenSearch Service, and Amazon SageMaker. This data can also be combined with IT security data to provide a centralized view of all OT and IT security events to enable alerting and automatic remediation.
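The following sketch shows one way to query the syslog data lake with Amazon Athena through boto3. The database, table, column names, and results bucket are assumptions; they depend on how you catalog the syslog data (for example, with AWS Glue).

```python
# Query critical OT syslog events from the last 24 hours with Athena.
# Database, table, columns, and output bucket are assumptions.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = """
    SELECT event_time, source_ip, destination_ip, message
    FROM ot_security_logs.syslog_events          -- assumed Glue database.table
    WHERE severity = 'critical'
      AND event_time > current_timestamp - interval '24' hour
    ORDER BY event_time DESC
    LIMIT 100
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "ot_security_logs"},
    ResultConfiguration={"OutputLocation": "s3://example-ot-athena-results/"},  # assumed bucket
)
print("Query execution ID:", response["QueryExecutionId"])
```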
Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
Amazon CloudWatch allows you to optimize your operations through monitoring, logging, alarms, and dashboards for your Amazon EKS environments. CloudWatch monitors bandwidth utilization, performance, and traffic to the Dragos platform. Visualizing and analyzing these metrics with CloudWatch helps you identify performance bottlenecks and troubleshoot requests.
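As a sketch of this kind of monitoring, the example below creates a CloudWatch alarm on node network traffic with boto3. The metric, Auto Scaling group name, threshold, and SNS topic are placeholders; choose metrics that reflect the traffic your environment forwards to the Dragos platform.

```python
# Alarm when average NetworkIn on the node group stays high for 15 minutes.
# The Auto Scaling group name, threshold, and SNS topic are assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="dragos-node-network-in-high",
    Namespace="AWS/EC2",
    MetricName="NetworkIn",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "dragos-sitestore-nodes"}],  # assumed ASG
    Statistic="Average",
    Period=300,                      # 5-minute periods
    EvaluationPeriods=3,
    Threshold=500_000_000,           # bytes per period; tune to your baseline
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ot-ops-alerts"],  # assumed SNS topic
)
```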
Security
AWS Key Management Service (AWS KMS) offers centralized control over the cryptographic keys used to protect your data. GuardDuty provides visibility into threats through detailed security findings that support remediation. AWS Config and CloudWatch help you track configuration changes and activity within your accounts, and Security Hub gives you a holistic view of the security of the environment.
These services work together to provide a secure footprint for this Guidance: AWS Config, AWS Identity and Access Management (IAM), and CloudWatch help you lock down the environment, while GuardDuty and Security Hub help you maintain visibility and stay current.
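For example, a minimal sketch of applying AWS KMS to data at rest might create a customer managed key and require it for default encryption on an S3 bucket that stores OT security data. The bucket name and key alias are placeholders.

```python
# Create a customer managed KMS key and enforce it as the default
# server-side encryption for an assumed OT security data bucket.
import boto3

kms = boto3.client("kms", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

key = kms.create_key(Description="Key for OT security event data")["KeyMetadata"]
kms.create_alias(AliasName="alias/ot-security-data", TargetKeyId=key["KeyId"])

s3.put_bucket_encryption(
    Bucket="example-ot-security-events",        # assumed bucket
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key["Arn"],
                },
                "BucketKeyEnabled": True,       # reduces KMS request costs
            }
        ]
    },
)
```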
Reliability
Amazon EKS provides a production-grade Kubernetes control plane designed to be highly available and fault tolerant. It runs the Kubernetes control plane across three Availability Zones in an AWS Region and automatically manages the availability and scalability of the Kubernetes API servers and clusters. Amazon EKS automatically scales control plane instances based on load; it also detects and replaces unhealthy control plane instances and automatically patches the control plane.
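A small sketch, assuming the cluster name used earlier, can verify that the managed control plane is active and reachable before deploying Dragos components onto it.

```python
# Check that the EKS control plane reports ACTIVE before deploying workloads.
# The cluster name is an assumption.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

cluster = eks.describe_cluster(name="dragos-sitestore")["cluster"]
print("Status:", cluster["status"])            # expect ACTIVE
print("Endpoint:", cluster["endpoint"])
print("Kubernetes version:", cluster["version"])
```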
Performance Efficiency
Amazon EKS allows you to optimize container orchestration, resource management, and scaling to drive performance efficiency for end users. It uses purpose-built storage services, such as Amazon S3, that reduce latency, increase throughput, and scale as needed. This Guidance uses these services to scale effectively and to offload the undifferentiated heavy lifting of infrastructure management.
Cost Optimization
This Guidance uses a combination of AWS Cost Explorer and right-sizing of Amazon Elastic Compute Cloud (Amazon EC2) instances to optimize the costs of Amazon EKS. Cost Explorer uses the cost allocation tags on the Amazon EC2 instances that are part of a managed node group, once those tags are activated in the AWS Billing and Cost Management console. Cost Explorer helps you analyze your AWS bill with an easy-to-use interface that lets you visualize, understand, and manage AWS costs and usage over time.
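A hedged example of the same breakdown through the Cost Explorer API is shown below: it groups EC2 compute cost by a cost allocation tag with boto3. The tag key, date range, and account context are assumptions, and the tag must already be activated in the Billing and Cost Management console for results to appear.

```python
# Break down EC2 compute cost by an assumed node-group tag for one month.
import boto3

ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},   # assumed month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "SERVICE", "Values": ["Amazon Elastic Compute Cloud - Compute"]}},
    GroupBy=[{"Type": "TAG", "Key": "eks:nodegroup-name"}],    # assumed tag on managed node group instances
)

for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```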
Sustainability
Amazon EC2 Auto Scaling enables you to add or remove nodes or pods as your workload changes; this adjusts the cluster size so the workload runs as efficiently as possible, ensuring optimal resource use. It also supports scheduled or dynamic scaling policies based on metrics such as average CPU utilization or average network in or out. Used in conjunction with Cost Explorer, this gives you opportunities to reduce cost and improve resource efficiency by downsizing instances where possible. A sketch of such a scaling policy follows.
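The sketch below attaches a target tracking scaling policy to the Auto Scaling group behind the node group, keeping average CPU near a target value. The group name and target are placeholders; Kubernetes-level autoscalers (such as Cluster Autoscaler or Karpenter) are common alternatives for scaling EKS nodes.

```python
# Target tracking policy: scale the assumed node-group ASG to hold ~50% average CPU.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="dragos-sitestore-nodes",    # assumed ASG backing the node group
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,                          # aim for ~50% average CPU
    },
)
```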
Implementation Resources

A detailed guide is provided for you to experiment with and use within your AWS account. Each stage of the Guidance, including deployment, usage, and cleanup, is covered to prepare you to deploy it.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content

[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.