Implement a cybersecurity approach to secure OT, Internet of Things (IoT), and Industrial Internet of Things (IIoT) assets
This Guidance helps protect OT infrastructure using cybersecurity services. As digital applications for OT have increased, so has the convergence of OT and IoT technology. However, a more complex OT network can introduce security vulnerabilities. With this Guidance, you can implement a cybersecurity approach that defends against malicious attacks, supports 24/7 passive monitoring, and plans pre-defined actions to avoid security breaches that may lead to plant shutdowns.
Please note: [Disclaimer]
Architecture Diagram
[Architecture diagram description]
Step 1
In the Purdue Model, Level 0 refers to the physical process: sensors and actuators, field devices, solenoid valves, and motors. Level 1 consists of Programmable Logic Controllers (PLCs), Distributed Control System (DCS) controllers, and Safety Instrumented Systems (SIS) that interface with the electromechanical devices in Level 0 to provide basic control.
Step 2
At Level 2, DCS, Supervisory Control and Data Acquisition (SCADA) systems, and human-machine interfaces (HMIs) provide control and monitoring of the manufacturing process. One or more Claroty xDome collection servers can be installed to collect data from the control system network through a mirror port on an existing network switch or a network test access point (TAP). Claroty Edge can be deployed to actively discover devices.
Step 3
Level 3 consists of historians, engineering workstations, and other systems that manage manufacturing operations. One Claroty xDome collection server collects data from the supervisory network through a mirror port on the core network.
Level 3.5 denotes the demilitarized zone (DMZ) that separates the corporate network from the industrial control systems (ICS) environment. One or more IoT gateways that collect wireless sensor data from Level 0 reside behind the firewall.
Step 4
The xDome collection server converts network traffic into lightweight metadata, which is then forwarded to Claroty xDome software as a service (SaaS) for correlation and processing through an encrypted connection that supports TLS and IP Security (IPsec) protocols.
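The forwarding mechanism of the collection server is proprietary, but the encrypted-transport requirement can be illustrated in general terms. The following Python sketch shows how a TLS client context enforcing TLS 1.2 or newer with certificate verification might be configured; the function name is hypothetical and does not reflect any actual xDome implementation.

```python
import ssl

# Sketch: a TLS client context such as a collection server might use when
# forwarding metadata to a SaaS endpoint. Function name is hypothetical.
def make_forwarding_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # verifies server certificates
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # require TLS 1.2 or newer
    return ctx

context = make_forwarding_context()
# A socket wrapped with this context (context.wrap_socket(...)) would then
# carry the metadata to the SaaS endpoint over the encrypted channel.
```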
Step 5
The Claroty xDome analysis engine sits on the AWS Cloud, providing native, SaaS-delivered security with the latest protections and low total cost of ownership (TCO) due to scalability and continuous updates. Each Amazon Virtual Private Cloud (Amazon VPC) can sit in a corresponding AWS Region to enable multi-site access.
Claroty xDome uses Amazon Elastic Kubernetes Service (Amazon EKS) for performance efficiency, Amazon Relational Database Service (Amazon RDS) for disaster recovery, and Amazon Simple Storage Service (Amazon S3) to store backups. Amazon ElastiCache caches frequently accessed data and absorbs spikes in traffic.
Step 6
Claroty xDome sends events and vulnerabilities to AWS Security Hub and natively sends events to Amazon Security Lake using the Open Cybersecurity Schema Framework (OCSF). These services can be part of a comprehensive Security Operations Center (SOC) and Security Information and Event Management (SIEM) workflow that consolidates OT and IIoT security event data and actions.
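Findings imported into Security Hub follow the AWS Security Finding Format (ASFF). As a rough illustration, the sketch below builds a minimal ASFF finding such as an OT security event might produce; the account ID, generator ID, resource ID, and titles are placeholders, not values emitted by any real integration.

```python
from datetime import datetime, timezone

# Sketch: a minimal AWS Security Finding Format (ASFF) finding. All
# identifiers below are placeholders for illustration only.
def make_asff_finding(account_id: str, region: str) -> dict:
    now = datetime.now(timezone.utc).isoformat()
    return {
        "SchemaVersion": "2018-10-08",
        "Id": "ot-event-0001",
        "ProductArn": f"arn:aws:securityhub:{region}:{account_id}:product/{account_id}/default",
        "GeneratorId": "ot-sensor-gateway",       # hypothetical generator name
        "AwsAccountId": account_id,
        "Types": ["Unusual Behaviors/Network Flow"],
        "CreatedAt": now,
        "UpdatedAt": now,
        "Severity": {"Label": "HIGH"},
        "Title": "Unauthorized write to PLC detected",
        "Description": "Example OT event for illustration.",
        "Resources": [{"Type": "Other", "Id": "plc-line-3"}],
    }

finding = make_asff_finding("111122223333", "us-east-1")
# With boto3, findings like this could be imported:
#   boto3.client("securityhub").batch_import_findings(Findings=[finding])
```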
Step 7
Security Lake can work with AWS Partner solutions as well as automation and analytics services, such as Amazon Athena, Amazon OpenSearch Service, and Amazon SageMaker, to gain additional insights into security events.
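As one example of these analytics integrations, Security Lake tables can be queried with Athena SQL. The sketch below assembles such a query; the database and table names are placeholders (actual Security Lake table names depend on your Region and configuration), and the column names assume the OCSF findings schema.

```python
# Sketch: an Athena query over a Security Lake findings table. Database and
# table names are placeholders; columns assume the OCSF findings schema.
DATABASE = "amazon_security_lake_glue_db_us_east_1"          # placeholder
TABLE = "amazon_security_lake_table_us_east_1_sh_findings"   # placeholder

def build_high_severity_query(limit: int = 20) -> str:
    return (
        "SELECT time, severity, finding_info.title "
        f'FROM "{DATABASE}"."{TABLE}" '
        "WHERE severity = 'High' "
        f"ORDER BY time DESC LIMIT {limit}"
    )

query = build_high_severity_query()
# The query could then be submitted with boto3:
#   boto3.client("athena").start_query_execution(
#       QueryString=query,
#       ResultConfiguration={"OutputLocation": "s3://your-results-bucket/"},
#   )
```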
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
-
Operational Excellence
ElastiCache simplifies deploying, operating, and scaling an in-memory cache in the cloud. ElastiCache is a managed service, so AWS takes care of the operational aspects of running Redis, including hardware provisioning, software patching, horizontal and vertical scaling, upgrading engine versions, automatic backup, and monitoring.
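The pattern ElastiCache typically serves here is cache-aside: check the cache first, and fall back to the database only on a miss. The sketch below shows that flow with a plain dict standing in for a Redis client and a stub standing in for the database read; with redis-py you would use `get`/`set` calls against the ElastiCache endpoint instead.

```python
# Sketch: the cache-aside pattern. A dict stands in for Redis, and
# fetch_from_database is a stub for an expensive database read.
cache: dict = {}

def fetch_from_database(key: str) -> str:
    # Placeholder for a query against the primary data store (e.g., Amazon RDS).
    return f"value-for-{key}"

def get_with_cache(key: str) -> str:
    if key in cache:                  # cache hit: skip the database entirely
        return cache[key]
    value = fetch_from_database(key)  # cache miss: read through to the database
    cache[key] = value                # populate the cache for subsequent reads
    return value

first = get_with_cache("asset-42")    # miss: hits the database
second = get_with_cache("asset-42")   # hit: served from the cache
```

Absorbing traffic spikes follows directly from this design: repeated reads for the same hot key are served from memory rather than repeatedly querying the database.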
-
Security
Security Hub provides you with a centralized location to view and manage security alerts, findings, and recommendations across multiple AWS accounts and services. Security Hub in this Guidance provides managed threat intelligence based on analysis from AWS and third-party sources to identify known bad actors.
-
Reliability
Amazon RDS automatically performs backups of the database to Amazon S3. This includes daily snapshots and transaction logs captured every 5 minutes, which aids in disaster recovery. This Guidance also uses multi-Availability Zone (AZ) Amazon RDS database instances to provide automated failover to a standby replica in another AZ. This minimizes downtime if there are AZ failures.
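The Multi-AZ and backup behavior described above maps to a few parameters on the RDS instance. The sketch below shows how they might look as arguments to boto3's `create_db_instance`; the identifier, engine, instance class, and retention period are illustrative choices, not values this Guidance prescribes.

```python
# Sketch: parameters for a Multi-AZ Amazon RDS instance with automated
# backups enabled. All values are illustrative placeholders.
db_params = {
    "DBInstanceIdentifier": "example-db",  # placeholder name
    "Engine": "postgres",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,               # GiB
    "MultiAZ": True,                       # synchronous standby in another AZ
    "BackupRetentionPeriod": 7,            # keep automated backups for 7 days
}
# With credentials in place, the instance could be created with boto3:
#   boto3.client("rds").create_db_instance(
#       **db_params, MasterUsername="...", MasterUserPassword="...")
```

Setting `MultiAZ` to `True` is what enables the automated failover to a standby replica; a `BackupRetentionPeriod` greater than zero is what turns on the daily snapshots and transaction-log capture.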
-
Performance Efficiency
Amazon EKS provides Amazon CloudWatch metrics for monitoring overall cluster health, resource utilization, application performance, and bottlenecks. Amazon EKS integrates tightly with Amazon S3 and Amazon RDS in addition to caching, networking, security, and monitoring services. This integration can help you meet workload requirements around scaling, traffic management, and data access patterns.
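As a concrete example of the cluster-health monitoring described above, the sketch below assembles a CloudWatch `get_metric_statistics` request for node CPU utilization. It assumes Container Insights is enabled on the cluster, and the cluster name is a placeholder.

```python
from datetime import datetime, timedelta, timezone

# Sketch: a CloudWatch request for EKS node CPU utilization over the last
# hour, in 5-minute datapoints. Assumes Container Insights; cluster name
# is a placeholder.
end = datetime.now(timezone.utc)
metric_request = {
    "Namespace": "ContainerInsights",
    "MetricName": "node_cpu_utilization",
    "Dimensions": [{"Name": "ClusterName", "Value": "example-cluster"}],
    "StartTime": end - timedelta(hours=1),
    "EndTime": end,
    "Period": 300,                        # seconds per datapoint
    "Statistics": ["Average", "Maximum"],
}
# stats = boto3.client("cloudwatch").get_metric_statistics(**metric_request)
```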
-
Cost Optimization
A data lake powered by Amazon S3 provides a highly durable, low-cost way to store large amounts of log, packet capture, and forensic data from the Guidance over long periods of time. Additionally, analyzing storage metrics and access patterns can help you better understand your resource usage so you can make informed decisions about optimizing frequently accessed "hot" data versus using less expensive "cold" data archives.
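The hot-versus-cold optimization above is typically expressed as an S3 lifecycle configuration. The sketch below tiers objects to infrequent-access storage, then to an archive class, then expires them; the prefix, day counts, and expiration are illustrative choices, not values this Guidance prescribes.

```python
# Sketch: an S3 lifecycle configuration that tiers aging log data to cheaper
# storage classes. Prefix and day counts are illustrative placeholders.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-and-archive-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # placeholder prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                {"Days": 180, "StorageClass": "GLACIER"},     # cold archive
            ],
            "Expiration": {"Days": 730},    # delete after two years
        }
    ]
}
# The configuration could be applied with boto3:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="your-bucket", LifecycleConfiguration=lifecycle_config)
```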
-
Sustainability
Amazon RDS makes it easy to right-size database instance types and storage based on actual utilization data, which helps you prevent overprovisioning. Additionally, Amazon S3 provides highly durable storage, reducing how often the Guidance must replicate data for redundancy.
Implementation Resources
A detailed guide is provided to experiment and use within your AWS account. Each stage of the Guidance, including deployment, usage, and cleanup, is examined so you can prepare it for use in your environment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.