Break data silos with an oil and gas production monitoring system
This Guidance helps oil and gas companies build a modern data management system to monitor oil and gas production. Data is brought into the cloud from physical infrastructure, then centralized and standardized so that customers can easily access, manage, and consume their operational data. Oil and gas customers can use this data to monitor, forecast, and increase production.
Please note: [Disclaimer]
Architecture Diagram
Step 1
Use partner applications, such as Embassy of Things TwinTalk and Element Unify, to consume historical and real-time telemetry data. You can also consume asset metadata from industrial sources over industrial protocols, such as Modbus and OPC.
Proprietary formats, such as AVEVA PI, can be integrated as well. Connect Internet of Things (IoT)-enabled devices to AWS IoT Core through secure sessions in which the devices are authenticated with X.509 certificates and data is encrypted in transit.
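As an illustration only, the following Python sketch shows how an IoT-enabled field device might publish telemetry to AWS IoT Core over mutual-TLS MQTT using the AWS IoT Device SDK. The endpoint, topic, well identifier, and certificate file paths are placeholder assumptions, not values defined by this Guidance.

# Minimal sketch: publish wellhead telemetry to AWS IoT Core over mutual-TLS MQTT.
import json
import time

from awscrt import mqtt
from awsiot import mqtt_connection_builder

connection = mqtt_connection_builder.mtls_from_path(
    endpoint="xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com",  # your IoT Core endpoint
    cert_filepath="device.pem.crt",        # X.509 device certificate
    pri_key_filepath="private.pem.key",    # device private key
    ca_filepath="AmazonRootCA1.pem",       # Amazon root CA
    client_id="wellpad-42-rtu",            # placeholder device name
)
connection.connect().result()

# Publish one telemetry sample; in practice this would come from the RTU or PLC.
sample = {"well_id": "WELL-042", "tubing_pressure_psi": 812.5, "timestamp": int(time.time())}
connection.publish(
    topic="oilgas/telemetry/WELL-042",
    payload=json.dumps(sample),
    qos=mqtt.QoS.AT_LEAST_ONCE,
)
connection.disconnect().result()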
Step 2
Deploy AWS IoT Greengrass if edge processing is desired for data pre-processing, batching, or running code at the oil field. Install AWS IoT Greengrass on an industrial computer on site.
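For example, a small edge component running on AWS IoT Greengrass could batch per-second readings into per-minute messages before forwarding them to AWS IoT Core. The sketch below uses the Greengrass IPC client from the AWS IoT Device SDK; the topic, batch size, and sensor-reading function are assumptions for illustration.

# Hedged sketch of a Greengrass v2 component that batches readings at the edge.
import json
import time

from awsiot.greengrasscoreipc.clientv2 import GreengrassCoreIPCClientV2
from awsiot.greengrasscoreipc.model import QOS

ipc = GreengrassCoreIPCClientV2()
batch = []

def read_sensor():
    # Placeholder for a Modbus/OPC read performed on the industrial computer.
    return {"well_id": "WELL-042", "flow_rate_bpd": 1250.0, "ts": int(time.time())}

while True:
    batch.append(read_sensor())
    if len(batch) >= 60:  # forward one message per minute instead of one per second
        ipc.publish_to_iot_core(
            topic_name="oilgas/edge/WELL-042/batch",
            qos=QOS.AT_LEAST_ONCE,
            payload=json.dumps(batch).encode(),
        )
        batch = []
    time.sleep(1)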
Step 3
Process incoming messages from the field for unmodeled data, or from partner applications for modeled data, with AWS IoT Core and AWS IoT SiteWise.
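If you model assets in AWS IoT SiteWise, field measurements can be written to asset properties. The following hedged sketch uses the AWS SDK for Python (boto3) to push one value by property alias; the alias path, entry ID, and measurement are illustrative assumptions.

# Hedged sketch: write a field measurement into AWS IoT SiteWise by property alias.
import time
import boto3

sitewise = boto3.client("iotsitewise")

sitewise.batch_put_asset_property_value(
    entries=[
        {
            "entryId": "well-042-tubing-pressure",
            "propertyAlias": "/wellpad-42/well-042/tubing_pressure_psi",  # placeholder alias
            "propertyValues": [
                {
                    "value": {"doubleValue": 812.5},
                    "timestamp": {"timeInSeconds": int(time.time()), "offsetInNanos": 0},
                    "quality": "GOOD",
                }
            ],
        }
    ]
)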
Step 4
Enrich data from operational sources with records from non-operational technology (non-OT) systems, such as a computerized maintenance management system (CMMS) or an enterprise resource planning (ERP) system for production records. Extract data from these systems with AWS Database Migration Service (AWS DMS).
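As a rough sketch of this extraction, the boto3 call below creates an AWS DMS replication task that copies production-related tables from an ERP database endpoint into the data platform. All ARNs, identifiers, and the schema and table selection rule are placeholders you would replace with your own.

# Hedged sketch: replicate ERP production tables into the data platform with AWS DMS.
import json
import boto3

dms = boto3.client("dms")

# Table mapping: include only production-related tables from the ERP schema (placeholder names).
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-production-tables",
            "object-locator": {"schema-name": "erp", "table-name": "production_%"},
            "rule-action": "include",
        }
    ]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="erp-production-to-datalake",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:ERP-SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:S3-TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:REPL-INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)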
Step 5
Ingest data from OT and non-OT sources into Amazon Simple Storage Service (Amazon S3), where AWS Glue can further process the data (such as compression, aggregation, and calculated records).
Ingest the results into a modern data architecture built on Amazon S3 as the data lake. These results can be loaded into Amazon Timestream if fast querying of time-series data is required.
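The sketch below illustrates what such an AWS Glue (PySpark) job might look like: it reads raw JSON telemetry from a landing bucket, aggregates it into hourly averages per well, and writes Parquet back to a curated prefix in the data lake. Bucket names, column names, and the aggregation itself are assumptions, not the Guidance's actual job.

# Hedged sketch of an AWS Glue (PySpark) job that curates raw telemetry.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw telemetry from the landing zone (placeholder bucket and schema).
raw = spark.read.json("s3://example-oilgas-raw/telemetry/")

# Aggregate to hourly averages per well.
hourly = (
    raw.withColumn("hour", F.date_trunc("hour", F.to_timestamp(F.from_unixtime("timestamp"))))
       .groupBy("well_id", "hour")
       .agg(
           F.avg("tubing_pressure_psi").alias("avg_tubing_pressure_psi"),
           F.avg("flow_rate_bpd").alias("avg_flow_rate_bpd"),
       )
)

# Write curated, columnar data back to the data lake.
hourly.write.mode("overwrite").partitionBy("well_id").parquet(
    "s3://example-oilgas-curated/production_hourly/"
)
job.commit()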
Step 6
Load metadata from the data lake into the AWS Glue Data Catalog to enable analytics. Similarly, AWS Lake Formation provides fine-grained governance and access control for the data lake.
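A minimal sketch of this step with boto3, assuming placeholder names and ARNs: create and start a Glue crawler over the curated prefix to populate the Data Catalog, then grant read-only access on the resulting table through Lake Formation.

# Hedged sketch: catalog curated data and grant fine-grained read access.
import boto3

glue = boto3.client("glue")
lakeformation = boto3.client("lakeformation")

# Crawl the curated prefix into the Glue Data Catalog (placeholder role and paths).
glue.create_crawler(
    Name="production-hourly-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="oilgas_production",
    Targets={"S3Targets": [{"Path": "s3://example-oilgas-curated/production_hourly/"}]},
)
glue.start_crawler(Name="production-hourly-crawler")

# Grant SELECT on the cataloged table to an analyst role via Lake Formation.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/ProductionAnalyst"},
    Resource={"Table": {"DatabaseName": "oilgas_production", "Name": "production_hourly"}},
    Permissions=["SELECT"],
)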
Step 7
Run specialized queries with Amazon Athena to calculate production aggregates and load them into Amazon Redshift, or use machine learning (ML) with Amazon SageMaker to forecast future production.
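For example, a production aggregate could be computed with an Athena query started through boto3, as in the sketch below. The database, table, columns, and results bucket are assumptions carried over from the earlier sketches.

# Hedged sketch: run a daily production-aggregate query with Amazon Athena.
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="""
        SELECT well_id,
               date_trunc('day', hour) AS production_day,
               avg(avg_flow_rate_bpd)  AS daily_avg_flow_rate_bpd
        FROM production_hourly
        GROUP BY well_id, date_trunc('day', hour)
    """,
    QueryExecutionContext={"Database": "oilgas_production"},
    ResultConfiguration={"OutputLocation": "s3://example-oilgas-athena-results/"},
)
print("Query execution ID:", response["QueryExecutionId"])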
Step 8
Use production data and the results from analytics and ML in internal business intelligence (BI) applications built on services such as Amazon Managed Grafana or Amazon QuickSight.
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
-
Operational Excellence
This Guidance uses Amazon CloudWatch integration to monitor the metrics and logs for each individual component. You can troubleshoot issues with the CloudWatch console to identify the error logs and determine the root cause. This helps ensure you are continually operating in a well-architected environment.
-
Security
For secure authentication and authorization, industrial devices and connectivity partners can connect securely to AWS IoT Core with X.509 certificates and policies. Once on AWS, you can access data as allowed by AWS Identity and Access Management (IAM) and Lake Formation permissions for fine-grained access control.
To protect data in this Guidance, configure the Amazon S3 buckets for encryption at rest. Streaming data is encrypted in transit by using secure industrial protocols, such as Open Platform Communications Unified Architecture (OPC-UA) and Message Queuing Telemetry Transport (MQTT), to export data from the connectivity partners, instead of insecure protocols, such as Modbus.
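As one example of encryption at rest, a bucket's default encryption can be set with boto3 as in the sketch below; the bucket name and KMS key ARN are placeholders.

# Hedged sketch: enforce default server-side encryption on an S3 bucket.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-oilgas-raw",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/REPLACE-ME",
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)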
In cases where operational technology components, such as SCADA systems and historians, are migrated to AWS, the separation between levels is configured with network firewalls and packet inspection, as required by the Purdue model for security.
-
Reliability
This Guidance implements a highly available network topology by relying on serverless services, such as Amazon S3, that are highly available. When operational technology components are deployed to the cloud, Amazon Elastic Compute Cloud (Amazon EC2) instances must be deployed across at least two Availability Zones to support high availability and disaster recovery. This helps ensure that all components in the data platform can recover from disaster events.
-
Performance Efficiency
Latency and performance are dependent on the location of this Guidance. The connectivity partners offer pre-processing and batching, and while both improve data quality and decrease cost, they may also increase latency. We recommend you analyze your needs to find the right balance between cost optimization and performance.
Also, the services selected for this Guidance were purpose built to support a serverless data pipeline, so you can visualize operational events in your production assets.
-
Cost Optimization
When evaluating costs, consider device management and asset hierarchy requirements when selecting services for data ingestion. Add AWS IoT Core if you need to manage devices, and AWS IoT SiteWise if you need to manage devices while maintaining an asset hierarchy. We also recommend adding Amazon Kinesis to manage a data stream and add streaming analytics.
Data flows from the industrial source, to the data broker, to the modern data architecture, and finally to the domain-specific applications. The largest cost contributor, and the most difficult to estimate, is serving data to the applications. Select a data store that supports your applications with the required performance and at a cost that is reasonable for you. This may mean setting up a data warehouse, such as Amazon Redshift, or using a separate transformation to produce clean data in a columnar format that facilitates querying.
-
Sustainability
This Guidance implements an event-driven pipeline architecture that helps ensure resources are used only when needed and that, between events, the pipeline scales down to zero.
Implementation Resources
A detailed guide is provided to experiment and use within your AWS account. Each stage of building the Guidance, including deployment, usage, and cleanup, is examined to prepare it for deployment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.