Guidance for Well Construction Operator Analytics on AWS
Overview
This Guidance shows how drillers and builders of well systems can improve how they gather, access, and use their operational data. Well system construction data is often siloed between the oilfield equipment and services (OFS) companies that produce the data and the operators who consume and analyze it. Not only do operators experience challenges obtaining data from OFS companies, but the data they do receive is often unreliable and requires lengthy integration and analysis. This Guidance addresses those challenges by helping operators gather data from a multitude of OFS companies, then securely store and process that data, all in a single environment. Operators can monitor, visualize, and analyze their operation's data to improve their construction efficiency.
How it works
These technical details include an architecture diagram that illustrates how to use this solution effectively. The diagram shows the key components and their interactions, providing a step-by-step overview of the architecture's structure and functionality.
Well-Architected Pillars
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
Amazon S3 and QuickSight were selected for this Guidance because of the capabilities these managed services offer to help operators run and monitor their operational systems effectively when it comes to data. Specifically, QuickSight allows operators to build operational dashboards that track Amazon CloudWatch metrics, which describe the operational health of content delivery services, such as CloudFront, or metrics about objects stored in Amazon S3. These services natively integrate with CloudWatch, which helps operators seamlessly centralize logs and metrics.
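As a minimal sketch of the monitoring side of this pattern, the function below builds a CloudWatch GetMetricData query for an S3 bucket's daily storage metrics; the bucket name and query ID are hypothetical, and a real deployment would surface the results in a QuickSight dashboard rather than raw API calls.

```python
def build_s3_storage_query(bucket_name: str) -> list:
    """Build a GetMetricData query for an S3 bucket's daily storage size."""
    return [{
        "Id": "bucket_size",  # hypothetical query id
        "MetricStat": {
            "Metric": {
                "Namespace": "AWS/S3",
                "MetricName": "BucketSizeBytes",
                "Dimensions": [
                    {"Name": "BucketName", "Value": bucket_name},
                    {"Name": "StorageType", "Value": "StandardStorage"},
                ],
            },
            "Period": 86400,  # S3 storage metrics are reported once per day
            "Stat": "Average",
        },
    }]

# The query list would then be passed to boto3, for example:
#   cloudwatch = boto3.client("cloudwatch")
#   cloudwatch.get_metric_data(
#       MetricDataQueries=build_s3_storage_query("ingest-bucket"), ...)
```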
Read the Operational Excellence whitepaper
Security
Lambda@Edge (a feature of CloudFront), Amazon AppFlow, and AWS Secrets Manager work together to help operators maintain the integrity of their data, manage user permissions, and establish controls to detect security events. With Lambda@Edge, operators can enforce custom authorization flows before a request can enter the AWS environment. Lambda@Edge also segments data uploaded from different sources into different Amazon S3 prefixes to enforce data isolation boundaries. Amazon AppFlow uses Secrets Manager to store the sensitive information required to connect to a third-party application, such as passwords and authentication tokens.
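A minimal sketch of that edge-authorization idea is shown below as a Lambda@Edge viewer-request handler: it checks the Authorization header and rewrites the object key under a per-company S3 prefix. The token table and company names are hypothetical placeholders; a production deployment would validate a signed credential rather than a static map.

```python
# Hypothetical mapping from upload tokens to company prefixes; a real
# deployment would verify a signed credential instead of a static table.
COMPANY_TOKENS = {
    "token-ofs-a": "ofs-a",
    "token-ofs-b": "ofs-b",
}

def handler(event, context):
    """Lambda@Edge viewer-request handler: authorize the request and
    segment the upload under the calling company's S3 prefix."""
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})
    auth = headers.get("authorization", [{"value": ""}])[0]["value"]
    token = auth.removeprefix("Bearer ").strip()
    company = COMPANY_TOKENS.get(token)
    if company is None:
        # Reject the request at the edge, before it enters the AWS environment.
        return {"status": "403", "statusDescription": "Forbidden"}
    # Each uploader writes under its own prefix, enforcing data isolation.
    request["uri"] = f"/{company}{request['uri']}"
    return request
```

Returning the (rewritten) request object lets CloudFront continue to the origin, while returning a response object short-circuits unauthorized calls at the edge.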
Read the Security whitepaper
Reliability
The capabilities of Amazon S3, AWS Glue, and Athena enhance the reliability of operators' workloads because these services support a distributed system design. For example, operators query data stored in Amazon S3 with Athena based on table definitions in AWS Glue. These Regional AWS services automatically scale across multiple independent failure zones to preserve application availability in the event of a rare, but possible, Availability Zone failure.
Read the Reliability whitepaper
Performance Efficiency
This Guidance enhances performance efficiency for operators through a structured and streamlined allocation of resources. For instance, it walks operators through partitioning data in the AWS Glue table based on context added by Lambda@Edge, such as which company uploaded the document and which asset the document relates to. Partitioning this data optimizes Athena query time by reducing the volume of data scanned for each query.
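The helpers below sketch what that partitioning could look like: one builds an S3 object key whose path encodes the partition columns, and one builds an Athena query that filters on those columns so only the matching prefixes are scanned. The `company` and `asset` partition names and the table name are hypothetical stand-ins for whatever the Glue table actually defines.

```python
def partitioned_key(company: str, asset: str, filename: str) -> str:
    """Build an S3 object key whose path encodes the Glue partition
    columns (here, hypothetically, company and asset)."""
    return f"company={company}/asset={asset}/{filename}"

def partition_pruned_sql(table: str, company: str, asset: str) -> str:
    """Build an Athena query that filters on the partition columns, so
    Athena scans only the S3 prefixes for that company and asset."""
    return (
        f"SELECT * FROM {table} "
        f"WHERE company = '{company}' AND asset = '{asset}'"
    )
```

Because the partition values appear in the object key itself, a query such as `partition_pruned_sql("well_reports", "ofs-a", "well-42")` never touches data uploaded by other companies or for other assets.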
Read the Performance Efficiency whitepaper
Cost Optimization
This Guidance uses Amazon S3 for persistent data storage. An Amazon S3 Lifecycle rule moves objects into the S3 Intelligent-Tiering storage class, which reduces storage expenses by automatically placing objects in the cost-optimal access tier based on access patterns.
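A minimal sketch of such a Lifecycle rule is below, built as a plain configuration dictionary; the rule ID is a hypothetical name, and applying it to a bucket is shown only in comments.

```python
def intelligent_tiering_lifecycle(prefix: str = "") -> dict:
    """Build an S3 Lifecycle configuration that transitions objects to
    the Intelligent-Tiering storage class immediately after creation."""
    return {
        "Rules": [{
            "ID": "to-intelligent-tiering",  # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": prefix},
            "Transitions": [
                {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"},
            ],
        }]
    }

# Applied to a bucket with boto3, for example:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="ingest-bucket",
#       LifecycleConfiguration=intelligent_tiering_lifecycle(),
#   )
```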
Read the Cost Optimization whitepaper
Sustainability
The Lambda function that processes files scales up based on how quickly files are added to the Amazon S3 file ingestion bucket, and scales back down after those files are processed. This automatic scaling feature of Lambda right-sizes compute usage based on demand. Preventing the over-provisioning of compute reduces energy usage, minimizing the workload's environmental impact.
Read the Sustainability whitepaper