[SEO Subhead]
This Guidance shows how drillers and builders of well systems can improve how they gather, access, and use their operational data. Well system construction data is often siloed between the oilfield equipment and services (OFS) industry that produces the data and the operators who consume and analyze it. Not only do operators experience challenges in obtaining data from OFS companies, but the data they do receive is unreliable and requires lengthy integration and analysis. This Guidance addresses those challenges by helping operators gather data from a multitude of OFS companies, then securely store and process that data, all in a single environment. Operators can monitor, visualize, and analyze their operational data to improve their construction efficiency.
Please note: [Disclaimer]
Architecture Diagram

[Architecture diagram description]
Step 1
Operators consume data and services from many oilfield equipment and services (OFS) companies. Amazon AppFlow automates data flows from applications to Amazon Simple Storage Service (Amazon S3). Deploy data connectors from the AWS Marketplace, or build custom connectors using AWS Lambda.
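For flows that run on demand rather than on a schedule, the AppFlow ingestion described in this step can also be started programmatically. The sketch below assumes a flow named ofs-vendor-daily-reports has already been configured with an OFS source connector and an Amazon S3 destination; that name is a placeholder, not part of this Guidance's sample code.

```python
# Minimal sketch: start an existing Amazon AppFlow flow that copies data from
# an OFS application into Amazon S3. The flow name is a placeholder, and the
# flow (source connector, S3 destination) is assumed to be configured already.
import boto3

appflow = boto3.client("appflow")

response = appflow.start_flow(flowName="ofs-vendor-daily-reports")  # placeholder name
print(response["flowStatus"], response.get("executionId"))
```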
Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
Amazon S3 and Amazon QuickSight were selected for this Guidance because these managed services help operators run their workloads and monitor their operational data effectively. Specifically, QuickSight allows operators to build operational dashboards that track Amazon CloudWatch metrics, such as metrics describing the operational health of content delivery through Amazon CloudFront or metrics about objects stored in Amazon S3. These services integrate natively with CloudWatch, which helps operators seamlessly centralize logs and metrics.
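As one illustration of how these services surface operational data, the storage metrics that Amazon S3 publishes to CloudWatch can be retrieved programmatically before being visualized in a QuickSight dashboard. This is a minimal sketch; the bucket name is a placeholder.

```python
# Sketch: read the daily object count that Amazon S3 publishes to CloudWatch.
# S3 emits these storage metrics once per day; the bucket name is a placeholder.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/S3",
    MetricName="NumberOfObjects",
    Dimensions=[
        {"Name": "BucketName", "Value": "well-construction-ingest"},  # placeholder
        {"Name": "StorageType", "Value": "AllStorageTypes"},
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=7),
    EndTime=datetime.now(timezone.utc),
    Period=86400,
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].date(), int(point["Average"]))
```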
Security
Lambda@Edge (a feature of CloudFront), Amazon AppFlow, and AWS Secrets Manager work together to help operators maintain the integrity of their data, manage user permissions, and establish controls to detect security events. With Lambda@Edge, operators can enforce custom authorization flows before a request enters the AWS environment. Lambda@Edge also segments data uploaded from different sources into different Amazon S3 prefixes to enforce data isolation boundaries. Amazon AppFlow uses Secrets Manager to store the sensitive information required to connect to a third-party application, such as passwords and authentication tokens.
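The following is a minimal sketch of the kind of Lambda@Edge viewer-request handler described above. It assumes each OFS company presents an API key header that maps to its own S3 prefix; the header name, key lookup, and prefix scheme are illustrative assumptions rather than this Guidance's actual implementation.

```python
# Sketch of a Lambda@Edge viewer-request handler associated with a CloudFront
# distribution. It rejects requests that lack a recognized API key and rewrites
# the URI so each company's uploads land under an isolated S3 prefix. The
# key-to-company mapping is a stand-in; in practice it would be resolved from a
# secure store such as Secrets Manager rather than hard-coded.
KNOWN_KEYS = {"example-key-123": "company-a"}  # illustrative only


def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    api_key = headers.get("x-api-key", [{}])[0].get("value")
    company = KNOWN_KEYS.get(api_key)
    if company is None:
        # Deny the request before it reaches the AWS environment.
        return {"status": "403", "statusDescription": "Forbidden"}

    # Prefix the object key with the company identifier to enforce isolation.
    request["uri"] = f"/{company}{request['uri']}"
    return request
```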
Reliability
The capabilities of Amazon S3, AWS Glue, and Amazon Athena enhance the reliability of operators' workloads because these services support a distributed system design. For example, operators query data stored in Amazon S3 with Athena based on table definitions in AWS Glue. These Regional AWS services automatically scale across multiple independent failure zones to preserve application availability in the event of a rare, but possible, Availability Zone failure.
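For example, a query against a Glue Data Catalog table can be issued to Athena as sketched below; the database, table, and result-location names are placeholders.

```python
# Sketch: run an Athena query against a table defined in the AWS Glue Data
# Catalog and print the results. Database, table, and output names are placeholders.
import time

import boto3

athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString="SELECT COUNT(*) FROM well_reports WHERE company = 'company-a'",
    QueryExecutionContext={"Database": "well_construction"},  # placeholder database
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/queries/"},
)

query_id = execution["QueryExecutionId"]
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows)
```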
Performance Efficiency
This Guidance enhances performance efficiency for operators through a structured and streamlined allocation of resources. For instance, it walks operators through partitioning data in the AWS Glue table based on context added by Lambda@Edge, such as which company uploaded a document and which asset the document relates to. Partitioning this data reduces the volume of data scanned for each query, which shortens Athena query times.
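One way to picture this partitioning is as Hive-style key prefixes that carry the company and asset context added by Lambda@Edge; the names below are illustrative, not this Guidance's actual schema.

```python
# Sketch: build a Hive-style partitioned S3 key so AWS Glue and Athena can
# prune partitions by company and asset. All names are placeholders.
def partitioned_key(company: str, asset: str, filename: str) -> str:
    return f"documents/company={company}/asset={asset}/{filename}"

print(partitioned_key("company-a", "well-42", "daily-drilling-report.json"))
# documents/company=company-a/asset=well-42/daily-drilling-report.json
```

An Athena query that filters on the company and asset partition columns then scans only the matching prefixes instead of the whole dataset.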
Cost Optimization
This Guidance uses Amazon S3 for persistent data storage, and an Amazon S3 Lifecycle policy automatically moves objects into the S3 Intelligent-Tiering storage class. Intelligent-Tiering reduces storage expenses by automatically placing objects in the most cost-effective access tier based on access patterns.
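A lifecycle rule of this kind could be applied as in the sketch below; the bucket name is a placeholder.

```python
# Sketch: apply an S3 Lifecycle rule that transitions newly created objects to
# the S3 Intelligent-Tiering storage class. The bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="well-construction-ingest",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects in the bucket
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```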
Sustainability
The Lambda function that processes files scales up based on how quickly files are added to the Amazon S3 file ingestion bucket and scales back down after those files are processed. This automatic scaling right-sizes compute usage based on demand. Preventing the over-provisioning of compute reduces energy usage, minimizing the workload’s environmental impact.
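This scaling behavior follows from the standard Amazon S3-to-Lambda event pattern: invocations rise and fall with the rate of object-created events. The handler below is a minimal sketch of that pattern; the processing step is a placeholder, not this Guidance's actual file-processing logic.

```python
# Sketch: Lambda handler invoked by Amazon S3 object-created events. Lambda
# scales concurrency with the rate of incoming events and scales back down when
# uploads stop. The process_file step is a placeholder.
import urllib.parse

import boto3

s3 = boto3.client("s3")


def process_file(body: bytes) -> None:
    # Placeholder for the file-processing logic described in this Guidance.
    pass


def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        obj = s3.get_object(Bucket=bucket, Key=key)
        process_file(obj["Body"].read())
```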
Implementation Resources

A detailed guide is provided for you to experiment with and use within your AWS account. Each stage of building the Guidance, including deployment, usage, and cleanup, is examined to prepare it for deployment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content

[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.