AWS for Industries

Improving Safety and Logistics at Well Pads with Amazon Machine Learning Services

Introduction

In remote upstream oil and gas facilities, such as well pads, energy companies frequently have various service contractors bringing in items, performing services, and removing items from the site. These facilities most often do not have permanent staff on location. It can be challenging for operators to know who is accessing the facilities. For safety purposes, ensuring that anyone in the facility is wearing the appropriate personal protection equipment (PPE), such as a hard hat, is imperative. This solution uses custom machine learning models created with Amazon Rekognition that are deployed to an AWS DeepLens camera to:

  1. Identify the presence or absence of properly worn hard hats (PPE)
  2. Identify which trucks access the facility by license plate number and any markings on the side of the truck

The camera runs the model at the edge and only transmits alerts when a scene contains people or vehicles. This solution is compatible with edge devices running AWS IoT Greengrass. In this post, we use AWS DeepLens for demonstration purposes.

This solution uses computer vision to improve logistics and safety at oil and gas facilities by using AWS DeepLens, Amazon Rekognition, and Amazon Textract. AWS has other guidance for computer vision projects, including deploying a hard hat detection model with AWS DeepLens, building a custom image classifier using your own images with Amazon SageMaker, labeling datasets using Amazon SageMaker Ground Truth, and managing computer vision at the edge with AWS Panorama.

Motivating Use Cases

Hauling water from well pads is a significant percentage of costs for Permian Basin oil production. Vendors that are contracted to remove produced water often use basic ticket systems that contribute to data management issues. Operators with multiple wells often have to access data about produced water from multiple sources because there is no single source of truth.

Operators can find it difficult to validate volumes and rely on vendor water hauling invoice information. This can be due to billing mechanisms, such as vendors using paper tickets or incompatibility with electronic systems. The rapid growth in the need for water hauling and oil field services in the Permian Basin has led to allegations of vendor bad behavior; in some instances, vendors are suspected of billing operators for more trips than necessary or even for trips that were not completed. Because the data is siloed in multiple systems, operators do not have an easy mechanism to integrate water hauling data with other production systems of record, such as hydrocarbon accounting tools.

The objective of this solution is to create an internal source of truth for truck visits to well pads, delivery points, and facilities. Computer vision methods are used to capture images and measure visit durations by vendor. The solution is designed to be deployed to remote sites with limited connectivity and power requirements. Vendor truck visit details are integrated with reference data, such as site locations or water hauling vendors, to validate and identify anomalies.

Additionally, the same camera can monitor worker safety by applying a custom computer vision model to detect hard hats. This demonstrates how a single device can manage multiple models at once.

Overview of AWS Services Used

Amazon Rekognition is a managed service for computer vision models. It has a number of pretrained models that can be deployed directly and augmented with additional objects of interest. Amazon Rekognition can also be trained to classify a custom set of objects, such as hard hats, providing industry-specific value. It can host trained computer vision models as endpoints for real-time inference, and it can be used for object detection in both images and video.

AWS DeepLens pairs a camera with a small internal computer that can run computer vision models at the edge, which removes the need to send large amounts of video data through the network. It allows real-time application of machine learning with the computer vision model.

Amazon Textract is a machine learning service that automatically extracts text from images. It goes beyond simple Optical Character Recognition (OCR) to detect and analyze the text, and it can identify specific data associated with preset key-value pairs.
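
As a sketch of how the Amazon Rekognition API is called, the following Python snippet uses boto3's DetectLabels operation on an image stored in S3. The bucket and key names, label list, and confidence threshold are illustrative assumptions, not part of the published solution:

```python
# Sketch: call Amazon Rekognition's DetectLabels API on an image in S3.
# A real deployment needs AWS credentials and a region configured.

def detect_vehicles(bucket, key, min_confidence=80.0):
    """Return labels of interest (e.g., 'Car', 'Truck') found in an S3 image."""
    import boto3  # deferred import so the sketch loads without AWS configured
    rekognition = boto3.client("rekognition")
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=10,
        MinConfidence=min_confidence,
    )
    return filter_vehicle_labels(response["Labels"])

def filter_vehicle_labels(labels, of_interest=("Car", "Truck", "Vehicle", "Person")):
    """Keep only the label names the solution cares about."""
    return [lbl["Name"] for lbl in labels if lbl["Name"] in of_interest]
```

The same filtering helper can be reused at the edge, so cloud-side and edge-side logic agree on which objects matter.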

Well pads have limited space, limited power for running machines, and intermittent internet connectivity, and they are often in remote locations. Well operators want to limit costs, and developing custom computer vision solutions requires extensive investment in data science and engineering resources. Amazon Rekognition, AWS DeepLens, and Amazon Textract are fully managed services that remove the undifferentiated heavy lifting of applying computer vision at the edge and allow operators to focus on their core competencies.

Architectural Diagram of Solution

To demonstrate the solution prototype, we captured residential delivery trucks to simulate industrial trucks on a well pad. Road-facing cameras running AWS IoT Greengrass used AWS Lambda object-recognition functions to analyze a real-time image stream. Images were processed with an initial machine learning model at the edge using AWS DeepLens, and only images containing objects of interest, such as people or vehicles, were sent to the cloud via AWS IoT Core, dramatically reducing the connectivity required for the solution to work.
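
The edge-side filtering described above can be sketched as a Greengrass Lambda function. The MQTT topic name, label set, and confidence threshold below are assumptions for illustration:

```python
# Sketch of the edge-side filter: an AWS IoT Greengrass Lambda function that
# publishes to AWS IoT Core only when the local model sees a person or
# vehicle, instead of streaming full video frames over the network.
import json

TOPIC = "wellpad/detections"          # hypothetical MQTT topic
OBJECTS_OF_INTEREST = {"person", "car", "truck"}

def should_publish(detections, threshold=0.6):
    """True if any detection of interest clears the confidence threshold."""
    return any(
        d["label"] in OBJECTS_OF_INTEREST and d["score"] >= threshold
        for d in detections
    )

def report(detections):
    """Publish a compact alert for scenes that contain objects of interest."""
    if not should_publish(detections):
        return None  # nothing of interest; send nothing over the network
    import greengrasssdk  # available inside the Greengrass Core runtime
    client = greengrasssdk.client("iot-data")
    payload = json.dumps({"detections": detections})
    client.publish(topic=TOPIC, payload=payload)
    return payload
```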

Additional image analysis of company markings and logos were performed by using Amazon Rekognition to identify the specific vendor company providing service. A virtual ledger of truck visits with camera ID, truck operating company, time and duration of visit, and link to source photos landed in an Amazon Aurora database. Amazon QuickSight dashboards and reports were provided for ease of use in addition to the Amazon CloudWatch metric dashboards and alerts through Amazon Simple Notification Service (Amazon SNS).

Here is a review of how the services support the solution:

  • Ingestion: AWS IoT Greengrass, AWS IoT Core, Amazon S3, AWS DeepLens
  • Processing: AWS Lambda, Amazon Rekognition, Amazon Textract
  • Monitoring: Amazon CloudWatch, Amazon SNS
  • Visualization: Amazon Aurora MySQL-Compatible Edition, Amazon QuickSight

High-level architectural diagram of solution:

Figure 1: Example architecture using AWS services

Steps:

1. AWS DeepLens is designed for processing machine learning models at the edge. Other cameras can be used at the edge if connected to AWS IoT Greengrass. Triggered events, such as a truck arrival, signal on-site devices to capture:

  • Starting state image
  • GPS location
  • Duration of event
  • Location of objects in image
  • Completion state image
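
The captured details above might be assembled into a record like the following; all field and key names are illustrative assumptions:

```python
# Sketch of the event metadata an edge device could send to the cloud for
# one trigger event. Field names are assumptions for this illustration.

def build_event_record(camera_id, lat, lon, start_key, end_key, started, ended):
    """Assemble the metadata for a single trigger event."""
    return {
        "camera_id": camera_id,
        "gps": {"lat": lat, "lon": lon},
        "start_image_key": start_key,   # S3 key of starting state image
        "end_image_key": end_key,       # S3 key of completion state image
        "duration_seconds": (ended - started).total_seconds(),
    }
```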

2. Images are stored in an S3 bucket to provide a full history of trigger event images. An S3 bucket trigger calls a Lambda function.
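
A minimal sketch of that S3-triggered Lambda function, using the standard S3 notification event format (the analysis hand-off is left as a placeholder):

```python
# Sketch of the Lambda handler invoked by the S3 bucket trigger.

def lambda_handler(event, context):
    keys = extract_s3_keys(event)
    for bucket, key in keys:
        pass  # hand each image to the Rekognition/Textract step that follows
    return {"processed": len(keys)}

def extract_s3_keys(event):
    """Pull (bucket, key) pairs out of an S3 notification event."""
    return [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in event.get("Records", [])
    ]
```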

3. Image characteristics are identified:

  • Vehicle type
  • License plate
  • Company logos

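One way to sketch this step: Amazon Rekognition's DetectText operation returns the text lines found in the image, and a simple pattern match picks out license-plate-like strings. The plate pattern below is a rough, illustrative assumption, since formats vary by jurisdiction:

```python
# Sketch: read vehicle markings from a flagged image with Rekognition's
# DetectText API, then filter for strings shaped like license plates.
import re

# Illustrative pattern only: 2-3 letters, optional separator, 3-4 digits.
PLATE_PATTERN = re.compile(r"^[A-Z]{2,3}[- ]?[0-9]{3,4}$")

def read_markings(bucket, key):
    import boto3  # deferred import so the sketch loads without AWS configured
    rekognition = boto3.client("rekognition")
    response = rekognition.detect_text(
        Image={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    lines = [
        t["DetectedText"]
        for t in response["TextDetections"]
        if t["Type"] == "LINE"
    ]
    return {"plates": find_plates(lines), "all_text": lines}

def find_plates(lines):
    """Return the detected lines that look like license plate numbers."""
    return [ln for ln in lines if PLATE_PATTERN.match(ln)]
```

Company names and logos found in the remaining text lines can then be matched against the vendor reference data in the next step.
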
4. Reference data is accessed to determine the well pad or facility based on latitude and longitude. Image indicators identify the water hauling vendor and, if the license plate is read, the specific vehicle.
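
The latitude/longitude lookup can be sketched with the haversine formula; the site names and coordinates below are made up for illustration:

```python
# Sketch: match an event's GPS coordinates to the nearest known well pad.
from math import radians, sin, cos, asin, sqrt

SITES = {
    "Pad-A": (31.90, -102.10),   # hypothetical Permian Basin locations
    "Pad-B": (31.75, -103.00),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

def nearest_site(lat, lon, sites=SITES):
    """Return the name of the closest site to the given coordinates."""
    return min(sites, key=lambda name: haversine_km(lat, lon, *sites[name]))
```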

5. Optional: Using a relational database, such as the serverless RDBMS Amazon Aurora, enables robust data analytics. Datasets stored in the database provide a complete record of events:

  • Reference Data
      • Wells
      • Trucks
      • Companies
  • Transactional Data
      • SitePhotos
      • WaterHauls

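A minimal sketch of this schema, using SQLite as a local stand-in for Amazon Aurora (column names are illustrative assumptions; the table names follow the datasets listed above):

```python
# Sketch of the virtual ledger schema. SQLite stands in for Aurora so the
# example is runnable locally; the DDL would carry over with minor changes.
import sqlite3

SCHEMA = """
CREATE TABLE Wells      (well_id INTEGER PRIMARY KEY, name TEXT, lat REAL, lon REAL);
CREATE TABLE Companies  (company_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Trucks     (truck_id INTEGER PRIMARY KEY, plate TEXT,
                         company_id INTEGER REFERENCES Companies);
CREATE TABLE SitePhotos (photo_id INTEGER PRIMARY KEY, s3_key TEXT, taken_at TEXT);
CREATE TABLE WaterHauls (haul_id INTEGER PRIMARY KEY,
                         well_id INTEGER REFERENCES Wells,
                         truck_id INTEGER REFERENCES Trucks,
                         start_photo INTEGER REFERENCES SitePhotos,
                         end_photo INTEGER REFERENCES SitePhotos,
                         duration_s REAL);
"""

def create_ledger(path=":memory:"):
    """Create the ledger database and return an open connection."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```
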
6. Optional: Business logic coded into views can be presented in dashboards, such as Amazon QuickSight, to show key metrics, data details, and detected anomalies (for example, a reported haul with no photo evidence).

7. The whole system is secured using AWS security tools including AWS Identity and Access Management for role-based access controls. Amazon CloudWatch logs all the system activity and sends alerts to supervisors via text messages if pre-determined thresholds are crossed.
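
As an illustration of those thresholds and alerts, the following sketch builds the parameters for a CloudWatch PutMetricAlarm call that notifies an Amazon SNS topic. The metric name, namespace, alarm name, and threshold are assumptions for this example:

```python
# Sketch: a CloudWatch alarm that notifies supervisors through Amazon SNS
# when the count of unmatched truck visits crosses a threshold.

def alarm_definition(topic_arn, threshold=5):
    """Build the parameters for a CloudWatch PutMetricAlarm call."""
    return {
        "AlarmName": "UnmatchedTruckVisits",   # hypothetical alarm name
        "Namespace": "WellPad/Visits",         # hypothetical custom namespace
        "MetricName": "UnmatchedVisits",       # hypothetical custom metric
        "Statistic": "Sum",
        "Period": 3600,                        # evaluate hourly
        "EvaluationPeriods": 1,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],           # SNS topic to notify
    }

def create_alarm(topic_arn):
    import boto3  # deferred import so the sketch loads without AWS configured
    boto3.client("cloudwatch").put_metric_alarm(**alarm_definition(topic_arn))
```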

Sample Output

The initial tests with the AWS DeepLens camera were conducted in a controlled indoor setting. A toy truck with clear font markings was used to confirm that Amazon Textract was able to effectively detect vehicle markings. The AWS DeepLens object detection model labels both cars and trucks as “car.”

Figure 2: Example of computer vision using Amazon Rekognition

More realistic testing was performed in a residential setting to understand the impacts of lighting conditions. While COVID-19 travel restrictions made it difficult to test on an actual well pad, this testing confirmed that the presence of vehicles, their markings, and their duration on site could easily be inferred from the data captured by the solution. The following example shows an Amazon Prime truck correctly identified as it passes through the camera field of view.

Figure 3: Example of image captured by AWS DeepLens

Amazon CloudWatch Logs show instances of vehicles identified by Amazon Rekognition. When AWS DeepLens identifies a vehicle, a series of API calls are made, generating logs visible in CloudWatch.

Figure 4: Example of Amazon CloudWatch dashboard 

Operational Costs

Estimated operational cost to deploy this solution to 1,000 sites is $161 per month, excluding camera hardware costs. By using AWS managed services, the undifferentiated heavy lifting and associated costs are avoided. Amazon Textract and Amazon Rekognition charges are minimized by only analyzing images flagged by AWS DeepLens to contain vehicles.

The following illustration breaks down cost for 1,000 well pads running this solution 24 hours a day, 365 days a year.

This translates to less than $2 annually for each well pad. Since each implementation has unique characteristics, you can explore the assumptions and analyze different scenarios using the AWS Pricing Calculator.
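
The per-pad figure follows directly from the monthly estimate:

```python
# Quick arithmetic check of the per-pad cost quoted above.
monthly_total = 161          # estimated USD per month for 1,000 sites
sites = 1000

annual_per_pad = monthly_total * 12 / sites  # USD per well pad per year
```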

Other Applications

This solution could also be used to detect the presence and intensity of flare on well pads. Outside of oil and gas, it could be applied to spot objects, such as animals approaching remote facilities or damage after a storm. By using Amazon Textract to identify vehicle markings, this solution could also be used in cities to locate trucks passing through or even identify individual vehicles based on license plates. AWS IoT Core and AWS DeepLens can be updated remotely, allowing you to dispatch new models to the edge to identify additional objects of interest.

Conclusion

Developing, deploying, and managing computer vision models for well pads used to be difficult and expensive. Amazon Rekognition and AWS DeepLens simplify this process so that anyone can build, deploy, and maintain sophisticated models that improve safety and logistics for remote facilities. AWS customers can quickly train the model on new objects without dedicated data scientists or AWS personnel. Companies can also iterate on their models and push updates to the edge for processing.

Computer vision is a different way to approach the limited monitoring available at remote oil and gas facilities. The model described here allows operators to have “eyes” on their facilities all the time. Importantly, while the service is always on, customers only incur costs when the system detects someone at the facility. The system’s operational costs are recovered if the well is able to produce a single incremental barrel or avoid a single work-related health and safety issue.

Looking towards the future, we expect to see more artificial intelligence and machine learning solutions applied at industrial facilities, including oil and gas production facilities. Automating work previously done by lease operators and production engineers can increase production, reduce maintenance issues, and reduce safety events. This demonstration is one example of how artificial intelligence can be deployed at a facility with minimal capital investment.

You can see additional AWS solutions for the energy industry here.

Scott Bateman

Scott Bateman is a Principal Solutions Architect at AWS specializing in the energy industry. Prior to joining AWS, he was a director of business applications at BPX Energy and has worked for over two decades using technology to solve energy business problems.

Kyle Jones

Kyle Jones leads Solutions Architecture for Power and Utilities in the Americas at Amazon Web Services. He helps customers transform and decarbonize their operations using technology. Outside of AWS, Jones teaches graduate-level courses in project management and analytics at American University. He holds a doctorate in systems engineering from George Washington University and a master's in applied economics from Harvard University.

William Niven

William Niven is an Associate Solutions Architect at AWS where he helps energy companies build and operate cloud-based solutions. Prior to joining AWS, he was a drilling fluids engineer for Halliburton.

Haley Niven

Haley Niven is a Solutions Architect at AWS focused on oil field service companies. She loves finding solutions to complex problems and leading teams to create lasting solutions for her customers.