AWS for Industries

Remote Sensing and Object Detection

Identifying onshore oil pads on satellite images using Amazon Rekognition

In this post, we’re exploring an end-to-end solution for creating an object detection model with Amazon Rekognition to identify oil and gas well pads in satellite images. This low-code solution lets users interact, through Plotly Dash and AWS App Runner, with a machine learning (ML) model trained in Amazon Rekognition to identify well pads. Geoscientists can use this process to build environmental baselining workflows using public satellite imagery and Amazon Rekognition Custom Labels.

Use Cases

Companies in the energy industry often want to validate asset coordinate data, confirm regulatory compliance, and know if other companies are starting up nearby operations. Simultaneously, environmental groups have a stake in identifying and quantifying ecological disturbances. Governmental organizations can use satellite imagery to validate that reported data matches reality. A growing stream of earth observation data is collected daily by various satellite platforms, but processing and converting raw satellite data into insights can be time-consuming and expensive. Managed ML services can enable geoscientists to detect objects of interest in satellite images without having to master computer vision methods.

This project identifies and counts well pads within a geographic frame of interest. This could serve as a preliminary step for later route optimization between locations using Amazon Location Service, or for more detailed environmental baselining.

Time to read: 5 minutes
Time to complete: 45 minutes
Cost to complete: < $10 (if free tier exceeded)
Learning level: 300
Services used: Amazon Rekognition, AWS App Runner, Amazon Elastic Container Registry, Amazon S3, AWS Lambda

Solution overview

The solution combines AWS services to create an interactive ML-powered web application. Amazon Rekognition provides pre-trained and customizable computer vision capabilities to extract information and insights from your images and videos. In this example, we use Amazon Rekognition Custom Labels to train Amazon Rekognition to identify conventional oil well pads. ML tasks using imagery require a set of labeled data with bounding boxes around objects of interest. Traditionally, creating bounding boxes for object identification is a laborious task. Amazon Rekognition Custom Labels simplifies this by providing a labeling interface to crowdsource this task across a large volume of images. The solution uses Amazon Simple Storage Service (Amazon S3) to store training data and model artifacts.
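The console workflow used in this post is entirely no-code, but the same training step can also be scripted with boto3. The following is a minimal sketch rather than the exact code behind this solution: the bucket name, project name, and manifest keys are hypothetical placeholders, and it assumes the labeled images are described by Ground Truth-format manifests in Amazon S3.

import boto3

rekognition = boto3.client("rekognition")

# Hypothetical names for illustration; replace with your own bucket and project.
BUCKET = "wellpad-training-data"
PROJECT_NAME = "wellpad-detector"

# Create a Custom Labels project to hold model versions.
project = rekognition.create_project(ProjectName=PROJECT_NAME)

# Train a model version from labeled images referenced by manifests in S3.
version = rekognition.create_project_version(
    ProjectArn=project["ProjectArn"],
    VersionName="v1",
    OutputConfig={"S3Bucket": BUCKET, "S3KeyPrefix": "model-output/"},
    TrainingData={
        "Assets": [{
            "GroundTruthManifest": {
                "S3Object": {"Bucket": BUCKET, "Name": "manifests/train.manifest"}
            }
        }]
    },
    TestingData={
        "Assets": [{
            "GroundTruthManifest": {
                "S3Object": {"Bucket": BUCKET, "Name": "manifests/test.manifest"}
            }
        }]
    },
)
print("Training started:", version["ProjectVersionArn"])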

Plotly Dash is a low-code framework for rapidly building data applications and user interfaces in Python and other programming languages. App Runner is a fully managed service to quickly deploy containerized APIs and web applications – including Plotly Dash apps – at scale, with no prior infrastructure experience required. App Runner hosts the Dash application, allowing data scientists with little-to-no front-end development experience to create and launch web apps quickly. The application container image is stored in Amazon Elastic Container Registry (Amazon ECR), and application environment variables are kept in AWS Systems Manager Parameter Store.
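As a rough sketch of what App Runner hosts, the snippet below shows a minimal Dash application skeleton. The layout, component IDs, and port are placeholder assumptions; the real surveillance tool adds a map component and a callback that calls Amazon Rekognition.

# Minimal Plotly Dash skeleton of the kind App Runner can host (placeholder layout).
from dash import Dash, dcc, html

app = Dash(__name__)
app.layout = html.Div(
    [
        html.H2("Well pad detector"),
        # Placeholder confidence slider; the real app also renders a map view.
        dcc.Slider(id="min-confidence", min=0, max=100, step=5, value=70),
        html.Div(id="results"),
    ]
)

if __name__ == "__main__":
    # App Runner forwards traffic to the port configured for the service.
    app.run(host="0.0.0.0", port=8080)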

The following figure illustrates the solution’s architecture:

Architectural diagram showing geoscientists accessing AWS services through a web-based application

Data preparation

A subject matter expert labeled 20 satellite images of well pads in the West Texas Permian Basin using the simple bounding box labeling feature in Amazon Rekognition’s graphical interface. Ten images were used to train the Amazon Rekognition computer vision model, and ten separate images were used for validation. Ten is the minimum number of training images required by Amazon Rekognition Custom Labels.

Amazon Rekognition Custom Labels user interface showing examples of user-labeled satellite images of well pads

Although many ML models require larger training datasets, Amazon Rekognition can achieve value-adding accuracy with fewer than a dozen samples. The accuracy of the model can improve with additional training images, but the example above shows the accessibility and speed to value of Amazon Rekognition Custom Labels. For customers with advanced ML capabilities, similar models could be built by a data scientist in Amazon SageMaker using Python ML libraries such as OpenCV, Caffe, or PyTorch.

Label performance for the trained model can be viewed and improved in the AWS Management Console through a simple, no-code interface. The F1 score, average precision, and overall recall are provided for each label identified in the training images.
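The same metrics surfaced in the console can also be retrieved programmatically. The sketch below is illustrative only; the project ARN and version name are hypothetical.

import boto3

rekognition = boto3.client("rekognition")

# Hypothetical ARN; use the ARN of the project trained on the well pad images.
PROJECT_ARN = "arn:aws:rekognition:us-east-1:111122223333:project/wellpad-detector/1690000000000"

# DescribeProjectVersions returns the evaluation results shown in the console.
response = rekognition.describe_project_versions(
    ProjectArn=PROJECT_ARN, VersionNames=["v1"]
)
evaluation = response["ProjectVersionDescriptions"][0]["EvaluationResult"]
print("F1 score:", evaluation["F1Score"])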

Amazon Rekognition Custom Labels user interface showing evaluation results for the model including an F1 score of 0.829 based on 10 training and 10 test images

Amazon Rekognition Custom Labels highlights identified areas with the corresponding confidence value.

Amazon Rekognition Custom Labels user interface showing two confidence scores and accuracy of computer vision model for two labeled images

Once the model has been tuned to achieve the desired accuracy level, Amazon Rekognition instantiates an inference endpoint. The flexibility and control of this managed service allow AWS customers to pay for only what they use, limiting the cost of inference to just a few dollars per session. The Amazon Rekognition Custom Labels inference endpoint incurs charges while it is active, so make sure to de-provision your inference endpoints when you conclude your analysis.
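Starting and stopping the endpoint can be done from the console or scripted with boto3, as in the minimal sketch below. The project and model version ARNs are hypothetical placeholders.

import boto3

rekognition = boto3.client("rekognition")

# Hypothetical ARNs for the trained well pad model.
PROJECT_ARN = "arn:aws:rekognition:us-east-1:111122223333:project/wellpad-detector/1690000000000"
MODEL_ARN = "arn:aws:rekognition:us-east-1:111122223333:project/wellpad-detector/version/v1/1690000100000"

# Start the inference endpoint; charges accrue while the model is running.
rekognition.start_project_version(ProjectVersionArn=MODEL_ARN, MinInferenceUnits=1)
rekognition.get_waiter("project_version_running").wait(
    ProjectArn=PROJECT_ARN, VersionNames=["v1"]
)

# ... run detections against the endpoint ...

# Stop the model when the analysis session is over to avoid further charges.
rekognition.stop_project_version(ProjectVersionArn=MODEL_ARN)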

Deploying the solution as a web app

To turn our new model into a surveillance tool, we used Plotly Dash to create a lightweight web application and user interface. Users can navigate to an area of interest and trigger the well pad detector tool. The tool passes the current map view as an image to the Amazon Rekognition endpoint that we created previously. Amazon Rekognition returns bounding box coordinates for well pads identified in the image, which are displayed on the map. Users can adjust the confidence level in real time to filter out low-confidence objects.
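A minimal sketch of the inference call behind that interaction is shown below. The model version ARN and the helper name are hypothetical; in the deployed app the ARN is read from Parameter Store.

import boto3

rekognition = boto3.client("rekognition")

# Hypothetical model version ARN; the app reads the real value from Parameter Store.
MODEL_ARN = "arn:aws:rekognition:us-east-1:111122223333:project/wellpad-detector/version/v1/1690000100000"

def detect_well_pads(image_bytes: bytes, min_confidence: float = 70.0):
    """Send a map-view snapshot to the Custom Labels endpoint and return the
    detected well pads above the chosen confidence threshold."""
    response = rekognition.detect_custom_labels(
        ProjectVersionArn=MODEL_ARN,
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )
    # Each custom label includes a name, a confidence score, and a bounding box
    # expressed as ratios of the image size, which the app maps back onto the map view.
    return [
        (label["Name"], label["Confidence"], label["Geometry"]["BoundingBox"])
        for label in response["CustomLabels"]
    ]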

Plotly Dash user interface hosted on AWS App Runner showing satellite image of West Texas oil fields

Just as Amazon Rekognition enables low-code computer vision, App Runner lets users easily deploy their surveillance tools. The surveillance tool is packaged as a container based on the python:slim-bullseye image, enabling fast container initialization when scaling out. The final image is stored in Amazon ECR for versioning and integration with App Runner. Environment variables, including the Amazon S3 bucket name and model ARN, are stored in the AWS Systems Manager Parameter Store and read by the application at container startup. If a new or improved model must be implemented, then simply update the model ARN parameter in the parameter store to point to the new endpoint and restart the application containers. App Runner automatically generates a public load-balanced URL to access the surveillance tool. Customers can also associate the URL with their own domain name using Amazon Route 53.
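For reference, reading those values at container startup might look like the following sketch. The parameter names are hypothetical placeholders; use whatever names were created alongside the service.

import boto3

ssm = boto3.client("ssm")

def get_parameter(name: str) -> str:
    """Read one configuration value from Parameter Store at container startup."""
    return ssm.get_parameter(Name=name, WithDecryption=True)["Parameter"]["Value"]

# Hypothetical parameter names for the bucket and model ARN described above.
BUCKET_NAME = get_parameter("/wellpad-app/bucket-name")
MODEL_ARN = get_parameter("/wellpad-app/model-arn")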

Cleaning up

To avoid incurring future charges, delete the following resources:

  • The App Runner service
  • The Docker image stored in Amazon ECR
  • The Amazon Rekognition model
  • The Amazon S3 bucket
  • The parameters in AWS Systems Manager Parameter Store

Conclusion

In this post, we explored an end-to-end solution for creating an object detection model in Amazon Rekognition to identify conventional oil well pads in satellite images. This solution uses Amazon managed services to deliver the value of ML in a low-code/no-code fashion.

This solution could be expanded to run automatically across large, pre-defined areas of interest to notify development teams of new well pad activity. The model could also be trained on different spectral ranges using the variety of publicly available earth observation datasets offered through Earth on AWS and the Amazon Sustainability Data Initiative.

Amazon Rekognition’s accessible GUI can help other industries solve geospatial problems with remote sensing, including but not limited to agriculture, urban planning, retail, and warehousing.

To get started with Amazon Rekognition, check out the Getting started with Rekognition guide, and the What is Rekognition page. To get started with App Runner, check out the Getting started with App Runner guide, and the What is App Runner page.

If you have feedback about this post or would like to learn more about remote sensing on AWS, then submit comments in the Comments section.

Scott Bateman

Scott Bateman is an AWS Principal Solutions Architect with over 25 years of technical experience in all segments of the energy industry. As a specialist in geospatial energy concepts, Scott works to define and build cloud-based solutions for energy & utilities customers to accelerate time to value on AWS.

James DuHart

James DuHart is a Solutions Architect at AWS specializing in software engineering, cybersecurity, solutions architecture, and digital transformation. He especially enjoys supporting customers during their journey through digital culture and transformation, and embracing the cloud to innovate faster.

Kyle Jones

Kyle Jones leads Solutions Architecture for Power and Utilities in the Americas at Amazon Web Services. He helps customers transform and decarbonize their operations using technology. Outside of AWS, Jones teaches graduate-level courses in project management and analytics at American University. He holds a doctorate in systems engineering from George Washington University and a master's in applied economics from Harvard University.

Joseph Johansson

Joseph Johansson is a Sr. Solutions Architect at AWS specializing in containers, midstream, and digital transformation. He loves diving deep into customer business problems and finding disruptive solutions using AWS.

Haley Niven

Haley Niven is a Solutions Architect at AWS focused on oil field service companies. She loves finding solutions to complex problems and leading teams to create lasting solutions for her customers.