The Internet of Things on AWS – Official Blog

Building a Digital Twin with Photogrammetry and AWS IoT TwinMaker

Introduction

In this blog post, you will learn how you can use photographs taken by a drone to create a 3D model of real-world environments within a digital twin. Digital twins are virtual representations of physical systems that are regularly updated with data to mimic the structure, state, and behavior of the assets they represent. A digital twin can enable quicker and better decision-making by connecting multiple data sources within a single pane of glass and providing actionable insights. However, building and managing digital twins from scratch is time-consuming, complicated, and costly. It requires a team of developers with varying and specialized skills working together to build integrated solutions that combine data from different sources. The developers must generate live insights from streaming data and create contextualized visualizations to better connect end users to the data. With AWS IoT TwinMaker, you can easily create digital twins of physical environments and build applications that provide an interactive 3D digital representation of large and complex physical structures through the browser.

Overview

One of the key features of AWS IoT TwinMaker is the ability to import existing 3D models (e.g., CAD and BIM models or point cloud scans) into an AWS IoT TwinMaker scene and then overlay data sourced from other systems over this visualization. The AWS IoT TwinMaker scene uses a real-time WebGL viewport and supports the glTF format. While CAD and BIM models represent the structure of an asset as designed, in some cases, such models may not exist, or the asset as built may differ from the design. It is valuable to provide a 3D model within the digital twin that reflects the current reality as closely as possible. There are a number of mechanisms available to create a 3D model of the real world, with two popular approaches being laser scanning and photogrammetry.

Laser scanning uses specialized and often costly equipment to create highly accurate 3D models of physical environments. In contrast, photogrammetry is the process of extracting 3D information from overlapping 2D photographs using computer vision techniques, including Structure from Motion (SfM).

This post focuses on using a low-cost aerial photography platform (a consumer-level quadcopter – the DJI Phantom 4 Pro) combined with photogrammetry to create a photorealistic model of a large area representing an asset modeled in AWS IoT TwinMaker. Following this approach, you can quickly build a 3D model of an asset that may be prohibitively expensive or impossible to create using laser scanning. The model can be updated quickly and frequently by subsequent drone flights to ensure your digital twin closely reflects reality. It is important to note at the outset that this approach favors photorealism over the absolute geometric accuracy of the generated model.

In this blog, we will also describe how you can capture a dataset of georeferenced photographs via automatic flight planning and execution. You can then feed those photographs through a photogrammetry processing pipeline that automatically creates a scene of the resultant 3D visualization within AWS IoT TwinMaker. We use popular free and open-source photogrammetry software to process the data into glTF format for import into AWS IoT TwinMaker. The processing pipeline also supports OBJ files that can be exported from DroneDeploy or other photogrammetry engines.

Solution Walkthrough

Data acquisition

Photogrammetry relies on certain characteristics of source aerial photographs to create an effective 3D model, including:

  • A high degree of overlap between images
  • The horizon not being visible in any of the photographs
  • The capture of both nadir and non-nadir photographs
  • The altitude of capture being based on the desired resolution of the model (see the ground sampling distance sketch after this list)
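
As a rule of thumb, the relationship between flight altitude and model resolution can be estimated with the standard ground sampling distance (GSD) formula. The following sketch computes GSD in Python using approximate DJI Phantom 4 Pro camera values; the sensor width, focal length, and image width are assumptions you should verify against your own camera's datasheet.

```python
def ground_sampling_distance_cm(altitude_m: float, sensor_width_mm: float,
                                focal_length_mm: float, image_width_px: int) -> float:
    """Estimate ground sampling distance (cm per pixel) for nadir imagery."""
    # GSD = (sensor width x altitude) / (focal length x image width)
    gsd_m = (sensor_width_mm / 1000.0) * altitude_m / ((focal_length_mm / 1000.0) * image_width_px)
    return gsd_m * 100.0

# Approximate DJI Phantom 4 Pro values (assumed; check the camera datasheet)
altitude_m = 160 * 0.3048  # the 160 ft flight altitude used in this example, in metres
print(round(ground_sampling_distance_cm(altitude_m, sensor_width_mm=13.2,
                                        focal_length_mm=8.8, image_width_px=5472), 2))
# Prints roughly 1.34, i.e. about 1.3 cm per pixel at this altitude
```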

While it is possible for a skilled drone pilot to manually capture photographs suitable for photogrammetry, you can achieve more consistent results by automating the flight and capture. A flight planning tool can create an autonomous flight plan that captures images at the locations and elevations, and with the degree of overlap, required for effective photogrammetry processing. Shown below is the flight planning interface of DroneDeploy, a popular reality capture platform for interior and exterior aerial and ground visual data, which we used to capture the images for our example.

Figure 1 – DroneDeploy flight planning interface

We used the flight planning and autonomous operation capabilities of the DroneDeploy platform to capture data representing an asset to be modeled in AWS IoT TwinMaker. The asset of interest is an abandoned power station in Fremantle, Western Australia. As shown in the previous screenshot, the flight was flown at a height of 160 feet, covered an area of 6 acres in under 9 minutes, and captured 149 images. Below are two examples of the aerial photographs captured during the drone flight and subsequently used to generate the 3D model; they illustrate the high degree of overlap between images.

Figure 2 – A high degree of image overlap for effective photogrammetry

Photogrammetry processing pipeline architecture

Once the aerial imagery has been captured, it must be fed through a photogrammetry engine to create a 3D model. DroneDeploy provides a powerful photogrammetry engine with the ability to export 3D models created by the engine in OBJ format, as shown in the following screenshot.

Figure 3 – Exporting the model from DroneDeploy in OBJ format

We have created a photogrammetry processing pipeline that leverages the NodeODM component of the popular free and open-source OpenDroneMap platform to process georeferenced images in a completely serverless manner. The pipeline leverages AWS Fargate and AWS Lambda for compute, creating as output a scene in AWS IoT TwinMaker that contains the 3D model created by OpenDroneMap.

The pipeline also supports processing of 3D models created by the DroneDeploy photogrammetry engine, creating a scene in AWS IoT TwinMaker from an OBJ file exported from DroneDeploy.

The photogrammetry processing pipeline architecture is illustrated in the following diagram.

Figure 4 – Pipeline architecture

The execution of the pipeline using the OpenDroneMap photogrammetry processing engine follows these steps:

  1. A Fargate task is started using the NodeODM image of OpenDroneMap from the public docker.io registry
  2. A set of georeferenced images obtained from a drone flight is uploaded as a .zip file to the landing Amazon S3 bucket
  3. The upload of the zip file results in the publication of an Amazon S3 Event Notification that triggers the execution of the Data Processor Lambda
  4. The Data Processor Lambda unzips the file, starts a new processing job in the NodeODM task running on Fargate, and uploads all the images to that task (a minimal sketch of this interaction follows the list)
  5. The Status Check Lambda periodically polls the NodeODM task to check for completion of the processing job
  6. When the NodeODM processing job is complete, the output of the job is saved in the processed S3 bucket
  7. Saving of the output zip file results in the publication of an Amazon S3 Event Notification that triggers the glTF Converter Lambda
  8. The glTF Converter Lambda converts the OBJ output of the NodeODM processing job to a binary glTF file and uploads it to the workspace S3 bucket, which is associated with the AWS IoT TwinMaker workspace and is created when the workspace is provisioned by the CloudFormation stack
  9. The glTF Lambda creates a new scene in the AWS IoT TwinMaker workspace with the glTF file
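
To make step 4 more concrete, here is a minimal sketch of how the Data Processor Lambda could unzip the upload and hand the images to NodeODM. The NODEODM_URL value is a placeholder for the load balancer in front of the Fargate task, the requests library is assumed to be packaged with the function, and the Lambda in the sample repository may structure this differently.

```python
import io
import zipfile

import boto3
import requests  # assumed to be bundled with the Lambda deployment package

s3 = boto3.client("s3")
NODEODM_URL = "http://your-load-balancer:3000"  # placeholder for the NodeODM endpoint

def handler(event, context):
    # Fetch the uploaded zip named in the S3 event notification
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    # Create a NodeODM task, then upload each image from the archive to it
    uuid = requests.post(f"{NODEODM_URL}/task/new/init").json()["uuid"]
    with zipfile.ZipFile(io.BytesIO(body)) as archive:
        for name in archive.namelist():
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                requests.post(f"{NODEODM_URL}/task/new/upload/{uuid}",
                              files={"images": (name, archive.read(name))})

    # Commit the task so NodeODM starts processing
    requests.post(f"{NODEODM_URL}/task/new/commit/{uuid}")
    return {"taskId": uuid}
```

In practice the returned task UUID would also be persisted, for example in the DynamoDB table created by the stack, so that the Status Check Lambda knows which job to poll.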

If you are using the DroneDeploy photogrammetry engine to create the 3D model, you can upload the exported OBJ zip file directly to the Processed bucket, and steps 7-9 will complete as normal.
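
To illustrate steps 8 and 9, the following is a hedged sketch of converting the OBJ output to binary glTF and registering a new scene. It assumes the trimesh library for the conversion and a deliberately simplified scene document; the workspace ID and bucket name are placeholders, and the glTF Converter Lambda in the sample repository may use a different conversion tool and a richer scene definition.

```python
import json

import boto3
import trimesh  # assumed to be available to the Lambda (layer or container image)

s3 = boto3.client("s3")
twinmaker = boto3.client("iottwinmaker")

WORKSPACE_ID = "my-workspace"             # placeholder
WORKSPACE_BUCKET = "my-workspace-bucket"  # placeholder

def create_scene_from_obj(obj_path: str, scene_id: str) -> None:
    # Convert the OBJ mesh produced by the photogrammetry engine to binary glTF (GLB)
    glb_path = "/tmp/model.glb"
    trimesh.load(obj_path).export(glb_path)

    # Upload the GLB to the AWS IoT TwinMaker workspace bucket
    model_key = f"{scene_id}/model.glb"
    s3.upload_file(glb_path, WORKSPACE_BUCKET, model_key)

    # Minimal scene document that references the model (simplified, assumed format)
    scene = {
        "specVersion": "1.0",
        "version": "1",
        "unit": "meters",
        "nodes": [{
            "name": scene_id,
            "transform": {"position": [0, 0, 0], "rotation": [0, 0, 0], "scale": [1, 1, 1]},
            "components": [{
                "type": "ModelRef",
                "uri": f"s3://{WORKSPACE_BUCKET}/{model_key}",
                "modelType": "GLB",
            }],
        }],
        "rootNodeIndexes": [0],
    }
    scene_key = f"{scene_id}.json"
    s3.put_object(Bucket=WORKSPACE_BUCKET, Key=scene_key,
                  Body=json.dumps(scene), ContentType="application/json")

    # Register the scene with the workspace
    twinmaker.create_scene(workspaceId=WORKSPACE_ID, sceneId=scene_id,
                           contentLocation=f"s3://{WORKSPACE_BUCKET}/{scene_key}")
```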

When the photogrammetry processing pipeline completes execution, a new scene will be created in an AWS IoT TwinMaker workspace containing the generated 3D model, as shown below for the asset of interest.

Figure 5 – Generated 3D scene in AWS IoT TwinMaker

Prerequisites

An AWS account will be required to set up and execute the steps in this blog. An AWS CloudFormation template will configure and install the necessary VPC and networking configuration, AWS Lambda Functions, AWS Identity and Access Management (IAM) roles, Amazon S3 buckets, AWS Fargate Task, Application Load Balancer, Amazon DynamoDB table, and AWS IoT TwinMaker Workspace. The template is designed to run in the Northern Virginia region (us-east-1). You may incur costs on some of the following services:

  • Amazon Simple Storage Service (Amazon S3)
  • Amazon DynamoDB
  • Amazon VPC
  • Amazon CloudWatch
  • AWS Lambda processing and conversion functions
  • AWS Fargate
  • AWS IoT TwinMaker

Steps

Deploy the photogrammetry processing pipeline

  1. Download the sample Lambda deployment package. This package contains the code for the Data Processor Lambda, Status Check Lambda, and glTF Converter Lambda described above
  2. Navigate to the Amazon S3 console
  3. Create an S3 bucket
  4. Upload the Lambda deployment package you downloaded to the S3 bucket created in the previous step. Leave the file zipped as is
  5. Once the Lambda deployment package has been placed in S3, launch this CloudFormation Template
  6. In the Specify Stack Details screen, under the Parameters section, do the following:
    1. Update the Prefix parameter value to a unique prefix for your bucket names. This prefix will ensure the stack’s bucket names are globally unique
    2. Update the DeploymentBucket parameter value to the name of the bucket to which you uploaded the Lambda deployment package
    3. If you are processing a large dataset, increase the Memory and CPU values for the Fargate task based on allowable values as described here
  7. Choose Create stack to create the resources for the photogrammetry processing pipeline
  8. Once complete, navigate to the new S3 landing bucket. A link can be found in the Resources tab as shown below

Figure 6 – Upload bucket resource

  9. Upload a zip file containing your georeferenced images to the landing S3 bucket, as sketched below
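
If you prefer to script this final step, a one-line boto3 sketch looks like the following; the bucket name is a placeholder for the landing bucket created by the stack.

```python
import boto3

# Upload the zipped drone images to the landing bucket (placeholder name)
boto3.client("s3").upload_file("drone-images.zip", "<prefix>-landing-bucket", "drone-images.zip")
```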

Running the photogrammetry processing pipeline

The photogrammetry processing pipeline is initiated automatically when a zip file containing georeferenced images is uploaded. The processing job can take over an hour, depending on the number of images and on the CPU and memory allocated to the Fargate processing task. You can track the job’s progress by looking at the status within the Amazon CloudWatch logs of the Status Check Lambda. While a processing job is active, the Status Check Lambda outputs the status of the job each time it runs (on a 5-minute schedule), including the progress of the processing job as a percentage value, as shown below.

Figure 7 – Photogrammetry job progress
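
If you want to inspect progress directly rather than through the CloudWatch logs, the Status Check Lambda's polling can be approximated with a short sketch against NodeODM's task info endpoint. The endpoint path and response fields come from the public NodeODM API; the URL is a placeholder and the pipeline's own Lambda may differ.

```python
import requests  # assumed to be bundled with the function

NODEODM_URL = "http://your-load-balancer:3000"  # placeholder for the NodeODM endpoint

def check_progress(task_uuid: str) -> dict:
    """Return the NodeODM status code and progress percentage for a task."""
    info = requests.get(f"{NODEODM_URL}/task/{task_uuid}/info").json()
    # In NodeODM, status.code 40 means COMPLETED and 30 means FAILED
    return {"status_code": info.get("status", {}).get("code"),
            "progress": info.get("progress")}
```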

Building a digital twin based on the 3D model

When the photogrammetry processing pipeline has completed and a new scene has been created in the AWS IoT TwinMaker workspace, you can start associating components bound to data sources, using the 3D model to provide visual context for the data and to provide visual cues based on data-driven conditions.

You can configure a dashboard using the AWS IoT TwinMaker Application Plugin for Grafana to share your digital twin with other users.

Clean up

Be sure to clean up the resources created in this blog to avoid ongoing charges. Delete the following resources, in this order, when you are finished (a scripted sketch follows the list):

  1. Delete any created scenes from your AWS IoT TwinMaker workspace
  2. Delete all files in the Landing, Processed, and Workspace S3 Buckets
  3. Delete the CloudFormation Stack
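
The same cleanup can be scripted; a minimal boto3 sketch follows, where the workspace ID, scene ID, bucket names, and stack name are all placeholders for the values used in your deployment.

```python
import boto3

s3 = boto3.resource("s3")
twinmaker = boto3.client("iottwinmaker")
cloudformation = boto3.client("cloudformation")

# 1. Delete the generated scene from the AWS IoT TwinMaker workspace (placeholder IDs)
twinmaker.delete_scene(workspaceId="my-workspace", sceneId="my-scene")

# 2. Empty the Landing, Processed, and Workspace buckets (placeholder names)
for bucket_name in ["<prefix>-landing", "<prefix>-processed", "<prefix>-workspace"]:
    s3.Bucket(bucket_name).objects.all().delete()

# 3. Delete the CloudFormation stack (placeholder name)
cloudformation.delete_stack(StackName="photogrammetry-pipeline")
```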

Conclusion

In this blog, you created a serverless photogrammetry processing pipeline that can process drone imagery via open-source software into a 3D model and created a scene in AWS IoT TwinMaker based on the generated 3D model. In addition, the pipeline can process 3D models created by other photogrammetry engines, such as that provided by DroneDeploy, and exported to OBJ. Although the pipeline has been used to demonstrate the processing of drone imagery, any georeferenced image data could be used. The ability to quickly create a photorealistic 3D model of large real-world assets using only consumer-grade hardware enables you to maintain up-to-date models that can be bound to data sources and shared with other users, allowing them to make decisions based on data displayed within a rich visual context. The pipeline described in this blog is available in this GitHub repo.

Now that you have a visual asset, you can combine it with real-world data from diverse sources by using built-in connectors, or by creating your own as described in the AWS IoT TwinMaker user guide.


About the Author

Greg Biegel is a Senior Cloud Architect with AWS Professional Services in Perth, Western Australia. He loves spending time working with customers in the Mining, Energy, and Industrial sectors, helping them to achieve valuable business outcomes. He has a PhD from Trinity College Dublin and over 20 years of experience in software development.