AWS Machine Learning Blog
Create synthetic data for computer vision pipelines on AWS
Collecting and annotating image data is one of the most resource-intensive tasks on any computer vision project. It can take months at a time to fully collect, analyze, and experiment with image streams at the level you need in order to compete in the current marketplace. Even after you’ve successfully collected data, you still have a constant stream of annotation errors, poorly framed images, small amounts of meaningful data in a sea of unwanted captures, and more. These major bottlenecks are why synthetic data creation needs to be in the toolkit of every modern engineer. By creating 3D representations of the objects we want to model, we can rapidly prototype algorithms while concurrently collecting live data.
In this post, I walk you through an example of using the open-source animation library Blender to build an end-to-end synthetic data pipeline, using chicken nuggets as an example. The following image is an illustration of the data generated in this blog post.
What is Blender?
Blender is open-source 3D graphics software primarily used in animation, 3D printing, and virtual reality. It has an extremely comprehensive rigging, animation, and simulation suite that allows the creation of 3D worlds for nearly any computer vision use case. It also has an extremely active support community where most, if not all, user errors are solved.
Set up your local environment
We install two versions of Blender: one on a local machine with access to a GUI, and the other on an Amazon Elastic Compute Cloud (Amazon EC2) P2 instance.
Install Blender and ZPY
Install Blender from the Blender website.
Then complete the following steps:
- Run the following commands:
- Copy the necessary Python headers into the Blender version of Python so that you can use other non-Blender libraries:
- Override your Blender version and force installs so that the Blender-provided Python works:
- Download zpy and install from source:
- Change the NumPy version to >=1.19.4 and scikit-image to >=0.18.1 to make the install on 3.10.2 possible and so you don’t get any overwrites:
- To ensure compatibility with Blender 3.2, go into zpy/render.py and comment out the following two lines (for more information, refer to Blender 3.0 Failure #54):
- Next, install the zpy library:
- Download the add-ons version of zpy from the GitHub repo so you can actively run your instance:
- Save a file called enable_zpy_addon.py in your /home directory and run the enablement command, because you don’t have a GUI to activate it (a sketch of this file appears after this list):
If zpy-addon doesn’t install (for whatever reason), you can install it via the GUI:
- In Blender, on the Edit menu, choose Preferences.
- Choose Add-ons in the navigation pane and activate zpy.
You should see a page open in the GUI, and you’ll be able to choose ZPY. This confirms that the add-on is loaded.
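If you’re running Blender headless, you can do the same enablement from a script. The following is a minimal sketch of what enable_zpy_addon.py might contain; the zip location and the zpy_addon module name are assumptions, so match them to your actual download:

```python
# enable_zpy_addon.py -- run headlessly with: blender -b -P enable_zpy_addon.py
# The zip path and module name below are assumptions; match them to your download.
import bpy

ADDON_ZIP = "/home/ubuntu/zpy_addon.zip"   # assumed location of the downloaded add-on
ADDON_MODULE = "zpy_addon"                 # assumed module name the add-on registers as

# Install the add-on from the zip, enable it, and persist the preference
bpy.ops.preferences.addon_install(filepath=ADDON_ZIP)
bpy.ops.preferences.addon_enable(module=ADDON_MODULE)
bpy.ops.wm.save_userpref()
```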
AliceVision and Meshroom
Install AliceVision and Meshroom from their respective GitHub repos:
FFmpeg
Your system should have ffmpeg, but if it doesn’t, you’ll need to download it.
Instant Meshes
You can either compile the library yourself or download the available pre-compiled binaries (which is what I did) for Instant Meshes.
Set up your AWS environment
Now we set up the AWS environment on an EC2 instance. We repeat the steps from the previous section, but only for Blender and zpy.
- On the Amazon EC2 console, choose Launch instances.
- Choose your AMI. There are a few options from here. We can either choose a standard Ubuntu image, pick a GPU instance, and then manually install the drivers and get everything set up, or we can take the easy route and start with a preconfigured Deep Learning AMI and only worry about installing Blender. For this post, I use the second option, and choose the latest version of the Deep Learning AMI for Ubuntu (Deep Learning AMI (Ubuntu 18.04) Version 61.0).
- For Instance type, choose p2.xlarge.
- If you don’t have a key pair, create a new one or choose an existing one.
- For this post, use the default settings for network and storage.
- Choose Launch instances.
- Choose Connect and find the instructions to log in to your instance over SSH on the SSH client tab.
- Connect with SSH:
ssh -i "your-pem" ubuntu@IPADDRESS.YOUR-REGION.compute.amazonaws.com
Once you’ve connected to your instance, follow the same installation steps from the previous section to install Blender and zpy.
Data collection: 3D scanning our nugget
For this step, I use an iPhone to record a 360-degree video at a fairly slow pace around my nugget. I stuck a chicken nugget onto a toothpick and taped the toothpick to my countertop, and simply rotated my camera around the nugget to get as many angles as I could. The faster you film, the less likely you are to get good images to work with, depending on the shutter speed.
After I finished filming, I sent the video to my email and extracted the video to a local drive. From there, I used ffmpeg to chop the video into frames to make Meshroom ingestion much easier:
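The exact command isn’t reproduced here; the following is a minimal Python sketch of the idea, wrapping ffmpeg with subprocess. The video name, output folder, and sampling rate are assumptions you should adjust:

```python
# Split the nugget video into frames for Meshroom (file names and frame rate are assumptions).
import subprocess
from pathlib import Path

video = "nugget.mov"                      # hypothetical input video
out_dir = Path("nugget_images")
out_dir.mkdir(exist_ok=True)

# Sample a few frames per second; too many near-identical frames just slows Meshroom down.
subprocess.run(
    ["ffmpeg", "-i", video, "-vf", "fps=4", str(out_dir / "nugget_%04d.jpg")],
    check=True,
)
```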
Open Meshroom and use the GUI to drag the nugget_images folder to the pane on the left. From there, choose Start and wait a few hours (or less) depending on the length of the video and if you have a CUDA-enabled machine.
You should see something like the following screenshot when it’s almost complete.
Data collection: Blender manipulation
When our Meshroom reconstruction is complete, complete the following steps:
- Open the Blender GUI and, on the File menu, choose Import, then Wavefront (.obj) to import the textured mesh you created in Meshroom. The file should be saved in path/to/MeshroomCache/Texturing/uuid-string/texturedMesh.obj.
- Load the file and observe the monstrosity that is your 3D object. Here is where it gets a bit tricky.
- Scroll to the top right side and choose the Wireframe icon in Viewport Shading.
- Select your object on the right viewport and make sure it’s highlighted, scroll over to the main layout viewport, and either press Tab or manually choose Edit Mode.
- Next, maneuver the viewport in such a way as to allow yourself to be able to see your object with as little as possible behind it. You’ll have to do this a few times to really get it correct.
- Click and drag a bounding box over the object so that only the nugget is highlighted.
- After it’s highlighted like in the following screenshot, we separate our nugget from the 3D mass by left-clicking, choosing Separate, and then Selection. We now move over to the right, where we should see two textured objects: texturedMesh and texturedMesh.001.
- Our new object should be texturedMesh.001, so we choose texturedMesh and choose Delete to remove the unwanted mass.
- Choose the object (texturedMesh.001) on the right, move to our viewer, and choose the object, Set Origin, and Origin to Center of Mass.
Now, if we want, we can move our object to the center of the viewport (or simply leave it where it is) and view it in all its glory. Notice the large black hole where we didn’t get good film coverage! We’re going to need to correct for this.
To clean our object of any pixel impurities, we export our object to an .obj file. Make sure to choose Selection Only when exporting.
Data collection: Clean up with Instant Meshes
Now we have two problems: our mesh has a pixel gap created by our poor filming that we need to clean up, and our mesh is incredibly dense (which will make generating images extremely time-consuming). To tackle both issues, we need to use a tool called Instant Meshes to extrapolate our pixel surface to cover the black hole and also to shrink the total object to a smaller, less dense size.
- Open Instant Meshes and load our recently saved nugget.obj file.
- Under Orientation field, choose Solve.
- Under Position field, choose Solve.
Here’s where it gets interesting. If you explore your object and notice that the criss-cross lines of the Position solver look disjointed, you can choose the comb icon under Orientation field and redraw the lines properly.
- Choose Solve for both Orientation field and Position field.
- If everything looks good, export the mesh, name it something like nugget_refined.obj, and save it to disk.
Data collection: Shake and bake!
Because our low-poly mesh doesn’t have any image texture associated with it and our high-poly mesh does, we either need to bake the high-poly texture onto the low-poly mesh, or create a new texture and assign it to our object. For the sake of simplicity, we’re going to create an image texture from scratch and apply that to our nugget.
I used Google image search for nuggets and other fried things in order to get a high-res image of the surface of a fried object. I found a super high-res image of a fried cheese curd and made a new image full of the fried texture.
With this image, I’m ready to complete the following steps:
- Open Blender and load the new nugget_refined.obj the same way you loaded your initial object: on the File menu, choose Import, then Wavefront (.obj), and choose the nugget_refined.obj file.
- Next, go to the Shading tab. At the bottom, you should notice two boxes with the titles Principled BSDF and Material Output.
- On the Add menu, choose Texture and Image Texture. An Image Texture box should appear.
- Choose Open Image and load your fried texture image.
- Drag your mouse between Color in the Image Texture box and Base Color in the Principled BSDF box.
Now your nugget should be good to go!
Data collection: Create Blender environment variables
Now that we have our base nugget object, we need to create a few collections and environment variables to help us in our process.
- Left-click in the right-hand scene collection area and choose New Collection.
- Create the following collections: BACKGROUND, NUGGET, and SPAWNED.
- Drag the nugget to the NUGGET collection and rename it nugget_base.
Data collection: Create a plane
We’re going to create a background object from which our nuggets will be generated when we’re rendering images. In a real-world use case, this plane is where our nuggets are placed, such as a tray or bin.
- On the Add menu, choose Mesh and then Plane. From here, we move to the right side of the page and find the orange box (Object Properties).
- In the Transform pane, for XYZ Euler, set X to 46.968, Y to 46.968, and Z to 1.0.
- For both Location and Rotation, set X, Y, and Z to 0.
Data collection: Set the camera and axis
Next, we’re going to set our cameras up correctly so that we can generate images.
- On the Add menu, choose Empty and Plain Axes.
- Name the object Main Axis.
- Make sure our axis is 0 for all the variables (so it’s directly in the center).
- If you have a camera already created, drag that camera to under Main Axis.
- Choose Item and Transform.
- For Location, set X to 0, Y to 0, and Z to 100.
Data collection: Here comes the sun
Next, we add a Sun object.
- On the Add menu, choose Light and Sun. The location of this object doesn’t necessarily matter as long as it’s centered somewhere over the plane object we’ve set.
- Choose the green lightbulb icon in the bottom right pane (Object Data Properties) and set the strength to 5.0.
- Repeat the same procedure to add a Light object and put it in a random spot over the plane.
Data collection: Download random backgrounds
To inject randomness into our images, we download as many random textures from texture.ninja as we can (for example, bricks). Download them to a folder within your workspace called random_textures. I downloaded about 50.
Generate images
Now we get to the fun stuff: generating images.
Image generation pipeline: Object3D and DensityController
Let’s start with some code definitions:
We first define a basic container class with some important properties. This class mainly exists to allow us to create a BVH tree (a way to represent our nugget object in 3D space), where we’ll need to use the BVHTree.overlap method to see if two independently generated nugget objects are overlapping in our 3D space. More on this later.
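The original class isn’t shown here, but an illustrative sketch of such a container might look like the following. The class and method names are assumptions; only the bmesh and BVHTree calls come from Blender’s API:

```python
# A minimal container that wraps a Blender object and exposes a BVH tree for overlap tests.
# Class and attribute names here are illustrative, not the exact code used in the pipeline.
import bpy
import bmesh
from mathutils.bvhtree import BVHTree

class Object3D:
    def __init__(self, obj: bpy.types.Object):
        self.obj = obj

    def bvh_tree(self) -> BVHTree:
        # Build the BVH tree in world space so two objects can be compared directly.
        bm = bmesh.new()
        bm.from_mesh(self.obj.data)
        bm.transform(self.obj.matrix_world)
        tree = BVHTree.FromBMesh(bm)
        bm.free()
        return tree

    def overlaps(self, other: "Object3D") -> bool:
        # BVHTree.overlap returns the list of overlapping face-index pairs.
        return len(self.bvh_tree().overlap(other.bvh_tree())) > 0
```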
The second piece of code is our density controller. This serves as a way to bound ourselves to the rules of reality and not the 3D world. For example, in the 3D Blender world, objects in Blender can exist inside each other; however, unless someone is performing some strange science on our chicken nuggets, we want to make sure no two nuggets are overlapping by a degree that makes it visually unrealistic.
We use our Plane object to spawn a set of bounded invisible cubes that can be queried at any given time to see if the space is occupied or not. See the following code:
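The pipeline’s exact code isn’t reproduced here; the following stripped-down sketch shows the occupancy bookkeeping idea. The class and method names are assumptions:

```python
# Illustrative skeleton of a density controller that tracks which spawn cubes are occupied.
# The class and method names are assumptions, not the exact pipeline code.
import random

class DensityController:
    def __init__(self):
        self.cubes = []          # invisible spawn cubes generated from the plane
        self.occupied = {}       # cube name -> object currently occupying it (or None)

    def register_cubes(self, cubes):
        self.cubes = list(cubes)
        self.occupied = {cube.name: None for cube in self.cubes}

    def random_free_cube(self):
        free = [c for c in self.cubes if self.occupied[c.name] is None]
        return random.choice(free) if free else None

    def occupy(self, cube, obj):
        self.occupied[cube.name] = obj
```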
In the following snippet, we select the nugget and create a bounding cube around that nugget. This cube represents the size of a single pseudo-voxel of our pseudo-kdtree object. We need to use the bpy.context.view_layer.update() function because when this code is run from inside a function or script vs. the Blender GUI, it seems that the view_layer isn’t automatically updated.
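As an illustration, a bounding cube step along these lines might look like the following; the object names nugget_base and nugget_cube are assumptions:

```python
# Create a cube matching the nugget's world-space bounding box (object names are assumptions).
import bpy

nugget = bpy.data.objects["nugget_base"]

# Force a scene update so dimensions/matrix_world are current when run from a script.
bpy.context.view_layer.update()

dims = nugget.dimensions
bpy.ops.mesh.primitive_cube_add(size=1, location=nugget.location)
cube = bpy.context.active_object
cube.name = "nugget_cube"
cube.dimensions = dims
bpy.context.view_layer.update()
```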
Next, we slightly update our cube object so that its length and width are square, as opposed to the natural size of the nugget it was created from:
Now we use our updated cube object to create a plane that can volumetrically hold num_objects nuggets:
We take our plane object and create a giant cube of the same length and width as our plane, with the height of our nugget cube, CUBE1:
From here, we want to create voxels from our cube. We take the number of cubes we would need to fit num_objects and then cut them from our cube object. We look for the upward-facing mesh face of our cube, and then pick that face to make our cuts. See the following code:
Lastly, we calculate the center of the top-face of each cut we’ve made from our big cube and create actual cubes from those cuts. Each of these newly created cubes represents a single piece of space to spawn or move nuggets around our plane. See the following code:
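The pipeline’s cutting code isn’t shown here. As a simplified illustration, the following sketch skips the bisect step and computes the grid cell centers directly from the plane and nugget-cube dimensions, dropping a small cube at each slot. The object names, num_objects value, and the direct-grid shortcut are all assumptions:

```python
# Build a grid of spawn cubes over the plane; each cube marks one slot a nugget can occupy.
# Object names (Plane, nugget_cube) and the direct grid computation are assumptions/simplifications.
import bpy
import math

plane = bpy.data.objects["Plane"]
nugget_cube = bpy.data.objects["nugget_cube"]
num_objects = 25                                # how many nuggets we want to fit

cell = max(nugget_cube.dimensions.x, nugget_cube.dimensions.y)
per_side = math.ceil(math.sqrt(num_objects))    # cells per side of the square grid
origin_x = plane.location.x - (per_side * cell) / 2 + cell / 2
origin_y = plane.location.y - (per_side * cell) / 2 + cell / 2

spawn_cubes = []
for i in range(per_side):
    for j in range(per_side):
        loc = (origin_x + i * cell, origin_y + j * cell, nugget_cube.dimensions.z / 2)
        bpy.ops.mesh.primitive_cube_add(size=1, location=loc)
        c = bpy.context.active_object
        c.dimensions = (cell, cell, nugget_cube.dimensions.z)
        c.name = f"spawn_cube_{i}_{j}"
        spawn_cubes.append(c)
```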
Next, we develop an algorithm that understands which cubes are occupied at any given time, finds which objects overlap with each other, and moves overlapping objects separately into unoccupied space. We won’t be able to get rid of all overlaps entirely, but we can make it look real enough.
See the following code:
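The exact implementation isn’t reproduced here; the following sketch captures the idea, reusing the hypothetical Object3D and DensityController classes from the earlier sketches:

```python
# Resolve overlaps: when two nuggets intersect, move one of them to a free spawn cube.
# Relies on the illustrative Object3D/DensityController sketches above, not the exact pipeline code.
import bpy

def resolve_overlaps(nuggets, controller, max_passes=3):
    for _ in range(max_passes):
        moved_any = False
        wrapped = [Object3D(n) for n in nuggets]
        for i in range(len(wrapped)):
            for j in range(i + 1, len(wrapped)):
                if wrapped[i].overlaps(wrapped[j]):
                    free_cube = controller.random_free_cube()
                    if free_cube is None:
                        continue  # nowhere left to put it; tolerate the overlap
                    nuggets[j].location = free_cube.location
                    controller.occupy(free_cube, nuggets[j])
                    bpy.context.view_layer.update()
                    moved_any = True
        if not moved_any:
            break
```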
Image generation pipeline: Cool runnings
In this section, we break down what our run function is doing.
We initialize our DensityController and create something called a saver using the ImageSaver from zpy. This allows us to seamlessly save our rendered images to any location of our choosing. We then add our nugget category (and if we had more categories, we would add them here). See the following code:
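The snippet isn’t reproduced here; the following illustrative setup follows the pattern in the zpy README, and the exact signatures may differ across zpy versions:

```python
# Illustrative setup of the saver and category, following the zpy README pattern
# (exact signatures may differ across zpy versions).
import zpy

density_controller = DensityController()                 # hypothetical controller from above
saver = zpy.saver_image.ImageSaver(description="Chicken nuggets on a plane")

# Segmentation color + category for our single "nugget" class.
nugget_seg_color = zpy.color.random_color(output_style="frgb")
saver.add_category(name="nugget", color=nugget_seg_color)
```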
Next, we need to make a source object from which we spawn copies of our nugget; in this case, it’s the nugget_base that we created:
Now that we have our base nugget, we’re going to save the world poses (locations) of all the other objects so that after each rendering run, we can use these saved poses to reinitialize a render. We also move our base nugget completely out of the way so that the kdtree doesn’t sense a space being occupied. Finally, we initialize our kdtree-cube objects. See the following code:
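The original snippet isn’t shown; as an illustration, plain bpy can do the bookkeeping like this (the pose dictionary and the far-away parking location are assumptions):

```python
# Save world poses so each render run can start from the same layout (plain bpy, an
# illustrative alternative to whatever pose-saving the pipeline uses).
import bpy
from mathutils import Vector

saved_poses = {obj.name: obj.matrix_world.copy() for obj in bpy.data.objects}

def restore_poses():
    for name, matrix in saved_poses.items():
        if name in bpy.data.objects:
            bpy.data.objects[name].matrix_world = matrix
    bpy.context.view_layer.update()

# Move the base nugget far away so its spawn cube isn't marked as occupied.
bpy.data.objects["nugget_base"].location = Vector((0.0, 0.0, 1000.0))
```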
The following code collects our downloaded backgrounds from texture.ninja, which will be randomly projected onto our plane:
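A minimal version of that collection step, assuming the random_textures folder from earlier, could look like this:

```python
# Gather the downloaded texture.ninja images (the random_textures folder name comes from earlier).
from pathlib import Path

texture_dir = Path("random_textures")
backgrounds = [p for p in texture_dir.iterdir()
               if p.suffix.lower() in {".jpg", ".jpeg", ".png"}]
```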
Here is where the magic begins. We first regenerate our kdtree-cubes for this run so that we can start fresh:
We use our density controller to generate a random spawn point for our nugget, create a copy of nugget_base, and move the copy to the randomly generated spawn point:
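Continuing the earlier sketches, an illustrative version of this step might look like the following; the SPAWNED collection name comes from the setup above, and density_controller is the hypothetical controller:

```python
# Copy the base nugget and drop it at a randomly chosen free spawn cube
# (SPAWNED collection and helper names are assumptions).
import bpy

nugget_base = bpy.data.objects["nugget_base"]
spawn_cube = density_controller.random_free_cube()

new_nugget = nugget_base.copy()
new_nugget.data = nugget_base.data.copy()       # give the copy its own mesh data
bpy.data.collections["SPAWNED"].objects.link(new_nugget)

new_nugget.location = spawn_cube.location
density_controller.occupy(spawn_cube, new_nugget)
bpy.context.view_layer.update()
```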
Next, we randomly jitter the size of the nugget, the mesh of the nugget, and the scale of the nugget so that no two nuggets look the same:
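As a sketch, continuing with the new_nugget copy from above and using arbitrary jitter ranges (the ranges are assumptions, not the values used in the pipeline):

```python
# Randomly jitter scale and rotation so no two nugget copies look identical
# (ranges are arbitrary illustrative values).
import random
import math

scale_jitter = random.uniform(0.85, 1.15)
new_nugget.scale = (new_nugget.scale.x * scale_jitter,
                    new_nugget.scale.y * scale_jitter,
                    new_nugget.scale.z * random.uniform(0.9, 1.1))
new_nugget.rotation_euler = (random.uniform(0, 2 * math.pi),
                             random.uniform(0, 2 * math.pi),
                             random.uniform(0, 2 * math.pi))
```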
We turn our nugget copy into an Object3D object where we use the BVH tree functionality to see if our plane intersects or overlaps any face or vertices on our nugget copy. If we find an overlap with the plane, we simply move the nugget upwards on its Z axis. See the following code:
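An illustrative version of that check, reusing the hypothetical Object3D wrapper and a fixed step size (the step size is an assumption):

```python
# If the nugget copy intersects the plane, nudge it up along Z until it no longer does
# (continues the illustrative Object3D sketch from earlier).
import bpy

plane_3d = Object3D(bpy.data.objects["Plane"])
nugget_3d = Object3D(new_nugget)

while nugget_3d.overlaps(plane_3d):
    new_nugget.location.z += 0.1
    bpy.context.view_layer.update()
```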
Now that all nuggets are created, we use our DensityController to move nuggets around so that we have a minimum number of overlaps, and those that do overlap aren’t hideous looking:
In the following code, we restore the Camera and Main Axis poses and randomly select how far the camera is from the Plane object:
We decide how randomly we want the camera to travel along the Main Axis. Depending on whether we want it to be mainly overhead or we care very much about the angle from which it sees the board, we can adjust the top_down_mostly parameter based on how well our training model is picking up the signal of “What even is a nugget anyway?”
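A sketch of that camera randomization might look like the following; the tilt ranges and the boolean form of top_down_mostly are assumptions:

```python
# Spin the Main Axis empty (the camera is parented to it) to get a random viewpoint;
# top_down_mostly keeps the tilt small so most shots stay near-overhead. Names/ranges are assumptions.
import bpy
import math
import random

main_axis = bpy.data.objects["Main Axis"]
top_down_mostly = True

max_tilt = math.radians(15) if top_down_mostly else math.radians(60)
main_axis.rotation_euler = (random.uniform(-max_tilt, max_tilt),
                            random.uniform(-max_tilt, max_tilt),
                            random.uniform(0, 2 * math.pi))
bpy.context.view_layer.update()
```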
In the following code, we do the same thing with the Sun object, and randomly pick a texture for the Plane object:
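An illustrative version of both randomizations follows; the energy range is an assumption, while the node names are Blender defaults:

```python
# Randomize the Sun strength and project a random background texture onto the plane.
# Object/material names and the energy range are assumptions; the node names are Blender defaults.
import bpy
import random

sun = bpy.data.objects["Sun"]
sun.data.energy = random.uniform(2.0, 8.0)

texture_path = random.choice(backgrounds)        # from the list gathered earlier
mat = bpy.data.materials.new(name="random_background")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load(str(texture_path))
mat.node_tree.links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])

plane = bpy.data.objects["Plane"]
plane.data.materials.clear()
plane.data.materials.append(mat)
```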
Finally, we hide all our objects that we don’t want to be rendered: the nugget_base and our entire cube structure:
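A minimal sketch of that cleanup, assuming the helper object names from the earlier sketches:

```python
# Hide helper geometry from the final render (object names are assumptions).
import bpy

bpy.data.objects["nugget_base"].hide_render = True
for obj in bpy.data.objects:
    if obj.name.startswith("spawn_cube") or obj.name == "nugget_cube":
        obj.hide_render = True
```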
Lastly, we use zpy to render our scene, save our images, and then save our annotations. For this post, I made some small changes to the zpy annotation library for my specific use case (annotation per image instead of one file per project), but you shouldn’t have to for the purposes of this post.
Voila!
Run the headless creation script
Now that we have our saved Blender file, our created nugget, and all the supporting information, let’s zip our working directory and either scp it to our GPU machine or upload it via Amazon Simple Storage Service (Amazon S3) or another service:
Log in to your EC2 instance and decompress your working_blender folder:
Now we create our data in all its glory:
The script should run for 500 images, and the data is saved in /path/to/working_blender_dir/nugget_data.
The following code shows a single annotation created with our dataset:
Conclusion
In this post, I demonstrated how to use the open-source animation library Blender to build an end-to-end synthetic data pipeline.
There are a ton of cool things you can do in Blender and AWS; hopefully this demo can help you on your next data-starved project!
References
- Easily Clean Your 3D Scans (blender)
- Instant Meshes: A free quad-based autoretopology program
- How to 3D Scan an Object for Synthetic Data
- Generate synthetic data with Blender and Python
About the Author
Matt Krzus is a Sr. Data Scientist at Amazon Web Services in the AWS Professional Services group.