The Internet of Things on AWS – Official Blog

Build an efficient development environment for AWS IoT Greengrass

Introduction

This post shows you how to set up a clean and efficient development environment for AWS IoT Greengrass. With this environment you can rapidly iterate on your ideas and automate your process to build edge compute systems from scratch. Building a repeatable development environment for edge systems can take a long time. These tools will reduce the amount of time it takes you to get started and give you a base to build your production applications.

Overview

The Ubuntu 20.04-based virtual machine that you build can:

  • Start from scratch in minutes
  • Connect to real USB devices
  • Interact with AWS services using your AWS credentials
  • Capture images from a USB camera and save them to your host system
  • Run AWS IoT Greengrass with a single component

We will use GreenEyes to implement a digital video recorder (DVR) platform. This platform uses multiple AWS services for edge computing, such as AWS IoT Greengrass and FreeRTOS.

Prerequisites

All operating systems

You must have the AWS Command Line Interface v2 installed and have credentials set up in the default location $HOME/.aws. The project will bring these credentials into the VM so you can interact with AWS right away.
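Before you build the VM, you can sanity-check that those credentials exist. This small helper is hypothetical, not a script that ships with the project:

```shell
# Hypothetical pre-flight check (not part of the GreenEyes repo): verify
# that the directory Vagrant will share into the VM contains credentials.
check_aws_creds() {
  # $1 is the directory to check, normally "$HOME/.aws"
  [ -f "$1/credentials" ] || [ -f "$1/config" ]
}

if check_aws_creds "$HOME/.aws"; then
  echo "AWS credentials directory looks good"
else
  echo "No credentials found; run 'aws configure' first" >&2
fi
```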

MacOS and Linux

You must have git and bash installed. All recent versions of Linux and MacOS should have these installed already.

Windows

You need a shell that can run bash scripts. We recommend Git Bash which comes with Git for Windows. If you have Windows Terminal installed make sure you enable the Git Bash profile for Windows Terminal during the Git for Windows install process. You can then select Git Bash from the Windows Terminal drop-down instead of using the default terminal.

Tools overview

Virtualization with VirtualBox

You need a virtualization platform. It is possible to do a lot in containers and many people prefer them. In this case though you need to load kernel modules and access USB devices, which isn’t always possible in container-based systems. Additionally, one goal is to make the experience as similar as possible across Windows, MacOS, and Linux.

The virtualization platform you are going to use is VirtualBox. VirtualBox is cross-platform, free, and supports USB passthrough. Tests show that the performance is sufficient for single stream video capture and processing.

Automated builds with Vagrant

You need tools to make setting up virtual machines as easy as possible. Being able to start, stop, and rebuild machines on the command-line will save time and avoid manual configuration steps.

Vagrant automates building and configuring your VMs. Vagrant is cross-platform and can use VirtualBox as a back end, which supports USB passthrough.

Vagrant can also share files between the host and the guest operating system. Results show up immediately and you can monitor your system right from the host.
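As an illustration, a synced folder in a Vagrantfile looks like the fragment below. The paths here are assumptions for this example; the repo’s actual Vagrantfile defines its own mapping.

```ruby
# Illustrative Vagrantfile fragment (paths are assumptions, not the
# repo's actual configuration): map the "shared" directory next to the
# Vagrantfile on the host to /home/vagrant/shared in the guest.
Vagrant.configure("2") do |config|
  config.vm.synced_folder "shared", "/home/vagrant/shared"
end
```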

Camera selection

USB cameras are not all equal. Luckily, all UVC webcams should be. UVC stands for USB Video Class, a standard that defines a common interface for video capture. Some webcams do not support this standard and require additional software.

For simplicity, this post limits the scope to UVC webcams only. This system can be adapted to use other cameras but that is beyond the scope of this post.

A reliable UVC camera that you can start with is the Logitech C922. It can be found online at various stores either new or used. There are also some variations of the C922, like the C920s and the C920e. These should work as well, and we will update the documentation as we test them.

As long as the camera you’re using is on the UVC device list or claims to support UVC it should work. However, the default code uses the Logitech C922. If you use another camera there are some changes you’ll need to make to the configuration. Once you set up VirtualBox and Vagrant you can test the camera and validate your setup.

Install tools

VirtualBox

VirtualBox is one of the easiest dependencies for you to set up. Go to the VirtualBox wiki’s downloads page and download the package for the platform you’ll be running on. You will need administrator access to install VirtualBox, so make sure you use a computer that you have full control over.

This post is based on VirtualBox 6.1.32 on MacOS. If you are on a different platform or are using a newer version some of the screens may be slightly different. If you run into issues feel free to share screenshots with us by filing an issue in our GreenEyes repo.

VirtualBox extension pack

You also need USB 3.x support so that your USB camera works correctly. To get it, install the Oracle VM VirtualBox Extension Pack, which is available on the same downloads page you downloaded VirtualBox from.

NOTE: If you already have the VirtualBox GUI open when you install the extension pack you will need to close and re-open it.

Vagrant

Head over to the Vagrant downloads page and install the build appropriate for your operating system. Some operating systems can install Vagrant with a package manager (Homebrew, apt, etc.), some require downloading a binary build.

Code

GreenEyes repository overview

The GreenEyes repository contains scripts and documentation for each post in this series. Scripts that are meant to be run on the host are in the greeneyes/blog-posts/001/host directory. Scripts that are meant to be run on the guest are in the greeneyes/blog-posts/001/guest directory.

In the host directory for this post there is only a Vagrantfile. There are no additional scripts to run.

Cloning the GreenEyes repository

Navigate in your shell to a directory where you’d like to store the repository and clone it like this:

git clone https://github.com/awslabs/greeneyes/

Environment

Initializing the environment

If your system is running MacOS or Linux then you are ready to go and all of the dependencies are present.

If your system is running Windows then Hyper-V needs to be enabled if it isn’t already. Enable Hyper-V in an administrator terminal session with this command:

powershell Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

Starting the VM

NOTE: Any time you use a vagrant command you must be in the greeneyes/blog-posts/001/host directory for this blog post.

Run vagrant up in the greeneyes/blog-posts/001/host directory. When this completes the VM is bootstrapped with the necessary kernel modules and USB 3.0 will be enabled. This command can take a while depending on a number of factors including your Internet bandwidth, CPU, RAM, etc. On the low end you can expect about three and a half minutes.

If the VM fails to provision

It is possible for the VM to fail to run the provisioning steps due to temporary network connectivity issues. When this happens an error is printed that looks like this:

The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

If you see this message, run vagrant destroy to delete the VM and then run vagrant up again. Otherwise the VM will not have the proper dependencies and configuration to continue.

Validating USB 3.0 support

Run vagrant ssh and you’ll be dropped into a shell in the VM. To verify that USB support is working run this command:

dmesg | grep -i xhci

The output should contain some lines similar to this:

[    1.287174] usb usb3: Product: xHCI Host Controller
[    1.287928] usb usb3: Manufacturer: Linux 5.4.0-107-generic xhci-hcd

If those lines are there (there may be more as well) then USB 3.0 support is working. Exit the shell by running the exit command.

Now run vagrant halt to stop the VM so you can set up USB passthrough for your webcam.

Configuring USB passthrough

NOTE: The default Vagrant configuration configures USB 3.0 passthrough and adds a device filter for the Logitech C922. You can skip this section if you are using the Logitech C922. Otherwise, follow this section to add a device filter for your webcam.
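If your camera is not a C922, one way to add a filter without the GUI is through the Vagrantfile, which can pass arbitrary VBoxManage arguments to VirtualBox. This is a hedged sketch, not the repo’s actual configuration; the device name and IDs below are placeholders, and you can find your camera’s real vendor and product IDs with VBoxManage list usbhost.

```ruby
# Illustrative sketch (device name and IDs are placeholders): add a USB
# device filter for a different webcam by passing VBoxManage arguments.
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |vb|
    vb.customize ["usbfilter", "add", "0",
                  "--target", :id,
                  "--name", "My Webcam",
                  "--vendorid", "1234",
                  "--productid", "5678"]
  end
end
```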

Plug your USB camera into your computer. You will need to select it from a list on one of the next screens and it’s better to have it ready to go instead of reloading the interface.

Start the VirtualBox GUI. The interface should look like this:

The VirtualBox GUI showing the virtual machine in a powered off state

Now choose “Settings”, then “Ports”, and then “USB”.

The screen should have “Enable USB Controller” checked, “USB 3.0 (xHCI) Controller” selected, an empty “USB Device Filters” list, and two USB icons on the right side.

Choose the icon with the USB cable and the plus sign as shown here:

The VirtualBox GUI showing USB 3.0 enabled but no devices in the allow list

Select the camera you connected and it will show up in the “USB Device Filters” list like this:

The VirtualBox GUI showing USB 3.0 enabled and the Logitech C922 camera in the allow list

Choose “OK” and you are ready to restart your instance and test it.

Testing USB passthrough

It is time to get into your VM and try to capture a picture from your camera.

Run vagrant up to start the VM. Open a terminal in the VM by running vagrant ssh.

In the shell run a capture test with this command:

~/guest/bash-capture-loop/capture-one

If the capture was successful, the output is just a single < character. You should see a new .jpg file in the ~/shared directory; the name of the file is a number that represents the UNIX epoch time at which the image was captured. Vagrant provides a shared directory feature that maps ~/shared in the VM to the shared directory on your host, so you can open that single frame on the host and verify that the image is valid.
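For reference, the capture behavior just described could be approximated in Bash like this. This is a hypothetical sketch that assumes ffmpeg and a camera at /dev/video0; it is not the repo’s actual capture-one script:

```shell
# Hypothetical sketch (not the repo's capture-one script): grab a single
# frame from a UVC camera and name the file after the UNIX epoch time.
epoch_jpg() {
  # Print the destination path for a capture taken "now".
  echo "$HOME/shared/$(date +%s).jpg"
}

capture_one() {
  # Assumes ffmpeg is installed and the camera is at /dev/video0.
  ffmpeg -loglevel error -f v4l2 -i /dev/video0 -frames:v 1 "$(epoch_jpg)"
}
```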

Testing your AWS credentials

Open a terminal in the VM, if you haven’t already, by running vagrant ssh. In the terminal run this command:

aws sts get-caller-identity

If your credentials are loaded correctly you should see output like this:

{
  "UserId": "AXXXXXXXXXXXXXXXXXXXXX",
  "Account": "123456789012",
  "Arn": "arn:aws:iam::123456789012:user/me"
}

If your credentials are missing you will see output like this:

Unable to locate credentials. You can configure credentials by running "aws configure".

Validate your AWS credentials by trying this same command on your host system. If it does not work review the Configuration basics documentation and set your credentials up again if necessary.

Installing AWS IoT Greengrass

Open a terminal in the VM, if you haven’t already, by running vagrant ssh. In the terminal run this command:

gg-install

This script will download the latest version of the AWS IoT Greengrass Nucleus and provision AWS IoT Greengrass on your VM. When it completes you should see a message like this:

Successfully set up Nucleus as a system service
Greengrass S3 access policy created [...] for bucket [...]

You can monitor all the AWS IoT Greengrass logs by running:

gg-logs

The gg-logs script will also pick up new log files when they are created so you don’t need to restart it to see what a component is doing.

NOTE: To exit the gg-logs script press CTRL and backslash (CTRL+\).

Deploy the Bash Capture Loop component

The capture-one program you ran before is actually a Bash script. There is another program called capture-loop that wraps capture-one in a loop, captures an image once per second, and keeps the last 10 images in the shared directory.

capture-loop is also a Bash script. It was written as a Bash script to demonstrate that components can be written in any language you like.
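The loop just described could be sketched like this. It is an illustrative approximation, not the repo’s actual capture-loop script:

```shell
# Illustrative sketch (not the repo's capture-loop script): capture an
# image every second and keep only the 10 newest .jpg files.
prune_old_images() {
  # $1 is the directory to prune: list newest first, skip the first 10,
  # and delete the rest. Assumes filenames without spaces.
  ls -t "$1"/*.jpg 2>/dev/null | tail -n +11 | xargs -r rm --
}

capture_loop() {
  while true; do
    ~/guest/bash-capture-loop/capture-one
    prune_old_images "$HOME/shared"
    sleep 1
  done
}
```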

To deploy the Bash Capture Loop component first make sure you have a terminal open that is monitoring the logs with the gg-logs command you ran before. Then open a new terminal and run this command:

gg-cloud-deploy

After a few seconds you should see log messages that look like this:

2022-04-29T15:31:37.495Z [WARN] (Copier) greeneyes.BashCaptureLoop: stderr. <. {scriptName=services.greeneyes.BashCaptureLoop.lifecycle.Run, serviceName=greeneyes.BashCaptureLoop, currentState=RUNNING}

This indicates that the Bash Capture Loop is running. You should also see images showing up on your host computer in the shared directory.

Understanding the cloud deploy command

The gg-cloud-deploy command uses the AWS IoT Greengrass Development Kit Command-Line Interface, also known as GDK, to build, publish, and deploy any AWS IoT Greengrass components in the guest directory for you automatically using the AWS IoT Greengrass cloud services.

It does the following:

  • Validates that the AWS IoT Greengrass CLI from the dev tools package is installed. This is done in the initial deployment by the gg-install script with the --deploy-dev-tools true option.
  • Locates any components in the guest directory that are compatible with GDK. It does this by looking for a gdk-config.json or gdk-config.json.template file.
  • If gdk-config.json is present but not gdk-config.json.template then it uses gdk-config.json without making any modifications. This allows it to support components that are not part of this post.
  • If there is a gdk-config.json.template file it populates any placeholder values and then overwrites gdk-config.json with the updated information. This makes it easier to reuse a component’s configuration across multiple AWS IoT Greengrass VMs without having to update the Amazon Simple Storage Service (S3) bucket each time.
  • Builds each component with the gdk component build command. This creates a ZIP archive of the components that can be published to S3.
  • Publishes each component with the gdk component publish command. This uploads the ZIP archive to S3.
  • Deploys the latest version of each component with gg-cli deployment create. To get the latest version it queries your privately deployed Bash Capture Loop component with the AWS CLI.
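The template-handling step above could be sketched as follows. The function name and the BUCKET_PLACEHOLDER token are invented for illustration; the real script may use different placeholder names:

```shell
# Hypothetical sketch of the template step (placeholder token and
# function name are invented for illustration).
populate_gdk_config() {
  # $1: component directory, $2: S3 bucket name to substitute in.
  if [ -f "$1/gdk-config.json.template" ]; then
    sed "s/BUCKET_PLACEHOLDER/$2/g" \
      "$1/gdk-config.json.template" > "$1/gdk-config.json"
  fi
}
```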

Additional convenience scripts

In addition to gg-cloud-deploy and gg-install there are several convenience scripts in the greengrass directory. By default the greengrass directory is in the vagrant user’s PATH so you can use these tools from anywhere inside the VM. The other scripts that you can use are:

  • gg-cli – Runs the AWS IoT Greengrass CLI
  • gg-names – Prints the names that are used to configure and manage the current VM’s instance of AWS IoT Greengrass
  • gg-password – Prints the username and password information for the AWS IoT Greengrass local developer console
  • gg-start – Starts AWS IoT Greengrass using systemd
  • gg-status – Shows the AWS IoT Greengrass status using systemd
  • gg-stop – Stops AWS IoT Greengrass using systemd

Cleanup

To clean up, remove the AWS resources that the scripts created and then destroy the Vagrant VM.

The resources created by installing AWS IoT Greengrass and running the cloud deploy script are:

  • One S3 bucket for AWS IoT Greengrass deployment artifacts
  • One S3 object per deploy
  • One thing group
  • One AWS IoT certificate
  • Two AWS IoT policies
    • One for the token exchange service to get AWS Security Token Service (STS) credentials from the AWS Identity and Access Management (IAM) role using the AWS IoT Greengrass Core’s certificate
    • One for the AWS IoT Greengrass core to access AWS IoT and AWS IoT Greengrass services
  • One IAM role that can be assumed with the AWS IoT Greengrass core certificate
  • Two IAM policies
    • One default policy created by AWS IoT Greengrass to allow it to send logs to CloudWatch Logs
    • One policy created by gg-install to allow AWS IoT Greengrass to access the S3 bucket gg-cloud-deploy creates for deployment artifacts
  • One AWS IoT role alias to point the AWS IoT Credentials Provider service to the IAM role
  • One AWS IoT Greengrass core device
  • One AWS IoT thing

If you would prefer to delete these resources manually you can run the gg-names script to find the names of the resources.

To do an automated cleanup you can use the Superfluid tool. Download the tool and run this command:

superfluid greeneyes cleanup THING_NAME

Replace THING_NAME with the name of the thing reported by gg-names. This queries AWS for the thing, finds the resources related to that thing for this post series, and cleans them up automatically for you. It first displays the resources it wants to clean up, prompts for confirmation, and then cleans them up. It also logs any operations that failed so those resources can be cleaned up later.

Recap

In this post, you set up an AWS IoT Greengrass component development environment. Using an Ubuntu VM and Vagrant, you deployed an AWS IoT Greengrass component that can capture images from a camera. The VM has a USB passthrough to connect to a USB web camera. You can modify the code for the component inside the VM and redeploy it to see your changes immediately.

Authors

Tim Mattison is a Principal Technologist in the IoT Ecosystem Services group at Amazon Web Services. Originally a firmware engineer, he has moved up and down the stack from Linux kernel drivers to GUIs. He enjoys the challenges involved with removing friction for developers and is always on the lookout for ways to improve. He primarily works on AWS IoT content that shows how to weave multiple services together across the product lifecycle from rapid prototyping to production.
Nenad Ilic is an Internet of Things specialist with more than a decade of experience. Currently, he works as a Senior Developer Advocate at Amazon Web Services, where he helps developers across the industry accelerate their Edge Software Development and builds infrastructure so other developers can express themselves through code. In his free time he likes to experiment with electric skateboards and share his experience with the broader community.