AWS Machine Learning Blog
Build a work-from-home posture tracker with AWS DeepLens and GluonCV
April 2023 Update: Starting January 31, 2024, you will no longer be able to access AWS DeepLens through the AWS Management Console, manage DeepLens devices, or access any projects you have created. To learn more, refer to these frequently asked questions about AWS DeepLens end of life.
Working from home can be a big change to your ergonomic setup, which can make it hard to keep a healthy posture and take frequent breaks throughout the day. To help you maintain good posture and have fun with machine learning (ML) in the process, this post shows you how to build a posture tracker project with AWS DeepLens, the AWS programmable video camera for developers to learn ML. You will learn how to use the latest pose estimation ML models from GluonCV to map out body points from profile images of yourself working from home and send yourself text message alerts whenever your code detects bad posture. GluonCV is a computer vision library built on top of the Apache MXNet ML framework that provides off-the-shelf ML models from state-of-the-art deep learning research. With the ability to run GluonCV models on AWS DeepLens, engineers, researchers, and students can quickly prototype products, validate new ideas, and learn computer vision. In addition to detecting bad posture, you will learn to analyze your posture data over time with Amazon QuickSight, an AWS service that lets you easily create and publish interactive dashboards from your data.
This tutorial includes the following steps:
- Experiment with AWS DeepLens and GluonCV
- Classify postures with the GluonCV pose key points
- Deploy pre-trained GluonCV models to AWS DeepLens
- Send text message reminders to stretch when the tracker detects bad posture
- Visualize your posture data over time with Amazon QuickSight
The following diagram shows the architecture of our posture tracker solution.
Prerequisites
Before you begin this tutorial, make sure you have the following prerequisites:
- An AWS account
- An AWS DeepLens device, available on Amazon websites in supported regions
Experimenting with AWS DeepLens and GluonCV
Normally, AWS developers use Jupyter notebooks hosted in Amazon SageMaker to experiment with GluonCV models. Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text. In this tutorial, you create and run Jupyter notebooks directly on the AWS DeepLens device, just as you would on any other Linux computer, to enable rapid experimentation.
Starting with AWS DeepLens software version 1.4.5, you can run GluonCV pretrained models directly on AWS DeepLens. To check the version number and update your software, go to the AWS DeepLens console, select your DeepLens device under Devices, and look at the Device status section. You should see a version number similar to the one in the following screenshot.
To start experimenting with GluonCV models on DeepLens, complete the following steps:
- SSH into your AWS DeepLens device.
To do so, you need the IP address of AWS DeepLens on the local network. To find the IP address, select your device on the AWS DeepLens console. Your IP address is listed in the Device Details section.
You also need to make sure that SSH is enabled for your device. For more information about enabling SSH on your device, see View or Update Your AWS DeepLens 2019 Edition Device Settings.
Open a terminal application on your computer. SSH into your DeepLens by entering the following code into your terminal application:
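A minimal example, assuming the default DeepLens username aws_cam (replace the placeholder with your device's IP address):
```
ssh aws_cam@<your-deeplens-ip-address>
```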
When you see a password prompt, enter the SSH password you chose when you set up SSH on your device.
- Install Jupyter notebook and GluonCV on your DeepLens. Enter each of the following commands one at a time in the SSH terminal. Press Enter after each line entry.
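For example, a sketch assuming the device's pre-installed Python 3 environment (exact package versions may vary):
```
sudo python3 -m pip install --upgrade pip
sudo python3 -m pip install jupyter
sudo python3 -m pip install gluoncv
```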
- Generate a default configuration file for Jupyter notebook:
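For example:
```
jupyter notebook --generate-config
```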
- Edit the Jupyter configuration file in your SSH session to allow access to the Jupyter notebook running on AWS DeepLens from your laptop.
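For example, using the nano editor and the default config location created in the previous step:
```
nano ~/.jupyter/jupyter_notebook_config.py
```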
- Add the following lines to the top of the config file:
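A minimal sketch that listens on all network interfaces without opening a browser; adjust the port and security settings to your own requirements:
```python
c.NotebookApp.ip = '0.0.0.0'
c.NotebookApp.open_browser = False
c.NotebookApp.port = 8888
```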
- Save the file (if you are using the nano editor, press Ctrl+X and then Y).
- Open up a port in the AWS DeepLens firewall to allow traffic to Jupyter notebook. See the following code:
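A sketch assuming the default Ubuntu ufw firewall on the device and the Jupyter port configured above:
```
sudo ufw allow 8888
```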
- Run the Jupyter notebook server with the following code:
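For example (the configuration above already disables the browser and sets the port):
```
jupyter notebook
```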
You should see output like the following screenshot:
- Copy the link from the terminal output and replace the IP portion (DeepLens or 127.0.0.1) with the IP address of your AWS DeepLens device. For example, the URL based on the preceding screenshot is http://10.0.0.250:8888/?token=7adf9c523ba91f95cfc0ba3cacfc01cd7e7b68a271e870a8.
- Enter this link into your laptop web browser.
You should see something like the following screenshot.
- Choose New to create a new notebook.
- Choose Python3.7.
Capturing a frame from your camera
To capture a frame from the camera, first make sure you aren’t running any projects on AWS DeepLens.
- On the AWS DeepLens console, go to your device page.
- If a project is deployed, you should see a project name in the Current Project pane. Choose Remove Project if there is a project deployed to your AWS DeepLens.
- Now go back to the Jupyter notebook running on your AWS DeepLens and enter the following code into your first code cell:
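A minimal sketch using the awscam module that ships with the AWS DeepLens software:
```python
import awscam

# Grab the most recent frame from the DeepLens video stream
ret, frame = awscam.getLastFrame()
print(frame.shape)
```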
- Press Shift+Enter to execute the code inside the cell.
Alternatively, you can press the Run button in the Jupyter toolbar as shown in the screenshot below:
You should see the size of the image captured by AWS DeepLens similar to the following text:
The three numbers show the height, width, and number of color channels (red, green, blue) of the image.
- To view the image, enter the following code in the next code cell:
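For example, using matplotlib (the frame from awscam is in BGR order, so convert it to RGB before displaying):
```python
%matplotlib inline
import cv2
from matplotlib import pyplot as plt

plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
plt.show()
```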
You should see an image similar to the following screenshot:
Detecting people and poses
Now that you have an image, you can use GluonCV pre-trained models to detect people and poses. For more information, see Predict with pre-trained Simple Pose Estimation models from the GluonCV model zoo.
- In a new code cell, enter the following code to import the necessary dependencies:
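A sketch of the typical imports for the GluonCV pose estimation workflow:
```python
import mxnet as mx
from gluoncv import model_zoo, data, utils
from gluoncv.data.transforms.pose import detector_to_simple_pose, heatmap_to_coord
```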
- You load two pre-trained models, one to detect people (yolo3_mobilenet1.0_coco) in the frame and one to detect the pose (simple_pose_resnet18_v1b) for each person detected. To load the pre-trained models, enter the following code in a new code cell:
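For example:
```python
people_detector = model_zoo.get_model('yolo3_mobilenet1.0_coco', pretrained=True)
pose_detector = model_zoo.get_model('simple_pose_resnet18_v1b', pretrained=True)
```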
- Because the yolo3_mobilenet1.0_coco pre-trained model is trained to detect many types of objects in addition to people, the code below narrows down the detection criteria to just people so that the model runs faster. For more information about the other types of objects that the model can predict, see the GluonCV MSCoco Detection source code.
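For example:
```python
# Restrict the detector to the "person" class, reusing its pre-trained weights for that class
people_detector.reset_class(['person'], reuse_weights=['person'])
```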
- The following code shows how to use the people detector to detect people in the frame. The outputs of the people detector are the class_IDs (just “person” in this use case because we’ve limited the model’s search scope), the confidence scores, and a bounding box around each person detected in the frame.
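A sketch of running the people detector on the captured frame; the YOLO test transform resizes and normalizes the image:
```python
# Convert the captured BGR frame to an RGB MXNet NDArray
rgb_nd = mx.nd.array(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB), dtype='uint8')

# Resize and normalize the frame for the detector
x, img = data.transforms.presets.yolo.transform_test(rgb_nd, short=512)

class_IDs, scores, bounding_boxs = people_detector(x)
```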
- Enter the following code to feed the results from the people detector into the pose detector for each person found. Normally you need to use the bounding boxes to crop out each person found in the frame by the people detector, then resize each cropped person image into appropriately sized inputs for the pose detector. Fortunately GluonCV comes with a detector_to_simple_pose function that takes care of cropping and resizing for you.
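For example, following the standard GluonCV Simple Pose workflow:
```python
# Crop and resize each detected person into inputs for the pose detector
pose_input, upscale_bbox = detector_to_simple_pose(img, class_IDs, scores, bounding_boxs)

# Predict heatmaps and convert them to keypoint coordinates
predicted_heatmap = pose_detector(pose_input)
pred_coords, confidence = heatmap_to_coord(predicted_heatmap, upscale_bbox)
```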
- The following code overlays the results of the pose detector onto the original image so you can visualize the result:
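For example:
```python
ax = utils.viz.plot_keypoints(img, pred_coords, confidence,
                              class_IDs, bounding_boxs, scores,
                              box_thresh=0.5, keypoint_thresh=0.2)
plt.show()
```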
After completing steps 1-6, you should see an image similar to the following screenshot.
If you get an error similar to the ValueError output below, make sure you have at least one person in the camera’s view.
So far, you have experimented with a pose detector on AWS DeepLens using Jupyter notebooks. You can now collect some data to figure out how to detect when someone is hunching, sitting, or standing. To collect data, you can save the image frame from the camera to disk using the built-in OpenCV module. See the following code:
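A minimal sketch (the file name is just an example):
```python
import cv2

# Save the current frame to disk as a JPEG for later analysis
cv2.imwrite('posture_sample.jpg', frame)
```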
Classifying postures with the GluonCV pose key points
After you have collected a few samples of different postures, you can start to detect bad posture by applying some rudimentary rules.
Understanding the GluonCV pose estimation key points
The GluonCV pose estimation model outputs 17 key points for each person detected. In this section, you see how those points are mapped to human body joints and how to apply simple rules to determine if a person is sitting, standing, or hunching.
This solution makes the following assumptions:
- The camera sees your entire body from head to toe, regardless of whether you are sitting or standing
- The camera sees a profile view of your body
- No obstacles exist between the camera and the subject
The following is an example input image. We’ve asked the actor in this image to face the camera instead of showing the profile view to illustrate the key body joints produced by the pose estimation model.
The following image is the output of the model drawn as lines and key points onto the input image. The cyan rectangle shows where the people detector thinks a person is in the image.
The following code shows the raw results of the pose detector. The code comments show how each entry maps to a point on the human body:
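A sketch of inspecting the keypoints, annotated with the standard COCO keypoint order used by the Simple Pose models:
```python
print(pred_coords)  # shape: (num_people, 17, 2) -- one (x, y) pair per keypoint
# Index -> body joint (COCO keypoint order):
#  0: nose           1: left eye        2: right eye
#  3: left ear       4: right ear
#  5: left shoulder  6: right shoulder
#  7: left elbow     8: right elbow
#  9: left wrist    10: right wrist
# 11: left hip      12: right hip
# 13: left knee     14: right knee
# 15: left ankle    16: right ankle
```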
Deploying pre-trained GluonCV models to AWS DeepLens
In the following steps, you convert your code written in the Jupyter notebook to an AWS Lambda inference function to run on AWS DeepLens. The inference function optimizes the model to run on AWS DeepLens and feeds each camera frame into the model to get predictions.
This tutorial provides an example inference Lambda function for you to use. You can also copy and paste code sections directly from the Jupyter notebook you created earlier into the Lambda code editor.
Before creating the Lambda function, you need an Amazon Simple Storage Service (Amazon S3) bucket to save the results of your posture tracker for analysis in Amazon QuickSight. If you don’t have an Amazon S3 Bucket, see How to create an S3 bucket.
To create a Lambda function to deploy to AWS DeepLens, complete the following steps:
- Download aws-deeplens-posture-lambda.zip onto your computer.
- On the Lambda console, choose Create Function.
- Choose Author from scratch and choose the following options:
- For Runtime, choose Python 3.7.
- For Choose or create an execution role, choose Use an existing role.
- For Existing role, enter service-role/AWSDeepLensLambdaRole.
- After you create the function, go to function’s detail page.
- For Code entry type, choose Upload zip.
- Upload the aws-deeplens-posture-lambda.zip you downloaded earlier.
- Choose Save.
- In the AWS Lambda code editor, select the lambda_function.py file and enter an Amazon S3 bucket where you want to store the results.
- Choose Save.
- From the Actions drop-down menu, choose Publish new version.
- Enter a version number and choose Publish. Publishing the function makes it available on the AWS DeepLens console so you can add it to your custom project.
- Give your AWS DeepLens Lambda function permission to put files in the Amazon S3 bucket. Inside your Lambda function editor, click Permissions, then click the AWSDeepLensLambdaRole role name.
- You will be directed to the IAM editor for the AWSDeepLensLambdaRole role. Inside the IAM role editor, click Attach Policies.
- Type in S3 to search for the AmazonS3 policy and check the AmazonS3FullAccess policy. Click Attach Policy.
Understanding the Lambda function
This section walks you through some important parts of the Lambda function.
You load the GluonCV model with the following code:
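A sketch of what the model loading might look like; the provided Lambda function may differ in details:
```python
from gluoncv import model_zoo

# Download the pre-trained models onto the device the first time the function runs
people_detector = model_zoo.get_model('yolo3_mobilenet1.0_coco', pretrained=True)
pose_detector = model_zoo.get_model('simple_pose_resnet18_v1b', pretrained=True)
people_detector.reset_class(['person'], reuse_weights=['person'])
```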
You run the model frame-per-frame over the images from the camera with the following code:
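A simplified sketch of the inference loop (error handling omitted; classify_posture is a hypothetical helper that applies the geometric rules described below):
```python
import awscam
import cv2
import mxnet as mx
from gluoncv import data
from gluoncv.data.transforms.pose import detector_to_simple_pose, heatmap_to_coord

while True:
    ret, frame = awscam.getLastFrame()
    rgb_nd = mx.nd.array(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB), dtype='uint8')
    x, img = data.transforms.presets.yolo.transform_test(rgb_nd, short=512)
    class_IDs, scores, bounding_boxs = people_detector(x)
    pose_input, upscale_bbox = detector_to_simple_pose(img, class_IDs, scores, bounding_boxs)
    if len(upscale_bbox) > 0:
        predicted_heatmap = pose_detector(pose_input)
        pred_coords, confidence = heatmap_to_coord(predicted_heatmap, upscale_bbox)
        posture = classify_posture(pred_coords)  # hypothetical helper: 'sitting', 'standing', or 'hunching'
```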
The following code shows you how to send the text prediction results back to the cloud. Viewing the text results in the cloud is a convenient way to make sure the model is working correctly. Each AWS DeepLens device has a dedicated iot_topic automatically created to receive the inference results.
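A sketch of publishing to that topic with the Greengrass SDK (the message format is illustrative; the topic format follows the DeepLens sample projects):
```python
import os
import json
import greengrasssdk

client = greengrasssdk.client('iot-data')
iot_topic = '$aws/things/{}/infer'.format(os.environ['AWS_IOT_THING_NAME'])

client.publish(topic=iot_topic, payload=json.dumps({'posture': posture}))
```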
Using the preceding key points, you can apply the geometric rules shown in the following sections to calculate angles between the body joints to determine if the person is sitting, standing, or hunching. You can change the geometric rules to suit your setup. As a follow-up activity to this tutorial, you can collect the pose data and train a simple ML model to more accurately predict when someone is standing or sitting.
Sitting vs. Standing
To determine if a person is standing or sitting, use the angle between the horizontal (ground) and the line connecting the hip and knee.
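A sketch of one possible rule, assuming image coordinates (y-axis pointing down) and a hypothetical threshold:
```python
import math

def angle_from_horizontal(p1, p2):
    """Angle in degrees (0-90) between the line p1->p2 and the horizontal."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    angle = abs(math.degrees(math.atan2(dy, dx)))
    return min(angle, 180 - angle)

def is_standing(hip, knee, threshold=60):
    # Standing: the hip-knee line is close to vertical (large angle from the ground).
    # Sitting: the thigh is closer to horizontal (small angle).
    return angle_from_horizontal(hip, knee) > threshold
```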
Hunching
When a person hunches, their head is typically looking down and their back is crooked. You can use the angles between the ear and shoulder and the shoulder and hip to determine if someone is hunching. Again, you can modify these geometric rules as you see fit. The following code inside the provided AWS DeepLens Lambda function determines if a person is hunching:
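An illustrative sketch; the threshold and exact rules in the provided Lambda function may differ:
```python
# Uses angle_from_horizontal from the previous sketch
def is_hunching(ear, shoulder, hip, threshold=65):
    # When upright, both the ear-shoulder (neck) line and the shoulder-hip (back) line
    # are nearly vertical, i.e. close to 90 degrees from the horizontal. Hunching tilts
    # the head forward and curves the back, reducing these angles.
    neck_angle = angle_from_horizontal(shoulder, ear)
    back_angle = angle_from_horizontal(hip, shoulder)
    return neck_angle < threshold or back_angle < threshold
```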
Deploying the Lambda inference function to your AWS DeepLens device
To deploy your Lambda inference function to your AWS DeepLens device, complete the following steps:
- On the AWS DeepLens console, under Projects, choose Create new project.
- Choose Create a new blank project.
- For Project name, enter posture-tracker.
- Choose Add model.
To deploy a project, AWS DeepLens requires you to select a model and a Lambda function. In this tutorial, you download the GluonCV models directly onto AWS DeepLens from inside your Lambda function, so you can choose any existing model on the AWS DeepLens console to be deployed. The model selected on the AWS DeepLens console only serves as a stub and isn't used in the Lambda function. If you don't have an existing model, deploy a sample project and select the sample model.
- Choose Add function.
- Choose the Lambda function you created earlier.
- Choose Create.
- Select your newly created project and choose Deploy to device.
- On the Target device page, select your device from the list.
- Choose Review.
- On the Review and deploy page, choose Deploy.
To verify that the project has deployed successfully, you can check the text prediction results sent back to the cloud via AWS IoT Greengrass. For instructions on how to view the text results, see Viewing text output of custom model in AWS IoT Greengrass.
In addition to the text results, you can view the pose detection results overlaid on top of your AWS DeepLens live video stream. For instructions on viewing the live video stream, see Viewing AWS DeepLens Output Streams.
The following screenshot shows what you will see in the project stream:
Sending text reminders to stand and stretch
In this section, you use Amazon Simple Notification Service (Amazon SNS) to send reminder text messages when your posture tracker determines that you have been sitting or hunching for an extended period of time.
- Register a new SNS topic to publish messages to.
- After you create the topic, copy and save the topic ARN, which you need to refer to in the AWS DeepLens Lambda inference code.
- Subscribe your phone number to receive messages posted to this topic.
Amazon SNS sends a confirmation text message before your phone number can receive messages.
You can now change the access policy for the SNS topic to allow AWS DeepLens to publish to the topic.
- On the Amazon SNS console, choose Topics.
- Choose your topic.
- Choose Edit.
- On the Access policy tab, enter the following code. Be sure to replace YOUR_AWS_ACCOUNT_ID with your AWS account ID (see How to find your Account ID).
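A minimal example policy (illustrative; the topic name, Region, and principal are placeholders, and you may want to scope the principal down to the role used by your DeepLens Lambda function):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::YOUR_AWS_ACCOUNT_ID:root" },
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:us-east-1:YOUR_AWS_ACCOUNT_ID:posture-tracker"
    }
  ]
}
```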
- Update the AWS DeepLens Lambda function with the ARN for the SNS topic. See the following code:
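A sketch of how the function might publish a reminder (the constant name and message are illustrative; replace the ARN with the one you copied earlier):
```python
import boto3

sns = boto3.client('sns')
SNS_TOPIC_ARN = 'arn:aws:sns:us-east-1:YOUR_AWS_ACCOUNT_ID:posture-tracker'

sns.publish(TopicArn=SNS_TOPIC_ARN,
            Message='You have been hunching for a while. Time to stand up and stretch!')
```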
Visualizing your posture data over time with Amazon QuickSight
This next section shows you how to visualize your posture data with Amazon QuickSight. You first need to store the posture data in Amazon S3.
Storing the posture data in Amazon S3
The following code example records posture data once every second; you can adjust this interval to suit your needs. The code writes the records to a CSV file every 60 seconds and uploads the results to the Amazon S3 bucket you created earlier.
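A simplified sketch of that logic (the bucket name, key layout, and classify_posture helper are illustrative):
```python
import csv
import time
from datetime import datetime

import boto3

s3 = boto3.client('s3')
BUCKET = 'your-posture-data-bucket'  # the S3 bucket you created earlier

records = []
while True:
    posture = classify_posture(pred_coords)  # hypothetical helper from the inference loop
    records.append([datetime.utcnow().isoformat(), posture])

    # Every 60 records (about one minute), write a CSV file and upload it to Amazon S3
    if len(records) >= 60:
        file_name = 'postures-{}.csv'.format(int(time.time()))
        local_path = '/tmp/' + file_name
        with open(local_path, 'w', newline='') as f:
            csv.writer(f).writerows(records)
        s3.upload_file(local_path, BUCKET, 'postures/' + file_name)
        records = []

    time.sleep(1)
```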
Your Amazon S3 bucket now starts to fill up with CSV files containing posture data. See the following screenshot.
Using Amazon QuickSight
You can now use Amazon QuickSight to create an interactive dashboard to visualize your posture data. First, make sure that Amazon QuickSight has access to the S3 bucket with your pose data.
- On the Amazon QuickSight console, from the menu bar, choose Manage QuickSight.
- Choose Security & permissions.
- Choose Add or remove.
- Select Amazon S3.
- Choose Select S3 buckets.
- Select the bucket containing your pose data.
- Choose Update.
- On the Amazon QuickSight landing page, choose New analysis.
- Choose New data set.
You see a variety of options for data sources.
- Choose S3.
A pop-up window appears that asks for your data source name and manifest file. A manifest file tells Amazon QuickSight where to look for your data and how your dataset is structured.
- To build a manifest file for your posture data files in Amazon S3, open your preferred text editor and enter the following code:
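A minimal example manifest, assuming your CSV files have no header row and sit under a postures/ prefix in the bucket you created earlier (replace the bucket name):
```json
{
  "fileLocations": [
    {
      "URIPrefixes": [
        "https://s3.amazonaws.com/your-posture-data-bucket/postures/"
      ]
    }
  ],
  "globalUploadSettings": {
    "format": "CSV",
    "delimiter": ",",
    "containsHeader": "false"
  }
}
```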
- Save the text file with the name manifest.json.
- In the New S3 data source window, select Upload.
- Upload your manifest file.
- Choose Connect.
If you set up the data source successfully, you see a confirmation window like the following screenshot.
To troubleshoot any access or permissions errors, see How do I allow Amazon QuickSight access to my S3 bucket when I have a deny policy?
- Choose Visualize.
You can now experiment with the data to build visualizations. See the following screenshot.
The following bar graphs show visualizations you can quickly make with the posture data.
For instructions on creating more complex visualizations, see Tutorial: Create an Analysis.
Conclusion
In this post, you learned how to use Jupyter notebooks to prototype with AWS DeepLens, deploy a pre-trained GluonCV pose detection model to AWS DeepLens, send text messages using Amazon SNS based on triggers from the pose model, and visualize the posture data with Amazon QuickSight. You can deploy other GluonCV pre-trained models to AWS DeepLens or replace the hard-coded rules for classifying standing and sitting positions with a robust machine learning model. You can also dive deeper with Amazon QuickSight to reveal posture patterns over time.
For a detailed walkthrough of this tutorial and other tutorials, sample code, and project ideas with AWS DeepLens, see AWS DeepLens Recipes.
About the Authors
Phu Nguyen is a Product Manager for AWS DeepLens. He builds products that give developers of any skill level an easy, hands-on introduction to machine learning.
Raj Kadiyala is an AI/ML Tech Business Development Manager in the AWS WWPS Partner Organization. Raj has over 12 years of experience in machine learning and likes to spend his free time exploring machine learning for practical everyday solutions and staying active in the great outdoors of Colorado.