AWS Public Sector Blog
Automating fall detection with AWS DeepLens
What if someone in a hospital room or public train station suddenly falls due to a stroke or another health issue? An automated monitoring system built on AWS DeepLens, a deep learning-enabled video camera for developers, could detect such falls and contact emergency services in a timely manner. Using AWS DeepLens, I created the solution depicted end-to-end in the diagram below.
Detecting when people fall through live monitoring with an AWS DeepLens device is a classification problem: the video captured by AWS DeepLens is analyzed frame by frame. A transformation pipeline, created on the AWS Deep Learning AMI (DLAMI), prepares and transforms the dataset. (In production environments, this pipeline should be built with tools like AWS Glue or Amazon EMR.) Amazon SageMaker then trains an image classification algorithm to predict when a fallen person is detected, and the model is deployed to the device along with an AWS DeepLens inference AWS Lambda function. When a detection is positive, the Internet of Things (IoT) capabilities of AWS DeepLens and an AWS Lambda function trigger an emergency voice call through Amazon Connect.
I built my own dataset from approximately 125 images of empty rooms or of people sitting or standing upright, and another 125 images of people lying on the floor. Although the dataset was balanced and I planned to use transfer learning, the neural network still needed more data samples, so I created a data transformation and augmentation pipeline.
Next, I resized the images to a typical input size for convolutional neural networks (CNNs): 224 x 224 pixels with three RGB channels. For resizing I used ImageMagick, a free image manipulation tool. After that, I performed augmentations and increased the number of images with a Python library called Augmentor, as sketched below.
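A minimal sketch of these two steps might look like the following; the directory name, the specific augmentation operations, and the sample count are illustrative assumptions rather than the exact values I used:

```python
# Resize every image in place to 224 x 224 with ImageMagick (shell command):
#   mogrify -resize 224x224! dataset/*.jpg

import Augmentor

# Build an augmentation pipeline over the resized images
# ("dataset/" is a placeholder path).
p = Augmentor.Pipeline("dataset/")
p.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
p.flip_left_right(probability=0.5)
p.random_brightness(probability=0.3, min_factor=0.8, max_factor=1.2)

# Write 1,000 augmented samples to dataset/output/.
p.sample(1000)
```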
Lastly, I converted the dataset to RecordIO format by activating the MXNet Python 3.6 environment on the DLAMI instance, which provides the im2rec tool. To train the algorithm, I used Amazon SageMaker's built-in image classification algorithm from a Jupyter notebook.
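The conversion can be scripted roughly as follows; the im2rec.py path and the file prefix are assumptions based on where MXNet installs the tool in the DLAMI's mxnet_p36 environment:

```python
import subprocess

# Assumed location of im2rec.py inside the mxnet_p36 conda environment.
IM2REC = "~/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/tools/im2rec.py"

# 1. Generate .lst files listing images and labels (one class per subfolder).
subprocess.run(f"python {IM2REC} --list --recursive fall_train dataset/",
               shell=True, check=True)

# 2. Pack the images into RecordIO (.rec) files for training.
subprocess.run(f"python {IM2REC} --num-thread 4 fall_train dataset/",
               shell=True, check=True)
```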
I defined a bucket variable with the name of the S3 bucket where the training dataset in RecordIO format would be uploaded, starting the name with “deeplens” so the AWS DeepLens service role can access it. I then obtained the built-in algorithm’s container image and defined the training image variable.
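In the notebook, those two definitions can be as simple as this, using the v1-era SageMaker Python SDK that was current at the time (the bucket name is a placeholder):

```python
import boto3
from sagemaker.amazon.amazon_estimator import get_image_uri

# S3 bucket for the RecordIO dataset; the name must start with "deeplens".
bucket = "deeplens-fall-detection-demo"

# Container image for the built-in image classification algorithm
# in the current region.
training_image = get_image_uri(boto3.Session().region_name, "image-classification")
```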
Before starting the training job, I defined the hyperparameters the algorithm uses. An important one was num_training_samples, which I matched to the actual number of training samples in my dataset. After a few tests, I settled on a learning rate of 0.1, 50 network layers, and 100 epochs.
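A sketch of the estimator and hyperparameters follows; the instance type and the num_training_samples value are assumptions you would adapt to your own dataset:

```python
import sagemaker
from sagemaker import get_execution_role

sess = sagemaker.Session()

# Estimator for the built-in algorithm (v1-era SDK parameter names).
classifier = sagemaker.estimator.Estimator(
    training_image,
    get_execution_role(),
    train_instance_count=1,
    train_instance_type="ml.p2.xlarge",   # assumption: a GPU training instance
    output_path=f"s3://{bucket}/output",
    sagemaker_session=sess,
)

classifier.set_hyperparameters(
    num_layers=50,              # network depth
    image_shape="3,224,224",    # channels, height, width
    num_classes=2,              # fallen vs. not fallen
    num_training_samples=200,   # match to your training set size
    learning_rate=0.1,
    epochs=100,
    use_pretrained_network=1,   # transfer learning from a pretrained model
)
```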
Next, I set up the data channels for training, launched a successful training job, and evaluated the model. The training and validation accuracy values were strong enough to move to a real deployment on the AWS DeepLens device.
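Setting up the channels and launching the job might look like this; the S3 prefixes are assumptions:

```python
from sagemaker.session import s3_input

# RecordIO files uploaded to S3 under assumed train/ and validation/ prefixes.
train_data = s3_input(f"s3://{bucket}/train/",
                      content_type="application/x-recordio")
validation_data = s3_input(f"s3://{bucket}/validation/",
                           content_type="application/x-recordio")

# Start the training job; accuracy metrics stream to the notebook logs.
classifier.fit({"train": train_data, "validation": validation_data})
```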
AWS DeepLens deployments must include the AWS DeepLens inference AWS Lambda function. For more information, including AWS Identity and Access Management (IAM) role requirements, see Create and Publish an AWS DeepLens Inference AWS Lambda Function.
The deployment process included creating a new project, importing the model, and selecting the correct AWS DeepLens inference Lambda function for the project. The final step was configuring the alerting system: when AWS DeepLens detects that someone has fallen, it places a voice call to a phone number with Amazon Connect, a cloud-based contact center service.
Making the Amazon Connect API call requires a running Amazon Connect instance, so I set one up and created a contact flow in it for the outbound call API to use.
The AWS DeepLens inference function creates an IoT client and publishes messages with the inference results, as in the sketch below. A conditional check on those messages can trigger a new AWS Lambda function: when the inference is positive (meaning it detects someone has fallen), that function calls the Amazon Connect StartOutboundVoiceContact API to initiate an outgoing call. The destination number is specified in the API call, alerting that someone has fallen in the space monitored by the AWS DeepLens device.
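Here is a minimal sketch of the publishing side, assuming the AWS IoT Greengrass SDK that DeepLens inference Lambda functions run on (the probability value shown is illustrative):

```python
import json
import greengrasssdk

# IoT client available to Lambda functions running on the DeepLens device.
client = greengrasssdk.client("iot-data")
iot_topic = "$aws/things/deeplens_X[your_ARN]/infer"

# After running the model on a frame, publish the label probabilities.
client.publish(topic=iot_topic, payload=json.dumps({"fallen": 0.87}))
```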
I used the following configuration:
- Trigger: AWS IoT
- SQL version: 2016-03-23
- Rule query statement: SELECT fallen AS fallen FROM '$aws/things/deeplens_X[your_ARN]/infer' WHERE fallen > 0.5
This sets a trigger that inspects the IoT messages and evaluates the values of the labels they include. If the probability of the “fallen” label is greater than 0.5, the rule executes the code of the new AWS Lambda function.
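That alerting Lambda function can be a thin wrapper around the Amazon Connect API; in this sketch, the phone numbers, contact flow ID, and instance ID are placeholders you would replace with your own:

```python
import boto3

connect = boto3.client("connect")

def lambda_handler(event, context):
    # Invoked by the IoT rule when the "fallen" probability exceeds 0.5.
    connect.start_outbound_voice_contact(
        DestinationPhoneNumber="+15555550100",  # number to alert (placeholder)
        ContactFlowId="YOUR_CONTACT_FLOW_ID",
        InstanceId="YOUR_CONNECT_INSTANCE_ID",
        SourcePhoneNumber="+15555550199",       # number claimed in Amazon Connect
    )
    return {"status": "alert call placed"}
```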
This project is just one example of what can be built with AWS DeepLens and its integration with the AWS environment. Projects like this one can address many other use cases, such as helping people with reduced mobility.
Learn more about AWS DeepLens. And check out some of my other blog posts on the AWS Public Sector Blog, including “Creating a serverless GPS monitoring and alerting solution,” “Grandma Emergency Button – A Simple Emergency Alert Solution with AWS IoT Button,” “Using a Serverless Architecture to Collect and Prioritize Citizen Feedback,” and “Develop and Extract Value from Open Data.”
Subscribe to the AWS Public Sector Blog newsletter to get the latest in AWS tools, solutions, and innovations from the public sector delivered to your inbox, or contact us.