AWS Machine Learning Blog
Find Distinct People in a Video with Amazon Rekognition
Note: AWS released Amazon Rekognition Video on November 29, 2017, which is now the preferred approach for analyzing videos and finding distinct people. Nevertheless, we continue to make this blog post available as an educational example of how to use Amazon Rekognition.
Amazon Rekognition makes it easy to detect, search for, and compare faces in images to find matches. In this post, we show how to use Amazon Rekognition to find distinct people in a video and identify the frames that they appear in. You could use face detection in videos, for example, to identify actors in a movie, find relatives and friends in a personal video library, or track people in video surveillance.
First, we explain how the serverless solution finds distinct people in a video. Then, we explain how to implement the solution in your AWS account with AWS CloudFormation and how to test it with a sample video.
How it works
The following diagram shows how this solution works:
Amazon Rekognition currently supports image analysis only. Therefore, we need to extract frames of the input video into images. To create these video thumbnails, we use Amazon Elastic Transcoder, a service that makes it easy to convert media files in the cloud with no need to manage the underlying infrastructure.
This is what happens in greater detail:
- You upload a video file into an S3 bucket.
- Amazon S3 invokes the first of the two AWS Lambda functions to create a new job in Amazon Elastic Transcoder (a sketch of this function follows this list).
- The Elastic Transcoder job creates video thumbnails in .png format for every second of input video and uploads them into the S3 bucket. (It also creates a transcoded video, which we don’t use for this post.)
- When the job completes, Elastic Transcoder publishes a notification to an Amazon Simple Notification Service (Amazon SNS) topic, and Amazon SNS invokes the second Lambda function.
- The second Lambda function creates a new collection in Amazon Rekognition. A collection is a container for the faces that Amazon Rekognition detects in images with the IndexFaces operation. Note that the image bytes don’t persist in Amazon Rekognition. Instead, Amazon Rekognition extracts and stores facial features in the collection. The function then retrieves the list of thumbnail objects that Elastic Transcoder created for that video in the S3 bucket and does the following:
- Calls the IndexFaces operation for each thumbnail (a sketch of these concurrent calls follows this list). The solution uses concurrent threads to increase the throughput of requests to Amazon Rekognition and to reduce the time needed to complete the operation. In the end, the collection contains one entry for every face detected across all of the thumbnails.
- For each face stored in the collection, calls the SearchFaces operation to search for similar faces with a match confidence higher than 97 percent (a sketch of these calls also follows this list).
- Finds the faces in the collection that match each detected face and groups them into distinct people. The function starts from the first face that appears in the video and associates that face with a peopleId of 1. Then, it recursively propagates the peopleId to the matching faces. In other words, if faceA matches faceB and faceB matches faceC, the function decides that faceA, faceB, and faceC correspond to the same person and assigns them all the same peopleId. To avoid false positives, the Lambda function propagates the peopleId from faceA to faceB only if at least two of the faces that match faceB also match faceA. When peopleId 1 has fully propagated, the function assigns a peopleId of 2 to the next face appearing in the video that has no associated peopleId. It continues this process until every face has a peopleId (a sketch of this propagation follows this list).
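Here is a minimal sketch of the first Lambda function, which creates the Elastic Transcoder job. The PIPELINE_ID and PRESET_ID environment variables and the output key prefixes are assumptions for illustration, not the exact code of the solution:

```python
import os
import urllib.parse

import boto3

transcoder = boto3.client('elastictranscoder')

def lambda_handler(event, context):
    # Read the object key of the uploaded video from the S3 event notification
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])

    # Create an Elastic Transcoder job that produces a transcoded video and,
    # per the preset, one PNG thumbnail per second of input video
    transcoder.create_job(
        PipelineId=os.environ['PIPELINE_ID'],     # assumed environment variable
        Input={'Key': key},
        Output={
            'Key': 'transcoded/' + key,
            'ThumbnailPattern': 'thumbnails/' + key + '-{count}',
            'PresetId': os.environ['PRESET_ID'],  # assumed environment variable
        },
    )
```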
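Next, a sketch of how the second Lambda function could index the thumbnails with concurrent threads. The function name, the thumbnail_keys argument, and the use of the thumbnail file name as ExternalImageId are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

import boto3

rekognition = boto3.client('rekognition')

def index_thumbnails(bucket, collection_id, thumbnail_keys):
    """Index the faces of every thumbnail into the collection, using a
    thread pool to issue several IndexFaces requests in parallel."""
    rekognition.create_collection(CollectionId=collection_id)

    def index_one(key):
        response = rekognition.index_faces(
            CollectionId=collection_id,
            Image={'S3Object': {'Bucket': bucket, 'Name': key}},
            # Store the thumbnail file name with each face so that faces
            # can be mapped back to the frame they were detected in
            ExternalImageId=key.rsplit('/', 1)[-1],
        )
        return response['FaceRecords']

    with ThreadPoolExecutor(max_workers=10) as executor:
        results = executor.map(index_one, thumbnail_keys)
    # Flatten the per-thumbnail results into a single list of faces
    return [record for records in results for record in records]
```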
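The following sketch shows the SearchFaces calls, reusing the rekognition client from the previous sketch. The 97 percent threshold comes from the solution; the function name and the shape of the matches dictionary are assumptions:

```python
def find_similar_faces(collection_id, face_ids):
    """For each face in the collection, list the faces that match it
    with a similarity confidence of at least 97 percent."""
    matches = {}
    for face_id in face_ids:
        response = rekognition.search_faces(
            CollectionId=collection_id,
            FaceId=face_id,
            FaceMatchThreshold=97,
        )
        matches[face_id] = [match['Face']['FaceId']
                            for match in response['FaceMatches']]
    return matches
```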
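Finally, here is one way the peopleId propagation could be implemented. It assumes that ordered_face_ids lists the face IDs in order of first appearance in the video and that matches is the dictionary built by the previous sketch:

```python
def assign_people_ids(ordered_face_ids, matches):
    """Group matching faces into distinct people."""
    people = {}         # maps each face ID to its peopleId
    next_people_id = 1

    def propagate(face_a, people_id):
        people[face_a] = people_id
        for face_b in matches[face_a]:
            if face_b in people:
                continue
            # Propagate only if at least two faces matching faceB also
            # match faceA, to avoid false positives
            if len(set(matches[face_b]) & set(matches[face_a])) >= 2:
                propagate(face_b, people_id)

    # Start from the first face appearing in the video; once a peopleId
    # has fully propagated, move to the next face without a peopleId
    for face_id in ordered_face_ids:
        if face_id not in people:
            propagate(face_id, next_people_id)
            next_people_id += 1
    return people
```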
In our solution, we arbitrarily chose to return people that appear in at least five consecutive frames. The Lambda function then creates and uploads a JSON file with the results to the S3 bucket.
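A minimal sketch of this step follows. The output key, the people_frames structure, and the helper function are assumptions for illustration:

```python
import json

import boto3

s3 = boto3.client('s3')

MIN_CONSECUTIVE_FRAMES = 5  # arbitrary threshold chosen for this solution

def has_consecutive_run(frames, length):
    """Return True if the sorted frame list contains a run of `length`
    consecutive frames."""
    run = 1
    for previous, current in zip(frames, frames[1:]):
        run = run + 1 if current == previous + 1 else 1
        if run >= length:
            return True
    return run >= length

def upload_results(bucket, video_key, people_frames):
    """Keep only the people seen in enough consecutive frames and upload
    the result as a JSON file to the S3 bucket."""
    results = {
        'people': [
            {'peopleId': people_id, 'frames': frames}
            for people_id, frames in sorted(people_frames.items())
            if has_consecutive_run(frames, MIN_CONSECUTIVE_FRAMES)
        ]
    }
    s3.put_object(
        Bucket=bucket,
        Key='output/' + video_key + '.json',
        Body=json.dumps(results),
        ContentType='application/json',
    )
```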
It also creates and uploads a visual representation to the S3 bucket. You will see an example in the next section. Finally, the Lambda function deletes the collection from Amazon Rekognition.
Implementing and testing the solution
To implement and test the solution in your AWS account, you will use AWS CloudFormation to provision the required resources in the US East (N. Virginia) Region.
CloudFormation creates the following resources:
- An S3 bucket that stores input videos, video thumbnails, and the files created with this solution.
- An SNS topic where Elastic Transcoder publishes an event when a job completes.
- An IAM role that grants Elastic Transcoder the required permissions to access Amazon S3 and Amazon SNS.
- A pipeline and a preset in Elastic Transcoder. The pipeline is a queue for Elastic Transcoder jobs that defines how input and output files are stored in Amazon S3 and which notifications to send. The preset specifies settings, including thumbnail settings, for transcoding media files (a sketch of the thumbnail settings follows this list).
- An IAM role that grants Lambda the required permissions to access Amazon S3 and Amazon Rekognition.
- A Lambda function that Amazon S3 invokes when a new video is uploaded into the S3 bucket.
- The second Lambda function that Amazon SNS invokes. This Lambda function processes the video thumbnails to find distinct people.
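For reference, the thumbnail portion of the preset could look like the following; apart from the PNG format and the one-second interval mentioned earlier, the values are assumptions:

```python
# Thumbnails section of the Elastic Transcoder preset (passed as the
# Thumbnails parameter of the CreatePreset API call)
thumbnail_settings = {
    'Format': 'png',    # PNG thumbnails, as expected by the solution
    'Interval': '1',    # one thumbnail per second of input video
    'MaxWidth': '1920',          # assumed dimensions
    'MaxHeight': '1080',
    'SizingPolicy': 'ShrinkToFit',
    'PaddingPolicy': 'NoPad',
}
```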
Some of the resources that AWS CloudFormation creates are custom resources backed by Lambda functions. Therefore, AWS CloudFormation creates these Lambda functions and their IAM roles beforehand.
To deploy and test the solution
- Choose Create stack to create an AWS CloudFormation stack. Then, follow the on-screen instructions.
After creating these resources, AWS CloudFormation copies the video Democratizing LoRaWAN and IoT with The Things Network into the S3 bucket, which saves you from uploading a test video manually. This upload triggers the solution. It can take up to 10 minutes after you start creating the stack for the solution to process the video.
- After the video has been processed, open the AWS CloudFormation console, choose Outputs, and note the name of the S3 bucket.
- Open the Amazon S3 console to browse the objects in this S3 bucket. You should see a new folder called output, which contains two files: the JSON document and the visual representation in .png format, as follows:
The solution detected seven people in the video. For each person, the visual representation shows four randomly selected views of that person’s face, with red vertical lines indicating the frames in which that person appears.
- You can now clean up the resources by deleting the AWS CloudFormation stack. AWS CloudFormation does not delete the S3 bucket because it contains objects, so you need to delete the S3 bucket manually.
Conclusion
In this post, we’ve shown how to use Amazon Rekognition, Amazon Elastic Transcoder, AWS Lambda, and Amazon S3 to identify people who appear in a video and to detect the frames in which they appear.
You can adapt this solution to your own requirements. For example, you could return additional attributes for the people that the solution finds, such as an estimated age range, or their names if they are celebrities.
If you have comments, submit them in the Comments section. If you have questions, start a new thread on the Amazon Rekognition forum.
Next Steps
Take your knowledge to the next level. Learn how to classify a large number of images with Amazon Rekognition and AWS Batch.
About the Authors
Nicolas Malaval is a Consultant for AWS Professional Services. He lives in Paris and works with our enterprise customers, helping them adopt cloud technology and innovate with AWS.
Rudy Krol is a Solutions Architect for Amazon Web Services. He gained experience in software development before joining AWS. He now specializes in serverless and IoT, helping our customers in France embrace the latest technologies in their innovative projects.