Detect, Analyze, and Compare Faces with Amazon Rekognition

TUTORIAL

Introduction

In this tutorial, you will learn how to use the face recognition features in Amazon Rekognition using the AWS Management Console. Amazon Rekognition is a deep learning-based image and video analysis service.

As a developer, you might face the challenge of facial recognition and comparison if you are developing an employee verification system, or need to automate video editing or provide secondary authentication for other applications. To solve this, you could develop your own machine learning model, develop an API, and manage your own infrastructure. This option is expensive, requires advanced knowledge, and is time intensive.

An easier route is to use Amazon Rekognition, which can detect faces in an image or video, find facial landmarks such as the position of eyes, and detect emotions such as happy or sad, in near real time or in batches, without managing infrastructure or building your own model.

In this tutorial, you will use Amazon Rekognition to analyze an image and then compare it to other images to see if the faces are the same.

This tutorial is a demo of the functionality that is available when using the AWS CLI or the Rekognition API. For production or proof of concept implementations, we recommend using these programmatic interfaces rather than the Amazon Rekognition console.
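
As an illustration only, a minimal sketch of the facial analysis step using the DetectFaces API with the AWS SDK for Python (boto3) might look like the following; the Region, bucket name, and object key are placeholder values, not part of this tutorial.

# Hedged sketch: DetectFaces via boto3. The bucket and key below are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")  # assumed Region

response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "my-sample-bucket", "Name": "sample-1.jpg"}},
    Attributes=["ALL"],  # request full attributes: landmarks, emotions, age range, etc.
)

for face in response["FaceDetails"]:
    # Each detected face carries a list of emotions with confidence scores.
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(top_emotion["Type"], round(top_emotion["Confidence"], 2))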

 AWS experience

Beginner

 Time to complete

10 minutes

 Cost to complete

Free Tier eligible

 Requires

  • AWS Account
  • Recommended browser: The latest version of Chrome or Firefox

Accounts created within the past 24 hours might not yet have access to the services required for this tutorial.

 Services used

Amazon Rekognition

 Last updated

July 11, 2022

Implementation

  • Open the AWS Management Console, so you can keep this step-by-step guide open. When the screen loads, enter your user name and password to get started. Then type Rekognition in the search bar and select Rekognition to open the service console.

  • In this step, you will use the facial analysis feature in Amazon Rekognition to see the detailed JSON response you can receive from analyzing one image.

    a) To start, select Facial analysis in the panel navigation on the left. This feature allows you to analyze faces in an image and receive a JSON response.

    b) Open and save the first sample image for this tutorial here.

    c) Click the orange Upload button and select the sample image you just saved.

    d) Notice that under the Results dropdown, you can click through and see quick results for each face that was detected.

    e) Click on the Response dropdown to see the JSON results. Notice that under the Emotions results, several emotions are detected, such as happy, confused, and calm. Happy has a 99.98% confidence rating.
     
    As a developer, detecting emotions in images and videos makes it possible to quickly catalog a digital library by emotion. Another use case is refining ad targeting so users receive a personalized experience tailored to their current emotion.
  • In this step, you will use the face comparison feature to see the detailed JSON response from comparing two different images that don't match.

    a) Select Face comparison in the panel navigation on the left.

    b) Open and save the second sample image for this tutorial here.

    c) Click on the orange Upload button for the reference face and select the image you just saved.

    d) Click on the orange Upload button for the comparison face and select the first sample image we used in the facial analysis step.

    e) Notice that in the Results dropdown, you can see that the reference face wasn't a match for any of the detected faces in the comparison image.

    f) Click on the Response dropdown to see the JSON results. Notice that the “Similarity” score for each of the detected faces never exceeds 1. The similarity score ranges from 1-100, and the threshold can be adjusted when using the API (see the sketch after these steps).

    As a developer, comparing faces at scale can be used in applications to track persons of interest, create a face-based employee verification system, or provide a VIP experience to guests staying at a hospitality venue.

  • In this step, you will use the face comparison feature to see the detailed JSON response from comparing two different images that have a match.

    a) Open and save the third and final sample image for this tutorial here.

    b) Click on the orange Upload button for the reference face and select the image you just saved.

    c) Notice that the reference face matched a face in the comparison photo with a 99% similarity score, and that all other detected faces were not a match.

    d) Click on the Response dropdown to see the details of each comparison.

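As mentioned in the face comparison steps above, the similarity threshold can be adjusted when calling the API directly. A minimal sketch of the comparison using the CompareFaces API with the AWS SDK for Python (boto3) follows; the Region, bucket name, and object keys are placeholder values, not part of this tutorial.

# Hedged sketch: CompareFaces via boto3. The bucket and keys below are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")  # assumed Region

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "my-sample-bucket", "Name": "reference.jpg"}},
    TargetImage={"S3Object": {"Bucket": "my-sample-bucket", "Name": "comparison.jpg"}},
    SimilarityThreshold=80,  # only faces at or above this similarity count as matches
)

for match in response["FaceMatches"]:
    print("Match with similarity:", round(match["Similarity"], 2))

print("Faces that did not match:", len(response["UnmatchedFaces"]))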

Conclusion

You have learned how to use the console to analyze and compare faces. You can also perform these operations through the API so you can operate at scale. Use Amazon Rekognition when you need facial analysis at scale, without worrying about infrastructure or training your own model, for use cases such as identifying persons of interest, cataloging a digital library, creating a face-based employee verification system, or performing sentiment analysis.

Next steps

Build a facial recognition system

Easily perform facial analysis on live feeds by creating a serverless video analytics environment using Amazon Rekognition Video

Build a media analysis solution

Get started with automated metadata extraction using the AWS Media Analysis Solution

Explore the console

Start today for free including DocumentDB, Neptune, additional instances, and more!
