
Detect real and live users and deter bad actors using Amazon Rekognition Face Liveness

Financial services, the gig economy, telco, healthcare, social networking, and other customers use face verification during online onboarding, step-up authentication, age-based access restriction, and bot detection. These customers verify user identity by matching the user’s face in a selfie captured by a device camera with a government-issued identity card photo or preestablished profile photo. They also estimate the user’s age using facial analysis before allowing access to age-restricted content. However, bad actors increasingly deploy spoof attacks using the user’s face images or videos posted publicly, captured secretly, or created synthetically to gain unauthorized access to the user’s account. To deter this fraud and reduce the costs associated with it, customers need to add liveness detection to their face verification workflow, before face matching or age estimation is performed, to confirm that the user in front of the camera is a real and live person.

We are excited to introduce Amazon Rekognition Face Liveness to help you easily and accurately deter fraud during face verification. In this post, we start with an overview of the Face Liveness feature, its use cases, and the end-user experience; describe its spoof detection capabilities; and show how you can add Face Liveness to your web and mobile applications.

Face Liveness overview

Today, customers detect liveness using various solutions. Some customers use open-source or commercial facial landmark detection machine learning (ML) models in their web and mobile applications to check whether users correctly perform specific gestures such as smiling, nodding, shaking their head, blinking their eyes, or opening their mouth. These solutions are costly to build and maintain, fail to deter advanced spoof attacks performed using physical 3D masks or injected videos, and require high user effort to complete. Some customers use third-party face liveness features that can only detect spoof attacks presented to the camera (such as printed or digital photos or videos on a screen), work well only for users in select geographies, and often must be completely managed by the customer. Lastly, some customer solutions rely on hardware-based infrared and other sensors in phone or computer cameras to detect face liveness, but these solutions are costly, hardware-specific, and work only for users with select high-end devices.

With Face Liveness, you can detect in seconds that real users, and not bad actors using spoofs, are accessing your services. Face Liveness includes these key features:

  • Analyzes a short selfie video from the user in real time to detect whether the user is real or a spoof
  • Returns a liveness confidence score, a value from 0–100 that indicates the probability that the person is real and live
  • Returns a high-quality reference image—a selfie frame with quality checks that can be used for downstream Amazon Rekognition face matching or age estimation analysis
  • Returns up to four audit images—frames from the selfie video that can be used for maintaining audit trails
  • Detects spoofs presented to the camera, such as a printed photo, digital photo, digital video, or 3D mask, as well as spoofs that bypass the camera, such as a pre-recorded or deepfake video
  • Can easily be added to applications running on most devices with a front-facing camera using open-source pre-built AWS Amplify UI components

In addition, no infrastructure management, hardware-specific implementation, or ML expertise is required. The feature automatically scales up or down in response to demand, and you only pay for the face liveness checks you perform. Face Liveness uses ML models trained on diverse datasets to provide high accuracy across user skin tones, ancestries, and devices.

Use cases

The following diagram illustrates a typical workflow using Face Liveness.

You can use Face Liveness in the following user verification workflows:

  • User onboarding – You can reduce fraudulent account creation on your service by validating new users with Face Liveness before downstream processing. For example, a financial services customer can use Face Liveness to detect a real and live user and then perform face matching to check that this is the right user prior to opening an online account. This can deter a bad actor using social media pictures of another person to open fraudulent bank accounts.
  • Step-up authentication – You can strengthen the verification of high-value user activities on your services, such as device change, password change, and money transfers, with Face Liveness before the activity is performed. For example, a ride-sharing or food-delivery customer can use Face Liveness to detect a real and live user and then perform face matching using an established profile picture to verify a driver’s or delivery associate’s identity before a ride or delivery to promote safety. This can deter unauthorized delivery associates and drivers from engaging with end-users.
  • User age verification – You can deter underage users from accessing restricted online content. For example, online tobacco retailers or online gambling customers can use Face Liveness to detect a real and live user and then perform age estimation using facial analysis to verify the user’s age before granting them access to the service content. This can deter an underage user from using a parent’s credit card or photo to gain access to harmful or inappropriate content.
  • Bot detection – You can prevent bots from engaging with your service by using Face Liveness in place of “real human” captcha checks. For example, social media customers can use Face Liveness as a real-human check to keep bots at bay. This significantly increases the cost and effort for those driving bot activity because key bot actions now need to pass a face liveness check.

End-user experience

When end-users need to onboard or authenticate themselves on your application, Face Liveness provides the user interface and real-time feedback for the user to quickly capture a short selfie video of moving their face into an oval rendered on their device’s screen. As the user’s face moves into the oval, a series of colored lights is displayed on the device’s screen and the selfie video is securely streamed to the cloud APIs, where advanced ML models analyze the video in real time. After the analysis is complete, you receive a liveness confidence score (a value from 0 to 100), a reference image, and audit images. Depending on whether the liveness confidence score is above or below the customer-set threshold, you can perform downstream verification tasks for the user. If the liveness confidence score is below the threshold, you can ask the user to retry or route them to an alternative verification method.

The end-user sees the following sequence of screens:

  1. The sequence begins with a start screen that includes an introduction and photosensitive warning. It prompts the end-user to follow instructions to prove they are a real person.
  2. After the end-user chooses Begin check, a camera screen is displayed and the check starts a countdown from 3.
  3. At the end of the countdown, a video recording begins, and an oval appears on the screen. The end-user is prompted to move their face into the oval. When Face Liveness detects that the face is in the correct position, the end-user is prompted to hold still for a sequence of colors that are displayed.
  4. The video is submitted for liveness detection and a loading screen with the message “Verifying” appears.
  5. The end-user receives a notification of success or a prompt to try again.

The following shows the user experience in a sample implementation of Face Liveness.

Spoof detection

Face Liveness can deter presentation and bypass spoof attacks. Let’s outline the key spoof types and see how Face Liveness deters them.

Presentation spoof attacks

These are spoof attacks where a bad actor presents the face of another user to the camera using printed or digital artifacts. The bad actor can use a print-out of a user’s face, display the user’s face on their device display using a photo or video, or wear a 3D face mask that looks like the user. Face Liveness can successfully detect these types of presentation spoof attacks, as we demonstrate in the following examples.

The following shows a presentation spoof attack using a digital video on the device display.

The following shows an example of a presentation spoof attack using a digital photo on the device display.

The following example shows a presentation spoof attack using a 3D mask.

The following example shows a presentation spoof attack using a printed photo.

Bypass or video injection attacks

These are spoof attacks where a bad actor bypasses the camera to send a selfie video directly to the application using a virtual camera.

Face Liveness components

Amazon Rekognition Face Liveness uses multiple components:

  • AWS Amplify web and mobile SDKs with the FaceLivenessDetector component
  • AWS SDKs
  • Cloud APIs

Let’s review the role of each component and how you can easily use these components together to add Face Liveness to your applications in just a few days.

Amplify web and mobile SDKs with the FaceLivenessDetector component

The Amplify FaceLivenessDetector component integrates the Face Liveness feature into your application. It handles the user interface and real-time feedback for users while they capture their video selfie.

When a client application renders the FaceLivenessDetector component, it establishes a connection to the Amazon Rekognition streaming service, renders an oval on the end-user’s screen, and displays a sequence of colored lights. It also records and streams video in real time to the Amazon Rekognition streaming service, and appropriately renders the success or failure message.
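The following is a minimal sketch of rendering the component in a React application, written in TypeScript. It assumes Amplify has already been configured with AWS credentials (for example, through Amplify Auth), and the /api/liveness routes are hypothetical endpoints on your own backend:

```tsx
import { useEffect, useState } from 'react';
import { FaceLivenessDetector } from '@aws-amplify/ui-react-liveness';
import '@aws-amplify/ui-react/styles.css';

export function LivenessCheck() {
  const [sessionId, setSessionId] = useState<string | null>(null);

  useEffect(() => {
    // Hypothetical backend route that calls CreateFaceLivenessSession
    // and returns the SessionId to the client app.
    fetch('/api/liveness/session', { method: 'POST' })
      .then((res) => res.json())
      .then((data) => setSessionId(data.sessionId));
  }, []);

  if (!sessionId) {
    return <p>Loading…</p>;
  }

  return (
    <FaceLivenessDetector
      sessionId={sessionId}
      region="us-east-1"
      onAnalysisComplete={async () => {
        // Ask the backend for the verdict; GetFaceLivenessSessionResults
        // should be called server-side, not from the client app.
        const res = await fetch(`/api/liveness/session/${sessionId}/result`);
        const { isLive } = await res.json();
        console.log(isLive ? 'Liveness check passed' : 'Liveness check failed');
      }}
      onError={(error) => console.error('Liveness error:', error)}
    />
  );
}
```

The component invokes onAnalysisComplete when streaming finishes; fetching the verdict from your backend rather than from the client keeps the confidence score and threshold logic server-side.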

AWS SDKs and cloud APIs

When you configure your application to integrate with the Face Liveness feature, it uses the following API operations:

  • CreateFaceLivenessSession – Starts a Face Liveness session, allowing the Face Liveness detection model to be used in your application. Returns a SessionId for the created session.
  • StartFaceLivenessSession – Called by the FaceLivenessDetector component; starts an event stream containing information about relevant events and attributes in the current session.
  • GetFaceLivenessSessionResults – Retrieves the results of a specific Face Liveness session, including a Face Liveness confidence score, reference image, and audit images.

You can test Amazon Rekognition Face Liveness with any supported AWS SDK, such as the AWS SDK for Python (Boto3) or the AWS SDK for Java V2.
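For example, the following is a minimal sketch of a backend function that creates a session, using the AWS SDK for JavaScript v3 to match the front-end sketch above. The S3 output configuration is optional, and the bucket name and prefix are placeholders:

```ts
import {
  RekognitionClient,
  CreateFaceLivenessSessionCommand,
} from '@aws-sdk/client-rekognition';

const client = new RekognitionClient({ region: 'us-east-1' });

// Creates a Face Liveness session and returns its SessionId, which the
// client app passes to the FaceLivenessDetector component.
export async function createLivenessSession(): Promise<string> {
  const response = await client.send(
    new CreateFaceLivenessSessionCommand({
      Settings: {
        // Optional: persist the reference and audit images to your own
        // S3 bucket; the bucket name below is a placeholder.
        OutputConfig: {
          S3Bucket: 'my-liveness-audit-bucket',
          S3KeyPrefix: 'liveness-sessions/',
        },
        AuditImagesLimit: 4, // the service returns up to 4 audit images
      },
    }),
  );
  return response.SessionId!;
}
```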

Developer experience

The following diagram illustrates the solution architecture.

The Face Liveness check process involves several steps:

  1. The end-user initiates a Face Liveness check in the client app.
  2. The client app calls the customer’s backend, which in turn calls Amazon Rekognition. The service creates a Face Liveness session and returns a unique SessionId.
  3. The client app renders the FaceLivenessDetector component using the obtained SessionId and appropriate callbacks.
  4. The FaceLivenessDetector component establishes a connection to the Amazon Rekognition streaming service, renders an oval on the user’s screen, and displays a sequence of colored lights. FaceLivenessDetector records and streams video in real time to the Amazon Rekognition streaming service.
  5. Amazon Rekognition processes the video in real time, stores the results, including the reference image and audit images, in an Amazon Simple Storage Service (Amazon S3) bucket, and returns a DisconnectEvent to the FaceLivenessDetector component when the streaming is complete.
  6. The FaceLivenessDetector component calls the appropriate callbacks to signal to the client app that the streaming is complete and that scores are ready for retrieval.
  7. The client app calls the customer’s backend to get a Boolean flag indicating whether the user was live or not. The customer backend requests the confidence score, reference image, and audit images from Amazon Rekognition, uses these attributes to determine whether the user is live, and returns an appropriate response to the client app.
  8. Finally, the client app passes the response to the FaceLivenessDetector component, which appropriately renders the success or failure message to complete the flow.
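To make step 7 concrete, here is a minimal sketch of the backend decision logic, again using the AWS SDK for JavaScript v3. The threshold of 80 is an assumption; choose a value that matches your own risk tolerance, and consider routing borderline scores to an alternative verification method:

```ts
import {
  RekognitionClient,
  GetFaceLivenessSessionResultsCommand,
} from '@aws-sdk/client-rekognition';

const client = new RekognitionClient({ region: 'us-east-1' });

// Hypothetical threshold; tune it to your application's risk tolerance.
const LIVENESS_THRESHOLD = 80;

// Returns the Boolean flag from step 7: was the user live?
export async function isUserLive(sessionId: string): Promise<boolean> {
  const response = await client.send(
    new GetFaceLivenessSessionResultsCommand({ SessionId: sessionId }),
  );
  // The Confidence score is only meaningful once the session succeeded.
  if (response.Status !== 'SUCCEEDED') {
    return false;
  }
  // response.ReferenceImage and response.AuditImages are also available
  // here for downstream face matching and audit trails.
  return (response.Confidence ?? 0) >= LIVENESS_THRESHOLD;
}
```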

Getting started

You can test the Amazon Rekognition Face Liveness feature in the AWS console via a no-code user experience. You can also set up a React-based demo application (which must be deployed in the us-east-1 Region) in under 10 minutes using this AWS CloudFormation template and the associated Git repository.

Conclusion

In this post, we showed how the new Face Liveness feature in Amazon Rekognition detects if a user going through a face verification process is physically present in front of a camera and not a bad actor using a spoof attack. Using Face Liveness, you can deter fraud in your face-based user verification workflows.

Get started today by visiting the Face Liveness feature page for more information and to access the developer guide. Amazon Rekognition Face Liveness cloud APIs are available in the US East (N. Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Mumbai), and Asia Pacific (Tokyo) Regions.


About the Authors

Zuhayr Raghib is an AI Services Solutions Architect at AWS. Specializing in applied AI/ML, he is passionate about enabling customers to use the cloud to innovate faster and transform their businesses.

Pavan Prasanna Kumar is a Senior Product Manager at AWS. He is passionate about helping customers solve their business challenges through artificial intelligence. In his spare time, he enjoys playing squash, listening to business podcasts, and exploring new cafes and restaurants.

Tushar Agrawal leads Product Management for Amazon Rekognition. In this role, he focuses on building computer vision capabilities that solve critical business problems for AWS customers. He enjoys spending time with family and listening to music.