AWS for Industries

Improving fraud prevention in financial institutions by building a liveness detection architecture

Facial recognition is an important tool for identifying a person. Such systems are already popular worldwide and are used to prevent fraud, particularly in financial institutions such as banks and insurance companies.

Although facial recognition systems are relatively new, counterfeiters have already developed ways to bypass them, including impersonation and the use of masks, photos, or videos. These counterfeiting attacks weaken the ability of facial recognition to authenticate users or authorize payments. However, this flaw can be overcome with liveness detection. Our approach performs liveness detection through a challenge (a nose movement) and response, built with AWS services such as Amazon Rekognition, Amazon API Gateway, AWS Lambda, and Amazon DynamoDB.

For example, the new Instant Payment System (PIX) of the Brazilian Central Bank (BACEN) allows biometric recognition, including facial recognition, as an option to authorize payments and transactions. Security validations may include requesting offline authentication, such as biometrics, when generating the QR code. Adding a liveness detection step improves the mitigation and prevention of fraud in this type of financial operation.

The overall benefit of the AWS liveness detection solution is an improved level of security while maintaining a high-quality user experience in a completely frictionless flow. The following article describes the what, why, and how in further detail.

What is liveness detection?

Liveness detection is any technique used to detect a spoofing attempt by determining whether the source of a biometric sample is a live human being or a fake representation. This is accomplished through algorithms that analyze data collected from sensors and cameras to determine whether the source is live or reproduced.

There are two main categories of liveness detection:

  • Active: Prompts the user to perform an action that cannot be easily replicated with a spoof. It might also incorporate multiple modalities, such as speaker recognition, and it can analyze the movement of the mouth, eyes, nose, etc. to determine liveness.
  • Passive: Uses algorithms to detect indicators of a non-live image without user interaction. Capturing high-quality biometric data during enrollment also improves matching performance.

Why liveness detection?

The importance of distinguishing individuals in modern society has been reinforced by the need for large-scale identity management systems, whose functionality depends on accurately determining an individual's identity within financial institution applications.

Examples of these applications include performing remote financial transactions and onboarding a new user, member, or participant. One of the main challenges banks face is preventing fraud and avoiding potential data and money losses, including through digital access channels. Today, financial institutions are particularly focused on preventing unauthorized access to customer bank accounts.

Other reasons that support combining biometric (facial) recognition with liveness detection include:

  1. Avoiding recurring money losses from scams.
  2. Providing an alternative to passwords in authentication and authorization processes.
  3. Registering new customers in remote regions of a country.

Where could this solution be applied?

Liveness detection can be applied in a broad range of situations where a financial institution requires authentication and authorization. These include:

  1. Bank and insurer apps on personal mobile devices.
  2. ATMs (Automated Teller Machines).
  3. Regulators such as credit protection services, bank federations, etc.
  4. Virtual cards, such as insurance cards, meal vouchers, etc.
  5. Incorporation into the Know Your Customer (KYC) workflow for employees and customers of the financial institution.

Advantages

The advantages of a liveness detection solution for financial institutions include:

  1. Wide applicability (websites, mobile apps, ATMs, kiosks, etc.).
  2. Ability to deploy as a password-less and frictionless authentication.
    1. Addresses the current demand for contactless interaction.
  3. Ability to detect whether there is more than one person inside the bounding box.
  4. Face matching and tracking within the bounding box, preventing someone else from continuing the process.
  5. Ability to provide an active challenge (e.g., a random nose movement, avoiding repetitive movements).
  6. A scalable and highly available solution.

How?

Facial biometric methods take multiple points on the face into account during the identification process. Most of them can be divided into two-dimensional (2D) and three-dimensional (3D) solutions:

The 2D solution takes into account the height and width of the face while identifying and measuring the nodal points of the face (the distance between the eyes, between the nose and mouth, etc.). It depends on the cooperation of the user, who must look directly at the camera (on a mobile phone or notebook) so that the system can capture their image and perform the recognition.

The 3D solution captures the topography of the face in detail. As in the 2D system, the image is transformed into a biometric code, a kind of unique facial signature. It does not require the individual to stand looking at the camera for recognition to take place.

Our solution mimics a 3D approach by using a facial recognition system (Amazon Rekognition) and calculating the Euclidean distance between landmarks, normalized by the total number of points, resulting in an active-mode liveness detection solution.
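
As a minimal sketch of that calculation, the function below compares the landmark sets returned by DetectFaces for two frames and normalizes the accumulated Euclidean distance by the number of points compared; the function name is ours, and the thresholding logic of the full solution is not shown:

```python
import math

def normalized_landmark_distance(landmarks_a, landmarks_b):
    """Mean Euclidean distance between corresponding facial landmarks.

    Each argument is the 'Landmarks' list from an Amazon Rekognition
    DetectFaces response: dicts with 'Type', 'X', and 'Y', where X and Y
    are ratios of the image width and height.
    """
    points_b = {lm["Type"]: (lm["X"], lm["Y"]) for lm in landmarks_b}
    total, count = 0.0, 0
    for lm in landmarks_a:
        if lm["Type"] not in points_b:
            continue
        xb, yb = points_b[lm["Type"]]
        total += math.hypot(lm["X"] - xb, lm["Y"] - yb)
        count += 1
    # Normalize the accumulated distance by the number of points compared.
    return total / count if count else 0.0
```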

Our liveness detection solution uses 30 facial landmarks, captured via the Amazon Rekognition DetectFaces operation. This makes it possible to track the movement of the nose and the rotation of the head.

The nose movement task is accomplished with a square that appears at a random position on the screen; the customer must place the tip of their nose inside the square once.
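
A sketch of that check, assuming the target square is expressed in the same relative coordinates (ratios of frame width and height) that Rekognition uses for landmarks; the names and the square representation are illustrative:

```python
def nose_inside_target(nose_landmark, target):
    """Check whether the nose tip falls inside the random target square.

    'nose_landmark' is the 'nose' entry from DetectFaces ({'X': ..., 'Y': ...});
    'target' describes the square in the same relative coordinate system,
    for example {'Left': 0.4, 'Top': 0.3, 'Size': 0.15}.
    """
    in_x = target["Left"] <= nose_landmark["X"] <= target["Left"] + target["Size"]
    in_y = target["Top"] <= nose_landmark["Y"] <= target["Top"] + target["Size"]
    return in_x and in_y
```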

The DetectFaces operation returns the following information for each detected face:

  • Bounding box – The coordinates of the bounding box that surrounds the face.
  • Confidence – The level of confidence that the bounding box contains a face.
  • Facial landmarks – An array of facial landmarks. For each landmark (such as the left eye, right eye, and mouth), the response provides the x and y coordinates.
  • Facial attributes – A set of facial attributes, such as whether the face has a beard. For each such attribute, the response provides a value.
  • Quality – Describes the brightness and the sharpness of the face.
  • Pose – Describes the rotation of the face inside the image.
  • Emotions – A set of emotions with confidence scores.

You can use the combination of the BoundingBox and Pose data to draw the bounding box around faces that your application displays.
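
Because BoundingBox values are returned as ratios of the overall image size, they must be scaled to pixels before drawing. A minimal sketch of that conversion (Pose-based rotation of the box is omitted):

```python
def bounding_box_to_pixels(bounding_box, image_width, image_height):
    """Scale Rekognition's relative BoundingBox values to pixel coordinates."""
    left = int(bounding_box["Left"] * image_width)
    top = int(bounding_box["Top"] * image_height)
    width = int(bounding_box["Width"] * image_width)
    height = int(bounding_box["Height"] * image_height)
    return left, top, width, height
```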

By default, the DetectFaces API returns only the following five facial attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. The default landmarks returned are eyeLeft, eyeRight, nose, mouthLeft, and mouthRight. To get all 30 facial landmarks depicted in the illustration below, you must invoke the API with the Attributes parameter set to ALL.
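
For example, using the AWS SDK for Python (boto3), the call could look like the following; the helper name is ours:

```python
import boto3

rekognition = boto3.client("rekognition")

def detect_all_landmarks(image_bytes):
    """Call DetectFaces with Attributes=['ALL'] to get the full landmark set."""
    response = rekognition.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],  # the default returns only the five landmarks above
    )
    # Map each face's landmarks by type for easy lookup.
    return [
        {lm["Type"]: (lm["X"], lm["Y"]) for lm in face["Landmarks"]}
        for face in response["FaceDetails"]
    ]
```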

Definition of solution

The serverless architecture below is a possible solution for detecting liveness in a payment authentication and authorization process, for example, using an app on a mobile phone or a computer.

The list below describes each component of the architecture:

  1. Client App: The client application through which end users access the liveness detection challenge experience. The application captures and uploads video frames and invokes a server-side API to run the liveness detection logic in the cloud. In addition, to provide real-time feedback to users as they interact with the on-device camera, part of the challenge logic could also run locally in the client application.
  2. API Gateway Endpoint: Amazon API Gateway could be used to expose a REST/HTTP API to start and verify a liveness detection challenge.
  3. Lambda Function: The server-side API logic to start and verify a challenge could run in an AWS Lambda function.
  4. Amazon Rekognition Image: Amazon Rekognition Image could be used by the AWS Lambda function to verify a challenge. It provides the DetectFaces API, capable of identifying faces in an image along with their position and landmarks (eyes, nose, mouth, etc.).
  5. DynamoDB Table: An Amazon DynamoDB table could store information about each user's challenge attempts, such as user ID, timestamp, and challenge-related parameters (face area position coordinates, nose target position coordinates, etc.).
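
As an illustration of how components 2, 3, and 5 could fit together, below is a minimal sketch of a start-challenge Lambda handler; the table name, request payload, and target-area ranges are assumptions for this sketch:

```python
import json
import os
import random
import time
import uuid
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
# The table name is an assumption; in practice it would come from configuration.
table = dynamodb.Table(os.environ.get("CHALLENGE_TABLE", "liveness-challenges"))

def start_challenge_handler(event, context):
    """Create a challenge with a random nose-target square and persist it."""
    nose_target = {
        "Left": round(random.uniform(0.2, 0.6), 2),
        "Top": round(random.uniform(0.2, 0.6), 2),
        "Size": 0.15,
    }
    item = {
        "challengeId": str(uuid.uuid4()),
        "userId": json.loads(event["body"])["userId"],
        "createdAt": int(time.time()),
        # DynamoDB represents numbers as Decimal, so convert the floats.
        "noseTarget": {k: Decimal(str(v)) for k, v in nose_target.items()},
    }
    table.put_item(Item=item)
    # Return the challenge parameters the client needs to render the square.
    return {
        "statusCode": 200,
        "body": json.dumps({"challengeId": item["challengeId"],
                            "noseTarget": nose_target}),
    }
```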

Mobile client application

A mobile application could be developed to allow users to access the liveness detection challenge provided by the architecture above.

  1. To start a task within the application, a request would be issued to the backend API; the API response would contain all the challenge parameters. In the case of a challenge where the user must move the tip of the nose to an area, the target nose area would be included in the API response.
  2. A screen displaying a live preview of the on-device camera, with instructions on what the user must do, would then be shown, as in the screenshots below.
  3. The challenge logic would also run locally in the client application to provide real-time feedback to the user. After the challenge is completed successfully on the device (e.g., the user moved the tip of the nose to the target area), frames captured during the process would be sent to the API for a final validation of the challenge in the cloud.

The challenge would fail for several reasons, such as the following (a sketch of the first three checks appears after the list):

  1. More than one face detected;
  2. The face detected is different from the first face detected (face tracker feature);
  3. No face is detected after the first face is detected (e.g. user stops looking at the camera);
  4. Face moves outside the face area after entering it;
  5. Timeout in one of the states (e.g. user takes more than 10 seconds to move the nose to the designated position).
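
A hedged sketch of how the first three rules could be checked server-side with Amazon Rekognition; rules 4 and 5 depend on per-challenge state (face area, timestamps) stored in DynamoDB and are omitted here. The function name and similarity threshold are assumptions:

```python
import boto3

rekognition = boto3.client("rekognition")

def validate_frame(reference_face_bytes, frame_bytes, similarity_threshold=90):
    """Apply failure rules 1-3 to one uploaded frame; returns (ok, reason)."""
    detected = rekognition.detect_faces(Image={"Bytes": frame_bytes})
    faces = detected["FaceDetails"]
    if not faces:
        return False, "no face detected"                 # rule 3
    if len(faces) > 1:
        return False, "more than one face detected"      # rule 1
    # Rule 2: the face must match the first face captured in the session.
    comparison = rekognition.compare_faces(
        SourceImage={"Bytes": reference_face_bytes},
        TargetImage={"Bytes": frame_bytes},
        SimilarityThreshold=similarity_threshold,
    )
    if not comparison["FaceMatches"]:
        return False, "face differs from the first face detected"
    return True, "ok"
```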

Conclusion

In this blog post (part 1 of 2), we showed an architecture for building a liveness detection solution that complements facial recognition. It can be used in authentication and authorization processes, combining only AWS services such as Amazon Rekognition, Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. Additionally, you can strengthen security by chaining other challenges based on users' document numbers (through Amazon Textract) and additional multi-factor authentication (Amazon Cognito).

The starting point for building the solution was meeting a growing demand from financial institutions across the Latin American region. For this reason, both the web and mobile versions of the solution are under construction. This work is being carried out jointly by the AWS R&D Innovation LATAM team and the Solutions Architects focused on Financial Services (LATAM).

Rafael Werneck

Rafael Werneck is an R&D Solutions Architect at AWS, based in Brazil. Previously, he worked as a Software Development Engineer at Amazon.com.br and on Amazon RDS Performance Insights.

Henrique Fugita

Henrique Fugita is an R&D Solutions Architect at AWS in Brazil. He helps customers envision the art of the possible on AWS by working with them on innovative prototyping engagements. With over 15 years of experience in software development and solutions architecture, he currently focuses on artificial intelligence and machine learning.

João Aragão Pereira

João Paulo Aragão Pereira is an AWS Solutions Architect focused on the financial services sector (LATAM) and its main topics: fraud prevention and detection, Open Banking, modernization of legacy systems, liveness detection, and instant payment systems. He has worked with banking and insurance architectures for over 15 years.