AWS Partner Network (APN) Blog

How Slalom Created Personalized, Interactive Event Experiences Using Amazon Rekognition

By Chris Mendoza, Technology Enablement Consultant at Slalom


Amazon Rekognition makes it easy to add highly accurate image and video analysis to your applications.

The service’s core functionality allowed Slalom, an AWS Partner Network (APN) Premier Consulting Partner, to create three personalized, interactive experiences for attendees at REALIZE, the company’s inaugural, one-day client summit in Chicago.

Using Amazon Rekognition, Slalom created the following interactive, guest-friendly experiences:

  • Personalized compliment-delivering booth that recognized guests using facial analysis and provided them with a personalized compliment like, “You are a role model for our more junior developers. Keep it up!”
  • Interactive photo mosaic wall that recognized more than 500 unique faces and delivered information about each person.
  • Catering station that used sentiment analysis to register non-verbal cues on guests’ faces, read their expression, and then order a menu item corresponding to their sentiment.

All three experiences relied on a centralized back-end of serverless AWS Lambda functions for custom logic, Amazon DynamoDB for storage and session management, Amazon CloudFront and Amazon Simple Storage Service (Amazon S3) for hosting web apps, and Amazon CloudWatch log streams for real-time event analytics.

In this post, I’ll walk you through the key decisions Slalom made to build these experiences, share the impact they had on the guest experience at REALIZE, and provide our reference architecture and instructions so you can recreate them for your own event.

Objectives for the Event and Experiences

In March 2019, Slalom hosted an inaugural, one-day summit in Chicago called REALIZE. We brought together more than 100 clients, nonprofit leaders, and alliance partners through innovative, interactive, and community-focused experiences.

Slalom employees brought the event attendance to over 600 total guests. The day combined four keynotes, 12 unique breakout sessions, and 10 interactive experiences.

Guests were invited to a unique happy hour where they could network and explore custom-made interactive experiences. Five of these experiences were built using Amazon Web Services (AWS) and two were built in support of local nonprofit partners. Each experience was imagined, designed, and executed by a volunteer team from Slalom Chicago.

Our goal was to create dynamic, surprising experiences that brought three of Slalom’s Core Values to life:

  • Smile
  • Drive Connection and Teamwork
  • Focus on Outcomes

Experiences involved varying levels of physical and digital touchpoints, and were integrated into the event venue itself to create an immersive environment.

Video: Amazon Rekognition at Slalom’s REALIZE event (1:49)

Personalized Compliment-Delivering Booth

Our “Smile” experience was designed to be irresistibly fun and share Slalom’s commitment to finding joy in our work and celebrating the contributions of our team.

When entering the compliment booth, guests started the experience by saying, “Alexa, compliment me!” which then prompted them to look at our tablet camera.

After the guest took a selfie, we used facial recognition to identify the guest, retrieve their personalized compliment, and reply back on the Amazon Echo Show (audibly and on screen), all in real-time.


This experience was one of our most impactful and meaningful for guests. Compliments were not randomly generated, but instead individually (and secretly!) sourced from within the Slalom community.

The team used Amazon Rekognition’s IndexFaces operation to create a Face Collection, which stored facial information that was used to identify known faces and return specific metadata.
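As a sketch of that indexing step, the snippet below (assuming boto3, a collection already created with CreateCollection, and local photo files; the names and IDs are placeholders, not Slalom's actual values) indexes one guest photo and tags it with an external ID:

```python
# Minimal sketch of populating a Face Collection with IndexFaces (boto3).
import re

def to_external_image_id(name):
    # Rekognition's ExternalImageId only allows [a-zA-Z0-9_.\-:],
    # so guest names are slugified before indexing.
    return re.sub(r"[^a-zA-Z0-9_.\-:]", "_", name)

def index_guest_face(collection_id, image_path, guest_name):
    """Index one guest photo and tag it with an external ID."""
    import boto3  # needs AWS credentials at runtime
    rekognition = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        response = rekognition.index_faces(
            CollectionId=collection_id,
            Image={"Bytes": f.read()},
            ExternalImageId=to_external_image_id(guest_name),
            MaxFaces=1,
            QualityFilter="AUTO",
        )
    # Each indexed face gets a FaceId that can be stored with the compliment.
    return [rec["Face"]["FaceId"] for rec in response["FaceRecords"]]
```

Storing the returned FaceId alongside each guest's compliment is what lets a later SearchFacesByImage call retrieve the right message.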

Slalom prepared personalized compliments for all REALIZE guests, including more than 500 employees and 100 client, partner, and community leaders. Compliments included can-do statements like, “You always make people feel welcome. Thanks for being so encouraging!”

Interactive Photo Mosaic Wall

The “Drive Connection and Teamwork” experience featured a 40-foot-long photo mosaic wall with more than 500 pictures—one for each Slalom Chicago employee. It was designed to better connect our team by sharing a little of what makes each person unique and revealing previously unknown commonalities.


Utilizing the same Face Collection used for the “Smile!” experience, we provided devices at the installation that allowed REALIZE attendees to take a picture of any face on the mosaic or scan people in the room.

The device would then display personalized information about that Slalom employee, like a fun fact or how long they’ve worked for Slalom.


Catering Station Using Sentiment Analysis

Since the way to our hearts is through our stomachs, we designed a catering station where attendees could order appetizers using Amazon Rekognition’s facial analysis.

In our “Focus on Outcomes” experience, guests struck their best happy, sad, or angry expression and Amazon Rekognition returned details on the scanned face.

Sentiment was the most important feature returned, and the service matched that to a corresponding menu item to place an order. As people’s expressions were captured and analyzed in real-time, guests were directed to open a specific, themed cabinet where their food would be waiting.
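A hedged sketch of that matching logic, assuming DetectFaces with the full attribute set and an illustrative menu mapping (the event's actual menu items were not published):

```python
# Sketch of the sentiment-to-menu step. The menu items are illustrative.
MENU_BY_EMOTION = {
    "HAPPY": "celebration sliders",
    "SAD": "comfort mac and cheese",
    "ANGRY": "fiery buffalo bites",
}

def top_emotion(face_detail):
    """Pick the highest-confidence emotion from a DetectFaces FaceDetail."""
    emotions = face_detail["Emotions"]
    return max(emotions, key=lambda e: e["Confidence"])["Type"]

def order_for_expression(image_bytes):
    import boto3  # needs AWS credentials at runtime
    rekognition = boto3.client("rekognition")
    resp = rekognition.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],  # "ALL" is required to get the Emotions list
    )
    emotion = top_emotion(resp["FaceDetails"][0])
    return MENU_BY_EMOTION.get(emotion, "chef's choice")
```

DetectFaces returns a confidence score per emotion, so picking the maximum gives the single sentiment to route to a cabinet.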

Designing the Guest Experience

These meaningful, guest-friendly experiences suited REALIZE’s scale, venue, and objectives beautifully, and were a big hit with attendees. In just about five hours, guests scanned 507 unique faces with more than 1,000 scans overall.

To maintain the element of surprise for event attendees, all 500+ employee photos and 100+ external guest photos were added to the Face Collection from publicly available photos and headshots. Keeping the Face Collection up-to-date required Slalom staff to upload faces and populate each guest’s profile with information right up to the day of REALIZE.

We couldn’t ask guests to upload better photos without revealing the secret, so our experiences needed to handle different image resolutions. In many cases, we had to rely on a single photo to match against the Face Collection, regardless of its resolution or condition, or of fluctuating lighting conditions at the venue.

While the functionality for these experiences is device-agnostic and web-based, we opted to provide dedicated devices to keep the experience location-based and to encourage interaction between guests and hosts. Slalom team members acted as hosts to greet guests and share insight into their experience’s development.

Building with Privacy in Mind

Guest privacy was a key consideration for the Slalom team. While attendees consented to and opted into participating in the experiences, we acknowledged the sensitivity around biometric data and adhered to the Illinois Biometric Information Privacy Act (740 ILCS 14/).

This act stipulates the following provisions for biometric data and information:

  • Obtain consent from individuals if the company intends to collect or disclose their personal biometric identifiers.
  • Destroy biometric identifiers in a timely manner (≤ 3 years).
  • Securely store biometric identifiers.

The experiences as created can be configured to cater only to event attendees who have opted in as participants, so no results are returned for people you do not intend to match. This is a useful safeguard against sharing the biometric data of guests who may prefer to opt out.

REALIZE experiences do not persist any image data. As soon as a guest took a picture, it was sent to Amazon Rekognition and deleted; only a hash of the response was retained, with no way of linking the identified individual to the image itself.

Long-term, this would normally limit future enhancements to the training data for a facial recognition model, but the Amazon Rekognition service is automatically trained on reference data by AWS. The hash can provide relevant matching criteria without having to store or refer to the original image.
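As an illustration of that approach (the post doesn't specify Slalom's exact hashing scheme), a one-way fingerprint of the Rekognition FaceId can stand in for any stored image:

```python
# Illustrative only: derive a non-reversible fingerprint from a FaceId,
# so matches can be cross-referenced without retaining the photo itself.
import hashlib

def response_fingerprint(face_id):
    """SHA-256 of the FaceId; deterministic, but can't be reversed
    to recover the original image or response."""
    return hashlib.sha256(face_id.encode("utf-8")).hexdigest()
```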

Developing Your Own Facial Recognition Experiences

Below is a simplified step-by-step guide outlining the facial recognition experiences set up by the Slalom team.

We started from instructions provided by the AWS Machine Learning Blog and created custom integrations with Amazon Rekognition and complementary AWS services to fit our needs. Device-agnostic web apps served as the front-ends, designed for each station.

The figure below shows the application workflow separated into two distinct parts:

  • Indexing (blue workflow) represents the process of importing faces into the Face Collection for analysis.
  • Analysis (black workflow) represents the process of querying the Face Collection for matches within the index.


Figure 1 – Application workflow.

Follow these steps to recreate our experience for your own event:

Step 1: Create an IAM Role with full permissions to the following AWS services: Amazon S3, AWS Lambda, Amazon DynamoDB, and Amazon Rekognition. This broad role is temporary to speed up setup. We’ll attach this new IAM Role to the Amazon Elastic Compute Cloud (Amazon EC2) instance we’ll be spinning up.

Step 2: Go into your AWS console and spin up an Amazon EC2 instance. This will be used to access the AWS account via the command line to run AWS Command Line Interface (CLI) commands.

Note that if you have the ability to access the AWS account via your local computer’s CLI, feel free to use that method.

Step 3: Create a DynamoDB table for our application. Log on to the Amazon EC2 instance we just spun up and run the following CLI command to create the table:

aws dynamodb create-table \
--table-name realize-2019-face-recognization-tbl2 \
--attribute-definitions \
AttributeName=RekognitionId,AttributeType=S \
--key-schema AttributeName=RekognitionId,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1

Step 4: Create a new collection in Amazon Rekognition by running this CLI command:

aws rekognition create-collection --collection-id <WhatEverNameYouWantHere>

Step 5: Create another IAM Role with full permissions to these AWS services: Amazon S3, Amazon DynamoDB, AWS CloudWatch Logs, Amazon Rekognition, and AWS Lambda Execution. You can fine-tune the permissions if you choose to do so. This is our AWS Lambda execution role.

Step 6: Create the Lambda functions for our application. Log in to the Amazon EC2 instance we spun up and run the following CLI commands:

aws lambda create-function \
	--function-name IndexFaces-realize-2019-face-recognization \
	--runtime python2.7 \
	--role <LAMBDA EXECUTION ROLE YOU CREATED> \
	--handler index.lambda_handler \
	--timeout 10 \
	--memory-size 128 \
	--zip-file fileb://<PATH TO YOUR ZIPPED CODE>

aws lambda create-function \
	--function-name realize-2019-detect-face \
	--runtime python2.7 \
	--role <LAMBDA EXECUTION ROLE YOU CREATED> \
	--handler lambda_function.lambda_handler \
	--timeout 180 \
	--memory-size 512 \
	--zip-file fileb://<PATH TO YOUR ZIPPED CODE>

The two CLI commands above will create two Lambda functions:

  • IndexFaces-realize-2019-face-recognization: This Lambda indexes the faces of individual employees. It’s invoked by an Amazon S3 event when an employee’s picture (including metadata) lands in a specific S3 folder. The picture gets an identifier (FaceId) from the Amazon Rekognition service, and the Lambda saves the metadata contents along with that id to the DynamoDB table created in Step 3.
  • realize-2019-detect-face: This Lambda fetches the matching record from DynamoDB and passes it to the front-end application to display. It’s invoked by an API endpoint from Amazon API Gateway. The API submits the base64 of the image, gets its FaceId from Amazon Rekognition, searches the DynamoDB table from Step 3, and returns the result if a record is matched.
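A hedged sketch of the indexing function described above, assuming the table from Step 3 and an illustrative collection ID (your handler and names may differ):

```python
# Sketch of the S3-triggered indexing Lambda (index.lambda_handler in Step 6).
TABLE_NAME = "realize-2019-face-recognization-tbl2"  # table from Step 3
COLLECTION_ID = "my-collection"                      # collection from Step 4

def build_item(face_id, metadata):
    """Merge the Rekognition FaceId with the S3 object metadata."""
    item = {"RekognitionId": face_id}
    item.update(metadata)
    return item

def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime
    s3 = boto3.client("s3")
    rekognition = boto3.client("rekognition")
    table = boto3.resource("dynamodb").Table(TABLE_NAME)

    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Index the face straight from S3 and capture the generated FaceId.
    resp = rekognition.index_faces(
        CollectionId=COLLECTION_ID,
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxFaces=1,
    )
    face_id = resp["FaceRecords"][0]["Face"]["FaceId"]

    # Values set with `aws s3 cp --metadata` come back in head_object's
    # Metadata dict (keys lower-cased, x-amz-meta- prefix stripped).
    meta = s3.head_object(Bucket=bucket, Key=key)["Metadata"]
    table.put_item(Item=build_item(face_id, meta))
    return {"RekognitionId": face_id}
```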

Step 7: Modify the code in the Lambda functions: in both files, change the collection id to whatever you named it in Step 4. Compress all the files in the code folder into a single zip file, then upload it to the corresponding function in the AWS Lambda console.

Step 8: In the Amazon API Gateway console, create a new endpoint and POST method that invokes the realize-2019-detect-face Lambda. Make sure to enable CORS and deploy.
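A hedged sketch of the detect-face handler behind that endpoint; the request field name (`image`) and the match threshold are assumptions about the front-end payload, not published values:

```python
# Sketch of the realize-2019-detect-face Lambda behind API Gateway.
import base64
import json

TABLE_NAME = "realize-2019-face-recognization-tbl2"  # table from Step 3
COLLECTION_ID = "my-collection"                      # collection from Step 4

def cors_response(status, body):
    """API Gateway proxy response carrying the CORS header Step 8 enables."""
    return {
        "statusCode": status,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps(body),
    }

def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime
    rekognition = boto3.client("rekognition")
    table = boto3.resource("dynamodb").Table(TABLE_NAME)

    # The POST body is assumed to carry the selfie as base64 under "image".
    image_bytes = base64.b64decode(json.loads(event["body"])["image"])
    resp = rekognition.search_faces_by_image(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": image_bytes},
        MaxFaces=1,
        FaceMatchThreshold=80,  # tune for your photo quality
    )
    if not resp["FaceMatches"]:
        return cors_response(404, {"message": "No match found"})
    face_id = resp["FaceMatches"][0]["Face"]["FaceId"]
    item = table.get_item(Key={"RekognitionId": face_id}).get("Item")
    return cors_response(200, item or {"message": "No profile on file"})
```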

Step 9: Deploy the front-end web files. Create an Amazon S3 bucket, or use an existing bucket, and place the files in the S3_Files folder into that bucket. Then make all three files publicly visible by right-clicking each and selecting “Make public.”

Note that you will need to change the API endpoint in the index.html file to the new API endpoint you created in Step 8.

Step 10: In the bucket where you’ll add the initial images of employees (or whatever group you’re creating this experience for) with their metadata, create an Event that executes the IndexFaces-realize-2019-face-recognization function. We recommend creating the event on a specific folder where you’ll land the image files with metadata.

Next, add employee images with metadata to the bucket configured in Step 10 by running the CLI command below:

aws s3 cp /home/ec2-user/Realize_Client_Photos/EMPLOYEE_NAME.jpg s3://<bucket with event from step 10>/<folder that the event is triggered on>/ --metadata '{"full_name":"EMPLOYEE NAME","practice":"Business Advisory Services","title":"Consultant","tenure":"1.39166666666667"}'

The CLI command above will populate your Face Collection database with the appropriate metadata to return when a match is found.

Figure 2 displays the metadata returned by the application at REALIZE when a match is found:


Figure 2 – Sample of Face Collection metadata definitions and values.

Step 11: After adding employees to your DynamoDB database, go to the URL of the index.html file you added in Step 9. You can find it by going to the Amazon S3 location of the file in the AWS console.

Now when you take a picture of that employee, the Lambda will fire and return the results from the DynamoDB back to your front-end and user.

Summary

Since Chicago’s REALIZE event in March, other Slalom teams and markets have leveraged this solution to build their own creative facial recognition experiences as part of their local market REALIZE events.

For example, one Slalom team is considering building a similar photo wall with digital instead of static images. Additionally, Slalom invited attendees of the AWS Summit in Chicago to earn Slalom-branded swag at our booth, courtesy of an iterated version of this Amazon Rekognition solution.

With Amazon Rekognition’s capability to read faces, objects, scenes, and sentiment across many mediums, teams can leverage the service creatively to suit their venue, scale, and experience.

To learn more about the Slalom and AWS partnership and how Slalom helps clients build for the future with modern data and technology solutions, visit slalom.com.

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.


Slalom – APN Partner Spotlight

Slalom is an AWS Premier Consulting Partner. They are a modern consulting firm focused on strategy, technology, and business transformation. Slalom’s teams are backed by regional innovation hubs, a global culture of collaboration, and partnerships with the world’s top technology providers.

Contact Slalom | Practice Overview

*Already worked with Slalom? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.