DeepAds Advertising

Inspiration

As consumers embrace a proliferation of new digital channels, today’s brands have an increasingly hard time running effective marketing campaigns. To reach the right audiences at the right time, wherever they are, brands must rely on rich, real-time data to deliver highly relevant, effective, and measurable ads. The age of smart advertising is here. Coupled with the increase in customer data available to marketers, we are seeing an astonishing rise in the application of machine learning across a number of industries. DeepLens represents a perfect opportunity for these two trends to overlap, allowing marketers to target adverts even more effectively based on real-time video feed data. The follow-on effect of these targeted ads could provide increased benefits for brands across many parts of their value chain.

To leverage this opportunity we created DeepAds.

What it does

DeepAds is an advertising platform that targets consumers in real time based on a set of distinct, learned characteristics. Depending on who is in the DeepLens frame and which of these characteristics they display, DeepAds serves the most relevant advert.

Our current implementation of DeepAds distinguishes consumers by gender, so variations on product adverts are served differently to women and men. You can read more in the “What’s next” section about how we would guard against gender stereotyping.

How we built it

After receiving the DeepLens, we ran a quick tech spike to understand its basic capabilities. The DeepLens lets us run deep learning models locally and understand what the camera sees, all in a few simple steps. The team then brainstormed which problems we could solve with DeepLens’ deep learning capability. There are many possible applications, but we were most interested in using facial recognition to drive better advertising engagement and consumer experiences, an idea we named DeepAds.

By placing the DeepLens in front of an advertising billboard or screen and running facial detection together with custom deep learning models to extract facial features, DeepAds allows marketers to understand more about their audience and how their advertisements are performing with specific audience segments. There is a long list of features that could help identify a consumer’s characteristics and be fed into the deep learning model. To finish the project in time while still demonstrating DeepAds’ purpose, we picked gender as the classification output.

DeepAds contains the following major components:

  1. Deployment Framework with Greengrass
  2. Deep Learning Module with Amazon SageMaker
  3. Ads Controller with AWS Lambda

A detailed explanation of each component is given below:

1. Deployment Framework: we followed the instructions in the AWS DeepLens documentation and used the AWS Greengrass service to deploy a Lambda function and model to the device. We then set up a local development environment for the Lambda function to speed up the development process.
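To make this concrete, here is a minimal sketch of the inference loop the deployed Lambda runs on the device. It assumes the awscam interface available on DeepLens; the model path, input size, and detection threshold are illustrative placeholders rather than our exact values.

```python
# Minimal sketch of the Lambda inference loop deployed via Greengrass.
# awscam is only available on the DeepLens device. The model path, input
# size, and threshold below are illustrative assumptions, not exact values.
import awscam
import cv2

MODEL_PATH = '/opt/awscam/artifacts/face_detection.xml'  # placeholder path
INPUT_SIZE = (300, 300)        # must match the model's expected input
DETECTION_THRESHOLD = 0.55

def inference_loop():
    # Load the optimised model onto the device (GPU-accelerated).
    model = awscam.Model(MODEL_PATH, {'GPU': 1})
    while True:
        # Grab the most recent frame from the on-board camera.
        ret, frame = awscam.getLastFrame()
        if not ret:
            continue
        # Run inference and parse the raw output into labelled detections.
        raw = model.doInference(cv2.resize(frame, INPUT_SIZE))
        detections = model.parseResult('ssd', raw)['ssd']
        # Keep confident face detections; coordinates are in model space
        # and would be rescaled to the frame before drawing.
        faces = [d for d in detections if d['prob'] > DETECTION_THRESHOLD]
        # ...hand `faces` to the gender classifier and the ads controller.

inference_loop()
```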

2. Deep Learning Module: we first prepared the training data, 2,000 photos labelled with gender, and uploaded it to AWS S3. After formatting the data with MXNet RecordIO, we trained the gender classification model on SageMaker. We deliberately used a small dataset to keep training time short, which may limit accuracy; published studies suggest gender recognition accuracy can reach around 90% with a bigger dataset and a better-tuned model. We used the Python module “awscam” to load the model onto the DeepLens. The latest version of the Intel Deep Learning Deployment Toolkit is installed on the DeepLens device and was used to optimise the MXNet gender model.
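For reference, launching the training job with the SageMaker Python SDK looks roughly like the sketch below. The bucket names, entry-point script, and hyperparameters are hypothetical placeholders, not our exact configuration.

```python
# Sketch of launching the gender-model training job with the SageMaker
# Python SDK. Bucket names, the entry-point script, and the hyperparameters
# are placeholders rather than our exact configuration.
import sagemaker
from sagemaker.mxnet import MXNet

role = sagemaker.get_execution_role()

estimator = MXNet(
    entry_point='gender_classifier.py',   # hypothetical MXNet training script
    role=role,
    train_instance_count=1,
    train_instance_type='ml.p2.xlarge',
    hyperparameters={'epochs': 20, 'learning_rate': 0.001},
)

# The labelled photos were packed into RecordIO files (e.g. with MXNet's
# im2rec tool) and uploaded to S3 beforehand.
estimator.fit({
    'train': 's3://deepads-training-data/train.rec',
    'validation': 's3://deepads-training-data/val.rec',
})
```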

3. Ads Controller: to prove the concept and demonstrate the capability of targeted advertising, we defined two types of project video output stream: the “Advertising screen” and the live-stream “Analysis screen”. The Analysis screen draws the detected information on top of the input video stream; currently it draws a facial-detection bounding box, the count of people, and a list of potential characteristics, and in future it could generate a real-time analytics report. The Advertising screen shows targeted advertising based on the audience. We designed three different advertising images to target female, male, and group audiences respectively. When the audience in front of the DeepLens changes, the advertising screen displays the relevant image. Currently it reuses the project video output stream, but it could be a different device or screen connected via AWS IoT.
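The selection rule itself is simple. The sketch below illustrates the kind of logic involved, assuming a hypothetical list of per-face gender labels from the classifier; the image paths are placeholders for the three creatives we designed.

```python
# Sketch of the Ads Controller's selection rule. `genders` is the list of
# per-face classifier labels for the current frame; image paths are
# placeholders for the three creatives.
import cv2

AD_IMAGES = {
    'female': cv2.imread('ads/female_ad.jpg'),
    'male': cv2.imread('ads/male_ad.jpg'),
    'group': cv2.imread('ads/group_ad.jpg'),
}

def select_advert(genders):
    """Return the advert image to display for the faces currently in frame."""
    if not genders:
        return None                  # nobody in frame: keep the current advert
    if len(genders) > 1:
        return AD_IMAGES['group']    # multiple viewers get the group creative
    return AD_IMAGES[genders[0]]     # single viewer: 'female' or 'male'
```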

Challenges

  1. Figuring out how to do more complex tasks. With DeepLens and SageMaker both still in their nascent stages, one of the main challenges was the lack of documentation and tutorials. It was relatively simple to set up the device and use the default template models, but when we tried to use custom models and modify the video stream, we found it difficult to get detailed information about what to do.
  2. Finding out how to fix an issue. The DeepLens forum is sometimes helpful, but there isn’t much established Q&A and the community is still very small, so we spent an unpredictably long time troubleshooting during the project.
  3. Building our own model with the Intel Deep Learning Deployment Toolkit. Converting a SageMaker-trained model with the Intel Deep Learning Deployment Toolkit consistently failed, making it difficult for us to test our own model.
  4. Debugging and deploying our program. It’s difficult to debug a Lambda function because an important library called "awscam" is only available on the device, so we either had to set up a development environment locally on the device or wait for the lengthy deployment process to finish before we could debug (see the stub sketched after this list).
  5. Dealing with the limited hardware capacity of the DeepLens device. The DeepLens struggles to handle inserting graphical overlays into the project stream, which caused us to reconsider our original concept for displaying advertising images on top of the video stream.
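To mitigate challenge 4, a local development environment can stand in for the device. The sketch below shows the kind of awscam stub we mean: a tiny mock module, written by us rather than part of any AWS SDK, that lets the Lambda logic run on a laptop against a webcam or a recorded clip.

```python
# awscam_stub.py -- a tiny stand-in for the on-device awscam module so the
# Lambda logic can be exercised on a laptop. This is our own workaround
# sketch, not part of any AWS SDK.
import cv2

_capture = cv2.VideoCapture(0)  # webcam, or a path to a recorded test clip

def getLastFrame():
    """Mimic awscam.getLastFrame() using OpenCV."""
    return _capture.read()

class Model(object):
    """Mimic awscam.Model, returning canned detections for offline testing."""
    def __init__(self, model_path, loading_config):
        self.model_path = model_path

    def doInference(self, frame):
        return {}  # real inference happens only on the device

    def parseResult(self, model_type, raw):
        # A fixed fake detection keeps the downstream code paths exercised.
        return {model_type: [{'label': 1, 'prob': 0.9,
                              'xmin': 100, 'ymin': 80,
                              'xmax': 220, 'ymax': 200}]}
```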

Accomplishments that we're proud of

  • Developing a feasible, compelling idea with real industry applications
  • Training a custom deep learning model on SageMaker
  • Using OpenCV to modify the project video output stream
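As an illustration of the last point, the overlay work boils down to a few OpenCV calls. This is a hedged sketch: the FIFO path follows the convention used by the DeepLens sample projects and may differ on other setups.

```python
# Sketch of the "Analysis screen" overlay: draw each face box and a people
# count with OpenCV, then push the JPEG-encoded frame to the project stream.
# The FIFO path follows the DeepLens sample projects; treat it as an assumption.
import cv2

def render_analysis_frame(frame, faces):
    # `faces` are detections already rescaled to frame coordinates.
    for f in faces:
        cv2.rectangle(frame, (f['xmin'], f['ymin']),
                      (f['xmax'], f['ymax']), (0, 255, 0), 2)
    cv2.putText(frame, 'People: %d' % len(faces), (20, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    return frame

def write_to_project_stream(frame, fifo_path='/tmp/results.mjpeg'):
    # Encoding to JPEG and writing to the FIFO makes the frame visible in
    # the project stream viewer. Opening the FIFO blocks until a reader
    # is attached.
    ok, jpeg = cv2.imencode('.jpg', frame)
    if ok:
        with open(fifo_path, 'wb') as fifo:
            fifo.write(jpeg.tobytes())
```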

What we learned

  • DeepLens’ capabilities
  • How to use SageMaker to train a custom model and use it on DeepLens
  • How to develop software on the DeepLens platform 

What's next

A brand-new advertising system
We believe that machine learning should be a fundamental driver of future marketing strategies. For brands to remain relevant, they need to generate insights from increasingly complex data sets. There are two main areas of development for DeepAds beyond this point.

1. Classifier Maturity: our current implementation of DeepAds distinguishes consumers by gender, but there are many other ways we would look to classify consumers moving forward. As well as gender, we would build out our classifier to cover:

  • Location
  • Outfit
  • Facial features, e.g. beard, hairstyle, skin colour etc.
  • Mood
  • Height
  • Movement
  • Age
  • Activity

2. Stereotyping safeguards: a critical part of the next phase of development for DeepAds would be to implement safeguards against perpetuating stereotypes. This is fundamental to building a platform that provides value to consumers in a safe and appropriate manner.

For this we would look to implement a number of features:

  • Create more robust rules around defaulting to neutral adverts when the model hasn’t been able to classify with high confidence (see the sketch after this list)
  • Implement mandatory A/B testing of adverts at scheduled points regardless of classification
  • Place an increased weighting on mood as a measurable characteristic for our model, evaluating how people react to the advert being shown and using this to tailor future ads
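As a sketch of the first safeguard, the fallback could be as simple as the following; the threshold value and labels are illustrative assumptions, not a finalised design.

```python
# Sketch of the proposed low-confidence fallback. `classifications` is a
# hypothetical list of (label, probability) pairs, one per detected face;
# the threshold value is an illustrative assumption.
NEUTRAL_CONFIDENCE_THRESHOLD = 0.8

def choose_targeting(classifications):
    confident = [label for label, prob in classifications
                 if prob >= NEUTRAL_CONFIDENCE_THRESHOLD]
    if not confident:
        # Default to a neutral creative rather than risk a stereotyped mismatch.
        return 'neutral'
    return 'group' if len(set(confident)) > 1 else confident[0]
```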

Built with

python
amazon-web-services
deeplens
sagemaker
mxnet
opencv
machine-learning
lambda
