Learn about Dee: The DeepLens Educating Entertainer – The second place winner of the AWS DeepLens Challenge Hackathon
April 2023 Update: Starting January 31, 2024, you will no longer be able to access AWS DeepLens through the AWS management console, manage DeepLens devices, or access any projects you have created. To learn more, refer to these frequently asked questions about AWS DeepLens end of life.
Matthew Clark is a software developer turned architect. He lives in Manchester in the north of England, and he’s soon to be the proud owner of a new kitchen. He’s also the creator of Dee – the DeepLens Educating Entertainer, which won second place in the AWS DeepLens Challenge.
Dee is an example of how image recognition can be used to make a fun, interactive, and educational game for young or less able children. The DeepLens device asks children to answer questions by showing the device a picture of the answer. For example, when the DeepLens device asks, “What has wheels?” the child is expected to show it an appropriate picture, say of a bicycle or a bus. Right answers are praised, and wrong ones prompt hints on how to get it right. The premise of the game is to help children learn through interaction and positive reinforcement.
Matthew had no machine learning experience prior to getting a DeepLens device at AWS re:Invent 2017. However, he soon got up to speed with deep learning (DL) concepts by building Dee using a combination of technologies that were all new to him, including AWS DeepLens, Python, Amazon Polly, AWS Greengrass, and AWS Lambda.
We interviewed Matthew about his experience with AWS DeepLens and asked him to do a deep dive into how he created his winning entry.
Getting started with machine learning
Matthew remembers the moment at re:Invent when he heard about AWS DeepLens that marks the start of his journey into learning machine learning (ML):
“I had absolutely no knowledge of ML before I picked up a DeepLens. I knew it was something I had to get into if I was to stay on the cutting edge of tech development. So when Andy Jassy announced DeepLens at re:Invent, it seemed like the perfect opportunity to learn.”
Matthew was quick off the mark, securing his place in the AWS DeepLens workshop. He found it helpful in getting him started, even if there was one small (but rather amusing) problem that he recalls from the session:
“The workshop worked well as an introduction. But there was one problem. Whilst the workshop was starting, everyone was being given a fresh hotdog, which several people in the room started to eat. A few minutes later the instructor pointed out that this wasn’t a snack; it was a prop for image recognition! Fortunately, those who had eaten their hotdog realized that a picture of a hotdog, from their phone, was enough to test the DeepLens’ recognition abilities.”
Inspiration for Dee
While Matthew thought the workshop was great, he realized that hotdog recognition wasn’t going to change the world. He wanted to apply AWS DeepLens to use cases that could have a more meaningful impact. He was also interested in the capability to run deep learning models locally on the camera and the benefits that could bring. He found an idea that brought these two things together:
“For me, the fascinating part of DeepLens was that the video recognition models were developed on the cloud, and then run locally. From that I began thinking “what is the advantage of the inference happening locally, other than cost?” and I soon realized that anything involving children would really benefit, because it would remove any privacy concerns over a child’s video leaving the device. It also means it could work anywhere, simply, without the need for Wi-Fi or other connectivity.”
Once Matthew had decided on the children’s education theme, he found it easy to think of something that could work for his three-year-old son. In addition to young children, Matthew was excited about how the use of technology combined with the positive reinforcement aspects of Dee could help children with autism or Asperger’s:
“Young children, and some older ones with special learning needs, can struggle to interact with electronic devices. They may not be able to read a tablet screen, or use a computer keyboard, or speak clearly enough for voice recognition. But with video recognition, this can change. Technology can now understand the child’s world, and discover when they do something, such as pick up an object or perform an action. And that leads to whole new ways of interaction.”
Building with AWS DeepLens
Matthew started by using one of the AWS DeepLens prebuilt models – object detection – but this presented a challenge and a key learning for him around model training:
“My initial hope was that, rather than pictures, the child could show toys to Dee. Picking up, for example, a toy plane or cuddly sheep, would be more exciting than a piece of paper. But in testing, the object detection model did not see toys as being the same as their real counterparts. A toy plane is just too different from a real plane. Training a model to work on toys would fix this, of course, but I was unable to find a good and large enough training data set. This is something to work on for the future!”
Using the existing object detection model, however, gave Matthew time to focus on the Lambda functions. A Lambda function, running on the DeepLens device using AWS Greengrass, handles the interaction: it picks a question at random, speaks it, and then analyzes the model’s response to see how the user answered. Matthew loaded this with messages such as “Let’s do more!” and “Good choice!” to help the participant feel positive and engaged throughout the experience.
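The game loop Matthew describes — pick a question, speak it, check the detections against the expected answer — can be sketched roughly as below. This is an illustrative reconstruction, not Matthew’s actual code: the question list, labels, and confidence threshold are all assumptions.

```python
import random

# Hypothetical question bank: each question maps to the object-detection
# labels that would count as a correct answer (labels are illustrative).
QUESTIONS = {
    "What has wheels?": {"bicycle", "bus", "car", "motorbike"},
    "What can fly?": {"aeroplane", "bird"},
}

# Encouraging phrases like the ones Matthew loaded into his Lambda function.
PRAISE = ["Good choice!", "Let's do more!"]


def ask_question(rng=random):
    """Pick a question at random, as the on-device Lambda does."""
    question = rng.choice(list(QUESTIONS))
    return question, QUESTIONS[question]


def check_answer(detections, acceptable, threshold=0.5):
    """Return True if any detected label above the confidence
    threshold is an acceptable answer to the current question.

    `detections` is a list of (label, confidence) pairs, a simplified
    stand-in for the object-detection model's output."""
    return any(label in acceptable and score >= threshold
               for label, score in detections)


# Example: the child holds up a picture of a bicycle, and the model
# reports it with high confidence.
detections = [("bicycle", 0.87), ("person", 0.62)]
correct = check_answer(detections, QUESTIONS["What has wheels?"])
```

On a real device, the speaking step would play one of the pre-rendered audio phrases, and the detections would come from the model inference running locally on the camera.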
As mentioned earlier, Dee is designed to work without Wi-Fi access, so there are no internet connection, privacy, or cost concerns for kids. This was tricky when it came to speech, because Amazon Polly requires connectivity to the cloud. To overcome this, Matthew wrote a script to capture all of the required phrases and store them locally, which means the AWS Lambda function he created includes 69 MP3 files.
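A pre-rendering script like the one Matthew describes might look roughly like this: call Amazon Polly once per phrase at build time and save the MP3s to ship alongside the Lambda function. The phrase list, helper names, and voice choice are illustrative assumptions, not his actual script.

```python
# Sketch of a one-off build-time script to pre-render Dee's spoken
# phrases with Amazon Polly so they can play offline on the device.
# Phrases and names here are illustrative, not from Matthew's project.

PHRASES = [
    "What has wheels?",
    "Good choice!",
    "Let's do more!",
]


def phrase_filename(phrase):
    """Map a phrase to a stable, filesystem-safe MP3 filename."""
    safe = "".join(c if c.isalnum() else "_" for c in phrase.lower())
    return safe.strip("_") + ".mp3"


def render_phrases(phrases, voice_id="Joanna", out_dir="."):
    """Synthesize each phrase with Polly and write it to disk.

    Requires AWS credentials; boto3 is imported lazily so the rest
    of the module can be used (and tested) offline."""
    import os
    import boto3

    polly = boto3.client("polly")
    for phrase in phrases:
        resp = polly.synthesize_speech(
            Text=phrase, OutputFormat="mp3", VoiceId=voice_id)
        path = os.path.join(out_dir, phrase_filename(phrase))
        with open(path, "wb") as f:
            f.write(resp["AudioStream"].read())


if __name__ == "__main__":
    render_phrases(PHRASES)
```

At runtime, the on-device Lambda function then only needs to play the matching local MP3 file, with no cloud round trip.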
What’s next for Dee?
Matthew is excited about what can be achieved through technology and believes the potential for Dee is huge:
“I’m now improving Dee so it’s got a concept of ‘levels’ – so that it can tell how skilled the child is, and offers more challenging questions as appropriate. I’ve shown Dee to a few other children and adults too and there’s a lot of interest. Everyone has ideas on how to make it even better!”
He also has other ideas he wants to work on to expand Dee’s curriculum:
“If she could recognize a wider range of things, a much more varied set of questions could be asked such as: “Can you hold up three fingers?”, to test counting skills, or “Which one is the letter A?” to test the alphabet, amongst others.”
Having built his foundational skills on this first Dee project, Matthew knows he needs to advance into model training to take things to the next level:
“Of course, training new models will be a key part of this. And with services such as Amazon SageMaker making training more straightforward, the possibility emerges for end users being able to train their own models. A teacher could, for example, train Dee to recognize certain objects in the classroom, or a caregiver could train Dee to respond to specific objects that are important to someone with autism.”
Matthew’s family gets to benefit from this hackathon win, at home and away:
“Most of the money will go on a new kitchen. We will also use some to take our son to a theme park, now he’s old enough to enjoy it.”
Similar to the journey Alex Schultz has taken with his experience building ReadToMe, which we published last week, Matthew has been on a journey with AWS DeepLens, building new skills through hands-on experience with machine learning, quite literally. He has gone from having no machine learning experience to building a project that his son and other children can now benefit from.
Congratulations to Matthew and the Clark family on this well-deserved win!
Hopefully, Matthew’s story has inspired you to want to learn more about AWS DeepLens. You can view all of the projects from the AWS DeepLens Challenge on the DeepLens Community Projects webpage. For more general information, take a look at the AWS DeepLens Website or browse AWS DeepLens posts on the AWS Machine Learning blog.
The AWS DeepLens Challenge was a virtual hackathon brought to you by AWS and Intel to encourage developers to get creative with their AWS DeepLens. To learn more about the contest, check out the DeepLens Challenge website. Entries are now closed.
Learn about ReadToMe – The first place winner of the AWS DeepLens Challenge Hackathon
About the Author
Sally Revell is a Principal Product Marketing Manager for AWS DeepLens. She loves to work on innovative products that have the potential to impact people’s lives in a positive way. In her spare time, she loves to do yoga, horseback riding and being outdoors in the beauty of the Pacific Northwest.