AWS Machine Learning Blog

Learn about ASLens – A deep learning project that uses AWS DeepLens to translate the American Sign Language alphabet to speech

April 2023 Update: Starting January 31, 2024, you will no longer be able to access AWS DeepLens through the AWS Management Console, manage DeepLens devices, or access any projects you have created. To learn more, refer to these frequently asked questions about AWS DeepLens end of life.

Chris Coombs traveled all the way from Melbourne, Australia to attend AWS re:Invent 2017. He was delighted to be one of the lucky few to get his hands on AWS DeepLens, and he put it to good use by entering the AWS DeepLens Challenge Hackathon. A man of many talents, he is a husband and father of two little ones, and an ex-lawyer turned software engineer/cloud architect. For the AWS DeepLens Challenge Hackathon he created a project called ASLens.

ASLens uses AWS DeepLens to capture video of a person signing in American Sign Language (ASL). It then runs a deep learning model (built with Amazon SageMaker) against each frame. When a letter from the ASL alphabet is recognized, AWS DeepLens plays the audio of that letter (using an MP3 file generated with Amazon Polly). ASLens runs locally on the AWS DeepLens device, so an internet connection is not required, eliminating bandwidth issues and increasing speed.
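The per-frame decision described above — run the model on a frame, and speak a letter only when it is recognized with enough confidence — can be sketched in Python. This is an illustrative sketch, not Chris’s actual code: the letter list, the confidence threshold, and the `letter_from_scores` helper are all assumptions for the example.

```python
# Illustrative sketch of ASLens's per-frame logic (not the project's actual code).
# Assumptions: the model emits one probability per ASL letter, the static
# alphabet excludes J and Z, and a confidence threshold gates audio playback.
import string

# The 24 static ASL letters (J and Z involve motion, so ASLens omits them).
LETTERS = [c for c in string.ascii_uppercase if c not in ("J", "Z")]

CONFIDENCE_THRESHOLD = 0.6  # assumed value; would be tuned against real model output


def letter_from_scores(scores):
    """Return the recognized letter, or None if no score clears the threshold."""
    if len(scores) != len(LETTERS):
        raise ValueError("expected one score per letter")
    best = max(range(len(scores)), key=lambda i: scores[i])
    if scores[best] < CONFIDENCE_THRESHOLD:
        return None
    return LETTERS[best]
```

On the device, each recognized letter would then trigger local playback of a matching pre-generated MP3 (for example, `A.mp3`), which is what keeps the whole loop offline.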

Chris is familiar with AWS: He’s a co-organizer of the AWS Melbourne User Group, and he works for Datacom as their AWS APN Cloud Warrior. He was keen to further build his skills by getting hands-on with machine learning using AWS DeepLens. He built ASLens leveraging a range of AWS services including Amazon SageMaker, AWS Lambda, Amazon Polly, and, of course, AWS DeepLens.

We interviewed Chris to learn more about his experience with AWS DeepLens, and we asked him to tell us a bit more about how he created ASLens.

ASLens in action

Getting started with machine learning

Chris saw AWS DeepLens as an opportunity to get started with model training:

“I already had experience with creating chatbots using AI services (such as Amazon Lex) but after taking an online AI course from Stanford I was eager to get hands on with AI algorithms and the practicalities of training my own model.”

Unknowingly, Chris had pre-registered for the AWS DeepLens workshop before re:Invent, when it was listed in the catalog as a generic session on the Machine Learning track:

“Weeks before flying out to re:Invent I’d reserved a space in one of the AI workshops, it was serendipitous that this turned out to be one of the DeepLens sessions.”

https://twitter.com/8_b1t_chr15/status/936316435265044480

As a fan of Silicon Valley and of the Australian “sausage sizzle”, he had a lot of fun at the workshop, and back home in Australia he put the “Hot Dog Not Hot Dog” model to the test!

“The workshop itself was good fun, as a Silicon Valley uber-fan I particularly enjoyed the ‘not hot dog’ example. As it turns out I ended up sitting next to someone also from Melbourne, who jokingly asked whether a ‘sausage sizzle’ would register as a hot dog. For those who aren’t aware, a ‘sausage sizzle’ is an Australian take on the classic hot dog (a chicken sausage on a slice of bread with ketchup) and a staple at social events. Soon after returning to Australia I demoed the DeepLens to the User Group and we determined once and for all (using the ‘not hot dog’ project) that the Australian ‘sausage sizzle’ is most definitely not a hot dog! Strewth!”

Inspiration for ASLens

Chris was motivated by the challenge of the hackathon, which he saw as a “great excuse to build something new and exciting.”

The inspiration for the project itself came from his family:

“My wife (who taught our children to ‘baby sign’) came up with the idea for ASLens. If I’m honest I thought the idea was too complex to be achievable, but never one to shy away from a challenge, I gave it a go – and it worked!”

Chris found it easy to get started and recruited his daughter into the learning process, getting her going with machine learning nice and early:

“Getting started with the DeepLens was a piece of cake. My daughter and I went through all of the demo projects; in the end I think she tested every single one of her toys against the ‘object detection’ project.”

Building with AWS DeepLens

The family collaboration continued as Chris moved on from testing the sample projects that come pre-packaged with DeepLens to building and training his own model for the first time with Amazon SageMaker:

“Training the DeepLens model was my first opportunity to use the new SageMaker service. First we had to collect the sign language data, which was a family effort, as I enlisted my wife to help find and crop images of American Sign Language from the internet. After uploading the training data it took less than an hour to get a fully working model out of SageMaker!”

https://twitter.com/8_b1t_chr15/status/961509672850153472

It wasn’t all smooth running to get all of the functionality going, but he got there with the help of his fellow developers in the forums:

“In truth I really struggled to get the DeepLens to play audio, and almost pivoted to an automated shopping list generator; but as the DeepLens leverages Greengrass under the hood, I was eventually able to find the answer to my problem on the Greengrass forums.”
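The audio side of the pipeline — generating one MP3 per letter with Amazon Polly ahead of time, so playback on the device needs no internet connection — could look something like the sketch below. This is not the project’s actual script: the voice, output directory, and letter list are assumptions, and the Polly call requires AWS credentials to run.

```python
# Illustrative sketch: pre-generate one MP3 per static ASL letter with Amazon Polly.
# Assumptions: voice "Joanna", an "audio" output directory, and the 24-letter
# static alphabet. Calling Polly requires configured AWS credentials.
import string

LETTERS = [c for c in string.ascii_uppercase if c not in ("J", "Z")]


def synthesize_letters(out_dir="audio", voice_id="Joanna"):
    """Ask Polly for an MP3 of each letter and save it as <letter>.mp3."""
    import os
    import boto3  # imported here so the module loads without the AWS SDK installed

    polly = boto3.client("polly")
    os.makedirs(out_dir, exist_ok=True)
    for letter in LETTERS:
        response = polly.synthesize_speech(
            Text=letter, OutputFormat="mp3", VoiceId=voice_id
        )
        with open(os.path.join(out_dir, f"{letter}.mp3"), "wb") as f:
            f.write(response["AudioStream"].read())
```

Playing those files locally on the DeepLens was the part Chris struggled with; since DeepLens runs AWS Greengrass under the hood, the fix he found on the Greengrass forums would slot in at playback time, not here.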

You can read more about how Chris built ASLens and access the GitHub repo on the ASLens community project page.

Since publishing the project, Chris has been approached to share his experience:

“Probably the best part of building ASLens is the number of people who have reached out to ask for advice on building their own version of ASLens. I’ve been very happy to share what I’ve learned and look forward to seeing their projects come to fruition.”

What’s next for ASLens

Chris sees an opportunity to broaden the capabilities of ASLens:

“ASLens is currently limited to the ASL alphabet, and omits J and Z as they are not static. I’d like to continue my work on ASLens by identifying words, including movement and expression.” 

In conclusion

Congratulations to Chris and his family on creating this great project!

Hopefully, Chris’s story has inspired you to want to learn more about AWS DeepLens. You can view all of the projects from the AWS DeepLens Challenge on the DeepLens Community Projects webpage. For more general information, take a look at the AWS DeepLens website or browse AWS DeepLens posts on the AWS Machine Learning blog.

The AWS DeepLens Challenge was a virtual hackathon brought to you by AWS and Intel to encourage developers to get creative with their AWS DeepLens. To learn more about the contest, check out the AWS DeepLens Challenge website. Entries are now closed.

About the Author

Sally Revell is a Principal Product Marketing Manager for AWS DeepLens. She loves to work on innovative products that have the potential to impact people’s lives in a positive way. In her spare time, she loves to do yoga, horseback riding and being outdoors in the beauty of the Pacific Northwest.