Backyard Birder

Inspiration

I live in an area visited by a wide variety of birds (and squirrels). I thought it would be interesting to explore whether Amazon's AWS DeepLens could identify bird species by both visual and song characteristics, as well as keep track of how many squirrels disrupt the feeders each day.

I had a difficult time finding good seed image libraries for bird species, and I wanted to get my hands dirty with this new hardware. I thought I could leverage the ease of getting a sample project running on DeepLens to do a coarse collection of bird images that I could later develop into my species identification model.

What it does

The AWS DeepLens hardware allows image processing to run locally using models trained with a variety of machine learning methodologies. The graphics processing hardware allows low-latency video stream processing at the edge (i.e., disconnected from the cloud).

My application, using an included sample MXNet-trained model (deeplens-object-detection), identifies the visitors to the bird feeders in my yard. The sample model can identify birds with reasonable accuracy. There is no classification for squirrels in this model, but they score relatively high against the "cat" and "dog" classifiers (this is a hackathon, isn't it?). When the model returns a high probability of recognizing a bird or a squirrel, a message is published to an MQTT topic, where a listener process tallies the counts and stores them in a database; a daily squirrel vs. bird scorecard can then be accessed via the web.
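The detection logic above can be sketched in a few lines of plain Python. This is a hypothetical illustration, not the project's actual code: the class names, the squirrel-as-cat/dog proxy mapping, and the 0.60 threshold are all assumptions.

```python
# Hypothetical sketch of the detection-to-notification decision described above.
# The sample object-detection model has no "squirrel" class, so high-scoring
# "cat" or "dog" detections are treated as likely squirrels.
# Class names and the 0.60 threshold are assumptions, not the project's code.

BIRD_CLASSES = {"bird"}
SQUIRREL_PROXY_CLASSES = {"cat", "dog"}
THRESHOLD = 0.60

def classify_visitor(detections):
    """Map raw (label, probability) detections from one frame to a visitor type.

    Returns "bird", "squirrel", or None if nothing scores above the threshold.
    """
    best_label, best_prob = None, 0.0
    for label, prob in detections:
        if prob >= THRESHOLD and prob > best_prob:
            if label in BIRD_CLASSES:
                best_label, best_prob = "bird", prob
            elif label in SQUIRREL_PROXY_CLASSES:
                best_label, best_prob = "squirrel", prob
    return best_label
```

In the real pipeline, the returned label would trigger publishing a small message to the MQTT topic rather than sending any video.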

Created By: Paul Langdon

How I built it

Initially I utilized a pre-trained model, but this is the first step in a broader "train the trainer" project: using existing coarse models as the source for generating new seed images, which are then used to train a new model with AWS SageMaker, in this case one that specifically identifies bird species.

Check out this video of the detection: youtube

This project will give you some information about the DeepLens framework and hardware, plus references to a guide for setting up your own device and customizing a model to do your own detection.

I encourage you to use it as a starting point for exploring this platform. The code for this project can be found on GitHub at https://github.com/plangdon/deeplens_birds.

The architecture of this solution can be adapted to support your trained model and notification/action pipeline.

Solution Architecture
In designing this architecture, I had the following goals:

  • Push as much processing and resource utilization as possible to "the edge"
    • video bandwidth is large; sending the video stream to the cloud for processing would be expensive and introduce high latency
    • notifications and counts are orders of magnitude smaller than video and easily sent to the cloud for storage and action
    • constant video processing in the cloud would incur significant CPU costs
  • Support image capture locally so the device can be deployed to a remote location without 100% connectivity or on low-bandwidth (cellular) networks
    • images captured of birds will be incorporated into future training for specific species identification
    • locally stored images can be off-loaded from the device on demand when the network is enabled, or physically via removable media (sneaker-net)
  • Support notifications to the cloud using a secure messaging protocol
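The "notifications instead of video" goal above can be sketched end to end: the device publishes a small JSON payload to an MQTT topic, and a cloud-side listener aggregates the messages into a daily bird vs. squirrel scorecard. The topic name, payload fields, and tally structure below are illustrative assumptions, not the project's actual schema.

```python
# Hypothetical sketch of the notification pipeline: the DeepLens publishes a
# tiny JSON payload (orders of magnitude smaller than video) to an MQTT topic,
# and a listener process tallies daily counts for the scorecard.
# Topic name and payload fields are assumptions.

import json
from collections import defaultdict
from datetime import datetime, timezone

TOPIC = "backyard/feeder/visitors"  # assumed topic name

def build_payload(visitor, probability, device_id="deeplens-01"):
    """Build the small notification message sent in place of raw video."""
    return json.dumps({
        "device": device_id,
        "visitor": visitor,              # "bird" or "squirrel"
        "probability": round(probability, 3),
        "ts": datetime.now(timezone.utc).isoformat(),
    })

def tally(messages):
    """Listener-side aggregation: per-day bird vs. squirrel counts."""
    counts = defaultdict(lambda: {"bird": 0, "squirrel": 0})
    for raw in messages:
        msg = json.loads(raw)
        day = msg["ts"][:10]             # YYYY-MM-DD
        counts[day][msg["visitor"]] += 1
    return dict(counts)
```

In practice the listener would subscribe to the topic via AWS IoT and persist the counts to a database backing the web scorecard.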

Challenges

Currently, SageMaker only supports training new models with Apache MXNet. I have a little experience training models with TensorFlow and was hoping to import my existing models. Still, it was a good exercise to look at how MXNet works and expand my scope of ML frameworks. I look forward to the addition of more ML frameworks to the DeepLens platform so I can continue trying them out to find the best option for the job.

Accomplishments that I'm proud of

Even though I was able to get up and running quickly using the sample models, I was happy I took the extra time to build out my own model using SageMaker and Jupyter notebooks. Python is not my strongest language, and working with Jupyter notebooks made it a little easier to work through the process. I recommend that after you get a couple of the sample models up and running, you take the time to train your own model, as that really extends the value of this platform.
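One small but necessary step when moving from sample models to your own training run is splitting the captured seed images into training and validation sets. A minimal sketch, assuming nothing beyond a flat list of image paths (the 80/20 ratio and fixed seed are arbitrary choices for reproducibility):

```python
# Hypothetical sketch of preparing locally captured feeder images for a
# SageMaker training run: deterministically split the image list into
# training and validation sets. The 80/20 ratio is an assumption.

import random

def train_val_split(image_paths, val_fraction=0.2, seed=42):
    """Shuffle reproducibly and split image paths into (train, val) lists."""
    paths = sorted(image_paths)          # stable order before shuffling
    rng = random.Random(seed)
    rng.shuffle(paths)
    n_val = max(1, int(len(paths) * val_fraction)) if paths else 0
    return paths[n_val:], paths[:n_val]
```

Using a fixed seed means the same split is produced on every run, which makes validation numbers comparable across training experiments in the notebook.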

What I learned

It was interesting to explore the Apache MXNet framework; I had previously used TensorFlow, and I like to keep on top of multiple options when considering technologies. Apache MXNet is a very mature project with an active user community.

I learned how this customized hardware and software integration allows very fast deployment of deep learning methodologies and serves as a quick learning tool for developers beginning to explore the field.

I was also very happy to find the extensive documentation put together by the AWS DeepLens team. The well-documented, step-by-step guides were helpful when digging into a totally new platform.

What's next

This first pass of my project is great for weeding out the birds from the squirrels, but I am interested in extending the model to do more detailed identification. The images captured during the first phase of the project will be a good resource as I extend the training to more specifically identify bird species.

As I mentioned above, the SageMaker platform currently only supports MXNet models; I am interested in porting over my TensorFlow-trained models. Here is a link to my git project with my training information for bird species identification using TensorFlow.

I am also very interested in seeing if I can adapt the hardware to include bird songs as part of the identification criteria.

Built with

amazon-web-services
sagemaker
lambda
greengrass
node.js
aws-iot
mxnet