The second challenge was planning training cycles around the project's unique datasets. Training required acquiring a significant collection of new assets, annotating those assets, and then training the model on them. Collecting and annotating a usable dataset took 100+ hours of manual work, and training the model on those assets took 500+ hours, a delay that could only be mitigated by spending more on compute.
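As a rough sketch of what one such training cycle looked like on the AWS side, the snippet below launches SageMaker's built-in object-detection algorithm (an MXNet-based SSD). The role ARN, bucket paths, and hyperparameter values are placeholders, not the project's actual configuration:

```python
# Minimal sketch: launching an SSD training job with SageMaker's built-in
# object-detection algorithm. All names and paths below are illustrative.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder role

# Resolve the region-specific container for the built-in algorithm.
container = image_uris.retrieve(
    framework="object-detection",
    region=session.boto_region_name,
    version="1",
)

estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",  # GPU instance to cut down training time
    output_path="s3://example-bucket/jiaan/output",
    sagemaker_session=session,
)

# SSD hyperparameters; values are illustrative, not the project's tuned ones.
estimator.set_hyperparameters(
    base_network="resnet-50",
    num_classes=2,
    num_training_samples=5000,
    epochs=30,
    mini_batch_size=16,
)

# Each channel points at annotated assets staged in S3, in the input
# format the built-in algorithm expects.
estimator.fit({
    "train": "s3://example-bucket/jiaan/train",
    "validation": "s3://example-bucket/jiaan/validation",
})
```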
Working around issues with Intel's model optimizer and inference engine consumed a great deal of time that would otherwise have gone toward enhancements. Much of the final week was spent reverse engineering Intel's framework, patching bugs in the optimizer, and reflashing the DeepLens device. The experience proved educational, however, and I now have a detailed understanding of Intel's deep learning toolset.
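For context on where the optimizer fits, this is roughly the device-side flow: Intel's model optimizer converts the trained MXNet artifacts into an intermediate representation, which the inference engine then loads for local inference. The sketch below uses the `mo` and `awscam` modules preinstalled on DeepLens; the model name and input size are assumptions, and the trained artifacts are assumed to already be deployed to the device:

```python
# Sketch of the DeepLens-side optimize-and-infer flow; runs only on a
# DeepLens device, where the mo and awscam modules are preinstalled.
import awscam
import cv2
import mo

INPUT_WIDTH, INPUT_HEIGHT = 512, 512  # must match the network's input shape
MODEL_NAME = "jiaan-ssd"              # hypothetical artifact name

# Convert the MXNet artifacts into Intel's intermediate representation.
error, model_path = mo.optimize(MODEL_NAME, INPUT_WIDTH, INPUT_HEIGHT)
if error:
    raise RuntimeError("Model optimizer failed with status {}".format(error))

# Load the optimized model through Intel's inference engine (GPU-backed).
model = awscam.Model(model_path, {"GPU": 1})

# Grab the latest camera frame, resize it to the network input, and run SSD.
ret, frame = awscam.getLastFrame()
if ret:
    resized = cv2.resize(frame, (INPUT_WIDTH, INPUT_HEIGHT))
    raw = model.doInference(resized)
    # parseResult returns {"ssd": [{"label", "prob", "xmin", ...}, ...]}
    detections = model.parseResult("ssd", raw)["ssd"]
```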
Accomplishments that I'm proud of
- Building a functional SSD R-CNN-driven model
- Broadening my knowledge to include MXNet and SageMaker's APIs, having previously worked only with OpenCV and NumPy
- Acquiring a deeper understanding of the technologies underlying Greengrass, DeepLens, and SageMaker
- Exercising uncanny patience while annotating thousands of images
What I learned
Building an intelligent application driven by DeepLens, or any ML project, is as much a project management task as it is a development task. A number of variables affect resource capacity and can cause unexpected downtime; these must be accounted for as part of planning.
Jiaan will be expanded to cover a broader set of threat classifications suitable for a live environment, where it can be deployed and donated to the community.