Try it out
Obviously a better initial dataset would help training.
More effort on the emotion model would also be useful, as its 80% accuracy probably isn't enough for what we'd want our system to do.
Right now we're just uploading one "face" to the IoT event. For this to work better, we'd want to assign each detected face its own identity as we get it and track it through the field of view.
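As a rough illustration of what that per-face tracking could look like, here's a minimal centroid-tracker sketch. Everything here is hypothetical (the `CentroidTracker` class, its `max_distance` threshold, and the box format are assumptions, not part of our code): it just shows the idea of matching each new detection to the nearest existing track so a face keeps a stable ID across frames.

```python
import math

class CentroidTracker:
    """Hypothetical sketch: assign a stable ID to each detected face by
    matching new detections to existing tracks via nearest-centroid distance."""

    def __init__(self, max_distance=75.0):
        self.next_id = 0
        self.tracks = {}              # track_id -> (x, y) centroid
        self.max_distance = max_distance

    def update(self, boxes):
        """boxes: list of (x, y, w, h) face bounding boxes for one frame.
        Returns {track_id: centroid} after matching."""
        centroids = [(x + w / 2, y + h / 2) for (x, y, w, h) in boxes]
        assigned = {}
        unmatched = list(range(len(centroids)))
        # Greedily match each existing track to its nearest new detection.
        for tid, (tx, ty) in list(self.tracks.items()):
            best, best_d = None, self.max_distance
            for i in unmatched:
                cx, cy = centroids[i]
                d = math.hypot(cx - tx, cy - ty)
                if d < best_d:
                    best, best_d = i, d
            if best is not None:
                assigned[tid] = centroids[best]
                unmatched.remove(best)
            # else: track lost this frame; it simply isn't carried forward
        # Any leftover detections become brand-new tracks.
        for i in unmatched:
            assigned[self.next_id] = centroids[i]
            self.next_id += 1
        self.tracks = assigned
        return dict(self.tracks)
```

With IDs like these, each IoT event could carry every tracked face rather than a single one. A real system would want something more robust (e.g. IoU matching or a Kalman filter), but this captures the basic assignment step.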