Because the ASL letters J and Z involve motion, I excluded them from the training set of static images.
I spent a significant amount of time, through trial and error, getting AWS Polly MP3s to play on the AWS DeepLens. For anyone else struggling with this, in summary: add the ggc_user to the audio group, and add the audio device resources to the Greengrass group (attaching them to the Lambda functions therein); repeat the resource step after every deploy!
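The group change above can be sketched as follows. This is a sketch run on the DeepLens device itself, assuming the default `ggc_user` account that AWS IoT Greengrass creates; the device paths are illustrative:

```shell
# Add the Greengrass Lambda user to the audio group so its Lambda
# functions are allowed to open the sound device (default ggc_user account).
sudo usermod -aG audio ggc_user

# Check that the membership took effect.
groups ggc_user
```

The second half of the fix lives in the AWS console rather than on the device: the sound device (e.g. a local device resource such as /dev/snd) must be added as a local resource in the Greengrass group and attached to the Lambda function with read/write access, and in my experience this has to be re-checked after every deploy.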
Accomplishments that I'm proud of
I still can’t believe it works! It’s like magic! My wife came up with the idea, and I thought it was too big to work. Whilst I was confident I could master the AWS DeepLens hardware, I was concerned that I lacked the experience to create the appropriate model. Thankfully, Amazon SageMaker takes care of all of the machine learning heavy lifting, which meant I could focus on collating training data (and getting audio to play on the AWS DeepLens device).
What I learned
What's next for DeepLens ASLens