Build your own object classification model in SageMaker and import it to DeepLens
We are excited to launch a new feature for AWS DeepLens that allows you to import models trained in Amazon SageMaker directly into the AWS DeepLens console with one click. This feature is available as of AWS DeepLens software version 1.2.3. You can update your AWS DeepLens software by rebooting your device or by running sudo apt-get install awscam in the Ubuntu terminal. For this tutorial, you also need MXNet version 0.12, which you can install with sudo pip3 install mxnet==0.12.1. You can access the Ubuntu terminal via SSH or micro-HDMI.
To demonstrate this capability, we will walk you through building a model that classifies common objects. The model is based on the Caltech-256 dataset and is trained using the ResNet network. By the end of this walkthrough, you will have built an object classifier that can identify 256 commonly found objects.
To build your own model, you first need a dataset. You can bring your own or use an existing one. In this tutorial, we show you how to build an object classification model in Amazon SageMaker using the Caltech-256 image classification dataset.
To follow along with this tutorial, ensure that your AWS DeepLens software is updated to version 1.2.3 or later and that your MXNet version is 0.12.
To build this model in Amazon SageMaker, visit the Amazon SageMaker console (https://console.aws.amazon.com/sagemaker/home?region=us-east-1#/dashboard).
Create a notebook instance: provide a name for your notebook instance and select an instance type (for example, ml.t2.medium). Choose to create a new IAM role or use an existing one, then choose Create notebook instance.
Once your notebook instance is ready, open it. You will see the Jupyter notebook hosted on your instance.
Create a new notebook by choosing New and selecting the conda_mxnet_p36 kernel.
Let’s start by importing the necessary packages. Importing the boto3 SDK for Python allows you to access AWS services such as Amazon S3. get_execution_role allows Amazon SageMaker to assume the role created during instance creation and access resources on your behalf.
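The setup cell might look like the following sketch. Note that get_execution_role() only works inside a SageMaker notebook, so a placeholder ARN (an assumption, not a real role) is used as a fallback here.

```python
# Inside a SageMaker notebook, these imports succeed and get_execution_role()
# returns the IAM role attached to the notebook instance.
try:
    import boto3
    from sagemaker import get_execution_role

    role = get_execution_role()
except Exception:
    # Placeholder so the sketch also runs outside SageMaker (illustrative ARN).
    role = "arn:aws:iam::111122223333:role/service-role/SageMakerRole"
```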
Next we define the bucket that hosts the dataset, in this example Caltech-256. Create a bucket in Amazon S3. The name for your bucket must begin with the prefix ‘deeplens’. In this example, the bucket is ‘deeplens-imageclassification’.
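In the notebook this is a one-line definition; the train/validation prefixes below are an assumed layout for the rest of the walkthrough.

```python
# The bucket name must begin with "deeplens" so the AWS DeepLens console
# can later find the trained model artifacts.
s3_bucket = "deeplens-imageclassification"
train_prefix = "train"            # assumed prefix for training data
validation_prefix = "validation"  # assumed prefix for validation data
```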
Next we define the containers. These are Docker containers hosted in Amazon ECR, and the training job defined in this notebook will run in the image classification container for your region.
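The container lookup is a simple region-to-image map, sketched below. The account IDs are deliberately left as placeholders; look up the current registry paths for the built-in image classification algorithm in the Amazon SageMaker documentation.

```python
# Region -> ECR image path for the built-in image classification algorithm.
# <account-id> is a placeholder, not a real registry account.
containers = {
    "us-east-1": "<account-id>.dkr.ecr.us-east-1.amazonaws.com/image-classification:latest",
    "us-west-2": "<account-id>.dkr.ecr.us-west-2.amazonaws.com/image-classification:latest",
    "eu-west-1": "<account-id>.dkr.ecr.eu-west-1.amazonaws.com/image-classification:latest",
}

region = "us-east-1"
training_image = containers[region]
```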
Next, let’s import the dataset and upload it to your S3 bucket. We will download the train and validation sets for Caltech-256 and upload them to the S3 bucket created earlier.
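A sketch of this step is shown below. The file names and download location are those used by the public Caltech-256 sample notebook (verify them against the notebook you run), and the S3 layout is assumed.

```python
# Locations of the pre-packed Caltech-256 RecordIO files used by the sample
# notebook (assumed; verify against the notebook you are running).
base_url = "http://data.mxnet.io/data/caltech-256/"
train_file = "caltech-256-60-train.rec"
val_file = "caltech-256-60-val.rec"

def s3_destinations(bucket):
    """Build the S3 URIs the training job will read from (illustrative layout)."""
    return {
        "train": "s3://{}/train/{}".format(bucket, train_file),
        "validation": "s3://{}/validation/{}".format(bucket, val_file),
    }

# In the notebook you would download each file (for example with
# urllib.request.urlretrieve) and upload it to S3 with boto3's upload_file.
```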
Next, let’s define the network that we will use to train on the dataset. For this tutorial, we will use the ResNet network, the default image classification model in Amazon SageMaker. In this step, you can customize the network’s hyperparameters to train on your dataset.
num_layers lets you define the network depth. ResNet supports multiple network depths, for example 18, 34, 50, 101, 152, and 200. For this example, we choose a network depth of 50.
Next we need to specify the input image dimensions. The images in this dataset are 224 x 224 with 3 color channels (RGB).
Next we specify the number of samples in the training set. For Caltech-256, the number of training samples is 15,420.
Next, we specify the number of output classes for the model. For Caltech-256, this is 257: the 256 object categories plus one clutter class.
Batch size refers to the number of training examples used in one iteration; you can tune this number based on the compute resources available to you. An epoch is one complete pass of the entire dataset through the network. The learning rate determines how fast the weights (coefficients) of your network change. You can customize the batch size, the number of epochs, and the learning rate; see the definitions at https://docs.aws.amazon.com/sagemaker/latest/dg/IC-Hyperparameter.html.
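Collected together, the hyperparameters described above might look like this. The batch size, epoch count, and learning rate are illustrative starting points, not values prescribed by the original post.

```python
# Hyperparameters for the built-in image classification algorithm.
hyperparameters = {
    "num_layers": 50,               # ResNet depth chosen above
    "image_shape": "3,224,224",     # channels,height,width
    "num_training_samples": 15420,  # training images in the Caltech-256 split
    "num_classes": 257,             # 256 object categories + one clutter class
    "mini_batch_size": 64,          # assumed; scale to your instance
    "epochs": 2,                    # assumed; more epochs -> better accuracy
    "learning_rate": 0.01,          # assumed starting learning rate
}
```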
To train a model in Amazon SageMaker, you create a training job. The training job includes the following information:
- The URL of the Amazon Simple Storage Service (Amazon S3) bucket where you’ve stored the training data.
- The compute resources that you want Amazon SageMaker to use for model training. Compute resources are ML compute instances that are managed by Amazon SageMaker.
- The URL of the S3 bucket where you want to store the output of the job.
- The Amazon Elastic Container Registry path where the training code is stored.
In this sample, we use the default image classification algorithm (ResNet) built into Amazon SageMaker. The checkpoint_frequency parameter determines how often model files are saved during training. Because we only need the final model file for AWS DeepLens, we set it equal to the number of epochs.
Make a note of job_name_prefix, S3OutputPath, InstanceType, and InstanceCount.
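The training-job request body ties the pieces above together. The sketch below is a hedged outline of the structure create_training_job expects; every name marked as a placeholder or assumption should be replaced with your own values, and the input data channels are omitted for brevity.

```python
# Sketch of the create_training_job request body (values marked assumed or
# placeholder are illustrative, not from the original post).
job_name_prefix = "deeplens-imageclassification"       # assumed prefix
training_params = {
    "TrainingJobName": job_name_prefix + "-job-001",   # assumed suffix
    "AlgorithmSpecification": {
        "TrainingImage": "<ecr-path-for-your-region>", # placeholder
        "TrainingInputMode": "File",
    },
    "RoleArn": "<sagemaker-execution-role-arn>",       # placeholder
    "OutputDataConfig": {
        "S3OutputPath": "s3://deeplens-imageclassification/output"
    },
    "ResourceConfig": {
        "InstanceType": "ml.p2.xlarge",  # GPU instance; assumed choice
        "InstanceCount": 1,
        "VolumeSizeInGB": 50,
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 360000},
    "HyperParameters": {
        # The API expects every hyperparameter value as a string.
        "num_layers": "50",
        "image_shape": "3,224,224",
        "num_training_samples": "15420",
        "num_classes": "257",
        "epochs": "2",
        "checkpoint_frequency": "2",  # equal to epochs: keep only the final model
    },
    # (InputDataConfig with the S3 train/validation channels is omitted here.)
}

# The dict would then be passed to boto3:
# boto3.client("sagemaker").create_training_job(**training_params)
```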
In the next step, you can check the status of the training job in Amazon CloudWatch.
To check the status, go to the SageMaker dashboard and choose Jobs. Select the job you created and scroll down to the Monitor section of the job details page. There you will find a link to the logs, which opens CloudWatch.
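You can also poll the job status programmatically. The sketch below defines (but does not call) a helper; it requires AWS credentials, so boto3 is imported lazily inside the function.

```python
def get_training_job_status(job_name):
    """Return the current status of a SageMaker training job.

    Sketch only: calling this requires AWS credentials and a real job name.
    """
    import boto3  # imported lazily so the sketch can be defined without AWS access

    sm = boto3.client("sagemaker")
    desc = sm.describe_training_job(TrainingJobName=job_name)
    return desc["TrainingJobStatus"]  # e.g. "InProgress", "Completed", "Failed"
```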
Running the notebook creates a model that can be imported directly into AWS DeepLens as a project. Once training is complete, your model is ready to be imported into AWS DeepLens.
Ensure that the MXNet version on your AWS DeepLens device is 0.12. If you need to upgrade, run sudo pip3 install mxnet==0.12.1 in the Ubuntu terminal.
Now log in to the AWS DeepLens console (https://console.aws.amazon.com/deeplens/home?region=us-east-1#projects).
Create a new project:
- Choose Create a new blank project.
- Name the project, for example imageclassification.
- Select Add model. This opens a new page, “Import model to AWS DeepLens.”
- Select Amazon SageMaker trained model. In Model settings, from the Amazon SageMaker training job ID dropdown, select the training job you created earlier. For Model name, enter a name such as imageclassification, and keep the description as image classification.
Go back to the project screen, select the imageclassification model you just imported, and choose Add model. Once the model is added, add a Lambda function by choosing Add function.
To create an AWS DeepLens Lambda function, you can follow the blog post Dive deep into AWS DeepLens Lambda functions and the new model optimizer.
For easy reference, the instructions for the image classification Lambda function are provided below.
To create an inference Lambda function, use the AWS Lambda console and follow the steps below:
- Choose Create function. You will customize this function to run inference for your deep learning model.
- Choose Blueprints
- Search for the greengrass-hello-world blueprint.
- Give your Lambda function the same name as your model e.g. imageclassification_lambda.
- Choose an existing IAM role: AWSDeepLensLambdaRole. You must have created this role as part of the registration process.
- Choose Create function.
- In Function code, make sure the handler is greengrassHelloWorld.function_handler.
- In the greengrassHelloWorld.py file, remove all of the existing code. You will write the code for the inference Lambda function in this file.
- Replace the existing code with the inference code.
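The full inference function from the original post is not reproduced here, so the following is only a hedged sketch of its typical structure. It assumes the awscam and greengrasssdk modules available on the DeepLens device (awscam.Model, awscam.getLastFrame, doInference, parseResult); the model file path, topic name, and top-5 logic are illustrative assumptions.

```python
def lambda_handler(event, context):
    """Dummy entry point; the long-running loop below does the real work."""
    return

def greengrass_infer_loop():
    # These modules exist only on the AWS DeepLens device / Greengrass runtime,
    # so they are imported lazily inside this sketch.
    import os
    import awscam
    import cv2
    import greengrasssdk

    client = greengrasssdk.client("iot-data")
    iot_topic = "$aws/things/{}/infer".format(os.environ["AWS_IOT_THING_NAME"])

    # Path where the imported model artifacts land on the device (assumed).
    model_path = "/opt/awscam/artifacts/image-classification.xml"
    model = awscam.Model(model_path, {"GPU": 1})
    model_type = "classification"
    input_size = 224  # must match the image_shape used in training

    while True:
        ret, frame = awscam.getLastFrame()
        if not ret:
            continue
        resized = cv2.resize(frame, (input_size, input_size))
        inference = model.doInference(resized)
        parsed = model.parseResult(model_type, inference)
        # Publish the top predictions (label indices map to caltech256_labels.txt).
        top_k = sorted(parsed[model_type], key=lambda x: x["prob"], reverse=True)[:5]
        client.publish(topic=iot_topic, payload=str(top_k))

# On the device, the loop is started when Greengrass loads the function:
# greengrass_infer_loop()
```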
- To add the labels text file to your Lambda function: in the Function code block, choose File, then New File. Add the label list, then save the file as caltech256_labels.txt.
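The labels file maps class indices to class names. The “index name” per-line format below is an assumption for illustration, showing the first few Caltech-256 categories, together with a small parser you might use in the inference function.

```python
# Illustrative first lines of caltech256_labels.txt (assumed "index name" format).
SAMPLE_LABELS = """\
0 ak47
1 american-flag
2 backpack
"""

def load_labels(text):
    """Parse 'index name' lines into an {index: name} dict."""
    labels = {}
    for line in text.strip().splitlines():
        idx, name = line.split(maxsplit=1)
        labels[int(idx)] = name
    return labels
```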
- Save the Lambda function.
Now publish the Lambda function: from the Actions dropdown, select Publish new version.
- In the dialog box that appears, you can leave the version description blank and choose Publish. This publishes the Lambda function.
- Once done, add the Lambda function to the project and choose Create new project to finish creating the project.
You will see your project created in the Projects list.
Once the project is created, select it and choose Deploy to device. Choose your target AWS DeepLens device, then choose Review.
Now you are ready to deploy your own object classification model. Choose Deploy.
Congratulations! You have built your own object classification model based on a dataset and deployed it to AWS DeepLens for inference.
You can also refer to the notebook for training the dataset in Amazon SageMaker on Github.
About the Authors
Mahendra Bairagi is a Senior Specialist Solutions Architect for IoT, helping customers build intelligence everywhere. He has extensive experience with AWS IoT and AI services, along with expertise in AWS analytics, mobile, and serverless services. Prior to joining Amazon Web Services, he had a long tenure as an entrepreneur, IT leader, enterprise architect, and software developer at an AWS partner organization and Fortune 500 corporations. In his spare time, he coaches a junior Olympics archery development team, builds bots and RC planes, and helps animal shelters.
Jyothi Nookula is a Senior Product Manager for AWS DeepLens. She loves to build products that delight her customers. In her spare time, she loves to paint and host charity fundraisers for her art exhibitions.