Starting January 31, 2024, you will no longer be able to access AWS DeepLens through the AWS Management Console, manage DeepLens devices, or access any projects you have created. To export your projects and learn more, use this step-by-step guide.
End of Life
Q: What happens to my AWS DeepLens resources after the end of life (EOL) date?
After January 31, 2024, all references to AWS DeepLens models, projects, and device information are deleted from the AWS DeepLens service. You can no longer discover or access the AWS DeepLens service from the AWS Management Console, and applications that call the AWS DeepLens API no longer work.
Q: Will I be billed for AWS DeepLens resources remaining in my account after the EOL date?
Resources created by AWS DeepLens, such as Amazon S3 buckets, AWS Lambda functions, AWS IoT things, and AWS Identity and Access Management (IAM) roles, continue to exist in their respective services after January 31, 2024. To avoid being billed after AWS DeepLens is no longer supported, follow these steps to delete the resources.
Q: How do I delete my AWS DeepLens resources?
To delete the resources used by AWS DeepLens and learn how to restore your AWS DeepLens device to factory settings, see Delete your AWS DeepLens device resources.
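Before deleting anything, it helps to identify which resources in your account were created by AWS DeepLens. A minimal sketch, assuming (as is typical but not guaranteed) that DeepLens-created resources carry a "deeplens" prefix in their names, e.g. "deeplens-sagemaker-..." S3 buckets or "deeplens_..." Lambda functions; verify the actual names in your account before deleting:

```python
# Sketch: flag likely DeepLens-created resources by name.
# Assumption: DeepLens-created resources use a "deeplens" prefix
# (e.g. "deeplens-sagemaker-..." buckets, "deeplens_..." functions).
# Always confirm in your account before deleting anything.

def deeplens_leftovers(names):
    """Return the names that look like DeepLens-created resources."""
    return [n for n in names
            if n.lower().replace("_", "-").startswith("deeplens")]

buckets = ["deeplens-sagemaker-demo", "my-data-bucket"]
functions = ["deeplens_greengrass_infer", "billing-report"]

print(deeplens_leftovers(buckets))    # ['deeplens-sagemaker-demo']
print(deeplens_leftovers(functions))  # ['deeplens_greengrass_infer']
```

The actual deletion is done per service (S3, Lambda, IAM, AWS IoT), as described in the linked guide.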
Q: Can I deploy my AWS DeepLens projects after the end of life (EOL) date?
You can deploy AWS DeepLens projects until January 31, 2024. After that date, you no longer have access to the AWS DeepLens console or API, and any application that calls the AWS DeepLens API does not work.
Q: Will my AWS DeepLens device continue to receive security updates?
AWS DeepLens is not updated after January 31, 2024. While some applications deployed on AWS DeepLens devices might continue to run after the EOL date, AWS does not provide remedies related to and is not responsible for any issue arising from AWS DeepLens software or hardware.
Q: How can I continue to get hands-on experience with AWS AI/ML?
We suggest you try our other hands-on machine learning tools. With AWS DeepRacer, use a cloud-based 3D racing simulator to create reinforcement learning models for an autonomous 1/18th scale race car. Learn and experiment in a no-setup, free development environment with Amazon SageMaker Studio Lab. Automate your image and video analysis with Amazon Rekognition, or use AWS Panorama to improve your operations with computer vision at the edge.
Q: What is AWS DeepLens?
AWS DeepLens is the world’s first deep-learning-enabled video camera for developers. It helps developers of all skill levels grow their machine learning skills through hands-on computer vision tutorials, example code, and pre-built models.
Q: How is AWS DeepLens different from other video cameras in the market?
AWS DeepLens is the world's first video camera optimized to run machine learning models and perform inference on the device. It comes with sample projects that you can deploy to your AWS DeepLens in less than 10 minutes. You can run the sample projects as is, connect them with other AWS services, train a model in Amazon SageMaker and deploy it to AWS DeepLens, or extend the functionality by triggering an AWS Lambda function when an action takes place. You can also apply more advanced analysis in the cloud using Amazon Rekognition. AWS DeepLens provides the building blocks for your machine learning needs.
Q: What sample projects are available?
There are 7 sample projects available:
1. Object Detection
2. Hot Dog Not Hot Dog
3. Cat and Dog
4. Artistic Style Transfer
5. Activity Detection
6. Face Detection
7. Bird Classification
Q: Does AWS DeepLens include Alexa?
No, AWS DeepLens does not have Alexa or any far-field audio capabilities. However, AWS DeepLens has a 2D microphone array that is capable of running custom audio models, though additional programming is required.
Q: What are the product specifications of the device?
- Intel Atom® Processor
- Gen9 graphics
- Ubuntu OS 16.04 LTS
- 100 GFLOPS performance
- Dual band Wi-Fi
- 8GB RAM
- 16GB storage
- Expandable storage via microSD card
- 4MP camera with MJPEG
- H.264 encoding at 1080p resolution
- 2 USB ports
- Micro HDMI
- Audio out
Q: Why do I have "v1.1" marked on the bottom of my device?
AWS DeepLens (2019 Edition) is marked with “v1.1” on the bottom of the device. We have made significant improvements to the user experience, including onboarding, tutorials, and additional sensor compatibility, such as support for the Intel RealSense depth sensor.
The original AWS DeepLens cannot be upgraded to v1.1 via software updates. Some of the device modifications, including the simplified onboarding, were hardware changes.
Q: What deep learning frameworks can I run on the device?
AWS DeepLens (2019 Edition) is optimized for Apache MXNet, TensorFlow and Caffe.
Q: What kind of performance can I expect with AWS DeepLens?
Performance is measured in images inferred per second and latency. Different models have different inference rates. The baseline inference performance is 14 images/second on AlexNet and 5 images/second on ResNet-50 for a batch size of 1. Latency depends on the characteristics of the network that the DeepLens device is connected to.
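The baseline throughput figures above translate directly into a per-frame inference budget; a quick back-of-envelope conversion:

```python
# Convert the baseline throughput figures above into a per-frame
# inference time budget (milliseconds per image, batch size 1).

def ms_per_frame(images_per_second):
    return 1000.0 / images_per_second

for model, ips in [("AlexNet", 14), ("ResNet-50", 5)]:
    print("%s: %d images/s = %.0f ms/frame" % (model, ips, ms_per_frame(ips)))
# AlexNet: 14 images/s = 71 ms/frame
# ResNet-50: 5 images/s = 200 ms/frame
```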
Q: What MXNet network architecture layers does AWS DeepLens support?
AWS DeepLens offers support for 20 different network architecture layers. The layers supported are:
Q: What comes in the box and how do I get started?
Inside the box, developers will find a Getting Started guide, the AWS DeepLens device, a region-specific power cord and adapter, a USB cable, and a 32GB microSD card. Setup and configuration of the DeepLens device can be done in minutes using the AWS DeepLens console and by configuring the device through a browser on your laptop or PC.
There are three 10-Minute Tutorials designed to help guide you through getting started:
1. Create and Deploy a Project
2. Extend a Project
3. Build an AWS DeepLens Project with Amazon SageMaker
Q: Why is a USB port marked as registration?
On AWS DeepLens (2019 Edition), the USB port marked as registration is used during the onboarding process to register your AWS DeepLens to your AWS account.
The registration port is configured as a USB slave (device) port, so it cannot be used to connect a keyboard or other host-mode peripherals. If you need more ports, we recommend using a USB hub.
Q: Can I train my models on the device?
No. AWS DeepLens is capable of running inference, or predictions, using trained models. You can train your models in Amazon SageMaker, a machine learning platform for training and hosting models. AWS DeepLens offers a simple one-click deploy feature to publish trained models from Amazon SageMaker.
Q: What AWS services are integrated with AWS DeepLens?
DeepLens is preconfigured for integration with AWS IoT Greengrass, Amazon SageMaker, and Amazon Kinesis Video Streams. You can also integrate AWS DeepLens with many other AWS services, such as Amazon S3, AWS Lambda, Amazon DynamoDB, and Amazon Rekognition.
Q: Can I SSH into AWS DeepLens?
Yes. We have designed AWS DeepLens to be easy to use, yet accessible for advanced developers. You can SSH into the device using the command: ssh aws_cam@<device-ip-address>
Q: What programming languages are supported by AWS DeepLens?
You can define and run models on the camera data stream locally in Python 2.7.
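On the device, project code typically grabs a camera frame, runs the deployed model, and post-processes a mapping of label index to probability. A self-contained sketch of that post-processing step, shown in Python 3 for clarity; the `{label_index: probability}` output shape and the label map are illustrative assumptions, not the actual on-device API:

```python
# Post-process a model's raw output into the top-k labels.
# The {label_index: probability} dict shape and the label map below are
# illustrative assumptions; on-device code would receive the raw output
# from the DeepLens inference runtime.

def top_k(raw_output, labels, k=2):
    """Return the k most probable (label, probability) pairs."""
    ranked = sorted(raw_output.items(), key=lambda kv: kv[1], reverse=True)
    return [(labels[i], p) for i, p in ranked[:k]]

labels = {0: "cat", 1: "dog", 2: "hot dog"}
raw = {0: 0.05, 1: 0.15, 2: 0.80}
print(top_k(raw, labels))  # [('hot dog', 0.8), ('dog', 0.15)]
```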
Q: Do I need to be connected to the internet to run the models?
No. You can run models that you have deployed to AWS DeepLens without being connected to the internet; you only need internet access to deploy the model from the cloud to the device initially. After the model is transferred, AWS DeepLens performs inference locally without requiring cloud connectivity. However, if your project has components that interact with the cloud, those components will need an internet connection.
Q: Can I run my own custom models on AWS DeepLens?
Yes. You can create your own project from scratch, using Amazon SageMaker to prepare data and train a model in a hosted notebook, and then publish the trained model to your AWS DeepLens for testing and refinement. You can also import an externally trained model into AWS DeepLens by specifying the S3 location of the model architecture and network weights files.
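For an externally trained MXNet model, checkpoints conventionally save the architecture as a `<prefix>-symbol.json` file and the weights as a `<prefix>-NNNN.params` file. A hedged sketch of composing the two S3 locations you would specify; the bucket and prefix names here are made-up examples:

```python
# Compose the S3 locations of an externally trained MXNet model's
# architecture and weights files. MXNet checkpoints conventionally save
# "<prefix>-symbol.json" (architecture) and "<prefix>-NNNN.params"
# (weights); the bucket and key prefix below are made-up examples.

def model_artifact_uris(bucket, prefix, name, epoch):
    base = "s3://{}/{}/{}".format(bucket, prefix, name)
    return {
        "architecture": "{}-symbol.json".format(base),
        "weights": "{}-{:04d}.params".format(base, epoch),
    }

uris = model_artifact_uris("my-models", "deeplens", "resnet50", epoch=12)
print(uris["architecture"])  # s3://my-models/deeplens/resnet50-symbol.json
print(uris["weights"])       # s3://my-models/deeplens/resnet50-0012.params
```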