ROS + ROS Extensions
Robot Operating System, or ROS, is the most widely used open source robotics software framework, providing software libraries that help you build robotics applications. AWS RoboMaker provides cloud extensions for ROS so that you can offload the resource-intensive computing processes typically required for intelligent robotics applications to the cloud, freeing up local compute resources. AWS RoboMaker supports the following ROS versions: ROS Kinetic, ROS Melodic, and ROS2 Dashing (BETA). Learn more about ROS here.
RoboMaker cloud extensions for ROS include services such as Amazon Kinesis Video Streams for video streaming, Amazon Rekognition for image and video analysis, Amazon Lex for speech recognition, Amazon Polly for speech generation, and Amazon CloudWatch for logging and monitoring. RoboMaker provides each of these cloud services as open source ROS packages, so you can extend the functions on your robot by taking advantage of cloud APIs, all in a familiar software framework.
Learn more about each of the cloud service extensions in the code repository.
ROS1 Cloud Extensions
ROS2 Cloud Extensions
AWS RoboMaker includes sample robotics applications to help you get started quickly. These provide the starting point for the voice command, recognition, monitoring, and fleet management capabilities that are typically required for intelligent robotics applications. Sample applications come with robotics application code (instructions for the functionality of your robot) and simulation application code (defining the environment in which your simulations will run). You can get started with the samples here.
Hello World
Learn the basics of how to structure your robot applications and simulation applications, edit code, build, launch new simulations, and deploy applications to robots. Start from a basic project template including a robot in an empty simulation world.
- Use Gazebo to build new simulation worlds by inserting models, control the camera view, and play and pause a simulation application
- Use Amazon CloudWatch Logs and an Amazon S3 output bucket to view logs for the robot and simulation applications
- Use the terminal to run ROS commands
Navigation and person recognition
Learn about robot navigation, video streaming, face recognition, and text-to-speech. A robot navigates between goal locations in a simulated home and recognizes faces in photos. The robot streams camera images to Amazon Kinesis Video Streams, receives face recognition results from Amazon Rekognition, and speaks the names of recognized people using Amazon Polly.
- Use rqt to view the simulated camera images that are streamed to Amazon Kinesis Video Streams
- Use rviz to view the robot's SLAM (simultaneous localization and mapping) map and its planning state
- Use the terminal to view Amazon Rekognition results
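The goal-to-goal navigation described above can be sketched in plain Python. This is an illustrative helper, not code from the sample application; the goal coordinates and tolerance below are hypothetical values, and a real robot would publish these goals through the ROS navigation stack.

```python
import math

# Hypothetical goal locations in the simulated home (map frame, meters).
GOALS = [(1.0, 2.0), (3.5, 0.5), (-2.0, 1.5)]

GOAL_TOLERANCE = 0.25  # consider a goal reached within 25 cm


def distance(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])


def next_goal(pose, goals, tolerance=GOAL_TOLERANCE):
    """Return the first goal not yet reached from the current pose."""
    for goal in goals:
        if distance(pose, goal) > tolerance:
            return goal
    return None  # all goals visited
```

In the actual sample, each selected goal would be sent to the navigation stack, which plans a path around obstacles using the SLAM map shown in rviz.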
Voice commands
Command a robot through natural language text and voice in a simulated bookstore using Amazon Lex. Default commands include “move <direction> <rate>,” “turn <direction> <rate>,” and “stop.” The robot acknowledges and executes each command.
- Use the terminal to send natural language movement commands to be interpreted by Amazon Lex (e.g. “move forward 5,” “rotate clockwise 5,” and “stop”)
- Use Amazon CloudWatch Metrics to monitor the execution of commands, distances to nearest detected obstacles, and collisions
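A command grammar like the one above can be sketched as a small parser. In the sample, Amazon Lex performs the actual intent and slot resolution; this hypothetical stand-in only shows how a resolved text command might map to a velocity.

```python
import re

# Matches "move/rotate/turn <direction> <rate>" or the bare "stop" command.
COMMAND_RE = re.compile(
    r"^(move|rotate|turn)\s+"
    r"(forward|backward|clockwise|counterclockwise)\s+"
    r"(\d+(?:\.\d+)?)$|^stop$"
)


def to_velocity(command):
    """Map a text command to (linear, angular) velocity components."""
    m = COMMAND_RE.match(command.strip().lower())
    if m is None:
        raise ValueError("unrecognized command: %r" % command)
    if m.group(0) == "stop":
        return (0.0, 0.0)
    verb, direction, rate = m.group(1), m.group(2), float(m.group(3))
    sign = 1.0 if direction in ("forward", "counterclockwise") else -1.0
    if verb == "move":
        return (sign * rate, 0.0)   # linear velocity
    return (0.0, sign * rate)       # angular velocity
```

On a real robot, the resulting tuple would be published as a velocity command for the base controller to execute.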
Robot monitoring
Monitor health and operational metrics for a robot in a simulated bookstore using Amazon CloudWatch Metrics and Amazon CloudWatch Logs. Streamed metrics include speed, distance to nearest obstacle, distance to current goal, collision count, robot CPU utilization, and RAM usage.
- Use Amazon CloudWatch Metrics to view robot health and performance
- Use Gazebo to drop obstacles near the robot and view the resulting metrics
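One of the streamed metrics, distance to the nearest obstacle, is typically derived from laser-scan readings. The sketch below is an illustrative computation, not the sample's actual code; the default range limits are hypothetical and mirror how a ROS LaserScan message flags invalid returns.

```python
import math


def nearest_obstacle_distance(ranges, range_min=0.1, range_max=10.0):
    """Distance to the nearest obstacle from a list of laser-scan readings.

    Readings outside [range_min, range_max], or non-finite values (NaN/inf),
    are treated as invalid returns and dropped.
    """
    valid = [r for r in ranges
             if math.isfinite(r) and range_min <= r <= range_max]
    return min(valid) if valid else float("inf")
```

A monitoring node could publish this value to CloudWatch Metrics on each scan, so dropping an obstacle near the robot in Gazebo shows up immediately as a falling distance metric.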
Object following using reinforcement learning
Teach a robot to track and follow an object through reinforcement learning in simulation using the Coach Reinforcement Learning Library, then deploy this capability to a robot. View the reward metrics in Amazon CloudWatch Metrics to explore how the machine learning model improves over time. Customize your reward function to improve the machine learning algorithm used for training.
- Use Gazebo to experiment with different locations of an object to track
- Use rviz to view the robot as it trains in simulation
- Use the Coach Reinforcement Learning Library to train and evaluate models
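Customizing the reward function is the main lever mentioned above. The function below is a hypothetical example of what an object-following reward might look like, not the sample's actual reward: it assumes the perception pipeline reports the tracked object's horizontal pixel position, and rewards keeping the object centered in the camera frame.

```python
def follow_reward(object_x, image_width=640):
    """Hypothetical reward: highest when the tracked object is centered
    horizontally in the camera frame, falling off linearly toward the
    edges. Returns 0.0 when the object is not detected (object_x is None).
    """
    if object_x is None:
        return 0.0
    center = image_width / 2.0
    offset = abs(object_x - center) / center  # 0 at center, 1 at edge
    return max(0.0, 1.0 - offset)
```

During training, a shaped reward like this is what you would watch improving over time in the CloudWatch Metrics reward graphs.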
Self-driving using reinforcement learning
Teach a racecar to drive in a simulation through reinforcement learning using the Coach Reinforcement Learning Library, then deploy this capability to a robot. View the reward metrics in Amazon CloudWatch Metrics to explore how the machine learning model improves over time. Customize your reward function to improve the machine learning algorithm used for training.
- Use Gazebo and rviz to view the car as it trains in simulation
- Use Amazon CloudWatch Logs to track a car's performance
- Use the Coach Reinforcement Learning Library to train and evaluate models
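For the self-driving sample, a common starting point for a custom reward is to favor staying near the track's center line. The step thresholds and return values below are hypothetical, offered as a sketch of the kind of reward shaping you might experiment with, not the sample's built-in reward.

```python
def centerline_reward(distance_from_center, track_width):
    """Hypothetical reward shaping for the racetrack: full reward near the
    center line, stepped down as the car drifts toward the track edge,
    and zero once it leaves the track.
    """
    half_width = track_width / 2.0
    if distance_from_center >= half_width:
        return 0.0   # off the track
    if distance_from_center <= 0.1 * half_width:
        return 1.0   # tight on the center line
    if distance_from_center <= 0.5 * half_width:
        return 0.5   # drifting, but acceptable
    return 0.1       # near the edge
```

Tuning these thresholds changes how aggressively the learned policy hugs the center line versus cutting corners, which you can observe in the reward metrics during training.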
Simulation worlds
We have created additional simulation worlds you can use with your robots. Use them to test facial recognition, navigation, obstacle avoidance, and machine learning, or modify them for your own scenarios.
Small house
A small house with a kitchen, living room, home gym, and pictures you can customize to test image recognition. There are plenty of obstacles for your robot to navigate.
Bookstore
Navigate among shelves of books in this simulated bookstore. It includes different obstacles, including chairs and tables, for your robot to navigate.
Racetrack
Use machine learning to teach your robot to stay on this racetrack. The racetrack is oval with clear edge markers. Ready, set, race!
Workshops and tutorials
Getting Started Videos
HW developer kits
Building robots and adding advanced functionality requires developers to make many choices. To remove uncertainty and speed development, AWS partners have created a number of robotics development kits that include complete hardware solutions, pre-installed software, and extensive documentation and tutorials.
Intel – UP Squared RoboMaker Developer Kit
The UP Squared RoboMaker Developer Kit is the easiest way to get started with your robotics project powered by AWS RoboMaker. It’s a starter package designed to be a fast and easy way for developers to add artificial intelligence (AI) and vision into their robots. This kit provides a clear tutorial for how to build hardware from the module level and how to use cloud services to shorten development time. Developers have been able to add machine vision into their robots within a single day and build working robotics demos in just a few days. With expertise from Intel, AWS, and AAEON, this kit aims to provide developers a path from prototype to field deployment.
This kit features an UP Squared board with an Intel® Atom™ processor x7-E3950, an Intel® RealSense™ D435i camera, and an Intel® Movidius™ Myriad™ X VPU. It is fully compatible with AWS RoboMaker cloud services and extends the open-source robotics software framework, Robot Operating System (ROS).
Learn about the UP Squared RoboMaker kit and order today
Learn more about the partnerships with Intel and AAEON
NVIDIA – JetBot AI Kit Featuring ROS & AWS RoboMaker
NVIDIA accelerates robotics development from cloud to edge with AWS RoboMaker. Robotics simulation and development can now be easily done in the cloud and deployed across millions of robots and other autonomous machines powered by Jetson. This includes NVIDIA’s open source reference platform, JetBot, powered by the Jetson Nano. JetBot is easy to set up and use, is compatible with many accessories, and includes interactive tutorials showing you how to harness the power of AI to follow objects, avoid collisions, and more. The JetBot AI Kit, powered by NVIDIA and featuring ROS and AWS RoboMaker, includes the board, a complete robot chassis, wheels, and controllers, along with a battery and an 8MP camera. Extensive documentation is provided to accompany the kit.
Qualcomm – Robotics RB3 Platform with integrated support for AWS RoboMaker
Qualcomm Technologies’ support of Amazon Web Services’ AWS RoboMaker is helping to transform innovation in robotics. With high-performance heterogeneous computing, on-device machine learning and computer vision, high-fidelity sensor processing for perception, odometry for localization, mapping, and navigation, and 4G LTE and Wi-Fi connectivity, the Qualcomm Robotics RB3 platform provides developers the tools to build robots that can accelerate innovation, revolutionize logistics, and enhance our daily lives. The Qualcomm Robotics RB3 development kit’s integrated support for AWS RoboMaker helps developers build, test, and deploy intelligent robotics applications at scale, and provides an edge-to-cloud solution that makes building intelligent robotics applications more accessible.
Learn more about the Qualcomm Robotics RB3 kit and buy now
Learn about Qualcomm’s commitment to robotics innovation
Extensive step-by-step developer documentation is available here: https://developer.qualcomm.com/project/aws-robomaker-rb3