Robotics software
Robot Operating System (ROS) is the most widely used open source robotics software framework, providing software libraries that help you build robotics applications. AWS RoboMaker provides cloud extensions for ROS so that you can offload to the cloud the resource-intensive computing processes that intelligent robotics applications typically require, freeing up local compute resources. AWS RoboMaker supports the following ROS versions: ROS Kinetic, ROS Melodic, and ROS 2 Dashing (beta). Learn more about ROS here.
RoboMaker cloud extensions for ROS include services such as Amazon Kinesis Video Streams for video streaming, Amazon Rekognition for image and video analysis, Amazon Lex for speech recognition, Amazon Polly for speech generation, and Amazon CloudWatch for logging and monitoring. RoboMaker provides each of these cloud services as open source ROS packages, so you can extend the functions on your robot by taking advantage of cloud APIs, all in a familiar software framework.
Learn more about each of the cloud service extensions in the code repository.
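Each extension wraps a cloud API behind standard ROS interfaces (topics, services, or actions); the exact interface for each package is documented in its repository. As an illustration of the underlying pattern only, here is a minimal sketch that calls one of those services (Amazon Polly) directly with boto3 from inside a ROS node. The topic name is an assumption, not part of any package's API.

```python
#!/usr/bin/env python
# Illustrative sketch only: the RoboMaker cloud extensions wrap calls like
# this behind ROS interfaces. This node synthesizes speech with Amazon Polly
# for any text published on an assumed topic.
import boto3
import rospy
from std_msgs.msg import String

polly = boto3.client("polly")  # credentials come from the usual AWS config

def on_text(msg):
    # Ask Polly for audio and save it for later playback.
    response = polly.synthesize_speech(
        Text=msg.data, OutputFormat="ogg_vorbis", VoiceId="Joanna"
    )
    with open("/tmp/speech.ogg", "wb") as f:
        f.write(response["AudioStream"].read())
    rospy.loginfo("Synthesized speech for: %s", msg.data)

if __name__ == "__main__":
    rospy.init_node("polly_sketch")
    rospy.Subscriber("/text_to_speak", String, on_text)  # assumed topic name
    rospy.spin()
```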
ROS1 Cloud Extensions
Amazon Kinesis Video Streams & Amazon Rekognition ROS Extension
Sample applications
AWS RoboMaker includes sample robotics applications to help you get started quickly. They provide a starting point for the voice command, recognition, monitoring, and fleet management capabilities that intelligent robotics applications typically require. Each sample comes with robot application code (which defines the functionality of your robot) and simulation application code (which defines the environment in which your simulations run). You can get started with the samples here.
Hello world
Learn the basics of how to structure your robot applications and simulation applications, edit code, build, launch new simulations, and deploy applications to robots. Start from a basic project template that includes a robot in an empty simulation world; a minimal node sketch follows the list below.
- Use Gazebo to build new simulation worlds by inserting models, to control the camera view, and to play and pause a simulation application
- Use Amazon CloudWatch Logs and an Amazon S3 output bucket to view logs for the robot and simulation applications
- Use the terminal to run ROS commands
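If ROS itself is new to you, the sketch below shows the shape of a minimal ROS 1 (rospy) node like the ones this sample is built from; the sample's actual source lives in its repository.

```python
#!/usr/bin/env python
# Minimal ROS 1 (rospy) node: publishes a greeting once per second.
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node("hello_world")
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rate = rospy.Rate(1)  # 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="Hello, RoboMaker!"))
        rate.sleep()

if __name__ == "__main__":
    main()
```

You can watch the output with `rostopic echo /chatter` from the terminal.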
Navigation and person recognition
Learn about robot navigation, video streaming, face recognition, and text-to-speech. A robot navigates between goal locations in a simulated home and recognizes faces in photos. The robot streams camera images to Amazon Kinesis Video Streams, receives face recognition results from Amazon Rekognition, and speaks the names of recognized people using Amazon Polly. A sketch of the recognition step follows the list below.
- Use rqt to view the simulated camera images that are streamed to Amazon Kinesis Video Streams
- Use rviz to view the robot's SLAM (simultaneous localization and mapping) map and its planning state
- Use the terminal to view Amazon Rekognition results
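As a rough sketch of the recognition step, the node below forwards camera frames to Amazon Rekognition with boto3. The camera topic name is an assumption, and the sample itself streams frames through Amazon Kinesis Video Streams rather than calling Rekognition directly.

```python
#!/usr/bin/env python
# Sketch: detect faces in incoming camera frames with Amazon Rekognition.
import boto3
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
rekognition = boto3.client("rekognition")

def on_image(msg):
    # Convert the ROS image to JPEG bytes and ask Rekognition for faces.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    _, jpeg = cv2.imencode(".jpg", frame)
    result = rekognition.detect_faces(Image={"Bytes": jpeg.tobytes()})
    rospy.loginfo("Detected %d face(s)", len(result["FaceDetails"]))

if __name__ == "__main__":
    rospy.init_node("face_detector_sketch")
    # Topic name is an assumption; check your robot's camera configuration.
    # queue_size=1 drops stale frames rather than queueing API calls.
    rospy.Subscriber("/camera/rgb/image_raw", Image, on_image, queue_size=1)
    rospy.spin()
```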
Voice commands
Command a robot through natural language text and voice in a simulated bookstore using Amazon Lex. Default commands include “move <direction> <rate>,” “turn <direction> <rate>,” and “stop.” The robot acknowledges and executes each command. A sketch of the Lex call follows the list below.
- Use the terminal to send natural language movement commands to be interpreted by Amazon Lex (e.g. “move forward 5,” “rotate clockwise 5,” and “stop”)
- Use Amazon CloudWatch Metrics to monitor the execution of commands, distances to nearest detected obstacles, and collisions
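The sketch below shows the kind of call involved: text goes to Amazon Lex, and the returned intent and slots are mapped to a velocity command. The bot, intent, and slot names are placeholders, not the sample's actual configuration.

```python
#!/usr/bin/env python
# Sketch: parse a text command with Amazon Lex and publish a velocity.
import boto3
import rospy
from geometry_msgs.msg import Twist

lex = boto3.client("lex-runtime")

def command_to_twist(text, pub):
    # post_text returns the matched intent and its filled slots.
    resp = lex.post_text(
        botName="RobotCommandBot",  # placeholder bot name
        botAlias="prod",            # placeholder alias
        userId="robot-1",
        inputText=text,
    )
    twist = Twist()
    if resp.get("intentName") == "move":  # placeholder intent name
        # "rate" is a placeholder slot holding the requested speed.
        twist.linear.x = float((resp.get("slots") or {}).get("rate") or 0)
    pub.publish(twist)

if __name__ == "__main__":
    rospy.init_node("lex_command_sketch")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.sleep(1.0)  # give the publisher time to connect
    command_to_twist("move forward 5", pub)
```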
Robot monitoring
Monitor health and operational metrics for a robot in a simulated bookstore using Amazon CloudWatch Metrics and Amazon CloudWatch Logs. Streamed metrics include speed, distance to nearest obstacle, distance to current goal, collision count, robot CPU utilization, and RAM usage. A metric-publishing sketch follows the list below.
- Use Amazon CloudWatch Metrics to view robot health and performance
- Use Gazebo to drop obstacles near the robot and view the resulting metrics
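A minimal sketch of the metric-publishing side, assuming psutil is available for CPU sampling; the namespace and metric name are placeholders rather than the sample's real configuration.

```python
#!/usr/bin/env python
# Sketch: publish one robot health metric to Amazon CloudWatch each minute.
import boto3
import psutil  # assumed available for CPU sampling
import rospy

cloudwatch = boto3.client("cloudwatch")

def report_cpu(_event):
    cloudwatch.put_metric_data(
        Namespace="RobotMonitoringSketch",  # placeholder namespace
        MetricData=[{
            "MetricName": "CPUUtilization",
            "Value": psutil.cpu_percent(),
            "Unit": "Percent",
        }],
    )

if __name__ == "__main__":
    rospy.init_node("cloudwatch_metrics_sketch")
    rospy.Timer(rospy.Duration(60), report_cpu)  # one sample per minute
    rospy.spin()
```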
Object following using reinforcement learning
Teach a robot to track and follow an object through reinforcement learning in simulation using the Coach Reinforcement Learning Library, then deploy this capability to a robot. View the reward metrics in Amazon CloudWatch Metrics to explore how the machine learning model improves over time. Customize your reward function to improve the machine learning algorithm used for training; a hypothetical reward sketch follows the list below.
- Use Gazebo to experiment with different locations of an object to track
- Use rviz to view the robot as it trains in simulation
- Use the Coach Reinforcement Learning Library to train and evaluate models
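For example, a customized reward might look like the hypothetical sketch below, which rewards the robot for holding a target following distance. The function signature and inputs are assumptions, not the sample's actual interface.

```python
# Hypothetical reward for object following: `distance` is the assumed
# robot-to-object distance in meters. The sample defines its own reward
# inside its training code; this only illustrates the idea.
def follow_reward(distance, target=1.0, tolerance=0.25):
    """Reward staying near a fixed following distance from the object."""
    error = abs(distance - target)
    if error < tolerance:
        return 1.0                # close to the desired gap: full reward
    return max(0.0, 1.0 - error)  # decay as the gap error grows
```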
Self-driving using reinforcement learning
Teach a racecar to drive in a simulation through reinforcement learning using the Coach Reinforcement Learning Library, then deploy this capability to a robot. View the reward metrics in Amazon CloudWatch Metrics to explore how the machine learning model improves over time. Customize your reward function to improve the machine learning algorithm used for training; a hypothetical reward sketch follows the list below.
- Use Gazebo and rviz to view the car as it trains in simulation
- Use Amazon CloudWatch Logs to track a car's performance
- Use the Coach Reinforcement Learning Library to train and evaluate models
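As with object following, a customized reward might resemble this hypothetical sketch, which rewards the car for staying near the track's center line. The inputs are assumptions, not the sample's actual interface.

```python
# Hypothetical reward for lane keeping: both inputs are assumed to be in
# meters. The sample's actual reward lives in its training code.
def track_reward(distance_from_center, track_width):
    """Reward the car more the closer it stays to the center line."""
    half_width = track_width / 2.0
    if distance_from_center >= half_width:
        return 0.0  # off the track: no reward
    # Linear falloff from 1.0 at the center to 0.0 at the edge.
    return 1.0 - distance_from_center / half_width
```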
Simulation assets
We have created additional simulation environments you can use with your robots. They can be used to test facial recognition, navigation, obstacle avoidance, and machine learning, and they can be modified for your own scenarios.
House

A small house with a kitchen, living room, and home gym, plus pictures you can customize to test image recognition. There are plenty of obstacles for your robot to navigate.
Bookstore

Navigate among shelves of books in this simulated bookstore. It includes obstacles such as chairs and tables for your robot to navigate around.
Racetrack

Use machine learning to teach your robot to stay on this racetrack. The racetrack is oval with clear edge markers. Ready, set, race!
Workshops and tutorials
Hello World! Getting Started with AWS RoboMaker
In this workshop, you will learn how to get started with AWS RoboMaker to build smart robotic applications. You will also have the opportunity to manage and deploy robot applications both in a simulated environment and to a production robot (requires a TurtleBot 3 Burger).
Finding Martians with AWS RoboMaker and the JPL Open Source Rover
In this workshop, you will become familiar with AWS RoboMaker and will learn to simulate the NASA JPL Mars Open Source Rover. In doing so, you will learn to integrate AWS RoboMaker with services such as machine learning, monitoring, and analytics so your Mars Rover can stream data, navigate, communicate, comprehend, and learn.
How to Train a Robot Using Reinforcement Learning
Run ROS on multiple machines with AWS RoboMaker
In many cases, a robotics developer or researcher will need to run Robot Operating System (ROS) on multiple machines. In this tutorial, you will learn how to set up ROS on a virtual machine running on AWS, how to connect your physical robot to the virtual machine, and how to create a multi-machine distributed ROS system. Doing so will streamline development of your robotic application. A configuration sketch follows below.
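As a rough sketch of the end state, the script below points one machine at a remote ROS master and checks that it is reachable. In practice you export these variables in each machine's shell before launching nodes, and both IP addresses here are placeholders.

```python
#!/usr/bin/env python
# Sketch: configure this machine to use a ROS master on a remote AWS VM.
import os
import rosgraph

# Normally exported in the shell; set here for illustration only.
os.environ["ROS_MASTER_URI"] = "http://10.0.0.5:11311"  # placeholder VM IP
os.environ["ROS_IP"] = "192.168.1.42"                   # placeholder robot IP

# rosgraph reads ROS_MASTER_URI from the environment by default.
if rosgraph.is_master_online():
    print("Connected to the remote ROS master.")
else:
    print("Master unreachable: check networking, ROS_MASTER_URI, and ROS_IP.")
```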