Documentation

Robotics software

Robot Operating System, or ROS, is the most widely used open source robotics software framework, providing software libraries that help you build robotics applications. AWS RoboMaker provides cloud extensions for ROS so that you can offload to the cloud the resource-intensive computing processes that intelligent robotics applications typically require, freeing up local compute resources. AWS RoboMaker supports the following ROS versions: ROS Kinetic, ROS Melodic, and ROS2 Dashing (beta). Learn more about ROS here.

RoboMaker cloud extensions for ROS include services such as Amazon Kinesis Video Streams for video streaming, Amazon Rekognition for image and video analysis, Amazon Lex for speech recognition, Amazon Polly for speech generation, and Amazon CloudWatch for logging and monitoring. RoboMaker provides each of these cloud services as open source ROS packages, so you can extend the functions on your robot by taking advantage of cloud APIs, all in a familiar software framework.

Learn more about each of the cloud service extensions in the code repository.

ROS1 Cloud Extensions

ROS2 Cloud Extensions

Sample applications

AWS RoboMaker includes sample robotics applications to help you get started quickly. These provide starting points for the voice command, recognition, monitoring, and fleet management capabilities that intelligent robotics applications typically require. Sample applications come with robot application code (defining the functionality of your robot) and simulation application code (defining the environment in which your simulations run). You can get started with the samples here.

Hello world

Learn the basics of how to structure your robot applications and simulation applications, edit code, build, launch new simulations, and deploy applications to robots. Start from a basic project template including a robot in an empty simulation world.

  • Use Gazebo to build new simulation worlds: insert models, control the camera view, and play or pause a simulation application
  • Use Amazon CloudWatch Logs and an Amazon S3 output bucket to view logs for the robot and simulation applications
  • Use the terminal to run ROS commands
 
Learn more in the code repository or in documentation.

Navigation and person recognition

Learn about robot navigation, video streaming, face recognition, and text-to-speech. A robot navigates between goal locations in a simulated home and recognizes faces in photos. The robot streams camera images to Amazon Kinesis Video Streams, receives face recognition results from Amazon Rekognition, and speaks the names of recognized people using Amazon Polly.

  • Use rqt to view the simulated camera images that are streamed to Amazon Kinesis Video Streams
  • Use rviz to view the robot's SLAM (simultaneous localization and mapping) map and its planning state
  • Use the terminal to view Amazon Rekognition results
 
Learn more in the code repository or in documentation.
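The glue between these services is ordinary application code. As an illustrative, self-contained sketch (not the sample's actual code), the functions below pull matched names out of a Rekognition `search_faces_by_image`-style response and build the phrase a text-to-speech service such as Amazon Polly would speak. The response field names follow Rekognition's documented shape; the similarity threshold is an arbitrary choice for this sketch:

```python
def names_from_face_matches(response, min_similarity=80.0):
    """Extract person names from a Rekognition search_faces_by_image-style
    response, keeping only sufficiently confident matches."""
    names = []
    for match in response.get("FaceMatches", []):
        if match.get("Similarity", 0.0) >= min_similarity:
            name = match.get("Face", {}).get("ExternalImageId")
            if name and name not in names:
                names.append(name)
    return names


def greeting_phrase(names):
    """Build the sentence the robot would hand to a text-to-speech service."""
    if not names:
        return "I do not recognize anyone."
    if len(names) == 1:
        return f"Hello, {names[0]}!"
    return "Hello, " + ", ".join(names[:-1]) + " and " + names[-1] + "!"
```

For example, a response containing one high-confidence match whose `ExternalImageId` is `Alice` would produce the phrase `"Hello, Alice!"`.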

Voice commands

Command a robot through natural language text and voice in a simulated bookstore using Amazon Lex. Default commands include “move <direction> <rate>,” “turn <direction> <rate>,” and “stop.” The robot acknowledges and executes each command.

  • Use the terminal to send natural language movement commands to be interpreted by Amazon Lex (e.g. “move forward 5,” “rotate clockwise 5,” and “stop”)
  • Use Amazon CloudWatch Metrics to monitor the execution of commands, distances to nearest detected obstacles, and collisions
 
Learn more in the code repository or in documentation.
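To make the command grammar concrete, here is a minimal, self-contained sketch of how such utterances could map to a velocity command (the two fields a ROS `geometry_msgs/Twist` message carries). In the actual sample this interpretation is performed by an Amazon Lex bot; the keyword matching and `speed_scale` factor below are illustrative stand-ins:

```python
def parse_command(utterance, speed_scale=0.1):
    """Map a recognized command ("move <direction> <rate>",
    "turn <direction> <rate>", or "stop") to a (linear, angular)
    velocity pair. In the sample application, intent and slot
    extraction is done by Amazon Lex; this keyword matcher is a
    stand-in for illustration only."""
    words = utterance.lower().split()
    if not words or words[0] == "stop":
        return (0.0, 0.0)
    action, direction, rate = words[0], words[1], float(words[2])
    if action == "move":
        sign = 1.0 if direction == "forward" else -1.0
        return (sign * rate * speed_scale, 0.0)
    if action in ("turn", "rotate"):
        sign = -1.0 if direction == "clockwise" else 1.0
        return (0.0, sign * rate * speed_scale)
    raise ValueError(f"unrecognized command: {utterance!r}")
```

With the hypothetical `speed_scale` of 0.1, "move forward 5" becomes a forward linear velocity of 0.5 and "rotate clockwise 5" a negative (clockwise) angular velocity of the same magnitude.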

Robot monitoring

Monitor health and operational metrics for a robot in a simulated bookstore using Amazon CloudWatch Metrics and Amazon CloudWatch Logs. Streamed metrics include speed, distance to nearest obstacle, distance to current goal, collision count, robot CPU utilization, and RAM usage.
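As one concrete example of how such a metric can be derived on the robot, the sketch below computes the distance to the nearest obstacle from a single laser scan. It is a plain-Python illustration, not the sample's code; the field names mirror ROS's `sensor_msgs/LaserScan`, and the default `range_min`/`range_max` values are typical of a TurtleBot 3 LiDAR:

```python
import math


def nearest_obstacle_distance(ranges, range_min=0.12, range_max=3.5):
    """Distance to the nearest obstacle in one laser scan, the kind of
    value streamed to Amazon CloudWatch Metrics. Readings outside the
    sensor's valid interval (including inf/NaN placeholders) are
    ignored; returns None if nothing valid is in view."""
    valid = [r for r in ranges
             if math.isfinite(r) and range_min <= r <= range_max]
    return min(valid) if valid else None
```

A CloudWatch-bound node would evaluate this on each scan message and publish the result as a metric datum.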

  • Use Amazon CloudWatch Metrics to view robot health and performance
  • Use Gazebo to drop obstacles near the robot and view the resulting metrics
 
Learn more in the code repository or in documentation.

Object following using reinforcement learning

Teach a robot to track and follow an object through reinforcement learning in simulation using the Coach Reinforcement Learning Library, then deploy this capability to a robot. View the reward metrics in Amazon CloudWatch Metrics to explore how the machine learning model improves over time. Customize your reward function to improve the machine learning algorithm used for training.
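A reward function is just a small piece of code that scores each state the robot reaches, and it is the main lever you have for shaping what the policy learns. As a hypothetical starting point (not the sample's actual reward), the sketch below pays the most when the robot holds a chosen following distance from the target and decays toward zero as the error grows:

```python
import math


def follow_reward(robot_xy, target_xy, follow_distance=0.5, cutoff=3.0):
    """Illustrative reward for an object-following policy: 1.0 when the
    robot is exactly `follow_distance` meters from the target, falling
    linearly to 0.0 once the error reaches `cutoff` meters. The sample
    application defines its own reward; this only shows the kind of
    shaping you might experiment with."""
    dx = robot_xy[0] - target_xy[0]
    dy = robot_xy[1] - target_xy[1]
    error = abs(math.hypot(dx, dy) - follow_distance)
    return max(0.0, 1.0 - error / cutoff)
```

Rewarding a standoff distance, rather than raw proximity, discourages the degenerate policy of ramming the target.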

  • Use Gazebo to experiment with different locations of an object to track
  • Use rviz to view the robot as it trains in simulation
  • Use the Coach Reinforcement Learning Library to train and evaluate models
 
Learn more in the code repository or in documentation.

Self-driving using reinforcement learning

Teach a racecar to drive in a simulation through reinforcement learning using the Coach Reinforcement Learning Library, then deploy this capability to a robot. View the reward metrics in Amazon CloudWatch Metrics to explore how the machine learning model improves over time. Customize your reward function to improve the machine learning algorithm used for training.
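For a driving task, a common shaping strategy is to band the reward by how close the car stays to the track centerline, the scheme popularized by AWS DeepRacer's default reward function. The sketch below is an illustrative stand-in for the sample's own reward, not a copy of it:

```python
def track_reward(distance_from_center, track_width):
    """Illustrative lane-keeping reward: full marks near the
    centerline, sharply less as the car drifts toward the edge, and a
    near-zero reward once it is likely off the track. The band
    thresholds are arbitrary choices for this sketch."""
    half_width = track_width / 2.0
    if distance_from_center <= 0.1 * half_width:
        return 1.0
    if distance_from_center <= 0.25 * half_width:
        return 0.5
    if distance_from_center <= 0.5 * half_width:
        return 0.1
    return 1e-3  # likely off the track
```

Widening or narrowing these bands is one of the simplest customizations to try when tuning training behavior.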

  • Use Gazebo and rviz to view the car as it trains in simulation
  • Use Amazon CloudWatch Logs to track a car's performance
  • Use the Coach Reinforcement Learning Library to train and evaluate models
 
Learn more in the code repository or in documentation.

Simulation Assets

We have created additional environments you can use with your robots. They can be used to test facial recognition, navigation, obstacle avoidance, and machine learning, and they can be modified for your scenarios.

House

RoboMaker-House

A small house with kitchen, living room, home gym and pictures you can customize to test image recognition. There are plenty of obstacles for your robot to navigate.

Learn more »

Bookstore

RoboMaker-Bookstore

Navigate among shelves of books in this simulated bookstore. It contains a variety of obstacles, including chairs and tables, for your robot to navigate around.

Learn more »

Racetrack

RoboMaker-Racetrack

Use machine learning to teach your robot to stay on this racetrack. The racetrack is oval with clear edge markers. Ready, set, race!

Learn more »

Workshops and tutorials

Workshop

Hello World! Getting Started with AWS RoboMaker

In this workshop, you will learn how to get started with AWS RoboMaker to build smart robotic applications. You will also have the opportunity to manage and deploy robot applications both in a simulated environment and to a production robot (requires a TurtleBot 3 Burger).

Learn more »
Workshop

Finding Martians with AWS RoboMaker and the JPL Open Source Rover

In this workshop, you will become familiar with AWS RoboMaker and will learn to simulate the NASA JPL Mars Open Source Rover. In doing so, you will learn to integrate AWS RoboMaker with services such as machine learning, monitoring, and analytics so your Mars Rover can stream data, navigate, communicate, comprehend, and learn.

Learn more »
Tutorial

How to Train a Robot Using Reinforcement Learning

Reinforcement learning (RL) is an advanced machine learning (ML) technique that learns very complex behaviors without requiring any labeled training data, and can make short-term decisions while optimizing for a longer-term goal. You can use the AWS RoboMaker sample application to generate simulated training data for RL. The RL model teaches the robot to track and follow an object. This simple demonstration can be extended into use cases such as worker assistance in a warehouse or an entertainment robot following a consumer in their home.
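The update rule behind this kind of training can be shown in a few lines. The sketch below runs tabular Q-learning on a toy one-dimensional corridor; it is far simpler than the simulation-driven training in the tutorial (no simulator, no neural network, no Coach), but the Bellman update at its core is the same idea of improving value estimates from unlabeled experience:

```python
import random


def q_learning_chain(n_states=5, episodes=300, alpha=0.5, gamma=0.9,
                     epsilon=0.1, seed=0):
    """Tiny tabular Q-learning demo: the agent starts in cell 0 of a
    corridor and earns a reward only upon reaching the last cell, so it
    must learn that moving right pays off later. Hyperparameters are
    arbitrary choices for this toy problem."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0=left, 1=right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 1 if q[s][1] >= q[s][0] else 0
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Bellman update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy (pick the higher-valued action in each cell) moves right everywhere, even though only the final step is ever rewarded.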
Learn more »
Tutorial

Run ROS on multiple machines with AWS RoboMaker

In many cases, a robotics developer or researcher needs to run the Robot Operating System (ROS) on multiple machines. In this tutorial, you will learn how to set up ROS on a virtual machine running on AWS, how to connect your physical robot to the virtual machine, and how to create a multi-machine distributed ROS system. Doing so will streamline development of your robotic application.

Learn more »
Tutorial

Run ROS tutorials using AWS RoboMaker

In this tutorial, we will show you how to set up an environment in AWS RoboMaker to learn the Robot Operating System (ROS). The tutorials include: a ROS introduction, creating nodes, simple kinematics for mobile robots, visual object recognition, running ROS on multiple machines, SLAM navigation, path planning, unknown-environment exploration, and object search.
Learn more »
Tutorial

ROSbot + AWS RoboMaker - Quick start tutorial

The Husarion ROSbot 2.0 is an autonomous, open source robot platform. It can be used as a learning platform for the Robot Operating System (ROS) as well as a base for a variety of robotic applications, such as research robots, inspection robots, and custom service robots. In this tutorial, we will guide you from unboxing through launching and deploying applications using AWS RoboMaker.
Learn more »

Videos

Using Reinforcement Learning with AWS RoboMaker (4:17)
Deploying Robotic Applications Using Machine Learning with Nvidia JetBot and AWS RoboMaker (32:04)
Building a continuous integration pipeline for your ROS applications using AWS RoboMaker (1:00:42)

Check out the FAQs

Learn more about AWS RoboMaker on the FAQs page.

Learn more 
Sign up for a free account

Instantly get access to the AWS Free Tier. 

Sign up 
Start building in the console

Get started building with AWS RoboMaker.

Sign in