In this module, you learn how to use AWS Cloud9 to start a simulation in AWS RoboMaker. You then visualize the robot in the simulation environment as it trains to follow a TurtleBot 3 Burger.
Time to complete module: 8.5 hours, of which about 8 hours are spent waiting for training to finish. If the simulation job is stopped early, the robot may not track accurately.
Services used: AWS RoboMaker, Amazon S3, Amazon CloudWatch, AWS Cloud9
-
Step 1. Start a simulation job
A simulation application is used in robotics development to simulate different environments. It contains the visual and physical models for the robot, its sensors, the terrain, and the objects that populate the world. It is also responsible for simulating physics such as gravity and collisions.
a. On the AWS Cloud9 menu bar, select “RoboMaker Run”, “Launch simulation”, then “1. ObjectTracker Train Model”. This uploads the “output.tar.gz” bundle file to the S3 folder created in module 1, then creates a simulation application and a simulation job in AWS RoboMaker.
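If you prefer to script this step rather than use the Cloud9 menu, the simulation job can also be created with the AWS SDK. The sketch below uses boto3; the role ARN, application ARN, bucket, package name, and launch file are placeholders you would replace with the values from module 1.

    import boto3

    robomaker = boto3.client("robomaker")

    # All values below are placeholders -- substitute the resources from module 1.
    response = robomaker.create_simulation_job(
        maxJobDurationInSeconds=8 * 3600,   # matches the ~8 hour training run
        iamRole="arn:aws:iam::123456789012:role/MyRoboMakerRole",
        outputLocation={"s3Bucket": "my-robomaker-bucket", "s3Prefix": "output"},
        simulationApplications=[{
            "application": "arn:aws:robomaker:us-east-1:123456789012:"
                           "simulation-application/ObjectTracker/1",
            "launchConfig": {
                "packageName": "object_tracker_simulation",  # hypothetical name
                "launchFile": "object_tracker.launch",       # hypothetical file
            },
        }],
    )
    print("Simulation job ARN:", response["arn"])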
-
Step 2. Using Gazebo
AWS RoboMaker provides tools to visualize, test, and troubleshoot robots in simulation. For example, Gazebo lets you build 3D worlds with robots, terrain, and other objects. It also has a physics engine for modeling illumination, gravity, and other forces. Robotics developers use Gazebo to evaluate and test robots in different scenarios, often more quickly than with physical robots.
On the job detail page, choose the Gazebo icon to visualize the simulation world. You can zoom in and out to explore the world. The robot works in two phases. In the first phase, the robot performs actions based on the current model and is given a reward based on how well it performs. In the second phase, the model is trained using the rewards collected in the first phase. To learn more about the reinforcement learning library used in this tutorial, review Reinforcement Learning Coach by Intel AI Lab on GitHub. At times the robot may move in circles or appear stuck while the model is training; this is perfectly normal.
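To make the two phases concrete, here is a minimal sketch of the shape a reward function for this task can take: the closer the robot gets to the TurtleBot it is following, the larger the reward. This is an illustration, not the sample's actual code; the 5-meter cutoff and linear scaling are assumptions.

    import math

    # Illustrative reward for the tracking task (not the sample's actual code).
    # Each simulation step, the agent's action is scored by how close the
    # robot is to the TurtleBot it is learning to follow.
    def reward_function(robot_x, robot_y, target_x, target_y):
        distance = math.hypot(target_x - robot_x, target_y - robot_y)
        if distance > 5.0:           # assumed cutoff: target effectively lost
            return 0.0
        # Reward grows linearly as the robot closes the gap.
        return 1.0 - distance / 5.0

In the first phase, rewards like this are accumulated over an episode; in the second, the policy is updated to favor the actions that earned the most reward.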
-
Step 3. Using rqt
The rqt tool is a Qt-based framework and set of plugins for ROS GUI development. It hosts a number of plugins for visualizing ROS information, and multiple plugins can be arranged on a custom dashboard to provide a unique view of your robot.
b. A useful way to use rqt is to inspect all topics and messages in the system. On the rqt menu bar, select “Plugins”, “Topics”, then “Topic Monitor” to view all running topics.
c. For example, on the /odom (Odometry) topic, you can see the bandwidth the topic is using as well as the robot's current angular and linear motion.
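You can also read the same /odom information programmatically. The rospy sketch below subscribes to the topic and prints the robot's linear and angular velocity; it assumes it is run inside the simulation's ROS environment.

    #!/usr/bin/env python
    import rospy
    from nav_msgs.msg import Odometry

    # Print the robot's current motion each time an /odom message arrives.
    def on_odom(msg):
        linear = msg.twist.twist.linear.x      # forward speed, m/s
        angular = msg.twist.twist.angular.z    # turn rate, rad/s
        rospy.loginfo("linear: %.3f m/s, angular: %.3f rad/s", linear, angular)

    rospy.init_node("odom_monitor")
    rospy.Subscriber("/odom", Odometry, on_odom)
    rospy.spin()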
-
Step 4. Using rviz
ROS Visualizer (rviz) is a 3D tool for visualizing sensor data and state information from ROS applications. It provides a view of your robot model, captures information from the robot's sensors, and can replay captured data. It displays data from cameras, lasers, and other 2D and 3D devices, including images and point clouds.
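As a sketch of the kind of data rviz consumes, the snippet below publishes a minimal sensor_msgs/LaserScan that an rviz LaserScan display can render. The topic name and scan values are made up for illustration.

    #!/usr/bin/env python
    import math
    import rospy
    from sensor_msgs.msg import LaserScan

    # Publish a fake 180-degree laser scan (illustrative values only).
    rospy.init_node("fake_scan")
    pub = rospy.Publisher("/scan_demo", LaserScan, queue_size=1)  # hypothetical topic
    rate = rospy.Rate(10)

    while not rospy.is_shutdown():
        scan = LaserScan()
        scan.header.stamp = rospy.Time.now()
        scan.header.frame_id = "base_link"
        scan.angle_min = -math.pi / 2
        scan.angle_max = math.pi / 2
        scan.angle_increment = math.pi / 180   # one reading per degree
        scan.range_min = 0.1
        scan.range_max = 10.0
        scan.ranges = [2.0] * 181              # constant 2 m readings
        pub.publish(scan)
        rate.sleep()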
-
Step 5. Using the terminal
AWS RoboMaker also provides a terminal that gives you command-line access to the running simulation, so you can interact with the application using standard ROS command-line tools.
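For example, a quick way to see what the application is doing is to list the topics currently being published. The rospy sketch below does this, much like running rostopic list; it assumes the terminal session is sourced into the simulation's ROS environment.

    #!/usr/bin/env python
    import rospy

    # List every topic currently published in the running simulation,
    # similar to "rostopic list" with the message type added.
    for topic, msg_type in sorted(rospy.get_published_topics()):
        print("%-40s %s" % (topic, msg_type))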
-
Step 6. Using CloudWatch
If something goes wrong in one of your own simulations, the ROS logs are a good place to start debugging. You can find the ROS stdout and stderr output for the simulation job in CloudWatch Logs. The full ROS logs are in the output folder of the S3 bucket that you created in module 1.
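If you want to pull those logs without opening the console, you can query CloudWatch Logs with the SDK. The boto3 sketch below assumes the default simulation job log group; confirm the log group and stream names shown on your simulation job's detail page.

    import boto3

    logs = boto3.client("logs")

    # Log group name is an assumption -- use the one listed for your job.
    response = logs.filter_log_events(
        logGroupName="/aws/robomaker/SimulationJobs",
        filterPattern="ERROR",   # narrow the search to error output
        limit=50,
    )
    for event in response["events"]:
        print(event["message"])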
c. The metric published by Object Tracker is the reward that the robot earned in each episode. You can think of this metric as an indicator of how well your model has been trained. If the graph plateaus, your robot has finished learning. By default, a training job completes in 8 hours, but you can extend it; longer training typically yields a more accurate model. In the accompanying screenshot, the job trained for 24 hours (the X axis is time, the Y axis is reward), and the reward steadily increases as time passes. With an Amazon SageMaker GPU instance, training can be much faster.
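You can also check on training progress from a script by retrieving the reward metric with the SDK. In the sketch below, the namespace and metric name are assumptions; verify both under Metrics in the CloudWatch console before relying on them.

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch")

    # Namespace and metric name are assumptions -- confirm them in CloudWatch.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWSRoboMakerSimulation",
        MetricName="ObjectTrackerRewardPerEpisode",
        StartTime=datetime.utcnow() - timedelta(hours=8),
        EndTime=datetime.utcnow(),
        Period=900,              # one datapoint per 15 minutes
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Average"])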
d. The trained models are stored in your S3 bucket at “model-store/model/”. In the next module, you use an AWS RoboMaker simulation to evaluate this model.
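If you would like a local copy of the trained model before moving on, you can download everything under that prefix with boto3. The bucket name below is a placeholder for the bucket you created in module 1.

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-robomaker-bucket"   # placeholder -- your module 1 bucket

    # Download every file stored under the model prefix.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix="model-store/model/"):
        for obj in page.get("Contents", []):
            filename = obj["Key"].split("/")[-1]
            if filename:             # skip "directory" placeholder keys
                s3.download_file(bucket, obj["Key"], filename)
                print("downloaded", obj["Key"])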