In this module, you learn to use AWS Cloud9 to start a simulation in AWS RoboMaker. You'll visualize the robot in the simulation environment as it trains to follow a TurtleBot 3 Burger.

Time to complete module: 8.5 hours, 8 of which are needed for the training to complete. If the simulation job is stopped early, the trained robot may not track accurately.

Services used: AWS RoboMaker, Amazon S3, Amazon CloudWatch, AWS Cloud9

 


  • Step 1. Start a simulation job

    A simulation application is used in robotics development to simulate different environments. It contains visual and physical models for a robot, the robot's sensors, the terrain, and the objects populating the world. It is also responsible for simulating physics such as gravity and collisions.


    a. On the AWS Cloud9 menu bar, select “RoboMaker Run”, “Launch simulation”, then “1. ObjectTracker Train Model”. This process uploads the “output.tar.gz” bundle file to the S3 folder created in module 1, then it creates a simulation application and a simulation job in AWS RoboMaker.

    Object tracker train mode


    b. On the AWS Cloud9 menu bar, select “RoboMaker Simulation (Pending)”, then “View Simulation Job Details”. This takes you to the AWS RoboMaker Simulation Job console.

    View simulation job details


    c. On the AWS RoboMaker Simulation Job details page, make sure the job status is “Running” before continuing to the next step.

    Running


    d. Scroll down to the bottom of the page and choose the “Simulation application” tab. Here you see the environment variables; the “MODEL_S3_BUCKET” variable specifies where the trained model is uploaded once training completes.

    Model s3 bucket

  • Step 2. Using Gazebo

    AWS RoboMaker provides tools to visualize, test, and troubleshoot robots in simulation. For example, Gazebo lets you build 3D worlds with robots, terrain, and other objects. It also has a physics engine for modeling illumination, gravity, and other forces. Robotics developers use Gazebo to evaluate and test robots in different scenarios, often more quickly than with physical robots.

    On the job detail page, choose the Gazebo icon to visualize the simulation world. You can zoom in and out to explore the world. The robot works in two phases. In the first phase, the robot performs actions based on the model and is given a reward based on how well it performs. In the second phase, the model is trained using the rewards collected in the first phase. To learn more about the reinforcement learning library used in this tutorial, review Reinforcement Learning Coach by Intel AI Lab on GitHub. At times the robot may move in circles or appear stuck while training the reinforcement learning model; this is perfectly normal.
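    Conceptually, the two-phase cycle described above can be sketched as follows. This is a minimal illustration with a stand-in environment and a scalar "model"; all names and values here are hypothetical, not the actual Object Tracker code:

```python
import random

def run_episode(policy, steps=10):
    """Phase 1: act in the environment and collect rewards."""
    experience = []
    for _ in range(steps):
        action = policy()         # pick an action from the current policy
        reward = random.random()  # stand-in for the real tracking reward
        experience.append((action, reward))
    return experience

def train(policy_value, experience, lr=0.1):
    """Phase 2: update the model using the rewards from phase 1."""
    mean_reward = sum(r for _, r in experience) / len(experience)
    # Nudge a single scalar "model" toward the mean reward (illustrative only).
    return policy_value + lr * (mean_reward - policy_value)

policy_value = 0.0
for episode in range(5):
    exp = run_episode(lambda: random.choice(["left", "right", "forward"]))
    policy_value = train(policy_value, exp)
```

    The real training alternates these two phases many times over the 8-hour job, which is why rewards improve only gradually.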

  • Step 3. Using rqt

    The rqt tool is a Qt-based framework and set of plugins for ROS GUI development. It hosts a number of different plugins for visualizing ROS information. Multiple plugins can be displayed on a custom dashboard, providing a unique view of your robot.


    a. On the job detail page, choose rqt to view the node graph and see how messages flow through the system. On the rqt menu bar, select “Plugins”, “Introspection”, then “Node Graph”.

     

    Miscellaneous tools


    b. Another useful way to use rqt is to look at all topics and messages in the system. On the rqt menu bar, select “Plugins”, “Topics”, and “Topic Monitor” to view all running topics.

    c. For example, on the /odom (Odometry) topic, you can see the bandwidth that a message is using as well as the current motion (angular and linear) of the robot.

    odom

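    The bandwidth figure that Topic Monitor reports is, roughly, message size multiplied by publish rate. A back-of-the-envelope check (the message size and rate below are illustrative values, not measurements from this simulation):

```python
def topic_bandwidth(bytes_per_message, messages_per_second):
    """Approximate bandwidth of a ROS topic in bytes per second."""
    return bytes_per_message * messages_per_second

# e.g. an odometry-sized message (~700 bytes, assumed) published at 30 Hz
bw = topic_bandwidth(700, 30)
print(f"{bw / 1024:.1f} KB/s")  # prints "20.5 KB/s"
```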
  • Step 4. Using rviz

    ROS Visualizer (rviz) is a 3D visualization tool for sensor data and state information from ROS applications. It provides a view of your robot model, captures information from robot sensors, and replays captured data. It can display data from cameras, lasers, and other 3D and 2D devices, including images and point clouds.


    a. On the job detail page, choose rviz. You can use this tool to visualize what the robot sees through its camera. On the rviz menu bar, choose “Add”, select the “By topic” tab, select the “/rgb/image_raw/Image” topic, and choose “OK”.

     

    Using rviz


    b. You now see the images captured by the robot's camera as it moves.

     

    Camera

  • Step 5. Using the terminal

    The terminal provides access to a command line on the simulation job host. You can use ROS commands such as rostopic list and rostopic info to test, debug, and troubleshoot the simulation environment.

     

     

    Using terminal


    You can also access the Gazebo, rqt, rviz, and Terminal tools from the AWS Cloud9 IDE menu bar.


     

    IDE menu bar

  • Step 6. Using CloudWatch

    If something goes wrong in one of your own simulations, the ROS logs are a good place to start debugging. You can find ROS stdout and stderr output for the simulation job in CloudWatch Logs. The full ROS logs are in the output folder of the S3 bucket that you created in module 1.
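    If you retrieve log events programmatically (for example with the AWS CLI or an SDK), each event is a timestamp plus a message string, and you can filter for errors yourself. A minimal sketch, using made-up sample messages in that shape:

```python
def error_events(events, markers=("ERROR", "stderr")):
    """Return the log messages that look like errors."""
    return [e["message"] for e in events
            if any(m in e["message"] for m in markers)]

# Illustrative sample events (timestamps and messages are invented)
sample = [
    {"timestamp": 1, "message": "[INFO] node started"},
    {"timestamp": 2, "message": "[ERROR] failed to open camera topic"},
]
print(error_events(sample))  # prints ['[ERROR] failed to open camera topic']
```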


    a. On the AWS RoboMaker Simulation Job details page, scroll down to the bottom of the page and choose the “Configuration” tab, then “Logs”, to access CloudWatch Logs.

     

     

    Using cloudwatch1

    Using cloudwatch2


    b. You can also see the CloudWatch metrics published by AWS RoboMaker in the CloudWatch Metrics console, under the custom namespace section.

    Using cloudwatch3


    c. The metric published by Object Tracker is the reward that the robot earns every episode. You can think of this metric as an indicator of how well your model has been trained. If the graph shows a plateau, your robot has finished learning. By default, a training job completes in 8 hours, but you can extend it; longer training typically means a more accurate model. In the screenshot below, the job trained for 24 hours (the X axis is time, the Y axis is reward), and the reward steadily increases over time. With an Amazon SageMaker GPU instance, training can be much faster.

    Metrics

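    “Plateau” here simply means the per-episode reward has stopped improving. If you export the reward series from CloudWatch, one simple way to check for a plateau is to compare the mean of the two most recent windows (a hypothetical helper, not part of AWS RoboMaker):

```python
def has_plateaued(rewards, window=5, tolerance=0.01):
    """True if the mean reward of the last two windows differs by less than tolerance."""
    if len(rewards) < 2 * window:
        return False
    recent = sum(rewards[-window:]) / window
    previous = sum(rewards[-2 * window:-window]) / window
    return abs(recent - previous) < tolerance

print(has_plateaued([0.1, 0.2, 0.4, 0.6, 0.8, 0.85, 0.9, 0.9, 0.9, 0.9]))  # False (still improving)
print(has_plateaued([0.9] * 10))  # True (reward has flattened)
```

    The window size and tolerance are tuning choices; a wider window smooths out the episode-to-episode noise you see in the raw metric.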

    d. The trained models are stored in your S3 bucket at “model-store/model/”. In the next module, you use an AWS RoboMaker simulation to evaluate this model.