AWS Robotics Blog
Easily record and store robotic application data with the S3 rosbag cloud extension for AWS RoboMaker
This blog uses colcon bundle for robot and simulation applications. AWS RoboMaker now only supports containers to make it easy for you to bring and run your own simulations and applications. To follow along with this blog post, see our updated blog on Preparing ROS application and simulation application containers for AWS RoboMaker.
Introduction
Data from robots operating in dynamic, real-world environments is vital for debugging, testing, feature development, and creating value for end customers. Robots are typically edge devices that write data to the local storage available on the system. Retrieving and storing edge data from these systems requires developers to implement storage infrastructure applications that move the data to durable and reliable destinations, such as the cloud. Many of these robotic systems lack reliable or consistent internet connectivity, which further increases the complexity of such storage infrastructure applications.
AWS RoboMaker is open-sourcing cloud extensions to make it easier to record and store robotic application data. Cloud extensions are Robot Operating System (ROS) packages that provide capabilities for creating and uploading robot data to Amazon S3, a highly scalable, available, and secure object storage service. Cloud extensions contain ROS nodes that create rosbags by subscribing to the required rostopics, capturing robot data such as position, velocity, and state information, and storing the serialized message data in the bag file format.
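Conceptually, a bag recorder is a flat log: each incoming message is serialized and appended to a file together with its topic name and receive time. The following sketch illustrates that idea in plain Python; it is an illustration of the concept only, not the actual rosbag on-disk format.

```python
import struct
import time


def append_record(f, topic, payload, stamp=None):
    """Append one length-prefixed record: (stamp, topic, serialized payload).

    Mimics the idea behind a bag file -- a flat log of timestamped,
    serialized messages -- not the real rosbag binary format.
    """
    stamp = time.time() if stamp is None else stamp
    topic_bytes = topic.encode("utf-8")
    f.write(struct.pack("<dI", stamp, len(topic_bytes)))  # 8-byte time, 4-byte topic length
    f.write(topic_bytes)
    f.write(struct.pack("<I", len(payload)))              # 4-byte payload length
    f.write(payload)


def read_records(f):
    """Yield (stamp, topic, payload) tuples until end of file."""
    while True:
        header = f.read(12)
        if len(header) < 12:
            return
        stamp, topic_len = struct.unpack("<dI", header)
        topic = f.read(topic_len).decode("utf-8")
        (payload_len,) = struct.unpack("<I", f.read(4))
        yield stamp, topic, f.read(payload_len)
```

A recorder node does the same thing at scale: its topic subscribers call something like `append_record` for every message, and tools later replay the log in timestamp order.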
Solution overview
The S3 rosbag cloud extension for ROS Kinetic and Melodic distributions contains three ROS nodes. In this blog, we learn how these ROS nodes can be helpful, and go into detail about how to install and use them. The nodes are:
- s3_file_uploader: Enables uploading any file from the robot to a configured bucket on Amazon S3. This node supports any use case that requires files to be uploaded and stored in the cloud. For example, you can upload the image files created by the robot’s camera sensors to an Amazon S3 bucket and use them as a dataset for machine learning with Amazon SageMaker.
- rolling_recorder: Creates, splits, and uploads rosbag files covering a configured duration in the past, upon receiving a request. One use case is to pair the rolling_recorder node with a programmatic way to detect failures, for example, the move_base planner failing to create a valid plan. On such a failure, you can publish a request to the node to upload the last “x” minutes of rosbag files, which robotics and QA engineers can then go through to debug the issue. This node uses the s3_file_uploader node to upload the created rosbag files to the Amazon S3 bucket.
- duration_recorder: Creates and uploads rosbag files for a configured duration starting from the time of the request. A common use case for the duration_recorder node occurs during feature development. You can test changes made to the code by initiating a test on the robot and publishing a request to the duration_recorder node, then capture and analyze the robot’s response from the rosbag files created during the test. The duration_recorder node uses the s3_file_uploader node to upload the created rosbag files to the Amazon S3 bucket.
Setup for Rosbag and Amazon S3 Cloud Extensions
The AWS RoboMaker cloud extensions are currently supported on ROS Kinetic (Ubuntu 16.04) and Melodic (Ubuntu 18.04). To run these nodes on a device, you need a working ROS installation that is sourced in the current shell.
Create an Amazon S3 bucket in a Region of your AWS account and establish credentials for an IAM user that has the s3:PutObject permission. To set up an IAM user and obtain its credentials, see how to change permissions for an IAM user and managing access keys for IAM users. To set up your environment to use these credentials, see AWS configuration and credential files settings. Let us now look at the requirements for each of the individual ROS nodes.
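As a quick reference for the credential setup above, the shared AWS config and credentials files typically look like the following sketch. The values are placeholders, and the region shown is only an example:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>

# ~/.aws/config
[default]
region = us-west-2
```

The ROS nodes pick these up through the standard AWS credential provider chain, so environment variables such as AWS_ACCESS_KEY_ID also work.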
Installing the s3_file_uploader ROS node
We install the s3_file_uploader package via the Advanced Package Tool by running the following commands:
sudo apt-get update
sudo apt-get install -y ros-$ROS_DISTRO-s3-file-uploader
Note: You can also build the package from source.
The s3_file_uploader node creates a ROS action server that accepts requests to upload files to a configured Amazon S3 bucket. The configuration file is present in the ROS package. The request to upload files can come from either an action client or a corresponding rostopic. The ROS node accepts one upload request at a time, and each request can contain a single file or multiple files. The status of the request is available as feedback from the action server.
Running the s3_file_uploader ROS node
Launch the s3_file_uploader node with the following command, replacing <BUCKET_NAME> with the name of the Amazon S3 bucket created:
roslaunch s3_file_uploader s3_file_uploader.launch s3_bucket:=<BUCKET_NAME>
Next, create a sample text file to test the functionality, using the following command:
echo "Hello S3!" > /tmp/hello.txt
We then create an action client to request the upload of the text file. Save the following code snippet to a file called “s3_file_uploader_client.py”:
import actionlib
import rospy
from file_uploader_msgs.msg import UploadFilesAction, UploadFilesGoal

ACTION = "/s3_file_uploader/UploadFiles"
FILE_NAME = "/tmp/hello.txt"
NODE_NAME = "s3_file_uploader_client"
S3_KEY_PREFIX = "rosbags/test"

rospy.init_node(NODE_NAME)

# Request upload of the file under the given S3 key prefix
goal = UploadFilesGoal(
    upload_location=S3_KEY_PREFIX,
    files=[FILE_NAME]
)

client = actionlib.SimpleActionClient(ACTION, UploadFilesAction)
client.wait_for_server()
client.send_goal(goal)
client.wait_for_result()
print(client.get_result())
We can now make a request to the action server using the following command:
python s3_file_uploader_client.py
Upon successful upload, the action client outputs a response as follows:
result_code:
success: True
error_code: -32768
files_uploaded: [rosbags/test/hello.txt]
The corresponding logs from the action server are as follows:
[ INFO] [1592638732.318350541]: [PutObject] Upload: /tmp/hello.txt to s3://<BUCKET_NAME>/rosbags/test/hello.txt
...
[ INFO] [1592638732.543829521]: [PutObject] Successfully uploaded /tmp/hello.txt to s3://<BUCKET_NAME>/rosbags/test/hello.txt
We now find a file named “hello.txt” in the corresponding Amazon S3 bucket.
Installing the rolling_recorder ROS node
The rolling_recorder ROS node is part of the rosbag_cloud_recorders ROS package, which can be installed via the Advanced Package Tool by running the following commands:
sudo apt-get update
sudo apt-get install -y ros-$ROS_DISTRO-rosbag-cloud-recorders
Note: You can also build the package from source.
The rolling_recorder node creates a ROS action server that accepts requests to create and upload rosbag files corresponding to the past “x” minutes.
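A rolling recorder of this kind can be pictured as a time-bounded buffer: messages are appended as they arrive, and anything older than the retention window is dropped. The following is a simplified, ROS-free sketch of that eviction logic, not the node’s actual implementation (the real node manages bag files on disk, governed by the bag_rollover_time and max_record_time parameters shown below):

```python
from collections import deque


class RollingBuffer:
    """Keep only the messages received in the last `window_secs` seconds."""

    def __init__(self, window_secs):
        self.window_secs = window_secs
        self._items = deque()  # (timestamp, message) pairs, oldest first

    def add(self, stamp, message):
        self._items.append((stamp, message))
        self._evict(stamp)

    def snapshot(self, now):
        """Return the messages still inside the window, as for an upload request."""
        self._evict(now)
        return list(self._items)

    def _evict(self, now):
        # Drop everything older than the retention window.
        while self._items and self._items[0][0] < now - self.window_secs:
            self._items.popleft()
```

When a request arrives, the recorder effectively takes a `snapshot` of this window, writes it out as bag files, and hands them to the uploader.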
Running the rolling_recorder ROS node
With the s3_file_uploader node already running, launch the rolling_recorder node with the following command:
roslaunch rosbag_cloud_recorders rolling_recorder.launch bag_rollover_time:=10 max_record_time:=10
Since the rolling_recorder node also works as an action server, we interact with it through an action client. Let’s start by saving the following snippet to a script called “recorder_client.py”. Note that the “recorder_client.py” script can also be used with the duration_recorder node:
from __future__ import print_function

import sys

import actionlib
import rospy
from recorder_msgs.msg import DurationRecorderAction, DurationRecorderGoal
from recorder_msgs.msg import RollingRecorderAction, RollingRecorderGoal

NODE_NAME = 'recorder_client'

# Choose 'rolling_recorder' or 'duration_recorder'
recorder_type = sys.argv[1]
record_time = 10  # seconds

if recorder_type == 'rolling_recorder':
    action = '/rolling_recorder/RosbagRollingRecord'
    action_type = RollingRecorderAction
    goal = RollingRecorderGoal(destination='rolling_recorder_test/')
    print('RollingRecorderGoal:')
elif recorder_type == 'duration_recorder':
    action = '/duration_recorder/RosbagDurationRecord'
    action_type = DurationRecorderAction
    goal = DurationRecorderGoal(
        destination='duration_recorder_test/',
        duration=rospy.Duration.from_sec(record_time),
        topics_to_record=[]  # Empty records all topics; or provide a list, e.g. ['/rosout']
    )
    print('DurationRecorderGoal:')
else:
    print('Invalid recorder type. Please choose "rolling_recorder" or "duration_recorder"')
    sys.exit(-1)

print(goal)

rospy.init_node(NODE_NAME, log_level=rospy.DEBUG)
action_client = actionlib.SimpleActionClient(action, action_type)
action_client.wait_for_server()
action_client.send_goal(goal)
# Allow a small grace period beyond the recording time for the upload to finish
action_client.wait_for_result(rospy.Duration.from_sec(record_time + 5))
print('Goal state:', action_client.get_state())
print('Goal status text:', action_client.get_goal_status_text())
print('Goal result:', action_client.get_result())
We can now make a request to the action server using the following command:
python recorder_client.py rolling_recorder
Upon successful upload, the action client outputs a response as follows:
Goal state: 3
Goal status text: "Upload Succeeded"
Goal result:
result: 0
message: "Upload Succeeded"
And the ROS node would output corresponding log messages as follows:
[ INFO] [1592638732.318350541]: [PutObject] Upload: /root/.ros/rr_rosbag_uploader/_2020-06-22-16-43-29_9.bag to s3://<BUCKET_NAME>/rolling_recorder_test/_2020-06-22-16-43-29_9.bag
...
[ INFO] [1592638732.543829521]: [PutObject] Successfully uploaded /root/.ros/rr_rosbag_uploader/_2020-06-22-16-43-29_9.bag to s3://<BUCKET_NAME>/rolling_recorder_test/_2020-06-22-16-43-29_9.bag
We now find the requested rosbag files in the corresponding Amazon S3 bucket.
Installing the duration_recorder ROS node
The duration_recorder ROS node is part of the rosbag_cloud_recorders ROS package, which can be installed via the Advanced Package Tool by running the following commands:
sudo apt-get update
sudo apt-get install -y ros-$ROS_DISTRO-rosbag-cloud-recorders
Note: You can also build the package from source.
The duration_recorder node creates a ROS action server that accepts requests to create and upload rosbag files covering “x” minutes from the time the request was made.
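To build intuition for that goal lifecycle, here is a toy, ROS-free model of a duration recorder. It is a sketch of the behavior described above, not the node’s actual implementation: a goal opens a recording window, live messages inside the window are kept, and the goal completes once the duration has elapsed. Like the real action server, it handles one goal at a time.

```python
class DurationRecorder:
    """Toy model of a duration recorder's goal lifecycle."""

    def __init__(self):
        self._goal = None  # (start_time, duration) of the active goal, if any
        self._captured = []

    def start(self, now, duration):
        """Open a recording window of `duration` seconds starting at `now`."""
        if self._goal is not None:
            raise RuntimeError("a recording goal is already active")
        self._goal = (now, duration)
        self._captured = []

    def feed(self, stamp, message):
        """Deliver a live message; kept only if it falls inside the window."""
        if self._goal is None:
            return
        start, duration = self._goal
        if start <= stamp <= start + duration:
            self._captured.append((stamp, message))

    def finish(self, now):
        """Close the goal once the duration has elapsed; return the capture."""
        start, duration = self._goal
        if now < start + duration:
            raise RuntimeError("duration has not elapsed yet")
        result, self._goal = self._captured, None
        return result
```

In the real node, the captured window is written to bag files and handed to the s3_file_uploader node rather than returned in memory.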
Running the duration_recorder ROS node
With the s3_file_uploader node already running, launch the duration_recorder node with the following command:
roslaunch rosbag_cloud_recorders duration_recorder.launch
The previously created “recorder_client.py” script can also interact with the action server created by the duration_recorder node. We can send a request by running the following command:
python recorder_client.py duration_recorder
Upon successful upload, the action client outputs a response as follows:
Goal state: 3
Goal status text: Upload Succeeded
Goal result:
result: 0
message: "Upload Succeeded"
And the ROS node would output corresponding log messages as follows:
[ INFO] [1592638732.318350541]: [PutObject] Upload: /root/.ros/dr_rosbag_uploader/_2020-06-22-17-00-21.bag to s3://<BUCKET_NAME>/duration_recorder_test/_2020-06-22-17-00-21.bag
...
[ INFO] [1592638732.543829521]: [PutObject] Successfully uploaded /root/.ros/dr_rosbag_uploader/_2020-06-22-17-00-21.bag to s3://<BUCKET_NAME>/duration_recorder_test/_2020-06-22-17-00-21.bag
We now find the requested rosbag files in the corresponding Amazon S3 bucket.
Behavior with network offline and filled local storage scenarios
Network offline
If the s3_file_uploader node encounters network connectivity issues during upload, it employs the AWS C++ SDK's DefaultRetryStrategy. If all retry attempts fail, the s3_file_uploader node errors out and the action client receives a response as follows:
result_code:
success: False
error_code: 99
files_uploaded: []
And the ROS node would output corresponding log messages as follows:
[ INFO] [1592957344.585182464]: [PutObject] Upload: /tmp/hello.txt to s3://<BUCKET_NAME>/rosbags/test/hello.txt
...
[ INFO] [1592957370.218315938]: [PutObject] Failed to upload /tmp/hello.txt to s3://<BUCKET_NAME>/rosbags/test/hello.txt: Unable to connect to endpoint
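The retry behavior above can be sketched in a few lines. This is a hedged illustration of a generic exponential-backoff retry loop, not the SDK's DefaultRetryStrategy itself; the constants, the `attempt_upload` callable, and the use of ConnectionError as the transient-failure signal are all illustrative assumptions:

```python
import random
import time


def upload_with_retries(attempt_upload, max_retries=5, base_delay=0.025):
    """Call `attempt_upload()` until it succeeds or retries are exhausted.

    Mirrors the shape of an exponential-backoff retry strategy; the real
    uploader delegates this to the AWS C++ SDK's DefaultRetryStrategy.
    """
    for attempt in range(max_retries + 1):
        try:
            return attempt_upload()
        except ConnectionError:
            if attempt == max_retries:
                raise  # out of retries: surface the failure to the caller
            # Exponential backoff with jitter: base * 2^attempt, randomized
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.0)
            time.sleep(delay)
```

The backoff keeps a flaky connection from being hammered with immediate retries, while the jitter avoids many robots retrying in lockstep.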
Filled local storage
1. If the rolling_recorder node at any point encounters insufficient disk space, it shuts down with error messages like:
[ERROR] [1592638732.440482883]: Less than 1024M of space free on disk with /root/.ros/rr_rosbag_uploader/_2020-06-22-16-43-29_9.bag.active. Disabling recording.
...
[ERROR] [1592638732.559953144]: [Run] RosbagRecorder encountered an error (code: 1)
In an “insufficient disk space” case, clean up local storage appropriately and restart the node for it to function normally.
2. If the duration_recorder action server receives or is processing a request when the amount of free local storage becomes insufficient, the request is aborted with the following status:
Goal state: 4
Goal status text: Rosbag recorder encountered errors.
Goal result:
result: 2
message: "Rosbag recorder encountered errors."
And the duration_recorder action server will output corresponding error messages as follows:
[ERROR] [1592638732.176230493]: Less than 1024M of space free on disk with /root/.ros/dr_rosbag_uploader/_2020-06-22-17-00-21.bag.active. Disabling recording.
...
[ERROR] [1592638732.281456654]: [Run] RosbagRecorder encountered an error (code: 1)
In an “insufficient disk space” case, clean up local storage appropriately and the action server will be able to handle new goals.
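You can apply the same guard in your own tooling by checking free space before starting a recording, much as the recorder nodes do. A minimal sketch using only the Python standard library follows; the 1024 MB default mirrors the threshold in the error messages above but is a parameter you would tune:

```python
import shutil


def has_enough_space(path, min_free_mb=1024):
    """Return True if `path`'s filesystem has at least `min_free_mb` MB free."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= min_free_mb * 1024 * 1024
```

Calling this before publishing a recording goal lets you fail fast (or trigger cleanup) instead of having the recorder abort mid-capture.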
Conclusion
The S3 rosbag cloud extension enables customers to easily configure and record data from robots as rosbags and upload them to Amazon S3, which they can later use to analyze events, troubleshoot existing applications, and provide as inputs to AWS RoboMaker log-based simulation for regression testing. In this blog, we reviewed the three ROS nodes in the S3 rosbag cloud extension that create and upload rosbag files from the robot to Amazon S3, with hands-on examples of how to use them. Together they provide a set of features that developers, QA engineers, and fleet managers can use to debug, test, and develop features. Now is the time to try it yourself! If you have questions or feedback, email our team for more information.