AWS Robotics Blog

Introduction to Automatic Testing of Robotics Applications

Robotics applications such as warehouse logistics, automated driving, and factory automation require safe, error-free operation in dynamic real-world scenarios. Automated testing for these applications saves precious debugging time and helps improve the overall quality and safety of the software. This blog will explore best practices for testing and validation of robotics and autonomous systems running Robot Operating System (ROS) software. We will review how developers can increase their feature velocity and reduce errors by using simulation-based testing. We will focus on functional testing, which means testing whether the software provides expected results based on the design specification.

Robotics software is multi-disciplinary, with components for perception, planning, and controls. A small change in one part of the code base can adversely affect another component, causing a software bug. Software testing in robotics also requires testing for the wide variety of situations the robot encounters, including edge cases and unsafe conditions.

Figure 1: The V software testing framework

When testing robotics systems, a common approach is to use a V-model to verify that software meets the requirements. As shown in Figure 1, the left branch of the V-model begins with system-level requirements (subsystems and components), which lead to high-level design and then detailed design with simulation. Simulation is important to make sure that the design meets the requirements. The tip of the V-model is the coding or implementation phase, where the design is converted into code. The right branch represents the testing of the components, starting with unit tests of the code and moving up to integration tests of the different subsystems. Verification tasks on the right branch are performed on the embedded software and target device.

Three types of functional tests commonly used in robotics application development are unit tests, integration tests, and regression tests. Together, these tests can be used to verify and validate the robustness of the robotic software. Unit tests verify a specific component or module of your ROS application in isolation. Common tools for unit tests in ROS include unittest, gtest, and pytest. Integration tests (or system tests) verify interactions between modules (such as multiple ROS nodes across libraries), making sure they work as designed and expected. Development teams use integration tests to find race conditions and deadlocks and to make sure that software components are fault tolerant and failure resistant. rostest is a popular tool for including functional tests in the native ROS build process; rostest files have a .launch or .test extension. After testing your ROS function or system, an important step in software testing is running regression tests. This involves comparing the results of tests (unit tests or integration tests) over time to harden software releases and make sure that previous functionality remains intact.
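As a ROS-independent sketch, a pytest-style unit test for a small planning helper might look like the following. The function and file names here are illustrative, not from a real ROS package; in a ROS 2 package such tests would typically run through ament's pytest integration, and in ROS 1 through catkin's test machinery.

```python
import math

# Hypothetical module under test: a helper that computes the Euclidean
# distance between two (x, y) waypoints, as a planner might use internally.
def waypoint_distance(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

# pytest discovers functions prefixed with test_ automatically.
def test_zero_distance():
    # A waypoint is at zero distance from itself.
    assert waypoint_distance((1.0, 2.0), (1.0, 2.0)) == 0.0

def test_known_distance():
    # 3-4-5 right triangle: distance should be exactly 5.
    assert math.isclose(waypoint_distance((0.0, 0.0), (3.0, 4.0)), 5.0)
```

Because the function is pure, the test needs no running ROS master or simulator, which is exactly what makes unit tests fast enough to run on every commit.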

Verifying that a robot will behave as expected in a dynamic real-world environment is a complicated and time-consuming task for robotics developers. In cases where hardware is not available, developers must wait until the end of the development cycle for physical testing. This delay means that bugs introduced at the design stage are caught late, resulting in expensive debugging and refactoring. Developers can increase their feature velocity and test their designs by using a physics-based 3D simulation that mimics their real-world application environment. Simulation testing during the early stages of robot development is especially valuable when developers have no access to hardware or a physical testing space.

Simulation-based Behavior Tests

Running tests in a simulated virtual environment enables robot software developers to confirm that their applications produce the correct robot behavior. Simulation-based testing can involve debugging an algorithm during iterative development, testing a subsystem such as localization or object detection, or testing the functionality of the complete system, such as navigating from start to goal. A combination of simulation fidelities is typically employed in testing: low-fidelity 2D simulation for quick visualization and debugging, and high-fidelity 3D simulation for system-level behavior tests. Using a variety of tests (unit, integration, and frequent regression testing) increases the overall test coverage of the code base and prevents expensive software defects.
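A system-level behavior test such as "navigate from start to goal" usually reduces to a pass/fail check on the simulation's output. The following is a minimal, simulator-independent sketch of such a test oracle; the function name, tolerance, and timeout values are illustrative assumptions, not values from any particular framework.

```python
import math

# Assumed oracle: a navigation scenario passes if the robot's final pose is
# within a distance tolerance of the goal and the run finished within a
# time budget. Poses are (x, y) tuples in meters.
def navigation_test_passed(final_pose, goal, elapsed_s,
                           tol_m=0.25, timeout_s=300.0):
    dist = math.hypot(goal[0] - final_pose[0], goal[1] - final_pose[1])
    return dist <= tol_m and elapsed_s <= timeout_s

# A run that ended 0.1 m from the goal in 142 s passes...
print(navigation_test_passed((4.9, 3.0), (5.0, 3.0), 142.0))   # True
# ...while a timed-out run fails even with a perfect final pose.
print(navigation_test_passed((5.0, 3.0), (5.0, 3.0), 400.0))   # False
```

In a real setup, the final pose and elapsed time would come from the simulator (for example, from recorded odometry or ground-truth topics), and the oracle would run as the test's assertion step.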

Scenarios are parameter sets that define the system-under-test with real-world conditions, robot behaviors, and expected outcomes. Developers and quality assurance (QA) teams may want to run parametrized scenario tests. Parametrized testing is useful for:

  • coverage: to cover all conditions such as testing for changes in lighting,
  • extensibility: to increase the testing range through combinations of parameters, and
  • repeatability: to make sure that results are deterministic.

It also decouples teams; the developers work on the algorithms and the QA teams create the test definitions. This enables teams to work in parallel, thus increasing code release velocity.
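The coverage and extensibility points above can be sketched by generating scenarios as the cross product of parameter ranges. The parameter names and values below are illustrative of what a QA team might define for a warehouse robot.

```python
import itertools

# Illustrative parameter ranges for a parametrized scenario sweep.
lighting = ["bright", "dim", "dark"]
obstacle_density = ["low", "high"]
floor_surface = ["concrete", "tile"]

# Every combination becomes one scenario, giving deterministic, repeatable
# coverage of 3 * 2 * 2 = 12 conditions.
scenarios = [
    {"lighting": l, "obstacles": o, "floor": f}
    for l, o, f in itertools.product(lighting, obstacle_density, floor_surface)
]

print(len(scenarios))  # 12
print(scenarios[0])    # {'lighting': 'bright', 'obstacles': 'low', 'floor': 'concrete'}
```

Adding one more value to any range extends the sweep automatically, which is what makes parametrized scenarios cheap to grow over time.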

AWS RoboMaker provides a fully managed robotics simulation service that can be used to run multiple, parallel, physics-based scenario tests. AWS RoboMaker enables developers to quickly and efficiently run hundreds of simulations in batch to test the variety of different combinations and permutations of conditions that would exist in the real world. The following is a sample JSON script used to define scenario-based test parameters as described in the blog “Building a ROS Application CI Pipeline with AWS RoboMaker”.

{
    "scenarios": {
        "<SCENARIO_NAME>": {
            "robotEnvironmentVariables": {},
            "simEnvironmentVariables": {}
        }
    },
    "simulations": [{
        "scenarios": ["<SCENARIO_NAME1>, <SCENARIO_NAME2>"],
        "params": CreateSimulationJobParams
    }]
}

Using this scenario-based approach with AWS RoboMaker, developers can upload their ROS application to an Amazon S3 bucket and then run a simulation in the cloud. There is no infrastructure to provision, configure, or manage. With the AWS RoboMaker batch simulation API, developers can launch a large-scale batch of simulations with a single API call.
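To illustrate how a scenario file in the shape shown above might be expanded into individual simulation-job requests before calling the batch API, here is a small sketch. The expansion logic and field names inside the sample document are illustrative, not the RoboMaker service's own; in practice, the resulting list would feed the StartSimulationJobBatch API (for example, via the boto3 robomaker client).

```python
import json

# Scenario document in roughly the shape shown earlier (simplified).
doc = json.loads("""
{
    "scenarios": {
        "nav_bright": {"simEnvironmentVariables": {"LIGHTING": "bright"}},
        "nav_dark":   {"simEnvironmentVariables": {"LIGHTING": "dark"}}
    },
    "simulations": [{
        "scenarios": ["nav_bright", "nav_dark"],
        "params": {"maxJobDurationInSeconds": 3600}
    }]
}
""")

def expand_jobs(doc):
    """Produce one job request per referenced scenario, merging each
    scenario's environment variables into the shared base parameters."""
    jobs = []
    for sim in doc["simulations"]:
        for name in sim["scenarios"]:
            scenario = doc["scenarios"][name]
            job = dict(sim["params"])  # copy shared params per scenario
            job["environment"] = scenario.get("simEnvironmentVariables", {})
            jobs.append(job)
    return jobs

jobs = expand_jobs(doc)
print(len(jobs))  # 2 -- one job request per scenario
```

This is the step that lets one batch call fan out into hundreds of parallel simulations: each scenario entry becomes an independent job request with its own environment.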

Figure 2: Running multiple automated robot navigation tests in RoboMaker Simulation

Summary

In this blog, I discussed different functional testing approaches, especially scenario-based simulation testing of robotics applications. I hope that the tips and tricks described in this blog help you define and include new tests for your applications, increase test coverage, and build more fault tolerant and failure resistant robot applications. For more information about AWS RoboMaker, or to try a sample application in the AWS console, visit aws.amazon.com/robotics.

If you are interested in learning more about this technology, please Contact Us!

Pulkit Kapur

Pulkit Kapur is a Business Development Manager for Robotics and Autonomous Systems at Amazon Web Services (AWS). Prior to AWS, Pulkit worked at MathWorks as Industry Lead for Robotics, responsible for global business development and the product roadmap. Prior to that, Pulkit was at iRobot as a product manager, launching consumer robots globally. Pulkit has a Master's in Mechanical Engineering with a specialization in Robotics from the GRASP Lab at the University of Pennsylvania in Philadelphia. Pulkit has over 10 years of industry experience in the field of robotics and autonomous systems.