The Internet of Things on AWS – Official Blog

Utilizing AWS Services to Quickly Build Solutions for Robotics Use Cases

Introduction

Autonomous mobile robots (AMRs) are widely used in many industries, such as logistics and manufacturing. However, developing and operating autonomous robots poses various challenges: a wide range of technologies is required, and the process is complex and time-consuming. Integration with the cloud is also required to develop and operate the robots effectively. However, many robot builders are not familiar with the benefits of cloud robotics, or lack the cloud development expertise that can help them bring smarter robots to market faster.

In this article, you will learn how to solve common challenges in developing and operating autonomous robots with AWS services. You will also learn which services are required to realize your use case and where to start your prototype.

Challenges in developing and operating autonomous robots

Let us consider the challenges of autonomous robot development in three phases: build, test, and deploy.

The development of robots requires expertise in a wide range of domains. For example, artificial intelligence (AI) and machine learning (ML) technologies are used for autonomous navigation; cloud connectivity is required for application integration; and video streaming is used for remote monitoring and operation.

During the testing phase, repeated trials are necessary to ensure the robots work correctly in various situations and environments. However, the availability of robot hardware can be limited, and testing in a physical environment is costly and time-consuming.

Once in production, robot engineers and operators need to monitor and manage the fleet, including robot health and status. Mechanisms to deploy applications to the devices and to control the robots remotely are required. In some cases, interoperability across multiple types of robots and systems is also a requirement.

Because of these challenges, development of autonomous robots is laborious and time-consuming. AWS provides various services that can be used to develop, test, and operate such robot applications faster. With these services, you can quickly build prototypes and easily operate a large number of robots in production. In the following sections, I will introduce how you can utilize these services in robot development to solve these challenges.

AWS Services for Robotics

Communication Between Robot and Cloud: AWS IoT Core

An autonomous robot is expected to operate by itself in various environments, but unforeseen circumstances may require help from an operator. In such cases, capabilities like the following are required: operators can remotely control the robot via the cloud, and developers can troubleshoot using logs collected from the robots. You can utilize AWS IoT Core to develop these features.

AWS IoT Core is a managed cloud platform that lets connected devices interact with cloud applications and other devices easily and securely. Devices can connect via lightweight protocols such as MQTT and communicate with the cloud and other devices. Messages collected from devices can be routed to other AWS services, such as database, storage, analytics, and AI/ML services.
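
As an illustration, here is a minimal sketch of a robot publishing telemetry to AWS IoT Core with the AWS IoT Device SDK for Python (v2); the endpoint, certificate paths, client ID, topic, and payload fields are placeholders you would replace with your own.

```python
import json

from awscrt import mqtt
from awsiot import mqtt_connection_builder

# Build a mutual-TLS MQTT connection using the robot's device certificate.
mqtt_connection = mqtt_connection_builder.mtls_from_path(
    endpoint="xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com",  # your IoT endpoint
    cert_filepath="robot.cert.pem",
    pri_key_filepath="robot.private.key",
    ca_filepath="AmazonRootCA1.pem",
    client_id="robot-001",
)
mqtt_connection.connect().result()  # block until connected

# Publish a status message that the cloud side can route, store, or visualize.
payload = {"battery_pct": 87, "pose": {"x": 1.2, "y": 3.4, "theta": 0.5}}
mqtt_connection.publish(
    topic="robots/robot-001/telemetry",
    payload=json.dumps(payload),
    qos=mqtt.QoS.AT_LEAST_ONCE,
)

mqtt_connection.disconnect().result()
```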

For example, you can integrate AWS IoT Core and other AWS services to collect sensor data and log data from robots, store the data in a data lake for analysis and troubleshooting, and create dashboards for near real-time visualization. This article shows major patterns of data collection and visualization with AWS IoT services. For example, Pattern 6 in the article can be used for the near real-time visualization use case.
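
As a hedged example of such routing, the sketch below creates an AWS IoT topic rule that writes each telemetry message to an S3 data lake bucket; the bucket name, IAM role ARN, and topic filter are assumptions for illustration.

```python
import boto3

iot = boto3.client("iot")

# Route every message on robots/<id>/telemetry into an S3 data lake bucket.
iot.create_topic_rule(
    ruleName="RobotTelemetryToS3",
    topicRulePayload={
        "sql": "SELECT * FROM 'robots/+/telemetry'",
        "actions": [
            {
                "s3": {
                    "bucketName": "robot-telemetry-datalake",  # hypothetical bucket
                    "key": "${topic()}/${timestamp()}.json",   # one object per message
                    "roleArn": "arn:aws:iam::123456789012:role/iot-s3-access",  # placeholder
                }
            }
        ],
        "ruleDisabled": False,
    },
)
```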

You might also want to interact with your robots using web or mobile apps. You can use the Device Shadow feature to synchronize state between the robot and the cloud. This allows the user or application to know the latest status of the robot and send commands to it even when the robot is offline.
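
For example, a cloud application could set the desired state in the robot's shadow and read back the last reported state; a minimal sketch with boto3 follows, where the thing name and state fields are illustrative.

```python
import json

import boto3

iot_data = boto3.client("iot-data")

# Set the desired state; the robot applies it when it is next online.
iot_data.update_thing_shadow(
    thingName="robot-001",  # hypothetical thing name
    payload=json.dumps({"state": {"desired": {"mode": "paused"}}}),
)

# Read the state the robot last reported, even if it is offline right now.
response = iot_data.get_thing_shadow(thingName="robot-001")
shadow = json.loads(response["payload"].read())
print(shadow["state"].get("reported"))
```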

Figure 1. Using AWS IoT Core for the communication between robot and application

Software Deployment and Execution: AWS IoT Greengrass

To run the developed applications on actual robots, a mechanism to deploy and manage the software is necessary. It might also be necessary to keep improving applications and deploying updates even after the robots have been shipped. However, it is difficult to build a mechanism that manages the application software, deploys it to multiple robots at once, and modifies the application configuration depending on the type of robot.

AWS IoT Greengrass is an open source IoT edge runtime and cloud service that helps you build, deploy, and manage device software. You can manage the developed applications in the cloud and deploy and run them on a specific robot or on multiple robots. Applications can be developed in popular programming languages or run in Docker containers. You can set up multiple software configurations for different types of robot fleets. With these features, you don't need to develop your own mechanism to deploy and manage the applications running on the robot and can concentrate on developing the applications themselves.
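
A sketch of what a fleet deployment could look like from the cloud side, using the AWS IoT Greengrass V2 API via boto3; the thing group ARN, component name, version, and configuration are hypothetical.

```python
import boto3

greengrass = boto3.client("greengrassv2")

# Deploy a component to every robot in a thing group, with a per-fleet override.
greengrass.create_deployment(
    targetArn="arn:aws:iot:us-east-1:123456789012:thinggroup/amr-fleet",  # placeholder
    deploymentName="navigation-app-rollout",
    components={
        "com.example.NavigationApp": {  # hypothetical custom component
            "componentVersion": "1.0.0",
            "configurationUpdate": {
                "merge": '{"maxSpeed": 0.5}'  # fleet-specific configuration
            },
        }
    },
)
```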

AWS IoT Greengrass also provides pre-built software modules called components, supplied by AWS and the community, that make device-side development more efficient. For example, with Greengrass components, applications running on Greengrass can communicate with AWS IoT Core, and machine learning inference at the edge, such as image recognition, can be implemented easily. You can also deploy and manage ROS-based applications with Greengrass and Docker. This allows you to quickly deploy and run your application on the robot, so developers can focus on the application itself. You can get started with the AWS IoT Greengrass V2 Workshop.
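
As a rough sketch of the device side, a custom component might relay robot status to AWS IoT Core through the Greengrass IPC interface, assuming the awsiotsdk package and an illustrative topic name; this code runs inside a deployed component, not on a developer machine.

```python
from awsiot.greengrasscoreipc.clientv2 import GreengrassCoreIPCClientV2
from awsiot.greengrasscoreipc.model import QOS

# Connect to the local Greengrass nucleus over IPC (works only on the device).
ipc_client = GreengrassCoreIPCClientV2()

# Relay a status message from the on-robot application to AWS IoT Core.
ipc_client.publish_to_iot_core(
    topic_name="robots/robot-001/status",  # illustrative topic
    qos=QOS.AT_LEAST_ONCE,
    payload=b'{"status": "navigating"}',
)

ipc_client.close()
```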

Figure 2. Using AWS IoT Greengrass for robot software deployment and management

Machine Learning at the Edge: Amazon SageMaker and Amazon SageMaker Edge

To make a robot work autonomously, the robot has to recognize its environment correctly. For example, tasks such as obstacle detection and avoidance, human and object detection, and mapping are necessary. These tasks often need to run at the edge for several reasons, such as unstable network connections, network bandwidth, and cost. In this use case, customers want to train machine learning (ML) models in the cloud, then deploy them and run inference at the edge.

Figure 3. ML model workflow with Amazon SageMaker and SageMaker Edge

To collect raw data such as images, rosbags, or telemetry for ML model training, you can use AWS IoT Greengrass stream manager. With stream manager, you can transfer high-volume IoT data to the AWS Cloud efficiently and reliably. Stream manager works in environments with unstable connectivity, and you can configure AWS services such as Amazon S3 and Amazon Kinesis Data Streams as export destinations.
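
A hedged sketch of queuing a rosbag file for reliable upload to Amazon S3 with the stream manager SDK for Python follows; the stream name, file path, and bucket are assumptions, and the code runs on the robot inside a Greengrass component.

```python
from stream_manager import (
    ExportDefinition,
    MessageStreamDefinition,
    S3ExportTaskDefinition,
    S3ExportTaskExecutorConfig,
    StrategyOnFull,
    StreamManagerClient,
)
from stream_manager.util import Util

client = StreamManagerClient()

# A stream whose messages are S3 upload tasks processed by stream manager.
client.create_message_stream(
    MessageStreamDefinition(
        name="RosbagUploadStream",  # hypothetical stream name
        strategy_on_full=StrategyOnFull.OverwriteOldestData,
        export_definition=ExportDefinition(
            s3_task_executor=[S3ExportTaskExecutorConfig(identifier="RosbagUploader")]
        ),
    )
)

# Queue one rosbag for upload; stream manager retries over flaky connections.
task = S3ExportTaskDefinition(
    input_url="file:/greengrass/v2/data/run42.bag",  # placeholder local path
    bucket="robot-training-data",                    # placeholder bucket
    key="rosbags/run42.bag",
)
client.append_message("RosbagUploadStream", Util.validate_and_serialize_to_json_bytes(task))
```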

After collecting the raw data, you can use Amazon SageMaker to build your ML model. Amazon SageMaker is a service to build, train, and deploy ML models for any use case with fully managed infrastructure, tools, and workflows. For example, you can annotate the images collected by your robots with Amazon SageMaker Ground Truth and train your custom ML model with SageMaker.
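
For instance, a training job on the collected data might look like the following sketch using the SageMaker Python SDK; the container image URI, execution role, instance type, and S3 paths are placeholders.

```python
import sagemaker
from sagemaker.estimator import Estimator

# A training job on the images the robots uploaded; all URIs are placeholders.
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/object-detector:latest",
    role="arn:aws:iam::123456789012:role/sagemaker-execution-role",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    output_path="s3://robot-training-data/models/",
    sagemaker_session=sagemaker.Session(),
)

# The channel name and S3 prefix would match your labeled Ground Truth output.
estimator.fit({"train": "s3://robot-training-data/labeled-images/"})
```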

Once you build your custom ML model, you can utilize Amazon SageMaker Edge to optimize and deploy it to edge devices. Amazon SageMaker Edge enables machine learning on edge devices by optimizing, securing, and deploying models to the edge, and then monitoring those models across your robot fleet. You can optimize your ML model in the cloud and deploy it as a Greengrass component with Amazon SageMaker Edge Manager. After the model and the SageMaker Edge Manager agent are deployed to the robot, the SageMaker inference engine starts and your robot applications can use the inference results at the edge.
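
As an illustration of the packaging step, the sketch below uses boto3 to create an edge packaging job that emits the model as a Greengrass component; it assumes the model was already compiled with SageMaker Neo, and all names and ARNs are placeholders.

```python
import boto3

sm = boto3.client("sagemaker")

# Package a Neo-compiled model so it can be deployed as a Greengrass component.
sm.create_edge_packaging_job(
    EdgePackagingJobName="object-detector-pkg-v1",
    CompilationJobName="object-detector-neo-v1",  # existing SageMaker Neo job
    ModelName="object-detector",
    ModelVersion="1.0",
    RoleArn="arn:aws:iam::123456789012:role/sagemaker-edge-role",  # placeholder
    OutputConfig={
        "S3OutputLocation": "s3://robot-training-data/edge-packages/",
        "PresetDeploymentType": "GreengrassV2Component",  # emit a Greengrass component
    },
)
```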

Remote Control and Monitoring: Amazon Kinesis Video Streams

AMRs are used to move materials in environments like warehouses. They can navigate by themselves because they are equipped with cameras and other sensors to recognize people, obstacles, and other objects. However, if a robot gets stuck, for example, you can use the video from its cameras to monitor and operate it remotely. In some cases, you may want to store the video in the cloud for analysis and troubleshooting. However, it is difficult to develop an infrastructure that streams and collects a large amount of video data in real time.

Amazon Kinesis Video Streams makes it easy to securely stream media from connected devices to AWS for storage, analytics, machine learning (ML), playback, and other processing. It automatically provisions and elastically scales all the infrastructure needed to ingest streaming media from millions of devices. Users can collect video from robot cameras and play it back for real-time monitoring or on-demand troubleshooting. Amazon Kinesis Video Streams also supports ultra-low-latency two-way media streaming with WebRTC, as a fully managed capability, for use cases like remote control that require sub-second latency.

Amazon Kinesis Video Streams provides device SDKs that can ingest video from a robot's camera to the cloud or stream it over a peer-to-peer connection using WebRTC. You can use either the Amazon Kinesis Video Streams Producer SDK or the WebRTC SDK, depending on the use case. For example, if you need to collect and store video in the cloud for on-demand playback and analysis, you should use the Producer SDK. On the other hand, if you need real-time playback with sub-second latency for remote control or bi-directional media streaming, you can use the WebRTC SDK. These SDKs make it easy to securely stream media.
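
For example, once a robot ingests video with the Producer SDK, an operator-facing application could fetch an HLS playback URL with boto3, as in this minimal sketch; the stream name is an assumption.

```python
import boto3

kvs = boto3.client("kinesisvideo")

# Each stream has a dedicated endpoint for the archived-media APIs.
endpoint = kvs.get_data_endpoint(
    StreamName="robot-001-camera",  # hypothetical stream name
    APIName="GET_HLS_STREAMING_SESSION_URL",
)["DataEndpoint"]

media = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = media.get_hls_streaming_session_url(
    StreamName="robot-001-camera",
    PlaybackMode="LIVE",  # or ON_DEMAND with a timestamp range
)["HLSStreamingSessionURL"]

print(url)  # open this URL in an HLS-capable player to monitor the robot
```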

You can try the Amazon Kinesis Video Streams Producer SDK, video playback, and video analysis with the Amazon Kinesis Video Streams Workshop. If you want to learn how to use Amazon Kinesis Video Streams with WebRTC, there is the Amazon Kinesis Video Streams WebRTC Workshop.

Figure 4. Using Amazon Kinesis Video Streams for remote monitoring and control

Simulation for Testing Robot Applications: AWS RoboMaker

When developing autonomous robot applications, it can be challenging to verify that the application performs as expected in a variety of environments. During the development phase, robot hardware is often limited, and it is difficult to prepare various test environments. Therefore, simulation environments are often used to test robot applications.

AWS RoboMaker is a fully managed service that enables you to easily create simulation worlds and run simulation jobs without provisioning or managing any infrastructure. You can run general simulation applications or ROS-based simulation applications in Docker containers. While a simulation is running in the cloud, you can check its status by accessing graphical user interface (GUI) applications and terminals from your browser.

Building a simulation environment is costly and time-consuming, and requires 3D modeling skills. However, with RoboMaker WorldForge, you can create a number of 3D virtual environments by simply specifying parameters. You can also run multiple simulations in parallel, or start and stop simulations via the RoboMaker APIs. These features make it easier to build a CI/CD environment for robot applications that automatically tests the developed application against a variety of simulation environments. You can try a RoboMaker simulation example by following Preparing ROS application and simulation containers for AWS RoboMaker.
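
A sketch of starting a simulation job from a CI/CD pipeline via the RoboMaker API with boto3 might look like the following; the simulation application ARN, IAM role, and launch configuration are placeholders for your own simulation container.

```python
import boto3

robomaker = boto3.client("robomaker")

# Kick off one simulation run; a CI/CD pipeline could launch many in parallel.
response = robomaker.create_simulation_job(
    maxJobDurationInSeconds=3600,
    iamRole="arn:aws:iam::123456789012:role/robomaker-simulation-role",  # placeholder
    simulationApplications=[
        {
            "application": (
                "arn:aws:robomaker:us-east-1:123456789012:"
                "simulation-application/warehouse-sim/1700000000000"  # placeholder ARN
            ),
            "launchConfig": {
                "environmentVariables": {"TEST_SCENARIO": "obstacle_avoidance"},
            },
        }
    ],
)

print(response["arn"])  # use this ARN to poll status or cancel the job
```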

Figure 5. Running simulations on AWS RoboMaker

Conclusion

In this article, I introduced common challenges in the development, testing, and operation of robot applications, along with AWS IoT services that can be utilized for such use cases. The scope of robot application development is very diverse, so you can accelerate development by integrating the AWS services that fit your use case. You can also easily manage and operate large robot fleets with these services. Get started by exploring the services with the IoT workshops.

About the Author

Yuma Mihira is a Senior IoT Specialist Solutions Architect at Amazon Web Services. Based in Japan, he helps customers build their IoT solutions. Prior to AWS, he worked on robotics development as a software engineer.