AWS Robotics Blog

Deploy and Manage ROS Robots with AWS IoT Greengrass 2.0 and Docker

This blog references the AWS RoboMaker Integrated Development Environment (IDE), a feature that has been deprecated. To follow along with this blog post, use the AWS Cloud9 IDE. See our updated blog on how to Build and simulate robotics applications in AWS Cloud9.

 

Deploying and managing software for a fleet of autonomous robots is a complex, time-consuming, and error-prone task. Robot builders face challenges in preventing disruptions during software deployments, including establishing a secure communication layer to the robot, monitoring deployment status, and implementing failure-resilient logic. Additionally, robot builders routinely spend time and money integrating the technologies required to deploy their device software, increasing time to market and reducing margin. Once software is deployed, significant development effort is required to collect reliable data, monitor the fleet, and build dashboards so operators can interact with robots, troubleshoot problems, and react to field conditions. Limited fleet-management capabilities lead to operational disruptions caused by hardware or software malfunctions, and the resulting downtime can cost robot operators millions of dollars per year.

In this blog, we introduce a new way to deploy and manage robot software on production fleets at scale using AWS IoT Greengrass 2.0. When combined with the industry-grade tools, libraries, and capabilities of the Robot Operating System 2 (ROS2), developers can bring new cloud-enhanced robot features to market, and reduce the time and effort required to build failure-resilient infrastructure. 

Introduction to AWS IoT Greengrass 2.0

AWS IoT Greengrass 2.0 is an open-source edge runtime and cloud service that reduces complexities when deploying and managing applications on robots. With AWS IoT Greengrass 2.0, developers can add local compute, messaging, and data management capabilities to their robotics fleets. This helps developers reliably deploy updates, remotely manage software, and securely connect robots to cloud services. You can use AWS IoT Greengrass 2.0 to:

  • Remotely manage the application lifecycle (install, run, and shutdown) on robots
  • Deploy software updates over-the-air to fleets of robots
  • Securely connect robots to operator-facing dashboards and monitoring tools
  • Deploy machine learning (ML) models and run ML inference at the edge for tasks like object detection and image classification
  • Ingest large amounts of raw telemetry, sensor data, and logs into cloud-based stream processing services like Amazon Kinesis

In the following sections, we show you how to implement a secure, reliable, and customizable application management workflow through a detailed step-by-step tutorial using a cloud-connected ROS 2 Foxy sample application.

Using AWS IoT Greengrass 2.0 to deploy ROS applications

AWS IoT Greengrass 2.0 manages code running on robots through modular, self-contained units called components. Components can represent applications, runtime installers, libraries, or any code – including ROS applications and/or containers. They are defined using YAML or JSON-based recipes, where developers can set configuration parameters, dependencies and lifecycle operations to install, run, startup, shutdown, and recover the component. AWS IoT Greengrass 2.0 also manages permissions for components to access AWS resources in the cloud. This enables applications running on the robot to connect to services such as Amazon S3 for cloud object storage or Amazon Kinesis for real-time data streaming and ingestion. Finally, AWS IoT Greengrass 2.0 provides a variety of pre-built public components. These components accelerate development of common functionality such as centralized log capture, and remote secure tunnel session management to robots without any additional development work.

To manage and deploy components to fleets of robots at-scale, AWS IoT Greengrass 2.0 uses device registry features provided by AWS IoT Device Management. Developers can apply searchable metadata such as robot type, hardware configuration, and software version to individual robots or groups of robots, then run deployment actions based on this configuration.
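
For example, once fleet indexing is enabled, operators can query the registry from the AWS CLI. The thing type and attribute names below are hypothetical placeholders, but the commands sketch the workflow:

    # One-time account setting: index the registry and device connectivity state.
    aws iot update-indexing-configuration \
      --thing-indexing-configuration thingIndexingMode=REGISTRY_AND_SHADOW,thingConnectivityIndexingMode=STATUS

    # Find robots of a given type that are still running an older software version.
    aws iot search-index \
      --query-string "thingTypeName:warehouse_amr AND attributes.softwareVersion:1.0.*"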

Robots registered in AWS IoT can have one or more JSON-based state objects called device shadows. Device shadows are synchronized between the robot and the cloud, enabling decoupled applications to interact with robot state even when disconnected. Device shadows can be indexed with AWS IoT Device Management. Robot operators and engineers are then able to filter, monitor, and alert on local conditions across fleets of thousands or even millions of robots. Results retrieved include battery conditions, mission status, location, or inertial measurement unit (IMU) data. The robot metadata and device shadow are available through AWS IoT API/SDKs and can be integrated with custom fleet management and operator-facing applications. You can use device shadows along with the library of public components to build rich cloud-enhanced robot applications.
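
As a sketch of what this looks like in practice, a robot-side process (or a quick test from the AWS CLI) can report local state into a named shadow. The shadow name and fields below are illustrative, and ROS2_IDE_1 is the thing name used later in this tutorial:

    # Report hypothetical battery, mission, and pose state into a named shadow.
    aws iot-data update-thing-shadow \
      --thing-name ROS2_IDE_1 \
      --shadow-name robot_status \
      --cli-binary-format raw-in-base64-out \
      --payload '{"state":{"reported":{"battery_pct":72,"mission":"picking","pose":{"x":4.2,"y":1.7}}}}' \
      update-result.json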

Tutorial: Build, package, deploy, and run cloud-connected ROS applications with AWS IoT Greengrass 2.0

In this step-by-step walkthrough, you learn how to build and run a cloud-connected ROS 2 Foxy application using Docker and AWS IoT Greengrass 2.0, and how to deploy and run it on a robot. Finally, we show how to connect robots over MQTT to decoupled applications, such as operator-facing dashboards running in the cloud. Following is an architecture diagram that shows what you build in this tutorial:

ROS2 with Docker and AWS Greengrass 2.0 Sample Application Architecture

The application runs three containers using a Docker Compose file. Two of the containers, a talker and a listener, use local ROS messaging to send and receive a “Hello World” message over the ROS topic /chatter. A third container uses the AWS IoT Greengrass SDK to bridge messages published over the ROS topic /chatter with a local socket used by AWS IoT Greengrass for interprocess communication between components. AWS IoT Greengrass then relays the message over an MQTT topic named chatter in the cloud.
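
The talker and listener are the standard demo_nodes_cpp nodes, so the ROS side of this flow uses plain std_msgs/msg/String messages. Once the containers are running (later in this tutorial), you could publish a test message on /chatter yourself from a shell inside one of them, for example:

    # Inside one of the ROS containers; the listener (and the bridge) should receive this message.
    source /opt/ros/foxy/setup.bash
    ros2 topic pub --once /chatter std_msgs/msg/String "{data: 'Hello from the command line'}"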

This tutorial contains the following steps:

  • Background: Build and run ROS applications with Docker
  • Step 1: Prepare a Docker-based ROS 2 sample application for deployment
  • Step 2: Install and provision AWS IoT Greengrass 2.0
  • Step 3: Create the sample application component in AWS IoT Greengrass 2.0
  • Step 4: Deploy the sample application component with AWS IoT Greengrass 2.0

Pre-requisites: This tutorial uses AWS RoboMaker Integrated Development Environment (IDE) for development. You can also develop locally on your own Linux-based machine. If you are developing on your local machine, make sure to install the AWS CLI and configure an AWS profile with administrative permissions, per these instructions.

Background: Build and run ROS applications with Docker

Building and running ROS applications with Docker can provide multiple benefits in the design, development, and operation of code for robots. Using Docker containers, developers can run their ROS stack as a set of microservices and decouple application components from the configuration of the underlying host. This ensures consistency at runtime across various types of robots, software versions, and hardware stacks. For example, teams could run multiple distributions of ROS and seamlessly bridge between ROS 1 and ROS 2. Running ROS microservices in containers also enables developers to establish resource isolation boundaries between containers and the underlying host. This prevents an application failure or runaway process from impacting other containers and/or the host, enabling developers to build and orchestrate resilient application prioritization and recovery logic.
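
As a simple illustration of such an isolation boundary, a single ROS node can be launched with explicit CPU and memory limits. The image name is the one built later in this tutorial, the limits are arbitrary examples, and we assume the image entrypoint execs the given command (as the compose file in Step 1 relies on):

    docker run --rm --cpus="0.5" --memory="256m" \
      ros-foxy-greengrass-demo:latest \
      ros2 run demo_nodes_cpp talker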

Docker also provides optimizations when updating software over-the-air. Docker images are read-only templates used to create containers and are stored as layers on the local file system. Common layers are shared between container images, so their files are downloaded and stored only once. Therefore, when new updates are deployed over-the-air, only the changed layers are transferred. By optimizing Docker image layers, developers can reduce the bandwidth consumption and duration of update operations. To learn more about Docker primitives and design patterns, click here. The ROS community also has dedicated tools for building ROS applications with Docker. To learn more about these tools, click here.
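
You can see this layering directly on the build host. For example, after building the image in Step 1, the following commands list its layers and their sizes, including the layers inherited from the shared ROS base image:

    docker history ros-foxy-greengrass-demo:latest
    docker image inspect ros-foxy-greengrass-demo:latest --format '{{json .RootFS.Layers}}'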

In this tutorial, we will use Docker to build, package, and run the ROS2 Foxy sample application.

Step 1: Prepare a Docker-based ROS 2 sample application for deployment

  1. Open the AWS Management Console and search for AWS RoboMaker. From the main AWS RoboMaker console page, click Development Environments and press Create development environment.
     AWS RoboMaker IDE Console
  2. In the configuration page, give the IDE a name and select ROS 2 Foxy (Latest) as the preinstalled ROS distribution. In the networking section, keep the VPC and subnets as the default values, then press Create.
     AWS RoboMaker Integrated Development Environment (IDE)
  3. A new AWS RoboMaker development environment should now be running in your browser. Next, clone the sample application code by running the following command in the IDE terminal window:
    git clone https://github.com/aws-samples/greengrass-v2-docker-ros-demo.git
  4. Next, review the two Docker configuration files named Dockerfile and docker-compose.yaml. In the Dockerfile, you will see the instructions for each build stage and the final runtime image that we will deploy. The Docker compose file orchestrates how the containers are run at runtime.
    DEEP DIVE: DOCKER ASSETS

    Dockerfile is an instruction set used to build Docker images. This example uses a multi-stage build and integrated caching with Docker BuildKit. Multi-stage builds allow workflows with separate build steps that prevent extra source or build files from being copied to the final runtime Docker image. The caching operations speed up future builds by storing previously built files. To learn more about Docker BuildKit, click here.

    Caption: The Dockerfile for the ROS2 sample application:

    # Set main arguments.
    ARG ROS_DISTRO=foxy
    ARG LOCAL_WS_DIR=workspace
    
    # ==== ROS Build Stages ====
    # ==== Base ROS Build Image ====
    FROM ros:${ROS_DISTRO}-ros-base AS build-base
    LABEL component="com.example.ros2.demo"
    LABEL build_step="ROSDemoNodes_Build"
    
    RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F42ED6FBAB17C654
    RUN apt-get update && apt-get install python3-pip -y
     RUN apt-get update && apt-get install -y ros-$ROS_DISTRO-example-interfaces
    RUN python3 -m pip install awsiotsdk
    
    # ==== Package 1: ROS Demos Talker/Listener ====
    FROM build-base AS ros-demos-package
    LABEL component="com.example.ros2.demo"
    LABEL build_step="DemoNodesROSPackage_Build"
    
    # Clone the demos_ros_cpp package from within the ROS Demos monorepo.
    RUN mkdir -p /ws/src
    WORKDIR /ws
    RUN git clone https://github.com/ros2/demos.git \
        -b $ROS_DISTRO \
        --no-checkout \
        --depth 1 \
        --filter=blob:none \
        src/demos
    
    RUN cd src/demos && \
        git sparse-checkout set demo_nodes_cpp
    
    RUN . /opt/ros/$ROS_DISTRO/setup.sh && \
        colcon build --build-base workspace/build --install-base /opt/ros_demos
    
    # ==== Package 2: Greengrass Bridge Node ====
    FROM build-base AS greengrass-bridge-package
    LABEL component="com.example.ros2.demo"
    LABEL build_step="GreengrassBridgeROSPackage_Build"
    
    ARG LOCAL_WS_DIR
    COPY ${LOCAL_WS_DIR}/src /ws/src
    WORKDIR /ws
    
    # Cache the colcon build directory.
    RUN --mount=type=cache,target=${LOCAL_WS_DIR}/build:/ws/build \
        . /opt/ros/$ROS_DISTRO/setup.sh && \
        colcon build \
         --install-base /opt/greengrass_bridge
    
    # ==== ROS Runtime Image (with the two packages) ====
    FROM build-base AS runtime-image
    LABEL component="com.example.ros2.demo"
    COPY --from=ros-demos-package /opt/ros_demos /opt/ros_demos
    COPY --from=greengrass-bridge-package /opt/greengrass_bridge /opt/greengrass_bridge
    
    # Add the application source file to the entrypoint.
    WORKDIR /
    COPY app_entrypoint.sh /app_entrypoint.sh
    RUN chmod +x /app_entrypoint.sh
    ENTRYPOINT ["/app_entrypoint.sh"]
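
    Because the Dockerfile is a multi-stage build, you can also build a single intermediate stage on its own, which is handy for debugging one package at a time. A sketch, run from the repository root cloned above:

     # Build only the ros-demos-package stage and tag it for local inspection.
     DOCKER_BUILDKIT=1 docker build --target ros-demos-package -t ros-demos-package:debug .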
    

    Docker Compose is a tool for defining and running multi-container Docker applications. Today, we will use Docker Compose to build the Docker image, then run the three sample application containers (talker, listener, and the AWS IoT Greengrass bridge) on the robot. To learn more about Docker Compose, click here.

    In the Docker Compose file, take note of the environment variables and volumes defined in the greengrass_bridge service definition. These variables are passed from AWS IoT Greengrass and specify the location of the socket used for interprocess communication along with required credentials. We use AWS IoT Greengrass interprocess communication to republish messages over MQTT to the cloud. For more on configuring Docker containers with AWS IoT Greengrass 2.0, click here.

    Caption: The Docker compose file for the ROS2 sample application.

    version: "3"
    services:
    greengrass_demo_image:
    
    build:
      context: ./
      image: ros-foxy-greengrass-demo:latest
    
    talker:
      image: ros-foxy-greengrass-demo:latest
      command: ros2 run demo_nodes_cpp talker
      
    listener:
      image: ros-foxy-greengrass-demo:latest
      command: ros2 run demo_nodes_cpp listener
    
    greengrass_bridge:
      image: ros-foxy-greengrass-demo:latest
      command: ros2 launch greengrass_bridge greengrass_bridge.launch.py ros_topics:="['chatter']" iot_topics:="['cloud_chatter']"
      environment:
        - AWS_REGION
        - SVCUID
        - AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT
        - AWS_CONTAINER_AUTHORIZATION_TOKEN
        - AWS_CONTAINER_CREDENTIALS_FULL_URI
      volumes:
        - "/greengrass/v2/ipc.socket:/greengrass/v2/ipc.socket"
    
  5. Now, run the following commands in the terminal to install Docker Compose and run the build operation.
    # Install Docker Compose
    sudo curl -L "https://github.com/docker/compose/releases/download/1.29.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
    
    # Build your images
    cd ~/environment/greengrass-v2-docker-ros-demo
    DOCKER_BUILDKIT=1 docker-compose build
    
  6. Once the build completes, run the command docker images in the terminal to confirm that the image was successfully built. You should see a built container image named “ros-foxy-greengrass-demo”.
     Caption: Result of docker images command
  7. Open a new tab or window with the AWS Management Console. In this new window, open Amazon Elastic Container Registry (Amazon ECR). We are going to push the container image created in the previous steps into Amazon ECR. Next, press Create Repository.
  8. In the wizard, set the repository name to ros-foxy-greengrass-demo and press Create repository.
     Console view to create a new ECR repository.
  9. Once done, the repository list in Amazon ECR should look like this:
     Amazon ECR list of repositories.
  10. Click the radio button beside the new repository, then press the “View push commands” button.
  11. This will open a pop-up window with a set of push commands to run in the AWS RoboMaker IDE terminal. Keep this window open, return to the AWS RoboMaker IDE, and run the push commands displayed in the pop-up window. Since we already built the Docker image, skip number 2 from the list of push commands (“docker build -t ros-foxy-greengrass-demo”).
     Amazon ECR push commands
     The set of commands to run in the terminal (from the pop-up window) should look like this, except with your AWS account ID and Region.
    aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com
    
    docker tag ros-foxy-greengrass-demo:latest <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/ros-foxy-greengrass-demo:latest
    
    docker push <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/ros-foxy-greengrass-demo:latest

Congratulations! The Docker-based ROS application is now built, uploaded to Amazon ECR, and ready to deploy.
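
Before moving on, you can optionally sanity-check the image locally (now that Docker Compose is installed) by running only the ROS talker and listener services. The greengrass_bridge service is skipped here because it needs the IPC socket and credentials that AWS IoT Greengrass injects at deployment time:

    cd ~/environment/greengrass-v2-docker-ros-demo
    docker-compose up -d talker listener
    docker-compose logs -f listener   # expect "I heard: [Hello World: N]" lines; Ctrl+C to stop
    docker-compose down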

Step 2: Install and Provision AWS IoT Greengrass 2.0

Next, we will install and provision AWS IoT Greengrass 2.0. Today, we are going to use the AWS RoboMaker IDE instance as a substitute for a real robot. You can also use this same process when provisioning real robots, and customize it to fit your desired configuration and security requirements.

In order to create the cloud-based resources required to connect robots with AWS IoT Greengrass 2.0, the provisioning process will need elevated AWS permissions. We have packaged the required permissions along with pre-requisite AWS resources for deployment in an AWS CloudFormation template. You can look at the template here: greengrass/greengrass_bootstrap.template.yaml.

  1. Run the following AWS CloudFormation create stack command:
    cd ~/environment
    aws cloudformation create-stack --stack-name GG-Provisioning --template-body file://greengrass-v2-docker-ros-demo/greengrass/greengrass_bootstrap.template.yaml --capabilities CAPABILITY_NAMED_IAM
    DEEP DIVE: CLOUDFORMATION TEMPLATE

    The template will create the following resources:

    • An S3 bucket to stage deployment artifacts.
     • An IAM Greengrass provisioning user with the minimal IAM access needed to provision new robots.
    • A base IAM Role to provide robots access to specific AWS resources.
  2. Check the status of the CloudFormation stack with the following command. Wait a minute or two for it to create the resources. Once done, the response will look similar to the following:
    aws cloudformation describe-stacks --stack-name GG-Provisioning

    Result of CloudFormation describe stack.

  3. In a new browser window or tab, open the IAM Console. Click Users to view the new user that was created from the previous steps. The name of the user follows this format: GG-Provisioning-GreengrassProvisioningUser-N12345678. Click on this user and open the Security credentials tab.
     IAM user security credentials view.
  4. Next, scroll to the Access keys section and press Create access key. A pop-up window will open with a new access key. Download or copy/paste these credentials to a secure location.
     Create access key results.
  5. Create a new terminal tab in the IDE by pressing the green (+) symbol beside the open terminal.
     Create a new terminal in the IDE
  6. Run the following setup commands in the new AWS RoboMaker IDE terminal tab, replacing <INSERT_YOUR_AWS_ACCESS_KEY_ID_HERE> and <INSERT_YOUR_AWS_SECRET_KEY> with the credentials created in the preceding step:
      # Install dependencies (Java JRE)
      sudo apt-get update
      sudo apt-get install default-jre -y
      
      # Install and provision AWS IoT Greengrass 2.0
      export AWS_ACCESS_KEY_ID=<INSERT_YOUR_AWS_ACCESS_KEY_ID_HERE>
      export AWS_SECRET_ACCESS_KEY=<INSERT_YOUR_AWS_SECRET_KEY>
      
      curl -s https://d2s8p88vqu9w66.cloudfront.net/releases/greengrass-nucleus-latest.zip > greengrass-nucleus-latest.zip && unzip greengrass-nucleus-latest.zip -d GreengrassCore
  7. Run the AWS IoT Greengrass 2.0 provisioning command:
      sudo -E java -Droot="/greengrass/v2" -Dlog.store=FILE -jar ./GreengrassCore/lib/Greengrass.jar \
      --thing-name ROS2_IDE_1 \
      --thing-group-name ROS2_IDE_Machines \
      --component-default-user ggc_user:ggc_group \
      --provision true \
      --setup-system-service true \
      --deploy-dev-tools true
      
      sudo usermod -aG docker ggc_user
      DEEP DIVE: AWS IOT GREENGRASS 2.0 DEVICE PROVISIONING

      The preceding command will provision the local and cloud-based resources required for Greengrass to run. Here is a description of each flag defined:

      • --thing-name: Defines the IoT Thing to create and/or use for this robot.
      • --thing-group-name: Defines the IoT Thing Group to create and/or use for this robot.
      • --component-default-user: Defines the default Linux user that runs Greengrass components.
      • --provision: Runs the provisioning process to create the resources defined in the preceding flags and set up Greengrass on the device. If this is set to false, Greengrass assumes that these resources already exist.
      • --setup-system-service: Sets up Greengrass as a local system service so the software runs on boot.
      • --deploy-dev-tools: Creates an initial deployment with the Greengrass CLI component for development and debugging. In production, this flag is likely not needed.

      In the final command, we gave the ggc_user system user access to run docker containers.

      For more details on AWS IoT Greengrass service invocation, click here.

  8. Once finished, close the additional terminal by pressing “x” on the right corner of the terminal tab.
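
Before moving on, you can confirm that the AWS IoT Greengrass nucleus was installed as a system service and is running. Assuming the default install root of /greengrass/v2 used above:

    sudo systemctl status greengrass.service
    sudo tail -n 20 /greengrass/v2/logs/greengrass.log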

Step 3: Create the sample application component in AWS IoT Greengrass 2.0

    In this next section, you will create a software component named com.example.ros2.demo that will define and run the Docker-based ROS 2 sample application, then deploy it with AWS IoT Greengrass. The component downloads and runs the docker compose file from Amazon S3 along with the private Docker image stored in Amazon ECR. Two pre-built public components are also deployed as dependencies, the Token Exchange Service and Docker Application Manager. These two public components allow AWS IoT Greengrass 2.0 to run docker containers on robots.

    DEEP DIVE: PUBLIC COMPONENTS

    The Token Exchange Service is used to provide AWS access to running components, such as access to download software artifacts from Amazon S3, pull docker images from Amazon ECR, or upload logs and telemetry data. The local AWS token exchange service will generate temporary short-lived credentials for robot applications to access AWS resources. By default, the Token Exchange Service will use the IAM role and associated permissions created by the provisioning process (GreengrassV2TokenExchangeRole). An example IAM policy to use with this role is provided in the sample application at greengrass/robot.policy.json.

    The Docker Application Manager public component will manage the docker permissions and pull docker images from private repositories in the cloud. We will deploy these public components as dependencies along with our ROS 2 component.
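
    If the token exchange role created during provisioning does not already grant the Amazon ECR and Amazon S3 read access this component needs, one option is to attach the sample's example policy to it as an inline policy. A sketch, where the policy name is an arbitrary placeholder:

      cd ~/environment/greengrass-v2-docker-ros-demo
      aws iam put-role-policy \
        --role-name GreengrassV2TokenExchangeRole \
        --policy-name ros2-demo-robot-access \
        --policy-document file://greengrass/robot.policy.json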

    Now, let’s create and deploy our first AWS IoT Greengrass component.

    1. In the AWS RoboMaker IDE, open the file greengrass > com.example.ros2.demo > 1.0.0 > recipes > recipe.yaml. This file is a basic YAML-based component recipe, which defines the runtime logic in addition to the two deployment artifacts: the Docker image in Amazon ECR and the Docker compose file stored in Amazon S3.
    2. In the AWS RoboMaker IDE terminal, run the following command to retrieve the S3 bucket name created by the CloudFormation template. If there is an “AccessDenied” error at this stage, it is likely because you are still using the terminal with the provisioning credentials from the preceding step. Close this terminal and open a new one.
      aws cloudformation describe-stacks --stack-name GG-Provisioning

      Result of CloudFormation describe stack.

    3. Modify the recipe file by replacing the <YOUR_PRIVATE_ECR_IMAGE_ID_ROS_GREENGRASS_DEMO> fields with the Amazon ECR image ID that you created. These values should be similar to this: <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/ros-foxy-greengrass-demo:latest. Then, update <YOUR_S3_BUCKET> with the S3 bucket name from the previous step and save the recipe file.
      ---
      RecipeFormatVersion: '2020-01-25'
      ComponentName: com.example.ros2.demo
      ComponentVersion: '1.0.0'
      ComponentDescription: 'A basic component that runs a simple pub/sub ROS2 application'
      ComponentPublisher: Amazon
      ComponentDependencies:
        aws.greengrass.DockerApplicationManager:
          VersionRequirement: ~2.0.0
        aws.greengrass.TokenExchangeService:
          VersionRequirement: ~2.0.0
      ComponentConfiguration:
        DefaultConfiguration:
          accessControl:
            aws.greengrass.ipc.mqttproxy:
              com.example.PubSubPublisher:pubsub:1:
                policyDescription: "Allows access to publish and subscribe to MQTT topics."
                operations:
                - "aws.greengrass#PublishToIoTCore"
                - "aws.greengrass#SubscribeToIoTCore"
                resources:
                - "chatter"
                - "cloud_chatter"
      Manifests:
        - Platform:
            os: all
          Lifecycle:
              Install: |
                 docker tag <YOUR_PRIVATE_ECR_IMAGE_ID_ROS_GREENGRASS_DEMO> ros-foxy-greengrass-demo:latest
              Startup: |
                 docker-compose -f {artifacts:path}/docker-compose.yaml up -d
              Shutdown: |
                 docker-compose -f {artifacts:path}/docker-compose.yaml down
          Artifacts:
            - URI: "docker:"
            - URI: "s3:///com.example.ros2.demo/1.0.0/artifacts/docker-compose.yaml"
      
    4. Upload the docker compose file to the same Amazon S3 bucket using the object key defined in the recipe:
      cd ~/environment/greengrass-v2-docker-ros-demo
      aws s3 cp ./docker-compose.yaml s3://<MY_BUCKET_NAME>/com.example.ros2.demo/1.0.0/artifacts/docker-compose.yaml
    5. Open the AWS IoT Greengrass 2.0 console in a new browser tab. On the left menu, click Components. Then, press Create component.
       Create a new AWS IoT Greengrass 2.0 component.
    6. Click “Enter recipe as YAML”, then copy and paste the recipe from the preceding step into the editor window.
    7. Press Create component.

    Create component summary page.

    Congratulations! The Docker-based ROS 2 sample application component has been successfully created in Greengrass 2.0.

Step 4: Deploy the sample application component with AWS IoT Greengrass 2.0

    Using AWS IoT Greengrass 2.0, software components and configurations can be deployed to individual robots or fleets of robots. The provisioning process preceding created an IoT Thing Group named ROS2_IDE_Machines to use as a deployment target and executed an initial deployment to download and install local developer tools. All robots added to the ROS2_IDE_Machines group will receive the components defined in this deployment. To ensure consistency, AWS IoT Greengrass 2.0 overwrites previous deployments when deploying new components to targets that have existing deployments. Therefore, in this next section, we will revise the initial deployment to add the new ROS 2 sample application component we created above. Once done, the ROS 2 sample application will be deployed and running on the IDE host. To learn more about AWS IoT Greengrass 2.0 deployments, click here.
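
    The console steps below can also be scripted. As a rough sketch, the equivalent AWS CLI call targets the thing group created during provisioning; note that a deployment created this way replaces the existing one for that target, so a real revision would also list the components already deployed (such as the Greengrass CLI):

      aws greengrassv2 create-deployment \
        --target-arn arn:aws:iot:<REGION>:<ACCOUNT_ID>:thinggroup/ROS2_IDE_Machines \
        --deployment-name "Deployment for ROS2_IDE_Machines" \
        --components '{"com.example.ros2.demo":{"componentVersion":"1.0.0"}}'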

    1. Open the AWS IoT Greengrass console and click Deployments. Here, you will see the initial deployment that was created when the robot was provisioned.
    2. Click the check box for Deployment for ROS2_IDE_Machines and press Revise in the top-right corner.
      AWS IoT Greengrass 2.0 deployments.
    3. In the wizard, the first step specifies the deployment name and the deployment target (either a Greengrass core device or an IoT Thing Group). When deployments are revised, the deployment target must stay the same; however, the name can be changed if the new set of components no longer aligns with the defined naming scheme. We will leave these details as the default values. Press Next.
       Specify a deployment target in Greengrass 2.0
    4. In the next step of the wizard, find the new sample application component (com.example.ros2.demo) under My components. Then, click the check box beside it to include the new component in the revised deployment.
       Note: The dependency components aws.greengrass.DockerApplicationManager and aws.greengrass.TokenExchangeService do not need to be specified directly, as all dependencies defined in the component recipe are deployed automatically with the component.
       Select the components to deploy with IoT Greengrass 2.0
    5. In the third step of the wizard, you can customize the configuration (environment variables, component versions, system user, etc.) of the components in the deployment. For this tutorial, we will use the default settings. Click Next.
    6. In the final step, you have the option to configure advanced deployment settings. We will leave these options as the defaults for now. Press Next, then press Deploy to start the deployment. The following advanced deployment settings are available:
      • Rollout configuration is the rate at which you deploy to large fleets of devices. This can be defined as either a constant or exponential rate, and you can set the maximum number of devices to deploy software to per minute.
      • Timeout configuration is how long to wait for a device to apply the deployment before timing out. The maximum value here is 7 days.
      • Cancel configuration is where you can set failure thresholds on a defined minimum number of deployed devices. For example, if there is a failure rate of 20% or higher after the initial 50 deployments in a robot fleet of 1000, cancel the deployment to the remaining 950 robots.
      • Deployment policies are where you can define advanced deployment workflows. Each individual component can have custom update logic defined in the component recipe. When component notifications are enabled, the logic defined by each component is executed before the component is updated. You can define custom conditional logic in the component recipe (such as delaying an update until the robot is at the charging dock or connected to Wi-Fi). You can also set components to automatically roll back to the previous version if a deployment fails.

      AWS IoT Greengrass 2.0 deployment results.

    7. After a minute or two, run the following commands in the AWS RoboMaker IDE with the Greengrass CLI to see if the ROS containers are running.
      cd /greengrass/v2/bin/
      sudo ./greengrass-cli component list

      AWS IoT Greengrass 2.0 CLI component list results.

    8. Use the docker compose file that was deployed to tail the logs and see the pub/sub communication between nodes.
      export ARTIFACT_DIR=/greengrass/v2/packages/artifacts/com.example.ros2.demo/1.0.0/
      sudo docker-compose -f $ARTIFACT_DIR/docker-compose.yaml logs --follow

      The running ROS2 with Docker and AWS IoT Greengrass 2.0 application.

    You should see your application running Hello World messages!
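
    You can also peek at the ROS graph from inside one of the running containers. The container name below is a placeholder; docker-compose assigns names based on the project directory, so check docker ps first:

      sudo docker ps --format '{{.Names}}'
      # Ctrl+C stops the topic echo.
      sudo docker exec -it <LISTENER_CONTAINER_NAME> bash -c \
        "source /opt/ros/foxy/setup.bash && ros2 topic list && ros2 topic echo /chatter"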

    9. (Optional) To stop and/or restart the ROS containers, run the following commands. To learn more about what you can do with the Greengrass CLI, click here.
      # Stop
      sudo ./greengrass-cli component stop -n com.example.ros2.demo
      
      # Check the state (it should be marked as "FINISHED"):
      sudo ./greengrass-cli component list
      
      # Restart
      sudo ./greengrass-cli component restart -n com.example.ros2.demo
    10. Finally, open the AWS IoT console, click Test on the left menu, then open the MQTT test client. Part of our application today was to set up a ROS topic bridge that republishes messages over Greengrass interprocess communication to MQTT topics in AWS. Subscribe to the topic “chatter”. “Hello World” messages will start to appear in AWS IoT.
       MQTT Test Client in AWS IoT displaying Hello World messages.
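
    The bridge is also launched with iot_topics:="['cloud_chatter']", so it subscribes to that MQTT topic and forwards messages back down to ROS. As a sketch (the JSON payload shape is an assumption; adjust it to whatever the bridge node expects), you can publish to it from the AWS CLI:

      aws iot-data publish \
        --topic cloud_chatter \
        --cli-binary-format raw-in-base64-out \
        --payload '{"message": "Hello robot"}'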

    Congratulations! You have successfully deployed a docker-based ROS 2 application to a robot using AWS IoT Greengrass 2.0 and connected ROS topics to MQTT topics in the cloud!

Clean-up

    Once finished with the walkthrough, you can remove the resources you created by following these steps:

    1. Delete the CloudFormation stack per these instructions.
    2. Delete the AWS RoboMaker IDE instance per these instructions.
    3. Delete the Greengrass Core device in the AWS IoT console by clicking Greengrass > Core devices > ROS2_IDE_1. Then, click Delete in the top-right corner of the screen.
    4. Delete the Thing Group in the AWS IoT console by clicking Manage > Thing Groups > ROS2_IDE_Machines. Then click the drop-down menu, Actions, in the top-right corner and press Delete.

Conclusion

    In this post, we walked through a step-by-step guide on how to build and deploy ROS 2 applications with Docker and AWS IoT Greengrass 2.0. These steps included:

    • How to set up a ROS development environment in the cloud.
    • How to run basic ROS applications in docker containers.
    • How to bridge ROS topics with MQTT topics and connect robots to the cloud.
     • How to deploy and manage ROS 2 docker containers on fleets of robots with AWS IoT Greengrass 2.0.

    In our next blog, we will deploy a functioning robot application to an NVIDIA L4T-based JetBot using this same approach. The JetBot robot development kit leverages an NVIDIA Jetson Nano and has many interesting sensors, such as a LiDAR, camera, microphone, speakers, and OLED display. We will also describe common cross-compiling workflows with Docker and show you how to run inference at the edge.

    To find out more, visit AWS RoboMaker, AWS IoT Greengrass or contact AWS for further information.

    Happy Building!