AWS Robotics Blog

Building a ROS-Application CI Pipeline with AWS RoboMaker

This blog uses colcon bundle for robot and simulation applications. AWS RoboMaker now supports only containers, which make it easy for you to bring and run your own simulations and applications. To follow along with this blog post, see our updated blog on Preparing ROS application and simulation application containers for AWS RoboMaker.

Introduction

Building and testing robot applications for dynamic and changing real-world environments are difficult and complicated tasks. Developers have to consider a virtually endless number of real-world situations, build their applications to withstand a variety of hardware-related issues, and produce algorithms and applications that can be successfully integrated with code written by their teammates. For many companies, this is a seemingly insurmountable challenge that significantly reduces agility and time-to-market for their robots and applications.

When building applications for robots, developers can increase their velocity by running their code in a simulation environment and iterating before undergoing the time-consuming and costly process of deploying and testing on physical robots. Testing in simulation is especially valuable when developers do not have access to a physical robot, or when it is not possible to routinely bring the robot into the real-world environments it will encounter. With AWS RoboMaker, testing ROS applications in simulation has never been easier. AWS RoboMaker provides a fully managed robotics simulation service that can be used to run multiple parallel, physics-based regression tests, and it automatically scales the underlying infrastructure based on the complexity of the simulation.

In this blog, we discuss an approach for automated testing of ROS applications by running parameterized simulations in AWS RoboMaker as part of a continuous integration and continuous delivery (CI/CD) pipeline. With this solution, developers can quickly and accurately identify bugs before they impact customers in production. By regularly integrating their code and ensuring it works consistently under many different conditions that robots would encounter in the real world, customers can attain increases in test speed and coverage while reducing the amount of test infrastructure they need to maintain. Here is the high-level architecture:

Testing ROS Applications with Scenarios

There are multiple ways to test robot software applications. Below, we introduce a testing approach that uses scenario-based testing. Scenarios are parameter sets that define real-world conditions, actor behaviors, and expected outcomes. This enables developers to decouple their simulation application code from the parameters that define the tests to run in simulation. This decoupling makes collaboration with QA engineers easier and standardizes how teams run regression tests. It also gives QA engineers the flexibility to easily define a variety of test cases (with different combinations of parameters) that more completely cover the desired testing scenarios. We will run these scenarios automatically through a simple API call to the AWS RoboMaker simulation service.

  1. Sign into GitHub and fork the AWS RoboMaker CloudWatch Monitoring Sample Application.
  2. Clone the repository into a development environment of your choice and create a new branch for this walkthrough:

    git clone <MY_CLONED_REPO>
    cd aws-robomaker-sample-application-cloudwatch
    git checkout -b <MY_INTEGRATION_BRANCH_NAME>

In today’s example, we will use the AWS RoboMaker CloudWatch Monitoring Sample Application provided in the sample applications section of AWS RoboMaker. We will write a simple navigation test node using the unittest library, which will define a set of navigation goals and run the simulation. The test node is packaged in our simulation application and triggered with a custom launch file. (The launch file is also responsible for exposing the scenario’s environment variables to the test node as ROS parameters, for example via roslaunch’s $(env VAR) substitution.)

The number of goals and the simulation world will be parameterized through environment variables, so that we can easily and flexibly define and run many different scenarios. Once the test is complete, the node will self-tag the simulation job with the test results. Tagging the running simulation job enables downstream pipeline steps, dashboards and notifications to easily reference and report on the test results. Finally, after tagging the simulation with the results, the test node will self-terminate the simulation job. This way, you are only charged for the duration and number of simulation units consumed. Each simulation unit is 1 vCPU and 2 GB RAM and is billed to the nearest minute. For more information about pricing and to calculate a detailed cost estimate, take a look at our pricing page.
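
Because the results are attached to the simulation job itself as tags, a downstream pipeline step only needs the job ARN to read them. Below is a minimal sketch of such a consumer, assuming boto3; the job ARN shown is a placeholder, and the tag key convention ("<test_name>_Status") comes from the test node shown later:

import boto3

robomaker = boto3.client("robomaker")

def test_passed(simulation_job_arn):
    """Return True if the job carries a "<test_name>_Status" tag set to "Passed"."""
    # Read the tags that the test node attached to the simulation job.
    response = robomaker.list_tags_for_resource(resourceArn=simulation_job_arn)
    tags = response["tags"]
    return any(key.endswith("_Status") and value == "Passed"
               for key, value in tags.items())

# Placeholder ARN for illustration; use the ARN returned by CreateSimulationJob.
print(test_passed("arn:aws:robomaker:us-west-2:111111111111:simulation-job/sim-sample"))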

The test node below monitors the status of the navigation goals and can be configured to assert success or failure of one or more navigations, accommodating both short and long-lived tests. The Gazebo world is parameterized in the launch file, allowing us to run the test in many different worlds. In this simple example, we’ll run our tests in the AWS RoboMaker Small House World and the AWS RoboMaker Bookstore World. Finally, the node includes two utility methods that tag the running RoboMaker simulation job and cancel it.

#!/usr/bin/env python

import rospy
import rostest
import time
import os
import unittest

from rosgraph_msgs.msg import Clock
from ros_monitoring_msgs.msg import MetricList
from robomaker_simulation_msgs.msg import Tag
from robomaker_simulation_msgs.srv import Cancel, AddTags

METRIC_NAME = "distance_to_goal"

class NavigationTest(unittest.TestCase):
    """
    This test case will send a number of expected goals and monitor their status. 
    If the robot reaches all of the destinations, it will mark the test as passed. 
    """    

    def cancel_job(self):
        rospy.wait_for_service("/robomaker/job/cancel")
        requestCancel = rospy.ServiceProxy("/robomaker/job/cancel", Cancel)
        response = requestCancel()
        if response.success:
            self.is_cancelled = True
            rospy.loginfo("Successfully requested cancel job")
            self.set_tag(name=self.test_name + "_Time_Elapsed_End", value= str(time.time()).split(".", 1)[0])
        else:
            rospy.logerr("Cancel request failed: %s", response.message)
    
    def set_tag(self, name, value):
        rospy.wait_for_service("/robomaker/job/add_tags")
        requestAddTags = rospy.ServiceProxy("/robomaker/job/add_tags", AddTags)
        tags = ([Tag(key=name, value=value)])
        response = requestAddTags(tags)
        if response.success:
            rospy.loginfo("Successfully added tags: %s", tags)
        else:
            rospy.logerr("Add tags request failed for tags (%s): %s", tags, response.message)

    def setUp(self):
        self.latch = False
        self.successful_navigations = 0
        self.test_name = "Robot_Monitoring_Tests_" + str(time.time()).split(".", 1)[0]
        self.is_completed = False
        self.is_cancelled = False
        rospy.loginfo("Test Name: %s", self.test_name)
        # Cast defensively: parameters sourced from environment variables may arrive as strings.
        self.navigation_success_count = int(rospy.get_param('NAVIGATION_SUCCESS_COUNT'))
        self.timeout = int(rospy.get_param("SIM_TIMEOUT_SECONDS"))
        
    def set_latched(self):
        self.latch = True
    
    def set_unlatched(self):
        self.latch = False
    
    def increment_navigations(self):
        self.successful_navigations = self.successful_navigations + 1
    
    def is_complete(self):
        return self.successful_navigations >= self.navigation_success_count
        
    def check_timeout(self, msg):
        """
            Cancel the test if it times out. The timeout is based on the
            /clock topic (simulation time).
        """
        if msg.clock.secs > self.timeout and not self.is_cancelled:
            rospy.loginfo("Test timed out, cancelling job")
            self.set_tag(name=self.test_name + "_Status", value="Failed")
            self.set_tag(name=self.test_name + "_Timed_Out", value=str(self.timeout))
            self.cancel_job()
            
    def check_complete(self, msgs):
        for msg in msgs.metrics:
            if msg.metric_name == METRIC_NAME:
                rospy.loginfo("Metric Name: %s", msg.metric_name)
                rospy.loginfo("Metric Value: %s", msg.value)
                """
                    If our distance to goal metric drops below .5 and we've
                    achieved our goal count then we are complete and we tag the
                    job success and cancel. Else, we continue checking progress
                    towards a new goal once the distance to goal climbs back 
                    above 1. Note that we're using what the nav stack thinks
                    is the distance to goal, in the real world we'd want to use
                    a ground truth value to ensure accuracy. 
                """
                if msg.value <= 0.5 and not self.is_completed and not self.latch:
                    self.set_latched()
                    self.increment_navigations()
                    self.set_tag(name=self.test_name + "_Successful_Nav_" + str(self.successful_navigations), value=str(self.successful_navigations))
                    if self.is_complete():
                        self.is_completed = True
                        self.set_tag(name=self.test_name + "_Status", value="Passed")
                        self.cancel_job()
                elif msg.value > 1 and not self.is_completed:
                    self.set_unlatched()
                    
    def test_navigation(self):
        try:
            self.set_tag(name=self.test_name + "_Time_Elapsed_Start", value=str(time.time()).split(".", 1)[0])
            rospy.Subscriber("/metrics", MetricList, self.check_complete)
            rospy.Subscriber("/clock", Clock, self.check_timeout)
            rospy.spin()
        except Exception as e:
            rospy.logerr("Error: %s", e)
            self.set_tag(name=self.test_name + "_Status", value="Failed")
            # Cancel the job and let the service bring down the simulation. We don't exit.
            self.cancel_job()

    def runTest(self):
        #Start the navigation test
        self.test_navigation()

if __name__ == "__main__":
    rospy.init_node("navigation_test", log_level=rospy.INFO)
    rostest.rosrun("test_nodes", "navigation_test", NavigationTest)
    

In this case, we are using multi-layered launch files to run the simulation test. The top-level launch file is worlds.launch. Here, we use the SIMULATION_WORLD environment variable to include the correct launch file for the world defined in that variable.

<launch>
    <!-- Spawn simulation world based on environment variable, SIMULATION_WORLD.-->
    <include file="$(find cloudwatch_simulation)/launch/$(env SIMULATION_WORLD)_turtlebot_navigation.launch">
    </include>
</launch>

For example, if the environment variable is set to bookstore, the launch file below will be included. The test node will be included in this second launch file. How to set other environment variables in the CI/CD workflow is described later in this blog.

<launch>
  <!-- 
        A bookstore with a Turtlebot navigating to pre-determined 
        goals in a random order endlessly.

        Note that navigation nodes are in the simulation application 
        as it uses a virtual map and should not be deployed to 
        the real robot. 

        Requires environment variable TURTLEBOT3_MODEL to be set to "burger"

        Only Turtlebot Burger is currently supported. SLAM maps for 
        Waffle and Waffle PI need to be generated.
  -->

  <!-- If true, will follow a pre-defined route forever -->
  <arg name="follow_route" default="true"/>

  <!-- 
       Always set GUI to false for AWS RoboMaker Simulation
       Use gui:=true on roslaunch command-line to run with a gui.
  -->
  <arg name="gui" default="false"/>

  <!-- World and Robot -->
  <include file="$(find aws_robomaker_bookstore_world)/launch/bookstore_turtlebot_navigation.launch">
    <arg name="gui" value="$(arg gui)"/>
    <arg name="x_pos" value="-3.5"/>
    <arg name="y_pos" value="5.5"/>
    <arg name="yaw"   value="0.0"/>
  </include>

  <!-- Send navigation route goals to the robot in random order -->
  <group if="$(arg follow_route)">
    <node pkg="aws_robomaker_simulation_common" type="route_manager" name="route_manager" output="screen">
      <rosparam file="$(find aws_robomaker_bookstore_world)/routes/route.yaml" command="load"/> 
    </node>
  </group>
</launch>

Creating Scenario-based Testing Infrastructure in AWS

First, we will create a JSON document to define the scenarios that will be launched in AWS RoboMaker Simulation. We will use this document in our CI pipeline to create an array of simulation job requests.

{
	"scenarios": {
		"<SCENARIO_NAME>": {
			"robotEnvironmentVariables": {},
			"simEnvironmentVariables": {}
		}
	},
	"simulations": [{
		"scenarios": ["<SCENARIO_NAME>"],
		"params": CreateSimulationJobParams
	}]
}

A scenario is created by defining a set of environment variables. In AWS RoboMaker simulation, the robot application can be decoupled from the simulation application. The robot application contains code meant to run on the physical robot, whereas the simulation application contains the Gazebo worlds and assets that are required for simulation. AWS RoboMaker runs both of these applications concurrently. Therefore, scenarios are defined by the environment variables to use in both the robot and simulation applications. Here is an example of a scenario that runs a single navigation test in the bookstore simulation world using the Waffle Pi model of the TurtleBot3:

...
	"scenarios": {
		"QuickNavBookStore": {
			"robotEnvironmentVariables": {
				"ROS_AWS_REGION": "us-west-2"
			},
			"simEnvironmentVariables": {
				"ROS_AWS_REGION": "us-west-2",
				"TURTLEBOT3_MODEL": "waffle_pi",
				"NAVIGATION_SUCCESS_COUNT": "1",
				"SIMULATION_WORLD": "bookstore",
				"SIM_TIMEOUT_SECONDS": "600"
			}
		}
	}
...

In the above example, we have set up five environment variables for the simulation application. They are:

  • ROS_AWS_REGION: The region to be used by the AWS SDK included with the ROS application packages.
  • TURTLEBOT3_MODEL: The TurtleBot model to use in simulation. You can select “burger” or “waffle_pi”. The Waffle Pi model has a camera sensor.
  • NAVIGATION_SUCCESS_COUNT: The ROS application sets random navigation goals. Once a goal is reached, it sets a new goal and starts moving towards it. This environment variable defines the number of navigation goals to complete in the test.
  • SIMULATION_WORLD: The simulation world to use in the test. This value supports “bookstore” and “small_house” as options for Gazebo simulation worlds.
  • SIM_TIMEOUT_SECONDS: The maximum simulation time (tracked on the /clock topic) before the test node tags the job as failed and cancels it.

You can define as many scenarios as you like, with different combinations of environment variables. A single AWS RoboMaker CreateSimulationJob API call will be executed for each (simulation, scenario) pair. For example, the JSON file below will create two AWS RoboMaker simulation jobs, one for each of the scenarios defined (QuickNavBookStore and MultiNavBookStore). The params field uses the same request syntax as the create_simulation_job method; it simply holds the parameters for the simulation job assets.

{
  "scenarios": {
    "QuickNavBookStore": {
      "robotEnvironmentVariables": {
        "ROS_AWS_REGION": "us-west-2"
      },
      "simEnvironmentVariables": {
        "ROS_AWS_REGION": "us-west-2",
        "TURTLEBOT3_MODEL": "waffle_pi",
        "NAVIGATION_SUCCESS_COUNT": "1",
        "SIMULATION_WORLD": "bookstore"
      }
    },
    "MultiNavBookStore": {
      "robotEnvironmentVariables": {
        "ROS_AWS_REGION": "us-west-2"
      },
      "simEnvironmentVariables": {
        "ROS_AWS_REGION": "us-west-2",
        "TURTLEBOT3_MODEL": "waffle_pi",
        "NAVIGATION_SUCCESS_COUNT": "3",
        "SIMULATION_WORLD": "bookstore"
      }
    }
  },
  "simulations": [{
    "scenarios": [
      "MultiNavBookStore",
      "QuickNavBookStore"
    ],
    "params": {
      "failureBehavior": "Fail",
      "iamRole": "<IAM_ROLE>",
      "maxJobDurationInSeconds": 600,
      "outputLocation": {
        "s3Bucket": "<S3_BUCKET>",
        "s3Prefix": "<S3_PREFIX>"
      },
      "robotApplications": [{
        "application": "<ROBOT_APP_ARN>",
        "applicationVersion": "$LATEST",
        "launchConfig": {
          "launchFile": "await_commands.launch",
          "packageName": "cloudwatch_robot"
        }
      }],
      "simulationApplications": [{
        "application": "<SIMULATION_APP_ARN>",
        "applicationVersion": "$LATEST",
        "launchConfig": {
          "packageName": "cloudwatch_simulation",
          "launchFile": "worlds.launch"
        }
      }],
      "vpcConfig": {
        "assignPublicIp": true,
        "subnets": ["<SUBNET_1>", "<SUBNET_2>"],
        "securityGroups": ["<SECURITY_GROUP>"]
      }
    }
  }]
}

Create a new scenarios.json file with the above JSON structure and save it in the base directory of the sample application.
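
Before moving on, it may help to see how this file fans out into simulation jobs. Below is a minimal sketch, assuming boto3, of the expansion logic: one CreateSimulationJob call per (simulation, scenario) pair, with the scenario's environment variables injected into each application's launchConfig. The production version of this logic is part of the simulation launcher application deployed in the next section, so treat this as illustrative only.

import copy
import json

import boto3

robomaker = boto3.client("robomaker")

def launch_scenarios(scenarios_file):
    """Create one RoboMaker simulation job per (simulation, scenario) pair."""
    with open(scenarios_file) as f:
        doc = json.load(f)

    job_arns = []
    for simulation in doc["simulations"]:
        for scenario_name in simulation["scenarios"]:
            scenario = doc["scenarios"][scenario_name]
            # Start from the shared job parameters and inject this scenario's
            # environment variables into each application's launchConfig,
            # following the CreateSimulationJob request syntax.
            params = copy.deepcopy(simulation["params"])
            for app in params["robotApplications"]:
                app["launchConfig"]["environmentVariables"] = scenario["robotEnvironmentVariables"]
            for app in params["simulationApplications"]:
                app["launchConfig"]["environmentVariables"] = scenario["simEnvironmentVariables"]
            response = robomaker.create_simulation_job(**params)
            job_arns.append(response["arn"])
    return job_arns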

Creating the CI Pipeline in AWS CodePipeline

Now that our scenarios have been defined and the test node has been created, we will set up the CI infrastructure required to automatically build and bundle the application and launch the set of simulations after every code integration. AWS CodePipeline is a service that enables customers to easily set up and configure continuous integration pipelines. We will use this service today; however, you could also use common tools like Travis CI and Jenkins.

We will start by setting up a new AWS CodePipeline with three stages. In the first stage, we will connect to a Git repository. AWS CodePipeline supports GitHub, AWS CodeCommit, Amazon S3, Amazon ECR, and AWS CodeStar Connections (Bitbucket) as sources. In the second stage, the source code will be cloned onto a managed build server, provided by AWS CodeBuild. We will use the Docker image provided by OSRF on Docker Hub, docker.io/ros, as the basis for our build server. In the third and final stage, we will create and monitor the progress of the AWS RoboMaker simulation jobs.

The first task is to deploy the infrastructure we will need to launch and monitor a batch of AWS RoboMaker simulation jobs. To do this, we will use AWS Step Functions, a service that enables customers to easily build and operate state machines in the cloud. The application below is packaged as a serverless application, which we will deploy using the AWS SAM CLI (Serverless Application Model Command Line Interface). Click here to follow the installation instructions. Once deployed, our Step Functions workflow will look like this:
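
The workflow's core loop is simple: check each simulation job until it reaches a terminal state, then read its tags to decide pass or fail. Here is a minimal sketch of one such check, assuming boto3; the deployed Lambda functions implement the real logic.

import boto3

robomaker = boto3.client("robomaker")

# Simulation job statuses that will not change again.
TERMINAL_STATUSES = {"Completed", "Failed", "RunningFailed", "Terminated", "Canceled"}

def check_simulation_job(job_arn):
    """One 'check status' iteration of the Step Functions workflow."""
    job = robomaker.describe_simulation_job(job=job_arn)
    if job["status"] not in TERMINAL_STATUSES:
        return {"done": False, "status": job["status"]}
    # The test node tags the job with "<test_name>_Status" before cancelling it.
    tags = job.get("tags", {})
    passed = any(k.endswith("_Status") and v == "Passed" for k, v in tags.items())
    return {"done": True, "status": job["status"], "passed": passed}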

We will use three commands in the SAM CLI tool. The first command, sam build --use-container -m ./requirements.txt, will build the Python Lambda function packages and install any required dependencies. It leverages a container that has been pre-provisioned with the correct version of Python. The second command, sam package, will use the infrastructure-as-code SAM template file (template.yml) to stage the Lambda function packages (zip files) in S3 and create a launch-ready CloudFormation template. The parameters for this command, --output-template-file package.yml --s3-bucket <YOUR_S3_BUCKET>, define the output CloudFormation template file to create (package.yml) as well as the S3 bucket in which to store the packaged Lambda function zip files. The final command, sam deploy, will launch the CloudFormation template in your AWS account. If a deployment already exists, it looks for changes and only deploys the deltas.

Before using this tool, first ensure that the AWS CLI is installed and that there is a configured AWS default profile you can use with the SAM CLI. If you have not already installed the AWS CLI, follow these instructions. The IAM user credentials used with the CLI must have attached IAM policies that allow the user to launch CloudFormation templates, create VPC, Lambda, IAM, and RoboMaker resources, and upload to the S3 bucket defined.

Once ready, run these commands to build, package and deploy the Serverless backend:

git clone https://github.com/aws-samples/aws-robomaker-simulation-launcher
cd aws-robomaker-simulation-launcher
sam build --use-container -m ./requirements.txt
sam package --output-template-file package.yml --s3-bucket <YOUR_S3_BUCKET>
sam deploy --template-file package.yml --stack-name cicd --capabilities CAPABILITY_NAMED_IAM --s3-bucket <YOUR_S3_BUCKET>

Now that we have our backend simulation infrastructure created, let’s create and configure our CI pipeline using AWS CodePipeline.

  1. Open AWS CodePipeline in the AWS Console. Click Create Pipeline.
  2. Set a name for the pipeline, select the Create a New Service Role radio button, and click the checkbox that allows the role to be used with the pipeline. Keep the Advanced Settings as default and press Next. (The advanced settings allow you to customize the S3 bucket and KMS keys that are used for the pipeline assets, if you would like to modify those.)

Stage One: Source

The first stage we will setup is the source provider.

  1. Set the source provider to GitHub.
  2. Select the repository and branch that you created above.
  3. Set the detection mode to GitHub webhooks.
  4. Press Next to complete the configuration of this stage.

Stage Two: Build

In the second stage, we will set up the build server. To define what happens on the build server, we will create a buildspec file and include it in the GitHub repository with the ROS source code. In this example, we will use the following buildspec file to prepare the dependencies and run the colcon build and colcon bundle commands. This definition file includes sections for:

  • Environment Variables: Any variables you would like to reference in the build commands.
  • Phases: Command definitions for each phase of the build.
    • Install Phase: Any commands that need to run to install dependencies for the build process itself. Colcon will handle any ROS-level application dependencies. In this case, we add the Gazebo apt key (used by colcon to resolve Gazebo dependencies), update the apt sources, and install colcon and colcon bundle.
      Note: to improve build times, you could create your own Dockerfile with these items pre-installed.
    • Pre-build Phase: Any configuration commands that need to run prior to the build. We have included the ROS workspace commands in this phase (rosdep updates and workspace-level dependency preparation and installation).
    • Build Phase: This is where the colcon build command is executed.
    • Post-build Phase: Once the code is built, the next operation is to package the code in a bundle and upload it to S3. This happens in the post-build phase with colcon bundle.
  • Cache: Colcon bundle supports caching and will store the packaged overlays (including the downloaded and installed dependencies) in a local folder. In this case, we will cache the built and bundled application to speed up future builds. The file paths to the directories to cache, as well as the robot and simulation application source code, are referenced in the environment variables.
  1. Before adding the build project, add a buildspec file in the base directory of the cloned sample application code. Create a new file named buildspec.yml with the following build commands:
    version: 0.2 
    env:
      variables:
        S3_BUCKET: <YOUR_S3_BUCKET>
        APP_NAME: cicd
        CACHE_DIR: cache
        ROBOT_WS: robot_ws
        SIMULATION_WS: simulation_ws
        ROS_VERSION: kinetic
    phases: 
      install: 
        commands: 
           - apt-get update
           - apt-get install -y python3-pip python3-apt apt-transport-https ca-certificates wget
           - wget http://packages.osrfoundation.org/gazebo.key 
           - apt-key add gazebo.key
           - echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-stable.list
           - apt-get update
           - pip3 install -U setuptools pip
           - pip3 install colcon-ros-bundle
           - pip3 install awscli
      pre_build:
        commands:
          - . /opt/ros/$ROS_VERSION/setup.sh
          - rosdep update
          - sudo rosdep fix-permissions
          - rosws update --target-workspace ./$ROBOT_WS
          - rosdep install --from-paths ./$ROBOT_WS/src --ignore-src -r -y
          - rosws update --target-workspace ./$SIMULATION_WS
          - rosdep install --from-paths ./$SIMULATION_WS/src --ignore-src -r -y
      build: 
        commands: 
          - COLCON_LOG_PATH="$CACHE_DIR/$ROBOT_WS/logs" colcon build --base-paths "./$ROBOT_WS" --build-base "$CACHE_DIR/$ROBOT_WS/build" --install-base "$CACHE_DIR/$ROBOT_WS/install"
          - COLCON_LOG_PATH="$CACHE_DIR/$SIMULATION_WS/logs" colcon build --base-paths "./$SIMULATION_WS" --build-base "$CACHE_DIR/$SIMULATION_WS/build" --install-base "$CACHE_DIR/$SIMULATION_WS/install"
      post_build: 
        commands: 
          - COLCON_LOG_PATH="$CACHE_DIR/$ROBOT_WS/logs" colcon bundle --base-paths "./$ROBOT_WS" --build-base "$CACHE_DIR/$ROBOT_WS/build" --install-base "$CACHE_DIR/$ROBOT_WS/install" --bundle-base "$CACHE_DIR/$ROBOT_WS/bundle"
          - COLCON_LOG_PATH="$CACHE_DIR/$SIMULATION_WS/logs" colcon bundle --base-paths "./$SIMULATION_WS" --build-base "$CACHE_DIR/$SIMULATION_WS/build" --install-base "$CACHE_DIR/$SIMULATION_WS/install" --bundle-base "$CACHE_DIR/$SIMULATION_WS/bundle"
          - aws s3 cp $CACHE_DIR/$ROBOT_WS/bundle/output.tar s3://$S3_BUCKET/bundles/x86/robotApp.tar 
          - aws s3 cp $CACHE_DIR/$SIMULATION_WS/bundle/output.tar s3://$S3_BUCKET/bundles/x86/simulationApp.tar
    cache:
      paths:
        - '$CACHE_DIR/**/*'
        - '$ROBOT_WS/src/deps/**/*'
        - '$SIMULATION_WS/src/deps/**/*'
        
  2. Select AWS CodeBuild as the build provider. Select the region that you are using and press Create project. A pop-up will open with CodeBuild configuration settings.
  3. In the pop-up, type in a name for the build project.

  4. In the Environment section, select Custom Image. Set the Environment type to Linux and the Image Registry to Other Registry. Type docker.io/ros:kinetic in the external registry URL. In this case, we are using ROS Kinetic; however, if your application uses a different ROS release, you can specify it here.
  5. Select the New Service Role radio button. Leave the additional configuration as the defaults.
  6. We will use the buildspec.yml file in the GitHub source code that you created above. Therefore, select Use a buildspec file.
  7. Press Continue to CodePipeline. This will close the pop-up and inject the CodeBuild project into the AWS CodePipeline wizard. Press Next.
  8. Press Skip the deploy stage, then confirm by pressing Skip.
  9. Review the CodePipeline configuration and press Create Pipeline.

Stage Three: Simulate

In the third and final stage, we will test a variety of scenarios by launching and monitoring a set of AWS RoboMaker simulation jobs. Here is what the final CI pipeline will look like:

Architecture diagram for ROS application CI/CD pipelines.

At this point, we have a running CI pipeline with the first two stages, Source and Build.

  1. Return to the AWS CodePipeline console and open your newly created pipeline.
  2. Press Edit in the CodePipeline console page.
  3. After the Build Stage, press + Add stage.
  4. Type Simulate as the name for the new stage.
  5. Press Add action group. Type in LaunchSimulations as the Action Name.
  6. In the action provider dropdown, select AWS Lambda.
  7. In the input provider dropdown, select Source Artifact.
  8. As part of the SAM application above, we created an AWS Lambda function called cicd-TriggerStepFunctions. Select cicd-TriggerStepFunctions from the dropdown list. (A sketch of what this function does follows these steps.)
  9. Press Done. The overlay will close. Next, press Done on the stage. Then, press Save on the top right corner.
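
For reference, a CodePipeline Lambda action of this kind typically starts the Step Functions execution and then reports success or failure back to the pipeline. The sketch below shows that general shape only, with a hypothetical state machine ARN; the actual cicd-TriggerStepFunctions implementation is part of the aws-robomaker-simulation-launcher application deployed earlier.

import json

import boto3

codepipeline = boto3.client("codepipeline")
stepfunctions = boto3.client("stepfunctions")

# Hypothetical ARN for illustration; the SAM stack creates the real state machine.
STATE_MACHINE_ARN = "arn:aws:states:us-west-2:111111111111:stateMachine:cicd"

def handler(event, context):
    # CodePipeline invokes Lambda actions with a job ID that must be acknowledged.
    job_id = event["CodePipeline.job"]["id"]
    try:
        # Start the workflow that launches and monitors the simulation jobs.
        stepfunctions.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"codePipelineJobId": job_id}),
        )
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as e:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(e)},
        )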

Congratulations! You now have a fully configured ROS CI Pipeline.

Conclusion

In this blog, we covered how to define scenarios as a mechanism for testing the functionality of ROS applications. We walked through the steps to set up an AWS CodePipeline with a managed ROS build server, and we showed how to run automated AWS RoboMaker simulation jobs as part of a CI pipeline. In future blogs, we will cover continuous deployment and delivery using AWS RoboMaker Fleet Management, as well as general best practices for testing ROS applications. To learn more about AWS RoboMaker, please visit the resources page on our website.

Happy building!

Jeremy Wallace

Jeremy has helped hundreds of start-ups, SMBs and enterprises across many industry verticals adopt and optimize their cloud computing infrastructure on AWS. As a Principal Solutions Architect for Robotics at AWS, Jeremy works with customers to enhance their robots with cloud capabilities and improve release velocity by implementing automation in their dev/test processes.

Andrew Lafranchise

Andrew Lafranchise is a Senior Development Software Engineer on the AWS RoboMaker team.