Creating a custom controller plugin in OpenRAVE 0.9

I needed to write a controller plugin for OpenRAVE to support the UR5 robot. Through this process, I learned how the ControllerBase class works, and in this story I would like to go over it briefly. I am not going to cover robot-specific details, since different robots use different ROS messages for their joints; instead, I will keep the post at a high level, explaining which functions OpenRAVE calls and the core idea behind writing such controllers. The original code is hosted on GitHub and referenced at the end of the post if you want more detailed information. This plugin was created at the Robotics Lab, School of Computing, University of Leeds.

The two main tasks I needed to implement were: visualising the current configuration of the robot (physical or simulated) in OpenRAVE, and executing trajectories computed by OpenRAVE on the actual robot (or the simulator).

Visualising the robot in OpenRAVE

OpenRAVE with UR5

The first problem was quite straightforward. I subscribed to the ROS topic that publishes the robot’s joint values, in my case /joint_states, and handled the logic in a callback function.
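To give a concrete picture, here is a minimal sketch of what that subscription might look like inside the controller. The class name MyURController and the member names _nh and _jointStateSubscriber are placeholders of mine, not necessarily what the actual plugin uses:

#include <ros/ros.h>
#include <sensor_msgs/JointState.h>

// Somewhere in the controller's initialisation (e.g. its Init method):
// _nh is a ros::NodeHandle member, _jointStateSubscriber a ros::Subscriber member.
_jointStateSubscriber = _nh.subscribe("/joint_states", 1,
                                      &MyURController::JointStateCallback, this);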

This callback function (JointStateCallback) does something very simple: each time it is called, the new joint values are passed in as an argument (a ROS message). The UR5 has 6 joints, so by looping over the position array of the message I can extract the position of each joint and push it into a C++ vector that ends up holding the raw joint positions:

// Copy the joint positions from the incoming JointState message.
std::vector<double> joint_angles;
for (unsigned int i = 0; i < (msg->position).size(); i++)
{
    joint_angles.push_back((msg->position).at(i));
}

With the position values of each joint now collected in a single vector, we can use OpenRAVE’s SetDOFValues function on the robot to apply the new joint angles to the robot model in OpenRAVE:

// Set DOF Values of the joint angles just received from message to the
// robot in OpenRAVE.
OpenRAVE::EnvironmentMutex::scoped_lock lockenv(_penv->GetMutex());
_probot->SetDOFValues(joint_angles,
                      KinBody::CLA_CheckLimitsSilent);

With this simple code and logic we can subscribe to a ROS topic that publishes new joint values for the robot and, each time a message arrives, set the DOF values of the robot model in OpenRAVE. As a result, the actual robot and the OpenRAVE robot model share the same joint values in real time.

Trajectory execution

This part was quite a bit more complicated. The idea is simple but the implementation is quite tricky.

The idea is that you implement the three main functions that ControllerBase requires for executing a plan, and then publish the new values from OpenRAVE to the topic the robot listens to for changes to its joint values.

For the UR5, I used /arm_controller/command, and I keep a TrajectoryBasePtr object in my class.
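As a rough sketch of the pieces that need to exist for this (the member names here are illustrative, and I am assuming the topic accepts trajectory_msgs/JointTrajectory messages, as the standard joint trajectory controller does):

#include <ros/ros.h>
#include <trajectory_msgs/JointTrajectory.h>
#include <openrave/openrave.h>

// Members kept in the controller class (illustrative names):
OpenRAVE::TrajectoryBasePtr _traj;    // the trajectory currently being executed
ros::Publisher _trajectoryPublisher;  // publishes commands to the arm driver

// During initialisation:
_trajectoryPublisher =
    _nh.advertise<trajectory_msgs::JointTrajectory>("/arm_controller/command", 1);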

The function SetPath receives the trajectory to be executed from the OpenRAVE instance, while SimulationStep is called periodically; so once SetPath has been called, the logic for executing the trajectory step by step has to live in SimulationStep. The function IsDone also needs to fit into this approach.

virtual bool SetPath(TrajectoryBaseConstPtr ptraj)
{
    if (ptraj != NULL)
    {
        // Keep our own copy of the trajectory so that SimulationStep
        // can consume it waypoint by waypoint.
        _traj = RaveCreateTrajectory(GetEnv(), ptraj->GetXMLId());
        _traj->Clone(ptraj, Clone_Bodies);
    }

    return true;
}

The above snippet is the SetPath implementation. The idea is that when the method is called (i.e. when a trajectory needs to be executed) it sets the class attribute _traj. We need this because SimulationStep will be called next, and the trajectory must be shared state between the methods of the class.

There are therefore two tasks here: (1) define the IsDone function, which returns false while the trajectory has not yet finished and true once it has been executed, and (2) define the actions in SimulationStep.

For (1) the solution is quite straightforward. The TrajectoryBasePtr instance holds a number of waypoints, and those waypoints are what we want to execute. Once we execute a waypoint, we remove it from the head of the container; at some point, when every waypoint has been executed, the container becomes empty. Hence IsDone returns false while the container holds at least one element and true when it is entirely empty:

virtual bool IsDone()
{
    return _traj->GetNumWaypoints() == 0;
}

That is the implementation of IsDone: it returns true when the number of waypoints in the trajectory is 0, and false otherwise.

For (2), what we really need to do is take the first waypoint from the head of _traj using GetWaypoint(index, variable_to_extract), which fills a vector with the waypoint’s real values.

Once we have it, we need to move the arm to that waypoint and then remove the waypoint with _traj->Remove(0, 1);, which deletes the first waypoint from the trajectory.

Repeating this process, executing the trajectory waypoint by waypoint, eventually leaves the trajectory with zero waypoints, at which point IsDone returns true.
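Putting this together, here is a rough sketch of what SimulationStep might look like. I am assuming the driver accepts a trajectory_msgs/JointTrajectory on /arm_controller/command, that the first six values of each waypoint are the joint positions, and that _jointNames and _trajectoryPublisher are members set up elsewhere; a real implementation also has to respect the trajectory’s configuration specification and wait for the arm to reach each waypoint before sending the next one:

virtual void SimulationStep(dReal fTimeElapsed)
{
    if (!_traj || _traj->GetNumWaypoints() == 0)
    {
        return;  // Nothing to execute.
    }

    // Extract the first (head) waypoint as a vector of raw values.
    std::vector<dReal> waypoint;
    _traj->GetWaypoint(0, waypoint);

    // Build a single-point trajectory command from the waypoint values.
    trajectory_msgs::JointTrajectory command;
    command.joint_names = _jointNames;  // the six UR5 joint names
    trajectory_msgs::JointTrajectoryPoint point;
    point.positions.assign(waypoint.begin(), waypoint.begin() + 6);
    point.time_from_start = ros::Duration(1.0);  // illustrative timing
    command.points.push_back(point);

    // Send the waypoint to the robot, then drop it from the trajectory
    // so that IsDone eventually becomes true.
    _trajectoryPublisher.publish(command);
    _traj->Remove(0, 1);
}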

Note that IsDone is mainly called because of robot.WaitForController(0), which can be called in the main program (i.e. outside the controller class). WaitForController() is usually called in a main program that executes a trajectory; its purpose is to wait for the trajectory to finish before proceeding with further instructions (further trajectory executions or any other commands). WaitForController() keeps blocking the program while the controller’s IsDone is false, and only returns once it becomes true.
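Conceptually, what WaitForController does boils down to a loop like the following (a C++ sketch of the mechanism only, not the actual openravepy implementation; probot stands for the robot pointer in the main program):

// Block until the controller reports that the trajectory has been executed.
while (!probot->GetController()->IsDone())
{
    usleep(1000);  // idle briefly; openravepy handles this waiting for you
}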

For the full implementation details, see here.

© 2019 Rafael Papallas
Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.