Robot Coordination with Underactuated Robots

Introduction

Legged robot research in the tradition of Honda's ASIMO has been quite successful in recent years, and the locomotory performance that robots like ASIMO demonstrate is impressive. However, the control techniques employed by these robots typically assume that the system is fully actuated. This leads to highly conservative motions (e.g. keeping the knees bent to avoid kinematic singularities) and low energy efficiency (Efficient Bipedal Robots Based on Passive-Dynamic Walkers). Underactuated robots can often exploit their natural dynamics to achieve much greater energy efficiency, and the resulting walking motions often look more natural and human-like. Extreme examples of this are the passive dynamic walkers (Passive Dynamic Walkers). Video 1 shows a passive dynamic walker walking down a gentle slope, using only gravity to compensate for frictional energy losses. Video 2 shows a minimally actuated passive dynamic walker (developed at Cornell by Andy Ruina's group) walking on flat ground.

Video 1: Passive dynamic walkers

http://www.youtube.com/watch?v=e2Q2Lx8O6Cg&feature=related

Video 2: Minimally actuated compass gait walker

http://www.youtube.com/watch?v=jlbm8R4Vn7U

This increased energy efficiency comes at a cost, however. The control problem for underactuated systems is significantly harder than for fully actuated systems: having fewer actuated degrees of freedom than total degrees of freedom means a loss of complete control authority. Equivalently, the system cannot follow arbitrary trajectories in state space. Hence, one must explicitly reason about the dynamics of the system to ensure that the robot can perform the motions it is asked to, and that the resulting controller is stable.

Inspired by recent research in multi-robot coordination, control of underactuated robots, and the success of the RoboCup robot soccer competition, this project aims to explore the coordination of underactuated robots in the domain of robot soccer. Due to the difficult control problems that arise from underactuation, robot coordination becomes even more challenging. This project aims to offer (partial) solutions to some of these issues. Specifically, the goal of the project is to be able to successfully pass a soccer ball back and forth between two underactuated compass gait walkers in a computer simulation (a cartoon of the task is depicted in Figure 1).

Figure 1: Cartoon of task

System Description

A cartoon of two compass gait walkers (or equivalently, double pendulums with actuators at the "hip") is shown in Figure 1. Each robot has two rigid links with mass and inertia (i.e. they are not point masses). A pin joint connects one link to the other, while a second pin joint connects the "base" link to the ground. Thus, the system has two degrees of freedom (q1 and q2 in the figure). An actuator at the "hip joint" (i.e. the joint connecting the two links) provides the only control input.

The equations of motion for the robot are easily derived and can be found in The Swing-up Control Problem for the Acrobot.

In the robot coordination task considered here, each robot is aware only of its own state and the state of the ball. Thus, the only means of information exchange is the state of the ball.

Approach

The approach taken in this project is to rely on a relatively expensive pre-computation phase in order to avoid extensive computations at run-time. In particular, we pre-compute a trajectory library for each robot. Trajectory libraries have been employed successfully in Standing Balance Control Using a Trajectory Library to do push-recovery in humanoid robots. A similar idea is the basis for LQR-Trees (LQR-Trees: Feedback Motion Planning on Sparse Randomized Trees), a feedback motion planning algorithm that provides a probabilistically complete control policy that is able to control a system from any point in a bounded region in state space to some (pre-determined) stabilizable point in state space.

Each trajectory in this library is a feasible open-loop trajectory for the system and can be thought of as a motion primitive, or a "kick". At run-time, the robot(s) can select one of these "kicks" in order to pass the soccer ball back and forth (at possibly varying speeds). However, since contact with the ball perturbs the system from its planned trajectory, continued execution of the open-loop plan may destabilize the system. To address this, a time-varying LQR controller is pre-computed for each motion primitive in the library.

At run-time, the robots act in a decentralized way. They choose a "kick" from the pre-computed trajectory libraries based on the speed at which the user specifies that the ball must be passed back and forth. The robot then executes the closed-loop plan consisting of the nominal control provided by the open-loop plan and the LQR-controller that is used to stabilize the system to the nominal trajectory.
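As a sketch, the run-time choice of "kick" can be as simple as a nearest-match lookup over the library, keyed by the ball speed each primitive produces. The snippet below is illustrative Python (the project itself runs in Matlab), and the library contents and speed values are hypothetical:

```python
# Hypothetical sketch: select the motion primitive ("kick") whose
# resulting ball speed is closest to the user-specified target speed.
def choose_kick(library, desired_speed):
    """library maps a primitive name to the ball speed it produces."""
    return min(library, key=lambda name: abs(library[name] - desired_speed))

# Example library with made-up speed values (m/s).
library = {"soft": 0.5, "medium": 1.0, "hard": 2.0}
```

With this example library, a requested speed of 1.2 m/s would select the "medium" primitive.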

The following sections provide detailed explanations of the approach described above.

Pre-computing Open-Loop Trajectories

The open-loop feasible trajectories for the system were computed using the direct collocation trajectory optimization method (Direct Collocation). This method performs quite well on complicated non-linear systems, producing locally optimal trajectories. Given a non-linear system, \dot{x} = f(x,u), the trajectory optimizer locally optimizes the following cost function:

J = \int^{t_f}_{t_0} [1 + u^{T}_0(t)Ru_0(t)] dt
R > 0

This trajectory optimization is transcribed into a nonlinear program and solved via sequential quadratic programming using SNOPT, called from Matlab.
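To illustrate the transcription, the sketch below sets up trapezoidal direct collocation for a double integrator (a stand-in for the compass gait dynamics) and minimizes the same time-plus-effort cost, with scipy's SLSQP standing in for SNOPT. All names and parameter values here are assumptions for the example, not the project's actual setup:

```python
import numpy as np
from scipy.optimize import minimize

# Trapezoidal direct collocation for a double integrator qdd = u,
# minimizing J = int (1 + R u^2) dt with free final time T.
# Stand-in example; the project uses SNOPT on the compass gait.
N = 15                           # knot points
R = 0.1                          # control cost weight
x_start = np.array([0.0, 0.0])   # initial (position, velocity)
x_goal = np.array([1.0, 0.0])    # final (position, velocity)

def unpack(z):
    T = z[0]                     # total duration (a decision variable)
    q, v, u = z[1:1+N], z[1+N:1+2*N], z[1+2*N:]
    return T, q, v, u

def cost(z):
    T, q, v, u = unpack(z)
    h = T / (N - 1)
    integrand = 1.0 + R * u**2   # time plus control effort
    return h * (0.5*integrand[0] + integrand[1:-1].sum() + 0.5*integrand[-1])

def defects(z):
    # Trapezoidal collocation: x[k+1] - x[k] = h/2 (f_k + f_{k+1}).
    T, q, v, u = unpack(z)
    h = T / (N - 1)
    dq = q[1:] - q[:-1] - 0.5*h*(v[:-1] + v[1:])
    dv = v[1:] - v[:-1] - 0.5*h*(u[:-1] + u[1:])
    return np.concatenate([dq, dv])

def boundary(z):
    T, q, v, u = unpack(z)
    return np.array([q[0] - x_start[0], v[0] - x_start[1],
                     q[-1] - x_goal[0], v[-1] - x_goal[1]])

z0 = np.concatenate([[2.0], np.linspace(0, 1, N), np.zeros(N), np.zeros(N)])
res = minimize(cost, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}],
               bounds=[(0.1, 10.0)] + [(None, None)] * (3 * N))
```

The decision variables are the knot states, the controls, and the total duration; the constant 1 in the integrand penalizes trajectory duration, so the optimizer trades off time against control effort.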

It is simple to specify an initial value constraint and a final value constraint for the state of the system. Thus, in order to generate a "kick" of a particular "strength", we can compute two trajectories and stitch them together. The first trajectory starts off from the default configuration (with the robot standing upright and both joint velocities set to 0) and ends up in the same position in configuration space, but with the non-stance leg at some specified forward velocity (in order to kick the ball). The second trajectory takes the system from the final state of the first trajectory back to the default upright configuration. Thus, by stitching together these two trajectories, we obtain a motion primitive that starts and ends at the default position (i.e. a periodic trajectory) and has a certain non-stance leg velocity during the middle of this motion (this is when the robot kicks the ball).
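The stitching step itself is just a concatenation of the two segments at their shared state, with the second segment's time axis shifted. A minimal Python sketch follows (the data layout is an assumption for illustration; the project stores trajectories in Matlab):

```python
import numpy as np

# Stitch two trajectory segments into one periodic "kick" primitive.
# Each segment is (times, states), with states as an (n_knots, n_states)
# array; segment 1 must end at the state where segment 2 begins.
def stitch(seg1, seg2, atol=1e-6):
    t1, x1 = seg1
    t2, x2 = seg2
    assert np.allclose(x1[-1], x2[0], atol=atol), "segments must join continuously"
    t = np.concatenate([t1, t1[-1] + (t2 - t2[0])[1:]])  # shift segment 2 in time
    x = np.concatenate([x1, x2[1:]])                      # drop the duplicated knot
    return t, x
```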

Figure 2 shows a motion primitive in the 4-dimensional state space of the robot. The trajectory is projected onto the two 2-dimensional subspaces that are formed by a joint position and joint velocity.

Figure 2: Motion primitives in state space

For each robot, three of these motion primitives are computed. This forms the trajectory library.

TV-LQR Stabilization of Trajectories

In addition to computing open-loop trajectories for the trajectory library, a time-varying LQR controller is also pre-computed for each motion primitive. Given a nominal trajectory x_0(t) and the associated control input u_0(t), the dynamics linearized about this trajectory are (with x and u now denoting deviations from the nominal trajectory and control):

\dot{x} = A(t)x(t) + B(t)u(t)

The quadratic regulator cost function is given by:

J = x^T(t_f)Q_fx(t_f) + \int^{t_f}_{t_0} [x^TQx + u^TRu] dt
Q_f, Q \geq 0, R > 0

Minimizing this cost yields the time-varying linear control law:

u^{*}(t) = -R^{-1}B^T(t)S(t)x(t) = -K(t)x(t)

where S(t) is obtained from the Riccati equation:

-\dot{S} = Q - SBR^{-1}B^TS + SA + A^TS, S(t_f) = Q_f

This can be solved easily in Matlab using ode45, integrating backwards in time from the final condition. A high-order spline is then fit to the gain matrix K(t), allowing for easy evaluation and simulation of the closed-loop dynamics at run-time.
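The same backward integration can be sketched in Python, with scipy's `solve_ivp` standing in for ode45. Here a constant (A, B) pair (a double-integrator linearization) stands in for the time-varying linearization about the nominal trajectory, and the weights are made-up example values:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline

# Finite-horizon LQR gains via backward integration of the Riccati ODE.
# A constant (A, B) pair stands in for the linearization A(t), B(t);
# the weights Q, R, Qf are example values, not the project's.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R, Qf = np.eye(2), np.array([[1.0]]), 10.0 * np.eye(2)
t0, tf = 0.0, 5.0
Rinv = np.linalg.inv(R)

def riccati_rhs(t, s):
    # -Sdot = Q - S B R^{-1} B^T S + S A + A^T S
    S = s.reshape(2, 2)
    Sdot = -(Q - S @ B @ Rinv @ B.T @ S + S @ A + A.T @ S)
    return Sdot.ravel()

# Integrate backwards in time from the final condition S(tf) = Qf.
sol = solve_ivp(riccati_rhs, (tf, t0), Qf.ravel(),
                dense_output=True, rtol=1e-8, atol=1e-8)

# Fit a spline to the gain K(t) = R^{-1} B^T S(t) for cheap run-time use.
ts = np.linspace(t0, tf, 101)
Ks = np.array([(Rinv @ B.T @ sol.sol(t).reshape(2, 2)).ravel() for t in ts])
K_spline = CubicSpline(ts, Ks)

def u_star(t, x):
    return -(K_spline(t).reshape(1, 2) @ x)
```

At run-time only the spline evaluation is needed, so the expensive Riccati integration stays entirely in the pre-computation phase.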

Run-time Motion Primitive Execution

As explained in the Approach section, at run-time, the ball is started in some location with a given speed. The robot towards which the ball is moving computes the time-to-impact of the ball, and then waits the necessary amount of time before beginning the execution of a motion primitive in order to ensure that the kicking leg makes contact with the ball at the right time. This sends the ball to the other robot, which performs the same computations. In order to kick the ball at a different speed, a different motion primitive can be chosen.
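Under a constant-velocity ball model in one dimension, the timing computation reduces to a few lines. The function below is an illustrative sketch (the names and the 1-D model are assumptions for this example):

```python
# Timing the kick: given the ball's position and velocity and the time
# from primitive start until foot-ball contact, compute how long the
# robot should wait before starting the primitive.
def wait_time(ball_pos, ball_vel, robot_pos, t_contact):
    """Return the wait time, or None if the kick cannot be timed.
    Assumes a 1-D ball moving at constant velocity."""
    d = robot_pos - ball_pos
    if ball_vel == 0 or d / ball_vel <= 0:
        return None          # ball is not approaching this robot
    t_impact = d / ball_vel  # time until the ball arrives
    if t_impact < t_contact:
        return None          # too late to execute this primitive
    return t_impact - t_contact
```

For example, `wait_time(0.0, 1.0, 5.0, 2.0)` returns 3.0: the ball arrives in 5 s and the kick takes 2 s to reach contact, so the robot waits 3 s.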

Results

The following videos show the methods described above implemented in a MATLAB simulation. In the first video, the robots pass the ball back and forth at a predetermined (slow) speed. The second video shows the robots passing the ball back and forth at a higher speed. The difference in speeds is achieved by the choice of motion primitive. Notice that in the second video, the robots perform a more aggressive maneuver involving a large "wind-up" motion.

http://www.youtube.com/watch?v=psXCKnqTaUw

http://www.youtube.com/watch?v=jiggYxPy9kA

Discussion

Approaches to robot coordination tasks can be roughly divided into two categories: centralized and distributed algorithms (Robot coordination survey). Centralized control algorithms for coordination plan trajectories for the system as a whole (i.e. all robots involved are considered one big system). This has the advantage that optimal plans can be computed for the system. However, computing optimal plans for the entire system can be computationally prohibitive. In contrast, distributed control algorithms trade off optimality for computational efficiency. In such a control scheme, robots act based on local information. Further, they are more robust to run-time errors and uncertainties, since it is easier for each robot to adapt its behavior individually.

In this project, the second approach has been taken. Further, expensive run-time computations are avoided by performing all the time-consuming computations in a pre-computation phase. At run-time, a discrete set of motion primitives is available to the robots. Mixing and matching these motion primitives can lead to different behaviors of the system.

In addition, pre-computation of open-loop trajectories and LQR controllers ensures that the plans that the robots are trying to execute are compatible with the underactuated dynamics.

Conclusion and Future Work

In this project, I explored the interesting problems that arise when one is trying to get underactuated robots to coordinate. The lack of control authority that results from having fewer actuators than degrees of freedom gives rise to several difficulties. However, solutions to these problems could have potential benefits to domains such as robot soccer, where one could envision using minimally actuated passive dynamic walkers. In this project, my approach was to pre-compute a trajectory library for the robots performing the coordination task. At runtime, these motion primitives are employed in order to achieve the desired goal.

In the future, one could envision additions to the framework presented above that could improve performance significantly. For a given coordination task, one could imagine computing trajectory libraries that are "complete" in some sense, i.e. containing any motion primitive that the robot could possibly require. One possible approach is presented in LQR-Roadmaps, which describes a control algorithm that provably takes a system from any point in state space to any stabilizable point. This could form the "complete" trajectory library.


Project Contributions

A lot of the essential machinery used in this project was available to me from previous projects. The trajectory optimization code was available to me from collaboration with Russ Tedrake. Professor Tedrake's robotlib Matlab code is partially available here. The LQR-controller was also partly taken from Prof. Tedrake's robotlib code base. It was modified somewhat in order to fit with the simulation of the robots. The main challenge for me was to bring all these pieces together and successfully demonstrate the approach.


My Comments on the Final Presentations

Avik De

I liked the fact that the RRT was used to search for a feasible trajectory and then the spline based gradient descent was used to find a locally optimal trajectory. I thought this was quite clever. I think it would be interesting to think about constraints on velocities imposed by torque limits (i.e. thinking about state space too instead of just the configuration space).

Ben Charrow

The demonstration on a real robot was really impressive. The fact that the PR2 has to start from a pre-specified configuration is a somewhat tight constraint; it would be nice if this requirement could be eliminated somehow.

Brian MacAllister

The simulation results looked cool (the graphics were nice). I didn't quite follow the details of the algorithm though and exactly how the multi-dimensionality was represented.

Caitlin Powers

I was wondering if there are any guarantees on convergence for the parameters. Also, if the parameters change too much, is there a chance that the controller will become unstable at all? The overall idea is quite cool though.

Cem Karan

I didn't quite follow all the details in the presentation, but I think the overall concept is quite nice (and useful). I look forward to seeing the report for more details.

Ceridwen Magee

The micro-robots were quite cool. I couldn't quite catch whether this was going to be implemented on a real physical system or just a simulation.

Chinwendu Enyioha

I was unable to follow the details in the presentation, so I don't have anything particularly insightful to say. I'll wait to read the final report.

Fei Mao

The use of structs to speed up the search was interesting; I was not aware of this. It would have been nice to see something more than just a comparison of the two methods, perhaps an attempt at combining them.

Ian McMahon

Getting the PR2 to build those towers does not seem easy. I'm looking forward to the report to see how you accomplish this. Is the robot planning in the space of towers at all (is there a "goal tower" and if so, how does it choose where to place the next piece)?

Jason Owens

The results videos were quite cool. I think employing some kind of learning would make a big difference (as mentioned in the presentation).

Kartik Mohta

I was wondering if any symmetries in the probability distribution can be exploited in order to do the planning on a lower dimensional representation of the space? For example, if the distribution is Gaussian, can you do the planning on a 1-d space to figure out the optimal radius? And then, just place the robots uniformly on a circle of that radius?

Menglong Zhu

The results on the real robots in real environments were very impressive. The use of the convolution distance between alphabets also seemed quite nice. I wonder if other distance metrics would yield better results?

Monica Lui

Unfortunately, I was not able to get a good sense for the project from the short presentation. I look forward to reading the report for more details.

Shuai Li

Unfortunately, I was not able to get a good sense for the project from the short presentation. I look forward to reading the report for more details.

Steve McGill

The fact that you got it (almost) working on real robots was impressive. I was wondering how your coordination will work when there are asymmetries in the system (either if the robots are different, or if the table is asymmetric). Will the coordination get more challenging?

Teyvonia Thomas

The fact that you could get the scarab to navigate the maze with real time processing was quite cool.

Yida Zhang

The demo on a real robot was awesome! I look forward to seeing your technique implemented in order to handle shapes with volume.