System for Target Location and Immobilization

1. Introduction

The goal of this project is to design a system of hunter mobile robots that locates and captures an elusive, mobile target. Each hunter robot senses the target and shares its estimate of the target's location with the other hunters, so that the team can determine the target's position, with some uncertainty. The hunter robots then attempt to surround, and thereby immobilize, the target.

2. Related Work

This project is related to the pursuit-evasion problem and to game theory: both the pursuers and the evaders respond to each other's actions. Most of the literature on escaping and pursuing robots uses control laws that are variations of the one presented by Belkhouche and Belkhouche [1].

3. Approach and Results

3.1 Robot Model and Environmental Setup

Simulations for this project were programmed in MATLAB and visualized in Gazebo using the IPC-Bridge and ROS. The Erratic robot, equipped with a front-mounted Hokuyo laser, is the model of choice for both the hunter robots and the target. The laser scanner measures distances over a 270-degree field of view (-135 to 135 degrees) with a maximum range of 4.0 meters. The Erratic model was modified to include a 0.25-meter-tall block on top of the robot so that it could be more easily sensed by the other robots' laser scanners, as shown below in Figure 1.

Figure 1: Modified Erratic Robot

In the simulation trials, two hunter robots with varying starting positions seek to capture a single target in an enclosed space measuring 14 by 9 meters. The hunter robots and the target operate at the same velocity. Instead of randomly navigating the space until the target is detected, the hunter robots follow a predetermined path of waypoints, like the path marked in Figure 2, from corner to corner in order to decrease the time taken to encounter the target. The positions of the robots are estimated from odometry readings communicated through the IPC-Bridge. The hunter robots share their own current estimated locations and their estimates of the target's location with each other. The target is considered captured when it can no longer move more than 0.5 meters from its position.
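The capture condition above can be sketched as a simple check over recent position samples. This is an illustrative Python sketch, not the project's MATLAB code; the function name and the choice of a 10-sample window are assumptions.

```python
import math

def is_captured(target_positions, window=10, capture_radius=0.5):
    """Return True if, over the last `window` position samples, the target
    never moved more than `capture_radius` meters from the first sample
    in that window (i.e., it is effectively pinned in place)."""
    recent = target_positions[-window:]
    if len(recent) < window:
        return False  # not enough history to declare a capture
    x0, y0 = recent[0]
    return all(math.hypot(x - x0, y - y0) <= capture_radius
               for x, y in recent)
```

A stationary target is flagged as captured, while one still covering more than 0.5 meters is not.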

Figure 2: Default Path of Robots

Since accurate odometry readings are crucial in determining the positions of the robots and target, collisions between the robots, the target, and the walls must be avoided: they introduce significant error into the system, which would render the hunter robots incapable of surrounding the target. Thus, each robot is required to keep a distance of at least 1 meter from the other robots and from the walls.

3.2 Robot Path Determination

The robots' paths are calculated from their relative distances and angular positions with respect to one another. When a hunter robot detects the target, it estimates the target's location and attempts to face and head in that direction. When the target detects a hunter robot, it attempts to face the hunter and drives with negative linear velocity (backwards) so as not to lose sight of the hunter. If the target detects more than one hunter, it computes a weighted average of the hunters' locations, as depicted in Figure 3, and uses the averaged location to determine its heading direction in an attempt to keep all detected hunter robots within its laser range. The weights used in averaging the hunters' locations are inversely proportional to their distances from the target; a hunter robot that is closer to the target has more influence on the target's path. If a robot has a nonzero angular orientation, it accounts for this by adding that angle to the computed angle between the robots when calculating position information.
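The inverse-distance weighting described above can be sketched as follows. This is a Python illustration of the idea rather than the project's MATLAB implementation, and the function names are invented for the example.

```python
import math

def averaged_hunter_location(target_xy, hunters_xy):
    """Weighted average of hunter positions, with weights inversely
    proportional to each hunter's distance from the target, so that a
    nearer hunter has more influence on the target's heading."""
    tx, ty = target_xy
    weights = []
    for hx, hy in hunters_xy:
        d = math.hypot(hx - tx, hy - ty)
        weights.append(1.0 / max(d, 1e-6))  # guard against division by zero
    total = sum(weights)
    avg_x = sum(w * hx for w, (hx, _) in zip(weights, hunters_xy)) / total
    avg_y = sum(w * hy for w, (_, hy) in zip(weights, hunters_xy)) / total
    return avg_x, avg_y

def heading_to(robot_xy, point_xy, robot_yaw=0.0):
    """Angle the robot must turn to face `point_xy`, subtracting the
    robot's current orientation as described in the text."""
    dx = point_xy[0] - robot_xy[0]
    dy = point_xy[1] - robot_xy[1]
    return math.atan2(dy, dx) - robot_yaw
```

With two hunters equidistant on opposite sides, the average sits at the midpoint; when one hunter is closer, the average shifts toward it.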

Figure 3: Estimation of Hunter Location with Two Hunters

Since the target traverses backwards, it can easily collide with walls, because it moves in a direction that lies outside the laser's field of view. If the target were to turn away from the hunter robots and traverse forwards, it would lose sight of them. Hence, the target is assumed to know the locations of the walls of the enclosed environment: when the target believes it has approached a wall, based on its estimate of its current location, it treats the wall as a hunter and computes a weighted average of the previously estimated hunter location and the location of the adjacent wall.
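One way to realize this wall-as-hunter behavior is to add a virtual "hunter" at the nearest point on any wall the target believes it is close to, and feed those points into the same weighted average. The sketch below is a Python illustration under assumed conventions: the arena origin at one corner of the 14 by 9 meter space, and the 1-meter proximity threshold reused from the collision-avoidance bound.

```python
def virtual_wall_hunters(target_xy, x_min=0.0, x_max=14.0,
                         y_min=0.0, y_max=9.0, threshold=1.0):
    """Return a virtual hunter position for each wall of the arena that
    the target believes lies within `threshold` meters of its estimated
    position. Each virtual hunter sits at the nearest point on that wall."""
    tx, ty = target_xy
    walls = []
    if tx - x_min < threshold:
        walls.append((x_min, ty))   # left wall
    if x_max - tx < threshold:
        walls.append((x_max, ty))   # right wall
    if ty - y_min < threshold:
        walls.append((tx, y_min))   # bottom wall
    if y_max - ty < threshold:
        walls.append((tx, y_max))   # top wall
    return walls
```

Near a corner, two virtual hunters are produced, pushing the averaged "hunter" location into the corner and the backing-up target away from it.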

3.3 Simulation Results

Five consecutive trials were executed for each of three different starting position configurations, shown in Figure 4. The red marker represents the target and the blue markers represent the hunter robots.

Figure 4: Starting Position Configurations

The following video shows successful trials with the first two configurations. The hunter robots had trouble locating the target with the third configuration, in which the robots cross paths and, with inaccurate odometry readings, mistake one another for the target. Variation between trials of the same configuration was not significant.

Video 1: Simulation for Configuration 1

4. Analysis

4.1 Performance

The hunter robots can capture the target when they start at locations where one of them encounters the target before running into a fellow hunter. Even though a robot is programmed to recognize a fellow hunter whenever a detection falls within a certain deviation of the fellow hunter's estimated location, the odometry error between the two robots proves to be significant. Also, because of the poor odometry readings, the cornered target is often eventually classified as a wall or another hunter robot, and the hunter robots navigate away in search of the target again.
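The hunter-versus-target disambiguation described above amounts to comparing a laser detection against the fellow hunter's shared position estimate. The sketch below is illustrative Python; the 0.75-meter deviation threshold is an assumed value, since the report does not state the actual deviation used.

```python
import math

def classify_detection(detected_xy, fellow_estimate_xy, max_deviation=0.75):
    """Label a laser detection as the fellow hunter if it falls within
    `max_deviation` meters of that hunter's shared odometry-based position
    estimate; otherwise treat it as the target. Accumulated odometry drift
    makes this check unreliable, which is the failure mode noted above."""
    dx = detected_xy[0] - fellow_estimate_xy[0]
    dy = detected_xy[1] - fellow_estimate_xy[1]
    return "hunter" if math.hypot(dx, dy) <= max_deviation else "target"
```

As the shared estimates drift, a true fellow hunter can fall outside the deviation band and be misclassified as the target, which is exactly what happens in the third configuration.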

4.2 Problems Encountered

Theories on pursuer-evader dynamics exist, but most do not address the issues that arise in non-ideal situations. First, a robot has a difficult time determining whether a laser reading indicates a wall or a target unless it knows the locations of the walls beforehand. For this project, the robots know the coordinates of the walls surrounding the navigable environment, which may not be realistic. Second, the odometry readings from the robots are inaccurate and only get worse as the simulation progresses. Locating robots is almost impossible with inaccurate odometry information: a hunter robot will inaccurately estimate and share its own location, which further propagates error through the system.

5. Future Work

For better simulation results, ground truth could be used instead of odometry readings to determine robot locations. Ground truth was not implemented in this simulation because it is not realistic for robots to possess this information, and the IPC-Bridge had trouble communicating the information to MATLAB for multiple robots. The ability to map stationary objects and obstacles and to differentiate them from moving robots would be a useful extension to this project for many real-world robot navigation applications. With ground truth, other methods of control and more complex environments could be implemented for comparison in the future.


[1] F. Belkhouche and B. Belkhouche, "On the Control of an Escaping Robot," EECS Department, Tulane University.

Final Presentations

Anirudha Majumdar

I liked Anirudha's video of the underactuated robot kicking the ball back and forth. I think the results would be more interesting on a real underactuated robot.

Avik De

Avik presented an interesting view on finding the shortest path. I think he could consider harder trajectories with more arm movement for this problem.

Ben Charrow

I liked Ben's video showing the PR2 pinching and turning the doorknob. Perhaps one way to extend the project is to implement tool pick-up and handling.

Brian MacAllister

I liked the practicality and applications of Brian's flying robot path planning in 3D space and how it trades off the optimal path for faster computation. Perhaps he could consider multiple-flying robot interaction or formations in 3D space.

Caitlin Powers

I liked Caitlin's project since it involves implementing a more complex dynamic modeling and control system. I think an interesting extension would be stacking/sorting items with the WAM based on the mass of the objects.

Fei Miao

I liked the speed advantage of the RRT implementation over A* mentioned in Fei's presentation. I think she could consider having the robots interact to avoid collisions.

Jason Owens

I think Jason's project with backpack detection is interesting since it involves image segmentation and public security. One thing he could consider is detecting backpacks on moving targets so that it could be used on security cameras that alert human monitors to suspicious activity.

Kartik Mohta

I liked Kartik's simulation video on the robots moving into designated configurations. I think he could consider more complex, irregular environments.

Menglong Zhu

I like Menglong's project. It seems like a very extensive, difficult project to be able to detect and read words. A possible idea would be to connect the robot to the internet and use the web's resources (such as Google Translate, online dictionaries) to detect and pronounce words from different languages.

Phillip Dames

I find Phillip's project interesting in that it is similar to mine. I think he could consider having a known set amount of targets such that once one target is located, the other robots will scurry off to find the remaining targets while a single robot continues to track the detected target.

Stephen McGill

I like Stephen's project in that it involves direct interaction of two humanoid robots with the relocation of an object. I think it would be interesting if the robots could avoid obstacles while carrying the object, or if he could extend the project into a relay game.