Pheromone Robotics

All about Pheromone Robots and Robot Swarms

Overview

The Pheromone Robotics project aims to provide a robust, scalable approach for coordinating the actions of large numbers of small-scale robots to achieve large-scale results in surveillance, reconnaissance, hazard detection, path finding, payload conveyance, and small-scale actuation. We intend to accomplish this by developing innovative concepts for coordinating, and interacting with, a large collective of tiny robots. Borrowing techniques used by certain insects, e.g., ants and termites, our robots exhibit emergent collaboration. Inspired by the chemical markers these insects use for communication and coordination, we exploit the notion of a "virtual pheromone," implemented using simple beacons and directional sensors mounted on each robot. Virtual pheromones facilitate simple communication and coordination and require little on-board processing. Our approach is applicable to future robots with much smaller form factors (down to dust-particle size) and is scalable to large, heterogeneous groups of robots.
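
To make the mechanism concrete, here is a minimal sketch of one plausible relay rule for a virtual pheromone, assuming each message carries a type and an integer intensity that decays by one at every hop; the class and field names are illustrative, not taken from the project.

    from dataclasses import dataclass

    @dataclass
    class Pheromone:
        kind: str        # e.g., "intruder" or "home" (assumed labels)
        intensity: int   # remaining hops before the signal fades out

    class Robot:
        def __init__(self):
            # Strongest intensity seen so far, per pheromone type.
            self.strongest: dict[str, int] = {}

        def receive(self, p: Pheromone) -> Pheromone | None:
            """Handle an incoming pheromone; return a weakened copy to
            rebroadcast, or None if it should not propagate further."""
            if p.intensity <= self.strongest.get(p.kind, 0):
                return None                       # weaker signal: ignore it
            self.strongest[p.kind] = p.intensity  # remember the best signal
            if p.intensity > 1:
                return Pheromone(p.kind, p.intensity - 1)  # relay, decayed
            return None                           # signal has died out

Because each robot only compares and decrements small integers, a rule of this kind needs almost no on-board processing, which matches the low-processing goal stated above.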

We provide robustness by requiring no explicit maps or models of the environment, and no explicit knowledge of robot location. Collections of robots will be able to perform complex tasks such as leading the way through a building to a hidden intruder or locating critical choke points. This is possible because the robot collective becomes a computing grid embedded within the environment while acting as a physical embodiment of the user interface. Over the past decades, the literature on path planning and terrain analysis has dealt primarily with algorithms operating on an internal map of terrain features. Our approach externalizes the map, spreading it across a collection of simple processors, each of which senses the terrain features in its own locality. The terrain-processing algorithms of interest then run across this same population, allowing global results such as shortest routes, blocked routes, and contingency plans to be computed collectively.
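
As one hedged illustration of how such a global result could emerge from purely local rules, the sketch below computes a hop-count gradient toward a goal robot: each robot repeatedly sets its distance estimate to one more than the smallest estimate among its radio neighbors, and following decreasing values then traces a shortest route. The toy adjacency list and function names are assumptions for illustration only.

    INF = float("inf")

    def relax(distance: dict[str, float],
              neighbors: dict[str, list[str]],
              goal: str) -> dict[str, float]:
        """One synchronous round of local updates across all robots."""
        new = {}
        for robot in distance:
            if robot == goal:
                new[robot] = 0          # the goal robot anchors the gradient
            else:
                best = min((distance[n] for n in neighbors[robot]), default=INF)
                new[robot] = best + 1   # one hop farther than the best neighbor
        return new

    # Toy example: five robots strung along a corridor, goal at "E".
    neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
                 "D": ["C", "E"], "E": ["D"]}
    distance = {r: INF for r in neighbors}
    for _ in range(len(neighbors)):     # hop counts stabilize within n rounds
        distance = relax(distance, neighbors, "E")
    # distance == {"A": 4, "B": 3, "C": 2, "D": 1, "E": 0}

No robot ever holds the whole map; the route exists only in the population's combined state, which is the sense in which the map is externalized.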

The user interface to this distributed robot collective is itself distributed. Instead of communicating with each robot individually, the entire collective works cooperatively to provide a unified display embedded in the environment. For example, robots that have dispersed themselves throughout a building can guide a user toward an intruder by blinking in a synchronized, marquee-style pattern that highlights the shortest path. With augmented reality, the robots can present more complex displays. For example, users wearing a see-through head-mounted display and a head-mounted camera that detects and tracks the robots' infrared beacons can see a small amount of information superimposed over each robot. Each robot, in effect, becomes a pixel that paints information upon its local environment. The combination of this world-embedded interface with our world-embedded computation means that the results of complex distributed computations can be mapped directly onto the world with no intermediate representations required.
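
A minimal sketch of how the marquee pattern could be driven, assuming each robot already holds a hop count toward the target (for example, from the gradient above) and shares a coarsely synchronized clock; the period and the console rendering are illustrative assumptions, not the project's protocol.

    PERIOD = 4  # pattern length in ticks; an assumed parameter

    def beacon_on(hop_count: int, tick: int) -> bool:
        """True when this robot's beacon should light on this tick.
        The lit position moves one hop closer to the target per tick,
        so the blink wave sweeps along the path toward the intruder."""
        return (hop_count + tick) % PERIOD == 0

    # Console stand-in for robots at hops 0..7 from the target: each
    # printed frame shows the lit beacons ('*') advancing toward hop 0.
    for tick in range(8):
        frame = "".join("*" if beacon_on(h, tick) else "." for h in range(8))
        print(tick, frame)

Each robot decides when to blink from two local quantities, its own hop count and the shared tick, yet the pattern a user sees is a coherent moving arrow through the building.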
