"Termites are what inspired this whole research topic for us," said the study's lead author Justin Werfel, a researcher at the Wyss Institute for Biologically Inspired Engineering in Cambridge, Massachusetts. "We learned the incredible things these tiny insects can build and said: Fantastic. Now how do we create and program robots that work in similar ways but build what humans want?"
Unlike humans, who require a high-level blueprint to build something complicated, termites can build complex mounds hundreds of times their size without a detailed plan. Instead, they take simple cues from each other and their environment to know where to lay the next clump of dirt, and ultimately, to know how to build a structure that suits their surroundings.
This use of local information is called stigmergy. Justin Werfel and colleagues leveraged stigmergy to design algorithms that reflect termite behavior, and then implemented these algorithms in their robots.
Their bots need only the ability to sense a brick or bot nearby to make their next move. Equipped with sensors, they move along a grid, lifting and depositing bricks. If they sense a brick in their path, they carry their cargo to the next open spot.
And they do all this without a detailed plan or centralized communication; instead, the bots are programmed with just a few simple rules.
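The sense-and-act rule described above can be sketched in a few lines. This is a minimal illustration, not the TERMES robots' actual code: the grid, the path representation, and the `step` function are all assumptions made for the example.

```python
# Illustrative sketch of one robot's decision rule: carry a brick along a
# path of grid sites and deposit it at the first open site encountered.
# GRID, step, and the path format are hypothetical names for this example.

GRID = [[0] * 5 for _ in range(5)]  # 0 = empty site, 1 = brick present

def step(pos, carrying, path):
    """One decision step: drop the brick at the first open site along
    `path`; if every site is occupied, keep moving with the cargo."""
    for site in path:
        r, c = site
        if GRID[r][c] == 0 and carrying:
            GRID[r][c] = 1          # deposit the brick here
            return site, False      # no longer carrying
    return path[-1], carrying       # path exhausted; keep the brick

# A robot at (0, 0) carrying a brick checks the sites ahead of it.
pos, carrying = step((0, 0), True, [(0, 1), (0, 2)])
```

Note that the robot never consults a global map; its whole decision is a function of what it senses along its immediate path.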
"There are two kinds of rules," Werfel explained. "The rules that are the same for any structure the robots build, and the 'traffic laws' that correspond to the specific structure. The [traffic laws] tell robots at any site which sites they're allowed to go to next: traffic can only flow in one direction between any two adjacent sites, which keeps a flow of robots and material moving through the structure."
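One natural way to picture the "traffic laws" Werfel describes is as a directed graph over the grid sites, where each pair of adjacent sites gets at most one direction of travel. The encoding below is an assumption for illustration only, not the paper's actual data structure.

```python
# Hypothetical encoding of structure-specific "traffic laws": for each
# site, the list of adjacent sites a robot is allowed to move to next.
# Between any two adjacent sites, travel is permitted in one direction
# only, so robots and material keep flowing one way through the structure.

traffic = {
    (0, 0): [(0, 1)],           # from (0, 0) a robot may only go to (0, 1)
    (0, 1): [(0, 2), (1, 1)],
    (1, 1): [(0, 2)],
    (0, 2): [],                 # an exit site with no onward moves
}

def allowed(frm, to):
    """Check whether a move between adjacent sites obeys the traffic laws."""
    return to in traffic.get(frm, [])

# One-way property: (0, 0) -> (0, 1) is legal, the reverse is not.
```

Because every edge is one-way, two robots can never meet head-on in a corridor, which is what keeps traffic flowing without any central coordinator.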
Werfel further explained why the robots won't place bricks just anywhere. "If they built carelessly, it would be easy for them to build in a way where they got stuck," he said. "The safety checks involve a robot looking at the sites immediately around itself, paying attention to where the bricks already are and where others are supposed to be, and making sure certain conditions in that local area are satisfied."
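A safety check of the kind Werfel describes might look like the sketch below. The specific condition tested here (no step taller than one brick between a placed brick and its neighbors) is a simplified stand-in; the actual TERMES conditions are more involved.

```python
# Illustrative local safety check: before placing a brick at `site`, the
# robot inspects only the immediately adjacent sites and verifies that the
# placement leaves no unclimbable step -- a stand-in for the real rules
# that keep robots from building themselves into a corner.

def safe_to_place(heights, site):
    """heights maps occupied sites to brick-stack height; `site` is where
    the robot proposes to add one brick on top of the current stack."""
    r, c = site
    new_height = heights.get(site, 0) + 1
    for nb in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]:
        if nb in heights and abs(heights[nb] - new_height) > 1:
            return False  # would create a cliff a robot cannot climb
    return True

# Placing next to neighbors of height 1 is fine; next to a 3-high stack it is not.
ok = safe_to_place({(0, 0): 1, (0, 1): 1}, (0, 2))
```

The key point is locality: the check reads only the robot's four neighboring sites, so it needs no global view of the structure.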
Though each robot "knows" only simple rules -- like when to put a brick down, turn around, or climb one step higher -- together, the robots exhibit intelligent behavior, completing user-defined structures.
And critically, it's the unique user-defined structure that determines the rules the robots need to follow. In other words, simple rules guide the building process, in place of the high-level blueprints and planning that human construction projects require.
Robots like this -- independent, with decentralized control -- have numerous advantages. "Individual robots can break down but the rest can carry on," Werfel explained. "There's no one critical element that brings everything down if one fails."
Such systems are also scalable. "For a bigger job, you can just add more robots (even mid-job) without needing to change how they're programmed." By contrast, a robotic system with a centralized controller could create a bottleneck, limiting how many robots it could coordinate as new ones came onto the scene.
"A long-term vision is for robot teams like this to build full-scale structures for human use, maybe with particular utility in settings where it's difficult or dangerous for humans to work (e.g., building shelters after an earthquake or habitats underwater or on other planets). While that's likely a long way out," Werfel said, "a shorter-term application could be something like building levees out of sandbags for flood protection."