Flock of Drones Acquires Collective Intelligence

Gabor Vásárhelyi, director of the Robotic Lab in the Department of Biological Physics at Eötvös University in Budapest, is the first author of a study just published in Science Robotics.

Vásárhelyi’s team developed the model by running thousands of simulations and mimicking hundreds of generations of evolution. “The fact that they’ve done this in a decentralized fashion is quite cool,” says SUNY Buffalo roboticist Karthik Dantu, an expert in multi-robot coordination who was unaffiliated with the study. “Each agent is doing its own thing, and yet some mass behaviour emerges.”

In coordinated systems, more members usually means more opportunities for error. A gust of wind might throw a single drone off course, causing others to follow it. A quadcopter might misidentify its position, or lose communication with its neighbours. Those mistakes have a way of cascading through the system; one drone’s split-second delay can be quickly amplified by those flying behind it, like a traffic jam that starts with a single tap of the brakes. A hiccup can quickly give rise to chaos.

But Vásárhelyi’s team designed their flocking model to anticipate as many of those hiccups as possible. It’s why their drones can swarm not just in a simulation, but also in the real world. “That’s really impressive,” says roboticist Tønnes Nygaard, who was unaffiliated with the study. A researcher at the Engineering Predictability With Embodied Cognition project at the University of Oslo, Nygaard is working to bridge the gap between simulations of walking robots and actual, non-biological quadrupeds. “Of course simulations are great,” he says, “because they make it easy to simplify your conditions to isolate and investigate problems.” The problem is that researchers can quickly oversimplify, stripping their simulations of the real-world conditions that can dictate whether a design succeeds or fails.

Instead of subtracting complexity from their flocking model, Vásárhelyi’s team added it. Where other models might dictate two or three restrictions on a drone’s operation, theirs imposes 11. Together, they dictate things like how quickly a drone should align with other members of the fleet, how much distance it should keep between itself and its neighbours, and how aggressively it should maintain that distance.
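To make those rules concrete, here is a minimal sketch of a flocking update of the kind described above. The parameter names, values, and the two-term update are illustrative assumptions, not the paper's actual model, which tunes 11 such parameters; this toy version shows only the separation and alignment ideas the article mentions.

```python
import numpy as np

# Hypothetical parameters (names and values are illustrative only;
# the real model has 11 evolution-tuned parameters).
PARAMS = {
    "r_sep": 2.0,    # preferred distance to neighbours (m)
    "k_sep": 0.5,    # how aggressively to restore that distance
    "k_align": 0.3,  # how quickly to match neighbours' velocities
}

def flocking_velocity(pos, vel, i, params):
    """Toy desired-velocity update for drone i from its neighbours' states."""
    sep = np.zeros(2)
    align = np.zeros(2)
    n = 0
    for j in range(len(pos)):
        if j == i:
            continue
        offset = pos[i] - pos[j]
        dist = np.linalg.norm(offset)
        # Separation: push away when closer than the preferred distance.
        if 0 < dist < params["r_sep"]:
            sep += params["k_sep"] * (params["r_sep"] - dist) * offset / dist
        align += vel[j]
        n += 1
    # Alignment: steer toward the neighbours' mean velocity.
    if n:
        align = params["k_align"] * (align / n - vel[i])
    return vel[i] + sep + align
```

Note that the update is fully decentralized, matching the quote above: each drone uses only its own state and what it senses of its neighbours.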

To find the best settings for all 11 parameters, Vásárhelyi and his team used an evolutionary strategy. The researchers generated random variations of their 11-parameter model, using a supercomputer to simulate how 100 flocks of drones would perform under each set of rules. Then they took the models associated with the most successful swarms, tweaked their parameters, and ran the simulations again.

Sometimes a promising set of parameters led to a dead end. So they’d backtrack, perhaps combining the traits of two promising sets of rules, and run more simulations. Several years, 150 generations, and 15,000 simulations later, they’d arrived at a set of parameters they were confident would work with actual drones.
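The loop described above — mutate candidate parameter sets, score them in simulation, keep and recombine the best — can be sketched as follows. Everything here is a simplified stand-in: `simulate_flock` is a toy fitness function in place of the team's supercomputer simulations, and the population sizes and mutation scale are assumptions for illustration.

```python
import random

N_PARAMS = 11  # the model's 11 tunable parameters

def simulate_flock(params):
    """Toy fitness standing in for a full flock simulation.
    Peaks when every parameter is 0.5 (an arbitrary choice)."""
    return -sum((p - 0.5) ** 2 for p in params)

def mutate(params, scale=0.1):
    """Small random tweak to every parameter."""
    return [p + random.gauss(0, scale) for p in params]

def evolve(generations=150, population=20, keep=5):
    # Start from random parameter sets.
    pop = [[random.random() for _ in range(N_PARAMS)]
           for _ in range(population)]
    for _ in range(generations):
        # Score every candidate and keep the most successful ones.
        survivors = sorted(pop, key=simulate_flock, reverse=True)[:keep]
        pop = survivors[:]
        # Refill the population by recombining two promising parents
        # and mutating the result (the "backtrack and combine" step).
        while len(pop) < population:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]
            pop.append(mutate(child))
    return max(pop, key=simulate_flock)
```

With a real simulator in place of the toy fitness, each generation is embarrassingly parallel, which is why the team could farm the 15,000 runs out to a supercomputer.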

And so far those drones have performed with flying colours; real-world tests of their model have resulted in zero collisions. Then there’s the literal flying colours: the lights on the quadcopters’ undercarriages. They’re colour-mapped to the direction of each drone’s travel. They were originally developed for multi-drone light shows—you know, Super Bowl type stuff—but the researchers decided at the last minute to add them to their test units. Vásárhelyi says they’ve made it much easier to visualize the drones’ status, spot bugs, and fix errors in the system.

The full paper is available at Science Robotics.

Source: Wired
