US Deploys AI Algorithms on ScanEagle

Coalition airstrikes on a DAESH gas oil separation plant near Dayr Az Zawr, Syria

Earlier this month at an undisclosed location in the Middle East, computers using special algorithms helped intelligence analysts identify objects in a video feed from a small ScanEagle drone over the battlefield.

A few days into the trials, the computer identified objects — people, cars, types of buildings — correctly about 60 percent of the time. Just over a week into the job, and after a handful of on-the-fly software updates, the machine’s accuracy had improved to around 80 percent. Next month, when its creators send the technology back to war with more software and hardware updates, they believe it will become even more accurate.

It’s an early win for a small team of just 12 people who started working on the project in April. Over the next year, they plan to expand the project to help automate the analysis of video feeds coming from large drones — and that’s just the beginning.

“What we’re setting the stage for is a future of human-machine teaming,” said Air Force Lt. Gen. John N.T. “Jack” Shanahan, director for defense intelligence for warfighter support, the Pentagon general who is overseeing the effort. Shanahan believes the concept will revolutionize the way the military fights.

“This is not machines taking over,” he said. “This is not a technological solution to a technological problem. It’s an operational solution to an operational problem.”

Called Project Maven, the effort is currently focused on helping U.S. Special Operations Command intelligence analysts identify objects in video from small ScanEagle drones.

In coming months, the team plans to put the algorithms in the hands of more units with smaller tactical drones, before expanding the project to larger, medium-altitude Predator and Reaper drones by next summer.

Shanahan characterized the initial deployment this month as “prototype warfare” — meaning that officials had tempered expectations. Over the course of about eight days, the team refined the algorithm six times.

“Maybe one of our most impressive achievements is the idea of refinement to the algorithm,” Shanahan said.

Think of it as getting a new update to a smartphone application every day, each time improving its performance.

Before deploying the technology, the team trained the algorithms on thousands of hours of archived battlefield video captured by drones in the Middle East. As it turned out, that footage came from a different region than the one where the Project Maven team ultimately deployed.

“Once you deploy it to a real location, it’s flying against a different environment than it was trained on,” Shanahan said. “Still works of course … but it’s just different enough in this location, say that there’s more scrub brush or there’s fewer buildings or there’s animals running around that we hadn’t seen in certain videos. That is why it’s so important in the first five days of a real-world deployment to optimize or refine the algorithm.”
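How that refinement is actually done has not been made public. Purely as an illustration of the general idea, the sketch below fine-tunes a generic, pretrained object detector on a small batch of frames labeled at the deployed location. The model (torchvision’s Faster R-CNN), the class list, and the data format are stand-in assumptions, not Project Maven’s real pipeline.

# Illustrative only: refine a generic pretrained detector on frames labeled
# in the new environment. Project Maven's actual models and data are not
# public; torchvision's Faster R-CNN is used here purely as a stand-in.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # assumed: background, person, vehicle, building

def build_model():
    # Start from weights pretrained on generic imagery, then swap in a
    # classification head sized for the assumed class list above.
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model

def refine(model, labeled_frames, epochs=3, lr=1e-4):
    """labeled_frames: list of (image_tensor, target) pairs, where target is a
    dict with 'boxes' (FloatTensor[N, 4]) and 'labels' (Int64Tensor[N])."""
    model.train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for image, target in labeled_frames:
            loss_dict = model([image], [target])   # detection losses
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

Each pass over newly labeled local frames plays the role of one of the “refinements” described above: the model stays the same, only its weights are nudged toward the environment it is actually flying over.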

While the algorithm is trained to identify people, vehicles and installations, it occasionally mischaracterizes an object. It’s then up to the intel analyst to correct the machine, thus helping it learn.
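The article does not describe the correction mechanism itself. As a minimal, hypothetical sketch of that human-in-the-loop step, the snippet below records an analyst’s corrected labels alongside the machine’s original guesses so they can feed the next refinement pass; every name and structure here is illustrative.

# Hypothetical sketch: collect analyst corrections as future training data.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Detection:
    frame_id: str
    box: Tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixels
    predicted_label: str            # what the algorithm said, e.g. "vehicle"

@dataclass
class Correction:
    detection: Detection
    corrected_label: str            # what the analyst says it actually is

@dataclass
class FeedbackBuffer:
    """Stores analyst corrections to use when the algorithm is next refined."""
    corrections: List[Correction] = field(default_factory=list)

    def record(self, detection: Detection, corrected_label: str) -> None:
        # Only keep cases where the analyst overrode the machine.
        if corrected_label != detection.predicted_label:
            self.corrections.append(Correction(detection, corrected_label))

    def as_training_examples(self):
        # Each corrected box becomes one labeled example for the next update.
        return [(c.detection.frame_id, c.detection.box, c.corrected_label)
                for c in self.corrections]

# Usage: the analyst overrides a mischaracterized object.
buffer = FeedbackBuffer()
det = Detection(frame_id="flight42_frame_0981", box=(312, 140, 388, 201),
                predicted_label="vehicle")
buffer.record(det, corrected_label="building")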

The team has paired the Maven algorithm with a system called Minotaur, a Navy and Marine Corps “correlation and georegistration application.” As Shanahan describes it, Maven’s algorithm puts boxes on the video screen, classifying an object and then tracking it. Minotaur then georegisters the coordinates, essentially displaying the object’s location on a map.
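Minotaur’s actual interface is not public. The sketch below only illustrates the general notion of georegistration: it takes the center of a detection box and maps it to ground coordinates through a pixel-to-ground homography assumed to come from the sensor’s pointing metadata.

# Illustrative sketch of georegistration, not Minotaur's interface.
import numpy as np

def box_center(box):
    """Center pixel of a detection box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def georegister(box, pixel_to_ground):
    """Map a detection box to approximate ground coordinates.

    pixel_to_ground: 3x3 homography (assumed to be derived from the sensor's
    pointing metadata) taking homogeneous pixel coordinates to (lon, lat).
    """
    u, v = box_center(box)
    p = pixel_to_ground @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]   # (lon, lat)

# Example with a made-up homography covering a tiny patch of ground.
H = np.array([[1e-5, 0.0, 40.0],
              [0.0, 1e-5, 35.0],
              [0.0, 0.0, 1.0]])
print(georegister((312, 140, 388, 201), H))  # approximate (lon, lat)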

Source: Defense One
