The US Air Force Research Laboratory (AFRL) has no intention of completely replacing humans with unmanned autonomous systems, says Jim Overholt, a senior scientist at the lab’s Human Effectiveness Directorate. However, AFRL does want to see people interact more effectively with machines so that both can work in “complex and contested environments.”
Speaking at last week’s Association of Unmanned Vehicle Systems International (AUVSI) conference in Orlando, Florida, Overholt illustrated the drawbacks of current human-machine operations by noting how many people it takes to support one MQ-1 Predator: 73, including maintenance crew and analysts.
“When we start to get into something like a combat air patrol, we have as many as 250 individuals involved in order to operate four unmanned vehicles,” Overholt said. “It’s really startling. We can’t keep on … throwing more humans at the problems.”
Likewise, he pointed out that when a drone loses its command-and-control link with its operator – due to enemy jamming, for example – it neither proceeds with the mission on its own nor updates its controller when the link is re-established. Instead, an unmanned aircraft with lost comms simply returns to base.
“We don’t want that to happen,” he said.
In an effort to address these challenges, AFRL aims to improve human-machine teaming and machine intelligence – in fact, the “heaviest dollars” in research, across the services, are going to those two areas, he said. In addition, the lab is trying to create teams of heterogeneous unmanned platforms that can work together.
Overholt acknowledges that, in trying to make people more efficient and unmanned systems more autonomous, human-machine teaming faces its own set of challenges. For example, how does one create a shared sense of perception? Or form a truly two-way flow of information?
Right now, information mostly goes from platform to person, rather than the other way around. A rare counter-example of a human providing data to a platform is the sensor in an F-22 pilot’s helmet that monitors his oxygen intake to guard against hypoxia.
Moving forward, platforms will need to better identify and interpret the operator’s physical status, intentions or state of mind in order to then augment him – a process that Overholt compares to the observe-orient-decide-act loop made famous by Air Force Col. John Boyd.
“The human is the most vital piece of the equation,” he said.
Source: Defense News