Improving Drone Collision Avoidance with Stereo Cameras

Image caption: Three drones were used as targets in the experiments to evaluate how well the model generalizes to different drone models.

Recently, a team of researchers from Madrid Polytechnic University, MIT, and Texas A&M University produced real-world results in active collision monitoring and avoidance by building an integrated system that pairs simple cameras with a data processing setup well suited to smaller drones.

The Challenge 

What efforts across the drone industry are being made to leverage cheaper sensors, lower-power processing models for detection and avoidance, and lighter-weight instrumentation?

Small drones face challenges with the conventional sensors used for collision avoidance today. Many of the standard tools for monitoring a vehicle’s surroundings are larger, more power hungry, or more expensive than is practical. For this research project, the objective was to build a low-cost, integrated system that would work well on a test vehicle weighing less than 2 kg. With those size, weight, and power constraints in place, typical sensors such as LIDAR, radar, and acoustic sensors were ruled out.

Given that there are gaps in these areas, this team set out to develop a solution that:

  • Used widely available, low-cost hardware
  • Optimized power consumption
  • Minimized payload weight

Improving on a design created for a previous project in 2018, the team set a few more challenges. First, they wanted to continue showing how low-cost conventional cameras can deliver results. Second, to realize reduced power needs, the solution had to rely on basic onboard computing without middleware or extra layers wrapping the runtime environment. Finally, the end result had to integrate fully into a small quadcopter and enable that vehicle to monitor and avoid multiple nearby flying vehicles.

Process Improvements in Building Collision Avoidance AI Models 

Building a lightweight, low-power, highly performant collision avoidance system to the team’s defined specifications is a tall order compared with the typical solutions available today. The team eschewed fancier instrumentation in favor of conventional cameras, whether simple optical or infrared. The key realization driving the project was that these ordinary cameras, used in pairs, can mimic human depth perception and produce data that is easier to process and, in practice, more reliable than some alternatives.

By measuring the geometric differences between two images taken side by side, stereo depth mapping enables a system to identify bodies that may pose a threat to an autonomous vehicle. In the first iteration of this technology, the team accomplished exactly that identification of threats. Achieving it with simple cameras required reliable frameworks for identifying, sizing, and tracking the motion of those bodies. To construct these frameworks, the team leveraged Microsoft AirSim.
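Before turning to the simulation side, the core stereo geometry is worth a quick illustration. A point that appears shifted horizontally between the left and right images (the disparity) is closer the larger that shift is, and with a rectified pair the relationship is simple triangulation. The C++ sketch below assumes a rectified stereo pair with a known focal length and lens baseline; the numbers are illustrative and are not the team's actual calibration.

```cpp
// Minimal sketch of stereo triangulation: depth from pixel disparity.
// Assumes an already-rectified stereo pair and known camera parameters;
// the values below are illustrative, not taken from the study.
#include <iostream>

// Returns depth in meters for a matched feature, or a negative value
// if the disparity is too small to triangulate reliably.
double depthFromDisparity(double focalLengthPx, double baselineM, double disparityPx)
{
    if (disparityPx < 1.0)   // distant or mismatched feature
        return -1.0;
    return (focalLengthPx * baselineM) / disparityPx;
}

int main()
{
    const double focalLengthPx = 700.0;  // focal length in pixels (assumed)
    const double baselineM = 0.12;       // distance between the two lenses (assumed)
    const double disparityPx = 14.0;     // horizontal shift of the same point between images

    double z = depthFromDisparity(focalLengthPx, baselineM, disparityPx);
    std::cout << "Estimated depth: " << z << " m\n";  // about 6 m for these numbers
    return 0;
}
```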

Since its introduction in 2017, Microsoft’s AirSim flight simulation software has become valuable for a wide range of GIS and autonomous flight use cases. A team at Cornell has used the platform to develop autonomous flight functionality by simulating pilotless drone races in detailed virtual space. In another example, collaborative research projects have used AirSim to improve autonomous drone flight programming for wildlife management in African game reserves, proving their concepts in near-perfect reproductions of their real-life settings.

Image caption: The detector drone platform used in the experiments. It is equipped with a ZED camera, a Jetson TX2 (used for perception), and a Snapdragon Flight board (used for control).

For their purpose of building a collision avoidance system, the depth-map research team used AirSim to model complex environments, simulate drones in flight for training, and precisely control image capture, collecting training and modeling data for their machine learning application.
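AirSim exposes programmatic control of the simulated vehicle and its cameras, which is what makes this kind of controlled data collection possible. As an illustration of the sort of capture step involved, the sketch below uses AirSim's C++ client to take off and request a synchronized left/right image pair. The camera names and flight commands here are assumptions that depend on the simulator's settings.json; this is not the team's published pipeline.

```cpp
// Sketch of capturing a stereo image pair from AirSim's C++ client.
// Camera names ("front_left", "front_right") depend on the simulator's
// settings.json; treat this as an illustration, not the study's code.
#include "vehicles/multirotor/api/MultirotorRpcLibClient.hpp"
#include <vector>
#include <iostream>

int main()
{
    using namespace msr::airlib;
    typedef ImageCaptureBase::ImageRequest ImageRequest;
    typedef ImageCaptureBase::ImageResponse ImageResponse;
    typedef ImageCaptureBase::ImageType ImageType;

    MultirotorRpcLibClient client;
    client.confirmConnection();
    client.enableApiControl(true);
    client.armDisarm(true);
    client.takeoffAsync()->waitOnLastTask();

    // Request left and right scene images for depth-map training data.
    std::vector<ImageRequest> requests = {
        ImageRequest("front_left", ImageType::Scene),
        ImageRequest("front_right", ImageType::Scene)
    };
    const std::vector<ImageResponse>& responses = client.simGetImages(requests);

    for (const ImageResponse& r : responses)
        std::cout << "image received: " << r.image_data_uint8.size() << " bytes\n";
    return 0;
}
```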

With independent left and right perspectives captured, the team was able to create photo-realistic renders for size detection, position detection, and depth mapping of drones, both stationary and in motion. Leveraging these working models, they further economized the software packaged with their drone by writing the logic in C++ and C/CUDA. These languages require no complex middleware to process images and run inference during flight. The team even reduced resource needs by using a single storage module for both image storage and interpretation.
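A minimal sketch of that single-buffer idea follows: captured frames are written into one preallocated buffer and interpreted in place, rather than copied for each stage. The function names and flight loop are hypothetical placeholders, not the team's implementation.

```cpp
// Hedged sketch: one preallocated buffer is written by the capture stage
// and read in place by the inference stage, avoiding per-frame copies.
// Function bodies are placeholders; the study does not publish this code.
#include <cstdint>
#include <vector>

struct FrameBuffer {
    int width = 0;
    int height = 0;
    std::vector<uint8_t> pixels;   // grayscale image, reused every frame

    void allocate(int w, int h)
    {
        width = w;
        height = h;
        pixels.resize(static_cast<size_t>(w) * h);
    }
};

// Capture writes directly into the shared buffer (placeholder body).
void captureFrame(FrameBuffer& frame) { /* fill frame.pixels from the camera driver */ }

// Inference reads the same memory without copying it (placeholder body).
bool detectObstacle(const FrameBuffer& frame) { /* run detection on frame.pixels */ return false; }

int main()
{
    FrameBuffer frame;
    frame.allocate(1280, 720);        // allocate once, before the flight loop

    for (int i = 0; i < 100; ++i) {   // simplified flight loop
        captureFrame(frame);          // stage 1: acquisition
        if (detectObstacle(frame)) {  // stage 2: interpretation, same memory
            // trigger an avoidance maneuver here
        }
    }
    return 0;
}
```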

Real-Life Results

After creating the models, training and testing the AI, and preparing the test vehicle, the team published results that show great progress for their working concept. The outfitted drone was able to identify, track, and make flight-path corrections in reaction to other drones of several sizes, in complicated environments that required tracking both static and moving obstructions.

While some researchers have resorted to crashing drones thousands of times to build AI models for autonomous flight, this team has shown that AirSim can lead to working autonomous flight controllers. The applications of this study’s work are potentially wide, and the tools are easily accessible. On the journey to realize beyond-visual-line-of-sight or even fully autonomous flight, these tools are already being used by growing numbers of researchers. Beyond the appealing tools used to build this AI, however, is an even more promising outcome: logical models for autonomous flight control that work in conditions where not all sensors can, while incurring less cost and offering functionality to a larger spectrum of vehicles.

The full study can be accessed here.

Source: Geospatial World
