Carnegie Mellon University and Robotics Institute Develop Autonomous Obstacle Avoidance for Micro Aerial Vehicles

[youtube hNsP6-K3Hn4 nolink]

The Robotics Laboratory (LAIRLab) at Carnegie Mellon University and the Robotics Institute teamed up to work on the “Provably-Stable Vision-Based Control of High-Speed Flight through Forests and Urban Environments” project, funded by the Office of Naval Research.

The BIRD Multi-University Research Initiative project aims to get mini Unmanned Aerial Vehicles to autonomously navigate densely cluttered environments, such as forests.

Towards this end, CMU is working on reactive controllers and receding horizon control. They use imitation learning to iteratively train the drone on the expert’s control inputs: they compute a number of optical features from the image stream and then perform a linear ridge regression from the feature vectors onto the control inputs. The resulting controller learns to correlate specific changes in visual features with a particular control input (in this case, a roll left or right). For instance, consider optical flow: a tree close to the camera moves across the image faster than trees farther away, so as the expert avoids the tree by moving sideways, the controller learns to associate that specific change in optical flow with a command to evade left or right.
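The regression step described above can be sketched in a few lines. This is a minimal illustration, not the CMU implementation: the feature vectors and expert commands here are synthetic stand-ins for the optical features and roll inputs mentioned in the text.

```python
import numpy as np

# Hypothetical training data: each row is a feature vector computed from one
# camera frame (e.g. optical-flow statistics), paired with the expert's
# roll command for that frame. Both are synthetic here.
rng = np.random.default_rng(0)
n_frames, n_features = 200, 16
X = rng.normal(size=(n_frames, n_features))        # visual features
true_w = rng.normal(size=n_features)               # unknown "expert" mapping
y = X @ true_w + 0.1 * rng.normal(size=n_frames)   # expert roll commands

# Ridge regression: w = (X^T X + lambda*I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

def controller(features):
    """Map one frame's feature vector to a roll command."""
    return features @ w
```

The ridge penalty `lam` keeps the learned weights small, which matters when many optical features are correlated with one another.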

After the first few flights with the expert in control, they generate a preliminary controller and start flying the drone under its commands alone. The operator continues to provide expert input based on the image stream, and a new controller is generated from the aggregated data. This process repeats until they obtain a satisfactory controller that has visited enough states to avoid trees consistently. For a more rigorous discussion, they recommend reading their paper.

Receding Horizon Control

In addition to a purely reactive approach like DAgger, they are working on a more deliberative approach. The video below shows the ARDrone in the motion capture lab planning to a goal location using receding-horizon control.

[youtube wfZB0NfMHJM nolink]

In receding-horizon control, a pre-computed set of feasible motion trajectories is evaluated against the local cost map built up by the sensors, and the trajectory that is collision-free and takes the vehicle toward the goal location is selected and traversed. The entire process repeats several times a second to incorporate new obstacle information as the ARDrone moves.
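The selection step can be sketched as follows. This is a simplified illustration on a 2-D occupancy grid, with a plain closest-endpoint-to-goal score standing in for whatever cost the actual planner uses; the function and argument names are illustrative.

```python
import numpy as np

def select_trajectory(trajectories, costmap, goal):
    """Pick the collision-free precomputed trajectory whose endpoint is
    closest to the goal; return None if every trajectory collides.

    trajectories: list of (N, 2) integer arrays of grid cells (row, col)
    costmap:      2-D array, 1 = obstacle, 0 = free
    goal:         (row, col) target cell
    """
    best, best_dist = None, np.inf
    for traj in trajectories:
        rows, cols = traj[:, 0], traj[:, 1]
        if costmap[rows, cols].any():  # trajectory crosses an obstacle cell
            continue
        dist = np.hypot(*(traj[-1] - np.asarray(goal)))
        if dist < best_dist:
            best, best_dist = traj, dist
    return best
```

Because the trajectory set is fixed offline, each replanning cycle reduces to this cheap table-lookup-and-score loop, which is what makes running it several times a second feasible.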

Source: Robot Whisperer/CMU
