Towards a human-like vision system for driver assistance

Today’s Advanced Driver Assistance Systems (ADAS) effectively support the driver in clearly defined traffic situations, such as keeping a safe distance to the vehicle ahead. For this purpose, RADAR sensors, LIDAR sensors, and cameras are used to extract parameters of the scene, such as headway distances, relative velocities, and the relative position of lane markers ahead.

This approach has resulted in specialized commercial products that improve driving safety (e.g., the Honda Collision Mitigation Brake System), helping the driver avoid rear-end collisions when the forward vehicle brakes unexpectedly.

Although traffic rules and road infrastructure such as lane markings restrict the complexity of what must be sensed while driving, the perception systems of today’s ADAS can recognize only simple traffic situations. Furthermore, driving in normal traffic can be handled largely in a reactive way: staying in the middle of the lane and keeping an appropriate distance.

To assist the driver over the full range of driving tasks in all kinds of challenging situations, and to go beyond simple reactive behaviors, a more sophisticated task-dependent processing strategy is required. We see two major challenges in achieving this target:

1) an adequate organization of perception using a generic vision system, and
2) a behavior planning system capable of predicting the driving situation and generating safe trajectories.

We focus in this paper on the first challenge: vision. One possible way to solve this challenge is to realize task-dependent perception using top-down links. In this paradigm, the same scene can be decomposed in different ways depending on the current task. A promising approach is to use an attention system that can be modulated in a task-oriented way (i.e., based on the current context).
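As a minimal sketch of what such task-oriented modulation could look like, the Python fragment below scales a set of precomputed feature maps by task-dependent weights before summing them into one saliency map. The channel names and weight values are illustrative assumptions, not taken from the paper.

import numpy as np

def task_modulated_saliency(feature_maps, task_weights):
    """Combine per-channel conspicuity maps into one saliency map,
    scaling each channel by a task-dependent top-down weight."""
    saliency = np.zeros_like(next(iter(feature_maps.values())))
    for name, fmap in feature_maps.items():
        weight = task_weights.get(name, 1.0)   # default: no modulation
        span = fmap.max() - fmap.min()
        # normalize each channel to [0, 1] so the weights compare fairly
        norm = (fmap - fmap.min()) / span if span > 0 else fmap * 0.0
        saliency += weight * norm
    peak = saliency.max()
    return saliency / peak if peak > 0 else saliency

# Hypothetical usage: when searching for lane markings, boost the
# oriented-edge channel and damp color contrast.
maps = {name: np.random.rand(120, 160)
        for name in ("intensity", "color", "orientation")}
lane_task_weights = {"intensity": 0.5, "color": 0.2, "orientation": 2.0}
saliency_map = task_modulated_saliency(maps, lane_task_weights)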

For example, while driving at high speed, the central field of the visual scene becomes more important than the periphery. Moreover, only if the vision system attends quickly enough to the relevant parts of the surrounding traffic and obstacles can it assist the driver in dangerous situations.
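One simple way to model this speed-dependent emphasis is a spatial prior whose width shrinks as vehicle speed grows; the Gaussian falloff and the constants below are illustrative assumptions, not values from our system.

import numpy as np

def central_field_prior(shape, speed_kmh, min_sigma=0.5, max_sigma=1.5):
    """Gaussian weighting centered on the image whose width narrows
    with speed, emphasizing the road ahead at high velocities."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    ny = (ys - h / 2.0) / (h / 2.0)   # normalized coordinates in [-1, 1]
    nx = (xs - w / 2.0) / (w / 2.0)
    t = min(max(speed_kmh, 0.0), 150.0) / 150.0   # clamp to 0..150 km/h
    sigma = max_sigma - t * (max_sigma - min_sigma)
    return np.exp(-(nx ** 2 + ny ** 2) / (2.0 * sigma ** 2))

# Hypothetical usage: modulate the saliency map from the previous sketch.
# weighted = saliency_map * central_field_prior(saliency_map.shape, 120.0)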

Here we present the first instantiation of a vision architecture for driver assistance systems that is inspired by the human visual system and based on task-dependent perception. The core element of our system is a state-of-the-art attention system that integrates bottom-up and top-down visual saliency.
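A common formulation in the attention literature integrates the two pathways as a weighted linear combination of the bottom-up and top-down maps. The sketch below follows that generic recipe; the mixing factor k is an assumption, not a value from our system.

import numpy as np

def integrate_saliency(s_bottom_up, s_top_down, k=0.5):
    """Blend data-driven (bottom-up) and task-driven (top-down)
    saliency: k = 0 is purely bottom-up, k = 1 purely top-down."""
    assert s_bottom_up.shape == s_top_down.shape
    s = (1.0 - k) * s_bottom_up + k * s_top_down
    peak = s.max()
    return s / peak if peak > 0 else s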

Combining this task-dependent, tunable visual saliency with object recognition and tracking enables, for instance, warnings matched to the context of the scene. We demonstrate the performance of our approach in a construction-site setup, where a traffic jam ending within the site is a dangerous situation that the system must identify in order to warn the driver.
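To make this concrete, the hedged sketch below checks whether a tracked, near-standstill vehicle ahead would be reached within a time-to-contact threshold while the ego vehicle is inside a construction site. The Track structure, the class labels, and all thresholds are hypothetical, chosen only to illustrate the decision logic.

from dataclasses import dataclass

@dataclass
class Track:
    label: str         # e.g. "car", as output by the object recognizer
    distance_m: float  # estimated headway distance to the object
    speed_ms: float    # tracked absolute speed of the object

def should_warn(tracks, in_construction_site, ego_speed_ms,
                jam_speed_ms=2.0, ttc_threshold_s=3.0):
    """Warn if a slow or stationary vehicle ahead (a possible jam end)
    would be reached within the time-to-contact threshold."""
    if not in_construction_site:
        return False
    for t in tracks:
        if t.label != "car" or t.speed_ms > jam_speed_ms:
            continue   # only near-standstill vehicles indicate a jam end
        closing = ego_speed_ms - t.speed_ms
        if closing > 0 and t.distance_m / closing < ttc_threshold_s:
            return True
    return False

# Hypothetical usage: a car 40 m ahead at near standstill, ego at 80 km/h.
# should_warn([Track("car", 40.0, 0.5)], True, ego_speed_ms=22.2)  # True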

(*** Other authors of this article were Thomas Michalke, Darmstadt University of Technology; Alexander Gepperth and Christian Goerick, Honda; and Sven Bone, Falko Waibel, Marcus Kleinehagenbrock, and Jens Gayko, Honda R&D Europe.)

To read this external content in full, download the complete paper from the author archives at Darmstadt University.
