
How much of driving is pre-attentive?

Abstract

Driving a car in an urban setting is an extremely difficult problem, incorporating a large number of complex visual tasks; yet most adults solve it daily with little apparent effort. This article proposes a novel vision-based approach to autonomous driving that can predict, and even anticipate, a driver's behaviour in real time using pre-attentive vision only. Experiments on three large datasets totalling over 200,000 frames show that our pre-attentive model can: 1) detect a wide range of driving-critical context, such as crossroads, city centre and road type; 2) more surprisingly, detect the driver's actions (over 80% of braking and turning actions); and 3) estimate the driver's steering angle accurately. Additionally, our model is consistent with human data: first, the best steering prediction is obtained for a perception-to-action delay consistent with psychological experiments; importantly, this prediction can be made before the driver acts. Second, the regions of the visual field used by the computational model correlate strongly with the driver's gaze locations, significantly outperforming many saliency measures and performing comparably to state-of-the-art approaches.
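To make the idea concrete, the sketch below illustrates the general pipeline the abstract describes: extract coarse, pre-attentive image features from each frame, then regress the steering angle recorded some fixed delay later, modelling the perception-to-action lag. This is a minimal illustration, not the paper's actual model: the grid-of-gradient-energy features, the ridge regressor, the 25-frame delay and the synthetic data are all assumptions chosen to keep the example self-contained and runnable.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def preattentive_features(frame, grid=(4, 4)):
    """Coarse, gist-like features: mean gradient energy over a spatial grid.
    A simple stand-in for the pre-attentive visual features the paper uses."""
    gy, gx = np.gradient(frame.astype(float))
    energy = np.hypot(gx, gy)
    h, w = energy.shape
    rows, cols = grid
    cells = energy[: h - h % rows, : w - w % cols]
    cells = cells.reshape(rows, h // rows, cols, w // cols)
    return cells.mean(axis=(1, 3)).ravel()

def align_with_delay(features, angles, delay):
    """Pair each frame's features with the steering angle `delay` frames
    later, modelling the perception-to-action lag the abstract refers to."""
    if delay == 0:
        return features, angles
    return features[:-delay], angles[delay:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for dashcam frames and recorded steering angles,
    # used only so the script runs end to end without a real dataset.
    frames = rng.random((500, 48, 64))
    angles = np.convolve(rng.standard_normal(500), np.ones(20) / 20, mode="same")

    X = np.stack([preattentive_features(f) for f in frames])
    delay = 25  # frames; roughly 1 s at 25 fps, an assumed lag value
    Xd, yd = align_with_delay(X, angles, delay)

    # Chronological split (no shuffling) so test frames follow training frames.
    X_tr, X_te, y_tr, y_te = train_test_split(Xd, yd, shuffle=False)
    model = Ridge(alpha=1.0).fit(X_tr, y_tr)
    print("held-out R^2:", model.score(X_te, y_te))
```

In practice one would sweep the delay and pick the value that maximises held-out prediction accuracy; the abstract reports that the best-performing delay agrees with perception-to-action lags measured in psychological experiments.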

This paper was published in Surrey Research Insight.
