
"Owl" and "Lizard": patterns of head pose and eye pose in driver gaze classification

Abstract

Accurate, robust, inexpensive gaze tracking in the car can help keep a driver safe by facilitating the more effective study of how to improve (i) vehicle interfaces and (ii) the design of future advanced driver assistance systems. In this study, the authors estimate head pose and eye pose from monocular video using methods developed extensively in prior work and ask two new questions. First, how much better can they classify driver gaze using head and eye pose versus just using head pose? Second, are there individual-specific gaze strategies that strongly correlate with how much gaze classification improves with the addition of eye pose information? The authors answer these questions by evaluating data drawn from an on-road study of 40 drivers. The main insight of the study is conveyed through the analogy of an "owl" and a "lizard", which describes the degree to which the eyes and the head move when shifting gaze. When the head moves a lot ("owl"), not much classification improvement is attained by estimating eye pose on top of head pose. On the other hand, when the head stays still and only the eyes move ("lizard"), classification accuracy increases significantly from adding in eye pose. The authors characterise how that accuracy varies between people, gaze strategies, and gaze regions.
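The owl/lizard contrast in the abstract can be illustrated with a minimal synthetic sketch. This is not the authors' pipeline: the region angles, noise levels, and the nearest-centroid classifier below are all hypothetical stand-ins, chosen only to show why head pose alone suffices for an "owl" driver while a "lizard" driver needs eye pose for gaze-region classification.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical horizontal gaze-region angles in degrees (e.g. left mirror,
# instrument cluster, road ahead, centre stack) -- illustrative only.
REGIONS = np.array([-30.0, -10.0, 10.0, 30.0])

def simulate(strategy, n=400):
    """Simulate (head_yaw, eye_yaw) samples for gaze shifts to random regions."""
    labels = rng.integers(len(REGIONS), size=n)
    target = REGIONS[labels]
    if strategy == "owl":
        # "Owl": the head carries most of the gaze shift; eyes barely move.
        head = target + rng.normal(0.0, 3.0, n)
        eye = rng.normal(0.0, 3.0, n)
    else:
        # "Lizard": the head stays roughly still; the eyes do the work.
        head = rng.normal(0.0, 3.0, n)
        eye = target + rng.normal(0.0, 3.0, n)
    return np.column_stack([head, eye]), labels

def nearest_centroid_acc(X, y):
    """Half/half train-test split, nearest-centroid classification accuracy."""
    half = len(y) // 2
    Xtr, ytr, Xte, yte = X[:half], y[:half], X[half:], y[half:]
    centroids = np.array([Xtr[ytr == k].mean(axis=0) for k in range(len(REGIONS))])
    pred = np.argmin(((Xte[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
    return float((pred == yte).mean())

for strategy in ("owl", "lizard"):
    X, y = simulate(strategy)
    head_only = nearest_centroid_acc(X[:, :1], y)  # head-pose feature only
    head_eye = nearest_centroid_acc(X, y)          # head + eye pose features
    print(f"{strategy:6s}  head-only: {head_only:.2f}  head+eye: {head_eye:.2f}")
```

Under these toy assumptions, head pose alone classifies the owl driver's gaze region nearly as well as the full feature set, whereas for the lizard driver head pose alone is close to chance and adding eye pose recovers most of the accuracy, mirroring the per-strategy improvement the study reports.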

Last time updated on 07/05/2019

This paper was published in Chalmers Research.
