
A model of bottom-up visual attention using cortical magnification

Abstract

The focus of visual attention has been argued to play a key role in object recognition. Many computational models of visual attention have been proposed to estimate the locations of eye fixations driven by bottom-up stimuli. Most of these models rely on pyramids consisting of multiple scaled versions of the visual scene. This design aims at capturing the fact that neural cells in higher visual areas tend to have larger receptive fields (RFs). On the other hand, very few models represent the multi-scaling resulting from the eccentricity-dependent RF sizes within each visual layer, also known as the cortical magnification effect. In this paper, we demonstrate that using a cortical-magnification-like mechanism can lead to performant alternatives to pyramidal approaches in the context of attentional modeling. Moreover, we argue that introducing such a mechanism equips the proposed model with additional properties related to overt attention and distance-dependent saliency that are worth exploring.
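To illustrate the cortical magnification idea mentioned in the abstract, the minimal sketch below applies eccentricity-dependent pooling to an image: the further a pixel lies from the fixation point, the larger the averaging window, mimicking larger peripheral receptive fields. This is not the authors' model; the function names (`foveated_blur`, `eccentricity_map`) and the linear radius law (`rf_slope`, `max_radius`) are hypothetical choices made only for illustration.

```python
import numpy as np

def eccentricity_map(height, width, fixation):
    """Distance (in pixels) of every pixel from the fixation point."""
    fy, fx = fixation
    ys, xs = np.mgrid[0:height, 0:width]
    return np.hypot(ys - fy, xs - fx)

def foveated_blur(image, fixation, rf_slope=0.05, max_radius=15):
    """Average each pixel over a square window whose radius grows with
    eccentricity (hypothetical linear law), so the periphery is coarser."""
    h, w = image.shape
    ecc = eccentricity_map(h, w, fixation)
    radii = np.clip((rf_slope * ecc).astype(int), 0, max_radius)
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            r = radii[y, x]
            patch = image[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = patch.mean()
    return out

if __name__ == "__main__":
    img = np.random.rand(64, 64)                    # stand-in for a grayscale scene
    foveated = foveated_blur(img, fixation=(32, 32))
    print(foveated.shape)                           # (64, 64): fine near fixation, blurred in the periphery
```

In a pyramid-based model, coarse scales are obtained by globally downsampling the whole scene; a cortical-magnification-like mechanism instead varies the effective resolution with eccentricity within a single representation, which is what the toy example above demonstrates.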



This paper is available in HAL-Université de Bretagne Occidentale (record last updated on 12/11/2016).
