
Appropriate kernels for Divisive Normalization explained by Wilson-Cowan equations

Abstract

Cascades of standard Linear+NonLinear-Divisive Normalization transforms [Carandini&Heeger12] can easily be fitted, using the formulation introduced in [Martinez17a], to reproduce the perception of image distortion in naturalistic environments. However, consistent with [Rust&Movshon05], training the model in naturalistic environments does not guarantee the prediction of well-known phenomena illustrated by artificial stimuli. For example, the cascade of Divisive Normalizations fitted with image quality databases has to be modified to include a variety of aspects of the masking of simple patterns. Specifically, the standard Gaussian kernels of [Watson&Solomon97] have to be augmented with extra weights [Martinez17b]. These can be introduced ad hoc, guided by intuition, to correct the empirical failures of the original model, but a better justification for this modification would be desirable. In this work we give a theoretical justification of this empirical modification of the Watson&Solomon kernel based on the Wilson-Cowan model of cortical interactions [WilsonCowan73]. Specifically, we show that the analytical relation proposed here between the Divisive Normalization model and the Wilson-Cowan model leads to the kind of extra factors that have to be included and to their qualitative dependence on frequency.
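
For reference, here is a minimal sketch of the two model families mentioned above, written in one common textbook form (the exact kernels, exponents and parameters used in the cited works may differ). The Divisive Normalization transform divides each linear response x_i by a weighted pool of neighboring responses [Carandini&Heeger12], while a simplified single-population Wilson-Cowan equation describes the activity y_i through recurrent subtractive interactions [WilsonCowan73]:

\[ y_i \;=\; \frac{\mathrm{sign}(x_i)\,|x_i|^{\gamma}}{\beta_i + \sum_j H_{ij}\,|x_j|^{\gamma}} \qquad \text{(Divisive Normalization)} \]

\[ \tau\,\frac{dy_i}{dt} \;=\; x_i \;-\; \alpha_i\, y_i \;-\; \sum_j W_{ij}\, f(y_j) \qquad \text{(simplified Wilson-Cowan dynamics)} \]

At the steady state of the second equation the interaction kernel W plays a role comparable to that of the normalization kernel H in the first; the precise analytical relation between the two kernels is the subject of the work summarized above.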

This paper was published in Purdue E-Pubs.
