
A system for gaze-contingent image analysis and multi-sensorial image display

Abstract

A novel system for gaze-contingent image analysis and multisensorial image display is described. The observer's scanpaths are recorded while viewing and analysing 2D or 3D (volumetric) images. A region-of-interest (ROI) centred around the current fixation point is simultaneously subjected to real-time image analysis algorithms to compute various image features, e.g. edges, textures (2D) or surfaces and volumetric texture (3D). This feature information is fed back to the observer using multiple channels, i.e. in visual (replacing the ROI by a visually modified ROI), auditory (generating an auditory display of a computed feature) and tactile (generating a tactile representation of a computed feature) manner. Thus, the observer can use several of his senses to perceive information from the image which may be otherwise hidden to his eyes, e.g. targets or patterns which are very difficult or impossible to detect. The human brain then fuses all the information from the multisensorial display. The moment the eyes make a saccade to a new fixation location, the same process is applied to the new ROI centred around it. In this way the observer receives information from the local real-time image analysis around the point of gaze, hence the term gaze-contingent image analysis. The new system is profiled and several example applications are discussed.
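The core loop described in the abstract — crop an ROI around the current fixation, run a real-time feature computation on it, and derive a signal for the feedback channels — can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the ROI size, the simple gradient-magnitude edge feature, and the scalar summary for the auditory/tactile channels are all assumptions.

```python
import numpy as np

def extract_roi(image, fixation, size=32):
    """Crop a square region of interest centred on the current fixation
    point (x, y), clamped to the image bounds. The 32-pixel size is an
    illustrative choice, not taken from the paper."""
    x, y = fixation
    half = size // 2
    y0, y1 = max(0, y - half), min(image.shape[0], y + half)
    x0, x1 = max(0, x - half), min(image.shape[1], x + half)
    return image[y0:y1, x0:x1]

def edge_feature(roi):
    """Simple gradient-magnitude edge map of the ROI (finite differences),
    standing in for the paper's real-time 2D edge/texture algorithms."""
    gy, gx = np.gradient(roi.astype(float))
    return np.hypot(gx, gy)

def gaze_contingent_step(image, fixation):
    """One iteration of the loop, re-run on every new fixation:
    crop the ROI, compute a feature, and reduce it to a scalar that
    could drive the auditory or tactile display (e.g. mapped to pitch
    or vibration intensity)."""
    roi = extract_roi(image, fixation)
    edges = edge_feature(roi)
    return roi, edges, float(edges.mean())

# Example: a synthetic image with a vertical step edge under the gaze point.
img = np.zeros((100, 100))
img[:, 50:] = 1.0
roi, edges, strength = gaze_contingent_step(img, (50, 50))
```

In a running system this function would be called on every saccade-terminating fixation reported by the eye tracker, so only the currently fixated region is ever analysed — which is what makes the analysis "gaze-contingent".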

This paper was published in Explore Bristol Research.
