
Multi-modal RGB–Depth–Thermal Human Body Segmentation

Abstract

This work addresses the problem of human body segmentation from multi-modal visual cues as a first stage of automatic human behavior analysis. We propose a novel RGB-Depth-Thermal dataset along with a multi-modal segmentation baseline. The different modalities are registered using a calibration device and a registration algorithm. Our baseline extracts regions of interest using background subtraction, defines a partitioning of the foreground regions into cells, computes a set of image features on those cells using different state-of-the-art feature extraction methods, and models the distribution of the descriptors per cell using probabilistic models. A supervised learning algorithm then fuses the output likelihoods over cells in a stacked feature vector representation. The baseline, using Gaussian Mixture Models for the probabilistic modeling and Random Forest for the stacked learning, is superior to other state-of-the-art methods, obtaining an overlap above 75% on the novel dataset when compared to the manually annotated ground truth of human segmentations.
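A minimal sketch of the described stacked-learning step, assuming scikit-learn's GaussianMixture and RandomForestClassifier as stand-ins for the paper's GMM and Random Forest components; the helper names, data layout, and parameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier

# Assumption: descriptors_per_cell is a list with one array per grid cell,
# each of shape (n_samples, n_features), where the i-th row of every array
# comes from the same foreground region sample.

def fit_cell_gmms(descriptors_per_cell, n_components=3):
    """Fit one GMM per cell on that cell's training descriptors."""
    gmms = []
    for cell_descriptors in descriptors_per_cell:
        gmm = GaussianMixture(n_components=n_components, covariance_type="full")
        gmm.fit(cell_descriptors)
        gmms.append(gmm)
    return gmms

def stacked_features(gmms, descriptors_per_cell):
    """Stack per-cell GMM log-likelihoods into one feature vector per sample."""
    likelihoods = [gmm.score_samples(d) for gmm, d in zip(gmms, descriptors_per_cell)]
    return np.column_stack(likelihoods)  # shape: (n_samples, n_cells)

def train_fusion(gmms, descriptors_per_cell, labels):
    """Supervised fusion: a Random Forest learns from the stacked likelihoods."""
    X = stacked_features(gmms, descriptors_per_cell)
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(X, labels)
    return clf
```

At test time, the same stacking would be applied to the likelihoods of unseen cell descriptors before calling the trained forest's predict method.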

This paper was published in VBN.
