Quantized Compressive K-Means

Abstract

The recent framework of compressive statistical learning proposes to design tractable learning algorithms that use only a heavily compressed representation - or sketch - of massive datasets. Compressive K-Means (CKM) is one such method: it estimates the centroids of data clusters from pooled, nonlinear, and random signatures of the learning examples. While this approach significantly reduces computational time on very large datasets, its digital implementation wastes acquisition resources because the learning examples are compressed only after the sensing stage. The present work generalizes the CKM sketching procedure to a large class of periodic nonlinearities, including hardware-friendly implementations that compressively acquire entire datasets. This idea is exemplified in a quantized CKM procedure, a variant of CKM that leverages 1-bit universal quantization (i.e., retaining the least significant bit of a standard uniform quantizer) as the periodic sketch nonlinearity. Opting for this resource-efficient signature (standard in most acquisition schemes) has almost no impact on the clustering performance, as illustrated by numerical experiments.
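
To make the sketching idea concrete, below is a minimal, illustrative Python sketch (not the authors' code) of such a pooled 1-bit signature. The Gaussian random frequencies W, the uniform dither xi, and all function names are assumptions chosen for illustration, following common practice in random-feature sketching; the quantized variant described in the abstract replaces the complex exponential of standard CKM with a square wave, so each measurement costs a single bit.

```python
import numpy as np

def universal_quantizer(t, delta=1.0):
    # 1-bit universal quantization: keep the least significant bit of a
    # uniform quantizer with bin width `delta`, mapped to {-1, +1}.
    # This is equivalent to a square wave of period 2*delta applied to t.
    return 2.0 * (np.floor(t / delta) % 2) - 1.0

def quantized_sketch(X, W, xi, delta=1.0):
    # Pool the 1-bit periodic signatures of all examples into one sketch.
    # X: (n, d) dataset; W: (m, d) random frequencies; xi: (m,) dither.
    return universal_quantizer(X @ W.T + xi, delta).mean(axis=0)

# Toy usage (hypothetical parameters): one m-dimensional sketch summarizes
# the whole n-point dataset, regardless of how large n grows.
rng = np.random.default_rng(0)
n, d, m = 1000, 2, 64
X = rng.normal(size=(n, d))          # toy dataset
W = rng.normal(size=(m, d))          # random frequency vectors
xi = rng.uniform(0, 2.0, size=m)     # dither, uniform over one period
z = quantized_sketch(X, W, xi)       # compressed representation of X
```

In standard CKM, the nonlinearity applied to X @ W.T would be the complex exponential; cluster centroids are then estimated from the sketch z alone, without revisiting the original data.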

This paper was published in DIAL UCLouvain.
