Sparser Johnson-Lindenstrauss Transforms

Abstract

We give two different Johnson-Lindenstrauss distributions, each with column sparsity $s = \Theta(\epsilon^{-1} \log(1/\delta))$ and embedding into optimal dimension $k = O(\epsilon^{-2} \log(1/\delta))$ to achieve distortion $1 \pm \epsilon$ with probability $1 - \delta$. That is, only an $O(\epsilon)$-fraction of entries are non-zero in each embedding matrix in the supports of our distributions. These are the first distributions to provide $o(k)$ sparsity for all values of $\epsilon, \delta$. Previously the best known construction obtained $s = \tilde{\Theta}(\epsilon^{-1} \log^2(1/\delta))$ [Dasgupta-Kumar-Sarlós, STOC 2010]. In addition, one of our distributions can be sampled from a seed of $O(\log(1/\delta) \log d)$ uniform random bits. Some applications that use Johnson-Lindenstrauss embeddings as a black box, such as those in approximate numerical linear algebra ([Sarlós, FOCS 2006], [Clarkson-Woodruff, STOC 2009]), require exponentially small $\delta$. Our linear dependence on $\log(1/\delta)$ in the sparsity is thus crucial in these applications to obtain a speedup.
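To make the parameter setting concrete, below is a minimal illustrative sketch of a sparse Johnson-Lindenstrauss embedding with column sparsity $s \approx \epsilon^{-1}\log(1/\delta)$ and target dimension $k \approx \epsilon^{-2}\log(1/\delta)$, using the common template of $s$ nonzero entries of $\pm 1/\sqrt{s}$ per column. The exact distributions analyzed in the paper differ in how the nonzero positions are chosen; the function name, parameter choices, and constant factors here are assumptions for illustration, not the authors' constructions.

```python
# Hedged sketch of a sparse JL embedding: each column gets s nonzeros of +/- 1/sqrt(s).
# This is only one natural instantiation, not the paper's exact distributions.
import numpy as np

def sparse_jl_matrix(d, eps, delta, rng=None):
    """Sample a k x d sparse embedding matrix.

    k ~ eps^-2 * log(1/delta) rows (target dimension),
    s ~ eps^-1 * log(1/delta) nonzeros per column (column sparsity),
    so only an O(eps)-fraction of each column is nonzero.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = max(1, int(np.ceil(np.log(1.0 / delta) / eps**2)))
    s = max(1, min(k, int(np.ceil(np.log(1.0 / delta) / eps))))

    A = np.zeros((k, d))
    for j in range(d):
        rows = rng.choice(k, size=s, replace=False)   # s distinct rows per column
        signs = rng.choice([-1.0, 1.0], size=s)       # independent random signs
        A[rows, j] = signs / np.sqrt(s)               # scaling makes E||Ax||^2 = ||x||^2
    return A

# Usage: embed a unit vector and check its norm is preserved up to roughly 1 +/- eps.
d, eps, delta = 10_000, 0.1, 1e-6
x = np.random.default_rng(0).standard_normal(d)
x /= np.linalg.norm(x)
A = sparse_jl_matrix(d, eps, delta)
print(A.shape, np.linalg.norm(A @ x))  # norm of the embedded vector should be close to 1
```

Because each column has only $s$ nonzeros, applying the matrix to a vector with $m$ nonzero coordinates costs $O(m \cdot s)$ rather than $O(m \cdot k)$, which is where the linear (rather than quadratic) dependence on $\log(1/\delta)$ in the sparsity translates into a speedup.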

This paper was published in Harvard University - DASH.
