Acoustic DOA estimation using space alternating sparse Bayesian learning

Abstract

Estimating the direction-of-arrival (DOA) of multiple acoustic sources is one of the key technologies for humanoid robots and drones. However, it remains a challenging problem due to a number of factors, including the platform size, which puts a constraint on the array aperture. To overcome this problem, a high-resolution DOA estimation algorithm based on sparse Bayesian learning is proposed in this paper. A hierarchical Bayesian model with a group-sparse prior is introduced to encourage spatial sparsity of the acoustic sources. To obtain approximate posteriors of the hidden variables, a variational Bayesian approach is proposed. Moreover, to reduce the computational complexity, the space-alternating approach is applied to push the variational Bayesian inference to the scalar level. Furthermore, an acoustic DOA estimator is proposed that jointly utilizes the estimated source signals from all frequency bins. Compared to state-of-the-art approaches, the high-resolution performance of the proposed approach is demonstrated in experiments with both synthetic and real data. The experiments show that the proposed approach achieves lower root mean square error (RMSE), false alert (FA), and miss-detection (MD) than other methods. Therefore, the proposed approach can be applied in applications such as humanoid robots and drones to improve the resolution performance of acoustic DOA estimation, especially when the array aperture is constrained by the platform size, which prevents traditional methods from resolving multiple sources.
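To make the sparse-Bayesian-learning idea behind the abstract concrete, the following is a minimal sketch of a generic narrowband SBL DOA estimator on a uniform linear array. It uses the standard Tipping-style EM hyperparameter update rather than the paper's space-alternating variational algorithm, and the array geometry, angular grid, noise level, and source angles are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Generic sparse Bayesian learning (SBL) sketch for narrowband DOA on a
# uniform linear array (ULA). Assumptions: 8 sensors at half-wavelength
# spacing, a 1-degree angular grid, two uncorrelated sources, and a fixed
# noise variance. This is NOT the paper's space-alternating variational
# algorithm, only a plain EM-style SBL baseline for illustration.

rng = np.random.default_rng(0)
M, T = 8, 50                                  # sensors, snapshots
d = 0.5                                       # spacing in wavelengths
grid = np.linspace(-90.0, 90.0, 181)          # candidate DOAs (degrees)
A = np.exp(-2j * np.pi * d * np.arange(M)[:, None]
           * np.sin(np.deg2rad(grid))[None, :])   # M x G steering matrix

# Simulate two sources (at -20 and 25 degrees) in additive noise.
true_idx = [np.argmin(np.abs(grid - a)) for a in (-20.0, 25.0)]
S = (rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))) / np.sqrt(2)
Y = A[:, true_idx] @ S + 0.05 * (rng.standard_normal((M, T))
                                 + 1j * rng.standard_normal((M, T)))

gamma = np.ones(len(grid))                    # per-grid-point signal powers
sigma2 = 1e-2                                 # assumed noise variance
for _ in range(100):
    Sy = sigma2 * np.eye(M) + (A * gamma) @ A.conj().T   # A diag(gamma) A^H + noise
    W = np.linalg.solve(Sy, A)                            # Sy^{-1} A
    mu = gamma[:, None] * (W.conj().T @ Y)                # posterior source means
    var = gamma - gamma**2 * np.real(np.einsum('mg,mg->g', A.conj(), W))
    gamma = np.mean(np.abs(mu)**2, axis=1) + var          # EM hyperparameter update

# Pick the two strongest local maxima of the learned spatial spectrum.
pk = [g for g in range(1, len(grid) - 1)
      if gamma[g] > gamma[g - 1] and gamma[g] > gamma[g + 1]]
pk = sorted(pk, key=lambda g: gamma[g])[-2:]
est = sorted(grid[g] for g in pk)
print(est)
```

Because each hyperparameter `gamma[g]` is driven toward zero unless grid point `g` carries signal energy, the learned spectrum is sharply peaked, which is the mechanism behind the high-resolution behavior the abstract describes; the paper's space-alternating variational updates reduce the cost of the matrix operations inside this loop.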

This paper was published in VBN.
