
Spatially Realistic Audio in a Video Conference Based on User Head Orientation

Abstract

In current video conferencing applications, a participant's speech is captured without any indication of the speaker's spatial position or body orientation relative to the device camera used to capture the corresponding video. As a result, the audio experience in video conferencing lacks spatial and directional richness. This disclosure describes techniques to enhance the spatial richness of the audio in a video conference based on a user's head orientation. With user permission, head orientation is estimated using measurements from device sensors in earbuds or another device used by a video conference participant. The head orientation measurements for the participants are used to apply an appropriate positional correction to the audio using a head-related transfer function (HRTF). Implementation of the techniques can improve the spatial accuracy of the audio feed within a video conference, making conversations sound more realistic.
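The positional correction described in the abstract can be illustrated with a minimal sketch. The disclosure itself does not specify an implementation; the function below is a hypothetical stand-in for a full HRTF that derives only the two simplest binaural cues, interaural level and time differences, from the speaker's azimuth relative to the listener's head yaw. All names and constants (head radius, speed of sound) are illustrative assumptions, not taken from the disclosure.

```python
import math

def apply_head_orientation(source_azimuth_deg, head_yaw_deg):
    """Return (left_gain, right_gain, itd_seconds) for a mono voice feed.

    A crude stand-in for a full HRTF: only interaural level and time
    differences, computed from the speaker's azimuth relative to the
    direction the listener's head is pointing. Illustrative only.
    """
    # Azimuth of the speaker relative to where the head is pointing.
    rel = math.radians(source_azimuth_deg - head_yaw_deg)

    # Interaural time difference via the Woodworth approximation,
    # assuming head radius ~0.0875 m and speed of sound ~343 m/s.
    # Positive rel (speaker to the right) yields a positive ITD,
    # meaning the far (left) ear hears the sound later.
    itd = (0.0875 / 343.0) * (rel + math.sin(rel))

    # Simple constant-power pan as the interaural level difference:
    # pan ranges from -1 (fully left) to +1 (fully right).
    pan = math.sin(rel)
    left_gain = math.cos((pan + 1.0) * math.pi / 4.0)
    right_gain = math.sin((pan + 1.0) * math.pi / 4.0)
    return left_gain, right_gain, itd
```

For a speaker directly ahead of the listener's head the two gains are equal and the ITD is zero; as the head turns away, the gains and delay shift accordingly, which is the effect the disclosure aims to reproduce with a proper HRTF.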

This paper was published in Technical Disclosure Commons.
