
Adaptive audio classification for smartphone in noisy car environment

Abstract

With the ever-increasing number of car-mounted electronic devices that are accessed, managed, and controlled with smartphones, car apps are becoming an important part of the automotive industry. Audio classification is one of the key components of car apps, serving as a front-end technology that enables human-app interaction. Existing approaches to audio classification, however, fall short because the unique and time-varying audio characteristics of car environments are not appropriately taken into account. Leveraging recent advances in mobile sensing technology that allow for effective and accurate driving environment detection, in this paper we develop an audio classification framework for mobile apps that categorizes an audio stream into music, speech, speech+music, or noise, adaptively depending on the driving environment. A case study is performed with four different driving environments, i.e., highway, local road, crowded city, and stopped vehicle. More than 420 minutes of audio data are collected, including various genres of music, speech, speech+music, and noise from the driving environments. The results demonstrate that, compared with a non-adaptive approach in our experimental settings, the proposed approach improves the average classification accuracy by up to 166% and 64% for speech and speech+music, respectively.
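The core idea of the framework, selecting classification parameters according to the detected driving environment, can be sketched as follows. This is an illustrative assumption, not the paper's actual models: the environment names follow the four cases in the study, but the per-environment noise thresholds and the `classify` function are hypothetical stand-ins for the trained classifiers.

```python
# Hypothetical sketch of environment-adaptive audio classification.
# A per-environment parameter set (here just a noise-floor threshold)
# stands in for the environment-specific classifiers described in the
# paper; all values below are illustrative, not the paper's parameters.

LABELS = ["music", "speech", "speech+music", "noise"]

# Assumed per-environment parameters, one set per driving environment
# from the case study (highway, local road, crowded city, stopped).
ENV_MODELS = {
    "highway": {"noise_floor": 0.6},  # loud road noise: higher floor
    "local":   {"noise_floor": 0.4},
    "city":    {"noise_floor": 0.5},
    "stopped": {"noise_floor": 0.2},  # quiet cabin: lower floor
}

def classify(scores, environment):
    """Map raw per-class scores to a label, adapting the noise
    threshold to the detected driving environment."""
    params = ENV_MODELS[environment]
    best = max(LABELS, key=lambda label: scores.get(label, 0.0))
    # If even the best non-noise score sits below this environment's
    # noise floor, treat the segment as noise.
    if best != "noise" and scores[best] < params["noise_floor"]:
        return "noise"
    return best

scores = {"music": 0.3, "speech": 0.1, "speech+music": 0.2, "noise": 0.25}
# The same scores yield different labels in different environments:
print(classify(scores, "highway"))  # noise (0.3 < 0.6 floor)
print(classify(scores, "stopped"))  # music (0.3 >= 0.2 floor)
```

The point of the sketch is only the adaptive dispatch: identical audio scores can be labeled differently once the driving environment is known, which is what a fixed, non-adaptive classifier cannot do.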


This paper was published in University of Memphis Digital Commons.
