How does the presence of background noise affect the cognitive processes underlying spoken-word recognition? And how do these effects differ between native and non-native listeners? We addressed these questions using artificial neural-network modelling. We trained a deep auto-encoder architecture on binary phonological and semantic representations of 121 English and Dutch translation equivalents, and varied exposure to the two languages to generate ‘native English’ and ‘non-native English’ trained networks. These networks captured key effects in the performance (accuracy rates and the number of erroneous responses per word stimulus) of English and Dutch listeners in an offline English spoken-word identification experiment (Scharenborg et al., 2017), which compared a clean listening condition with three intensities of speech-shaped noise applied word-initially or word-finally. Our simulations suggested that the effects of noise on native and non-native listening are comparable and can be accounted for within the same cognitive architecture for spoken-word recognition.
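The modelling idea above can be illustrated with a minimal sketch: a small feed-forward network mapping binary phonological vectors to binary semantic vectors, trained by gradient descent. All dimensions, data, and hyperparameters here are illustrative assumptions, not the 121-word English/Dutch lexicon or the actual architecture used in the study; noise is simulated simply by flipping input bits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy lexicon: random binary "phonological" inputs and "semantic" targets.
# (Assumption: sizes chosen for illustration only.)
n_words, phon_dim, hidden_dim, sem_dim = 20, 30, 16, 25
phon = rng.integers(0, 2, size=(n_words, phon_dim)).astype(float)
sem = rng.integers(0, 2, size=(n_words, sem_dim)).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two weight layers: phonology -> hidden -> semantics.
W1 = rng.normal(0, 0.1, (phon_dim, hidden_dim))
W2 = rng.normal(0, 0.1, (hidden_dim, sem_dim))

def forward(x):
    h = sigmoid(x @ W1)
    return h, sigmoid(h @ W2)

def mse(y_hat, y):
    return float(np.mean((y_hat - y) ** 2))

# Train with plain batch gradient descent on mean-squared error.
lr = 1.0
initial_loss = mse(forward(phon)[1], sem)
for _ in range(500):
    h, y_hat = forward(phon)
    d_out = (y_hat - sem) * y_hat * (1 - y_hat)   # output-layer delta
    d_hid = (d_out @ W2.T) * h * (1 - h)          # hidden-layer delta
    W2 -= lr * h.T @ d_out / n_words
    W1 -= lr * phon.T @ d_hid / n_words
final_loss = mse(forward(phon)[1], sem)

# Crude stand-in for word-initial noise: flip some leading input bits
# and measure how much the learned mapping degrades.
noisy = phon.copy()
noisy[:, : phon_dim // 3] = 1 - noisy[:, : phon_dim // 3]
noisy_loss = mse(forward(noisy)[1], sem)
```

Varying how much of each language's vocabulary appears in the training set would be the analogue of the ‘native’ vs ‘non-native’ exposure manipulation described in the abstract.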