Applying virtual reality to audiovisual speech perception tasks in children

Maeve Salanger, Dawna Lewis, Timothy Vallier, Tessa McDermott, Andrew Dergan

Research output: Contribution to journal › Article › peer-review


Abstract

Purpose: The primary purpose of this study was to explore the efficacy of using virtual reality (VR) technology in hearing research with children by comparing speech perception abilities in a typical laboratory environment and a simulated VR classroom environment.

Method: The final sample comprised 48 participants (40 children and eight young adults). The study design paired a speech perception task with a localization demand in auditory-only (AO) and auditory–visual (AV) conditions. Tasks were completed in simulated classroom acoustics, both in a typical laboratory environment and in a virtual classroom environment accessed through an Oculus Rift head-mounted display.

Results: Speech perception scores were higher in AV conditions than in AO conditions across age groups. In addition, interaction effects of environment (laboratory vs. VR classroom) and visual accessibility (AV vs. AO) indicated that children's performance on the speech perception task in the VR classroom was more similar to their laboratory performance for AV tasks than for AO tasks: AO speech perception scores improved from the laboratory to the VR classroom, whereas AV scores showed little change.

Conclusion: These results suggest that VR head-mounted displays are a viable research tool for AV tasks with children, increasing the flexibility of audiovisual testing beyond a typical laboratory environment.
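The key result above is an interaction between environment (laboratory vs. VR classroom) and visual accessibility (AO vs. AV). The Python sketch below is purely illustrative: it simulates a 2 × 2 design with hypothetical cell means chosen only to mirror the reported pattern (AO improving in VR, AV nearly unchanged) and tests the interaction with a two-way ANOVA via statsmodels. It is not the authors' analysis code, and it treats cells as independent groups for simplicity, whereas the actual study was within-subject.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40  # simulated participants per cell (hypothetical, not the study's n)

# Assumed percent-correct cell means, chosen only to mirror the reported
# pattern: AV > AO overall; AO improves in VR, AV barely changes.
cell_means = {("lab", "AO"): 60, ("vr", "AO"): 66,
              ("lab", "AV"): 80, ("vr", "AV"): 81}

rows = []
for (env, modality), mean in cell_means.items():
    for score in rng.normal(mean, 8, size=n):  # simulated scores, SD = 8
        rows.append({"env": env, "modality": modality, "score": score})
df = pd.DataFrame(rows)

# Two-way ANOVA; the C(env):C(modality) row is the interaction of interest.
model = smf.ols("score ~ C(env) * C(modality)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

With these assumed means, the C(env):C(modality) term comes out significant, which is the statistical signature of the environment-by-modality interaction described in the abstract.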

Original language: English (US)
Pages (from-to): 244-258
Number of pages: 15
Journal: American Journal of Audiology
Volume: 29
Issue number: 2
State: Published - Jun 2020

ASJC Scopus subject areas

  • Speech and Hearing
