Vowel recognition from continuous articulatory movements for speaker-dependent applications

Jun Wang, Jordan R. Green, Ashok Samal, Tom D. Carrell

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A novel approach was developed to recognize vowels from continuous tongue and lip movements. Vowels were classified based on movement patterns (rather than on derived articulatory features, e.g., lip opening) using a machine learning approach. Recognition accuracy on a single-speaker dataset was 94.02% with very short latency. Recognition accuracy was higher for high vowels than for low vowels, which parallels previous empirical findings on tongue movements during vowel production. The recognition algorithm was then used to drive an articulation-to-acoustics synthesizer. The synthesizer recognizes vowels from a continuous input stream of tongue and lip movements and plays the corresponding sound samples in near real time.
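The pipeline the abstract describes (movement patterns in, vowel labels out, classified with a support vector machine per the keywords below) can be sketched in a few lines. The Python sketch that follows is illustrative only: the window size, channel count, number of vowel classes, and the random stand-in data are assumptions, not the authors' actual sensors, features, or classifier settings.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: each sample is a short window of tongue and lip
# sensor coordinates flattened into one feature vector, mirroring the
# "movement patterns rather than derived features" idea above.
n_samples, n_frames, n_channels = 200, 10, 6  # assumed dimensions
X = rng.normal(size=(n_samples, n_frames * n_channels))
y = rng.integers(0, 8, size=n_samples)        # 8 vowel classes (assumed)

# Standardize the features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")  # ~chance on random data

With real articulatory recordings in X and vowel labels in y, the same pipeline would yield a speaker-dependent classifier; near-real-time use would amount to sliding a short window over the incoming movement stream and passing each window to clf.predict.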

Original language: English (US)
Title of host publication: 4th International Conference on Signal Processing and Communication Systems, ICSPCS'2010 - Proceedings
DOIs
State: Published - 2010
Event: 4th International Conference on Signal Processing and Communication Systems, ICSPCS'2010 - Gold Coast, QLD, Australia
Duration: Dec 13, 2010 - Dec 15, 2010

Publication series

Name: 4th International Conference on Signal Processing and Communication Systems, ICSPCS'2010 - Proceedings

Conference

Conference: 4th International Conference on Signal Processing and Communication Systems, ICSPCS'2010
Country/Territory: Australia
City: Gold Coast, QLD
Period: 12/13/10 - 12/15/10

Keywords

  • Articulation
  • Machine learning
  • Recognition
  • Support vector machine

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Signal Processing
