Auditory-visual speech perception examined by brain imaging and reaction time

Kaoru Sekiyama, Yoichi Sugita

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

2 Citations (Scopus)

Abstract

Using the McGurk effect [1], we compared brain activation during audiovisual (AV) speech perception under two sets of conditions differing in the intelligibility of auditory speech (High vs. Low). In the Low intelligibility condition, in which speech was harder to hear, the McGurk effect (i.e., the visual influence) was much stronger. Functional magnetic resonance imaging (fMRI) also showed that speechreading-related visual areas (the left MT and left intraparietal sulcus, as observed in the video-only condition) were strongly activated in the Low intelligibility AV condition but not in the High intelligibility AV condition. Thus, visual information from the mouth movements was processed more intensively when speech was harder to hear. Reaction time data suggested that when auditory speech was easier to hear, there was a top-down suppression of visual processing that started earlier than auditory processing. On the other hand, when auditory speech was less intelligible, the reaction time data indicated that visual mouth movements served as a priming cue. These results provide insight into the time course of the integration process.

Original language: English
Title of host publication: 7th International Conference on Spoken Language Processing, ICSLP 2002
Publisher: International Speech Communication Association
Pages: 1693-1696
Number of pages: 4
Publication status: Published - 2002
Externally published: Yes
Event: 7th International Conference on Spoken Language Processing, ICSLP 2002 - Denver, United States
Duration: 2002 Sep 16 - 2002 Sep 20


ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language


Cite this

Sekiyama, K., & Sugita, Y. (2002). Auditory-visual speech perception examined by brain imaging and reaction time. In 7th International Conference on Spoken Language Processing, ICSLP 2002 (pp. 1693-1696). International Speech Communication Association.