Auditory-visual speech perception examined by brain imaging and reaction time

Kaoru Sekiyama, Yoichi Sugita

Research output: Conference contribution

2 Citations (Scopus)

Abstract

By using the McGurk effect [1], we compared brain activation during audiovisual (AV) speech perception under two sets of conditions differing in the intelligibility of auditory speech (High vs. Low). In the Low intelligibility condition, in which speech was harder to hear, the McGurk effect (the visual influence) was much stronger. Functional magnetic resonance imaging (fMRI) also showed that speechreading-related visual areas (the left MT and left intraparietal sulcus, as observed in the video-only condition) were strongly activated in the Low intelligibility AV condition but not in the High intelligibility AV condition. Thus, visual information from the mouth movements was processed more intensively when speech was harder to hear. Reaction time data suggested that when auditory speech was easier to hear, there was top-down suppression of visual processing that started earlier than auditory processing. On the other hand, when auditory speech was less intelligible, reaction times indicated that visual mouth movements served as a priming cue. These results provide insight into the time course of the integration process.

Original language: English
Title of host publication: 7th International Conference on Spoken Language Processing, ICSLP 2002
Publisher: International Speech Communication Association
Pages: 1693-1696
Number of pages: 4
Publication status: Published - 2002
Externally published: Yes
Event: 7th International Conference on Spoken Language Processing, ICSLP 2002 - Denver, United States
Duration: 16 Sep 2002 - 20 Sep 2002


ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language

