Auditory-visual speech perception examined by brain imaging and reaction time

Kaoru Sekiyama, Yoichi Sugita

Research output: Conference contribution

2 Citations (Scopus)

Abstract

By using the McGurk effect [1], we compared brain activation during audiovisual (AV) speech perception for two sets of conditions differing in the intelligibility of auditory speech (High vs. Low). In the Low intelligibility condition in which speech was harder to hear, the McGurk effect, the visual influence, was much stronger. Functional magnetic resonance imaging (fMRI) also showed that speechreading-related visual areas (the left MT and left intraparietal sulcus as observed in the video-only condition) were strongly activated in the Low intelligibility AV condition but not in the High intelligibility AV condition. Thus visual information of the mouth movements was processed more intensively when speech was harder to hear. Reaction time data suggested that when auditory speech is easier to hear, there is a top-down suppression of visual processing that starts earlier than auditory processing. On the other hand, when auditory speech was less intelligible, reaction time data were such that visual mouth movements served as a priming cue. These results provide an insight into a time-spanned scope of the integration process.

Original language: English
Host publication title: 7th International Conference on Spoken Language Processing, ICSLP 2002
Publisher: International Speech Communication Association
Pages: 1693-1696
Number of pages: 4
Publication status: Published - 2002
Externally published: Yes
Event: 7th International Conference on Spoken Language Processing, ICSLP 2002 - Denver, United States
Duration: 16 Sep 2002 to 20 Sep 2002



ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language

Cite this

Sekiyama, K., & Sugita, Y. (2002). Auditory-visual speech perception examined by brain imaging and reaction time. In 7th International Conference on Spoken Language Processing, ICSLP 2002 (pp. 1693-1696). International Speech Communication Association.


TY - GEN
T1 - Auditory-visual speech perception examined by brain imaging and reaction time
AU - Sekiyama, Kaoru
AU - Sugita, Yoichi
PY - 2002
Y1 - 2002
N2 - By using the McGurk effect [1], we compared brain activation during audiovisual (AV) speech perception for two sets of conditions differing in the intelligibility of auditory speech (High vs. Low). In the Low intelligibility condition in which speech was harder to hear, the McGurk effect, the visual influence, was much stronger. Functional magnetic resonance imaging (fMRI) also showed that speechreading-related visual areas (the left MT and left intraparietal sulcus as observed in the video-only condition) were strongly activated in the Low intelligibility AV condition but not in the High intelligibility AV condition. Thus visual information of the mouth movements was processed more intensively when speech was harder to hear. Reaction time data suggested that when auditory speech is easier to hear, there is a top-down suppression of visual processing that starts earlier than auditory processing. On the other hand, when auditory speech was less intelligible, reaction time data were such that visual mouth movements served as a priming cue. These results provide an insight into a time-spanned scope of the integration process.
AB - By using the McGurk effect [1], we compared brain activation during audiovisual (AV) speech perception for two sets of conditions differing in the intelligibility of auditory speech (High vs. Low). In the Low intelligibility condition in which speech was harder to hear, the McGurk effect, the visual influence, was much stronger. Functional magnetic resonance imaging (fMRI) also showed that speechreading-related visual areas (the left MT and left intraparietal sulcus as observed in the video-only condition) were strongly activated in the Low intelligibility AV condition but not in the High intelligibility AV condition. Thus visual information of the mouth movements was processed more intensively when speech was harder to hear. Reaction time data suggested that when auditory speech is easier to hear, there is a top-down suppression of visual processing that starts earlier than auditory processing. On the other hand, when auditory speech was less intelligible, reaction time data were such that visual mouth movements served as a priming cue. These results provide an insight into a time-spanned scope of the integration process.
UR - http://www.scopus.com/inward/record.url?scp=85009257800&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85009257800&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85009257800
SP - 1693
EP - 1696
BT - 7th International Conference on Spoken Language Processing, ICSLP 2002
PB - International Speech Communication Association
ER -