Gestural cue analysis in automated semantic miscommunication annotation

Masashi Inoue, Mitsunori Ogihara, Ryoko Hanada, Nobuhiro Furuyama

Research output: Article (peer-reviewed)

4 citations (Scopus)

Abstract

The automated annotation of conversational video with semantic miscommunication labels is a challenging topic. Although miscommunications are often obvious to the speakers as well as to observers, it is difficult for machines to detect them from low-level features. In this paper, we investigate the utility of gestural cues among various non-verbal features. Compared with gesture recognition tasks in human-computer interaction, this process is difficult owing to the lack of understanding of which cues contribute to miscommunications and to the implicitness of gestures. Nine simple gestural features are extracted from gesture data, and both simple and complex classifiers are constructed using machine learning. The experimental results suggest that no single gestural feature can predict or explain the occurrence of semantic miscommunication in our setting.
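The per-feature analysis described above can be illustrated with a minimal sketch: score each gestural feature in isolation with a one-threshold "decision stump", a simple stand-in for the paper's per-feature classifiers. The feature values and labels below are invented for illustration and are not the paper's data.

```python
# Hypothetical sketch: best accuracy a single-threshold classifier can
# achieve on one feature, used to check whether any single gestural
# feature separates miscommunication from non-miscommunication samples.

def stump_accuracy(values, labels):
    """Best accuracy of a one-feature, one-threshold classifier."""
    n = len(labels)
    best = 0.0
    for t in set(values):          # candidate thresholds from the data
        for sign in (1, -1):       # direction of the inequality
            preds = [1 if sign * (v - t) >= 0 else 0 for v in values]
            acc = sum(p == y for p, y in zip(preds, labels)) / n
            best = max(best, acc)
    return best

# Invented data: binary miscommunication labels per gesture sample.
# A feature that separates the classes scores 1.0; an uninformative
# feature stays at chance level (0.5).
separable = stump_accuracy([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1])
uninformative = stump_accuracy([0.1, 0.9, 0.1, 0.9], [0, 1, 1, 0])
print(separable, uninformative)  # → 1.0 0.5
```

In the study, nine such gestural features were evaluated with both simple and more complex learned classifiers; the reported result is that no single feature alone predicted miscommunication, which in this sketch would correspond to all per-feature accuracies remaining near chance.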

Original language: English
Pages (from-to): 7-20
Number of pages: 14
Journal: Multimedia Tools and Applications
Volume: 61
Issue number: 1
DOI
Publication status: Published - Nov 2012
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Media Technology
  • Hardware and Architecture
  • Computer Networks and Communications
