The MEI robot: Towards using motherese to develop multimodal emotional intelligence

Angelica Lim, Hiroshi G. Okuno

Research output: Article (peer-reviewed)

26 Citations (Scopus)

Abstract

We introduce the first steps towards a developmental robot called MEI (multimodal emotional intelligence), a robot that can understand and express emotions in voice, gesture and gait using a controller trained only on voice. Whereas it is known that humans can perceive affect in voice, movement, music and even in stimuli as minimal as point-light displays, it is not clear how humans develop this skill. Is it innate? If not, how does this emotional intelligence develop in infants? The MEI robot develops these skills through vocal input and perceptual mapping of vocal features to other modalities. We base MEI's development on the idea that motherese is used to associate dynamic vocal contours with facial emotion from an early age. MEI uses these dynamic contours to both understand and express multimodal emotions using a unified model called SIRE (Speed, Intensity, irRegularity, and Extent). Offline experiments with MEI support its cross-modal generalization ability: a model trained with voice data can recognize happiness, sadness, and fear in a completely different modality, human gait. User evaluations of the MEI robot speaking, gesturing and walking show that it can reliably express multimodal happiness and sadness using only the voice-trained model as a basis.
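The abstract describes a single SIRE representation (Speed, Intensity, irRegularity, Extent) extracted from voice and reused to drive other modalities such as gait. The sketch below only illustrates that cross-modal idea; it is not the paper's implementation. The feature definitions, scaling constants, and helper names (sire_from_voice, gait_from_sire) are assumptions introduced here for illustration.

```python
# Hypothetical sketch of a SIRE-style cross-modal mapping (assumed details, not the
# authors' method): extract normalized Speed/Intensity/irRegularity/Extent values
# from a voice sample, then reuse the same values to parameterize a robot gait.
import numpy as np

def sire_from_voice(f0_contour, energy):
    """Map a pitch contour (Hz, 0 = unvoiced) and per-frame energy to SIRE values in [0, 1]."""
    voiced = f0_contour > 0
    f0 = f0_contour[voiced]

    speed = float(np.mean(voiced))                                  # proxy: fraction of voiced frames
    intensity = float(np.clip(np.mean(energy) / (np.max(energy) + 1e-9), 0, 1))
    irregularity = float(np.clip(np.std(np.diff(f0)) / 50.0, 0, 1)) if f0.size > 1 else 0.0
    extent = float(np.clip((np.max(f0) - np.min(f0)) / 300.0, 0, 1)) if f0.size else 0.0
    return {"speed": speed, "intensity": intensity,
            "irregularity": irregularity, "extent": extent}

def gait_from_sire(sire):
    """Translate the same SIRE values into illustrative gait parameters (made-up ranges)."""
    return {
        "step_period_s": 1.2 - 0.6 * sire["speed"],         # faster voice -> quicker steps
        "step_height_m": 0.02 + 0.04 * sire["intensity"],   # more intense -> higher foot lift
        "timing_jitter": 0.1 * sire["irregularity"],        # irregular voice -> irregular step timing
        "stride_length_m": 0.10 + 0.15 * sire["extent"],    # wider pitch range -> larger strides
    }

if __name__ == "__main__":
    # Toy "happy-like" voice: fully voiced, energetic, wide oscillating pitch range.
    t = np.linspace(0, 2, 200)
    f0 = 220 + 80 * np.sin(2 * np.pi * 3 * t)
    energy = 0.8 + 0.2 * np.random.rand(200)
    sire = sire_from_voice(f0, energy)
    print(sire)
    print(gait_from_sire(sire))
```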

Original language: English
Article number: 6798757
Pages (from-to): 126-138
Number of pages: 13
Journal: IEEE Transactions on Autonomous Mental Development
Volume: 6
Issue number: 2
DOI
Publication status: Published - 2014
Externally published: Yes

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
