Intersensory causality modeling using deep neural networks

Kuniaki Noda*, Hiroaki Arie, Yuki Suga, Tetsuya Ogata

*Corresponding author of this work

Research output: Conference contribution

3 citations (Scopus)

Abstract

Our brain is known to enhance perceptual precision and reduce ambiguity about the sensory environment by integrating multiple sources of sensory information acquired from different modalities, such as vision, audition, and somatic sensation. From an engineering perspective, building a computational model that replicates this ability to integrate multimodal information and to self-organize the causal dependencies among the modalities represents one of the central challenges in robotics. In this study, we propose such a model based on a deep learning framework, and we evaluate it by conducting a bell-ringing task with a small humanoid robot. Our experimental results demonstrate that (1) the cross-modal memory retrieval function of the proposed method succeeds in generating a visual sequence from the corresponding sound and bell-ringing motion, and (2) the proposed method acquires accurate causal dependencies among the sensory-motor sequences.
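The cross-modal retrieval idea described above can be illustrated with a toy stand-in: a shared latent cause drives two modalities, and a mapping fitted between the modalities' features can regenerate one from the other. This is a minimal linear sketch only; the synthetic data, dimensions, and the least-squares map substituting for the paper's trained deep network are all illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired data: a shared latent cause z drives both modalities,
# standing in for the common sensory-motor dynamics of the task.
n, dz, ds, dv = 500, 3, 8, 10          # samples, latent, "sound", "vision" dims
Z = rng.normal(size=(n, dz))
Ws = rng.normal(size=(dz, ds))          # latent -> "sound" features (assumed)
Wv = rng.normal(size=(dz, dv))          # latent -> "vision" features (assumed)
S = Z @ Ws + 0.01 * rng.normal(size=(n, ds))
V = Z @ Wv + 0.01 * rng.normal(size=(n, dv))

# Cross-modal retrieval, linear stand-in for the deep network:
# fit a least-squares map from sound features to vision features.
A, *_ = np.linalg.lstsq(S, V, rcond=None)

# "Retrieve" the visual sequence from the sound sequence alone.
V_hat = S @ A
err = np.mean((V_hat - V) ** 2) / np.mean(V ** 2)
print(f"relative reconstruction error: {err:.5f}")
```

Because both modalities share a low-dimensional cause, the map recovers the visual features almost exactly; the deep model in the paper plays the analogous role for nonlinear, high-dimensional sensory-motor sequences.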

Original language: English
Host publication title: Proceedings - 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013
Pages: 1995-2000
Number of pages: 6
DOI
Publication status: Published - Dec 1, 2013
Event: 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013 - Manchester, United Kingdom
Duration: Oct 13, 2013 - Oct 16, 2013

Publication series

Name: Proceedings - 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013

Other

Other: 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013
Country/Territory: United Kingdom
City: Manchester
Period: 13/10/13 - 13/10/16

ASJC Scopus subject areas

  • Human-Computer Interaction
