Motion from sound: Intermodal neural network mapping

Tetsuya Ogata*, Hiroshi G. Okuno, Hideki Kozima

*Corresponding author for this work

Research output: Article, peer-reviewed

Abstract

A technological method has been developed for intermodal mapping that generates robot motion from various sounds and, conversely, generates sounds from motions. The procedure consists of two phases. In the learning phase, the robot observes events together with their associated sounds and memorizes those sounds along with the motions of the sound source. In the interacting phase, the robot receives limited sensory information from a single modality as input, associates it with the other modality, and expresses the result. The method applies the recurrent neural network model with parametric bias (RNNPB), which takes the current state vector as input and outputs the next state vector. The RNNPB model self-organizes the values that encode the input dynamics into special parametric-bias nodes in order to reproduce the multimodal sensory flow.
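To make the prediction scheme concrete, the following is a minimal sketch of an RNNPB-style network that predicts the next multimodal state vector from the current one, with a per-sequence parametric-bias vector learned alongside the shared weights. The framework (PyTorch), class and variable names, and all dimensions are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class RNNPB(nn.Module):
    """Sketch of a recurrent network with parametric bias (PB).

    Each training sequence gets its own small PB vector; the recurrent
    weights are shared across sequences. (Illustrative, not the authors'
    exact architecture.)
    """

    def __init__(self, io_dim, hidden_dim, pb_dim, n_sequences):
        super().__init__()
        self.pb = nn.Parameter(torch.zeros(n_sequences, pb_dim))
        self.i2h = nn.Linear(io_dim + pb_dim + hidden_dim, hidden_dim)
        self.h2o = nn.Linear(hidden_dim, io_dim)
        self.hidden_dim = hidden_dim

    def forward(self, seq, seq_id):
        # seq: (T, io_dim) flow of multimodal state vectors
        # (e.g. concatenated sound and motion features).
        pb = torch.tanh(self.pb[seq_id])          # bounded PB values
        h = torch.zeros(self.hidden_dim)
        preds = []
        for x_t in seq[:-1]:                      # predict x_{t+1} from x_t
            h = torch.tanh(self.i2h(torch.cat([x_t, pb, h])))
            preds.append(self.h2o(h))
        return torch.stack(preds)

# Learning phase: minimize the prediction error against seq[1:], updating
# both the network weights and the PB vectors.
# Interacting phase: freeze the weights and re-estimate only the PB vector
# for the incoming single-modality sequence, then regenerate the missing
# modality from the network's predictions.
```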

Original language: English
Article number: 4475863
Pages (from-to): 76-78
Number of pages: 3
Journal: IEEE Intelligent Systems
Volume: 23
Issue number: 2
DOI
Publication status: Published - 1 Mar 2008
Externally published: Yes

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Artificial Intelligence
