Motion from sound: Intermodal neural network mapping

Research output: Contribution to journal › Article

Abstract

A technological method has been developed for intermodal mapping that generates robot motion from various sounds and, conversely, sounds from motions. The procedure consists of two phases. In the learning phase, the robot observes events together with their associated sounds and memorizes those sounds along with the motions of the sound source. In the interacting phase, the robot receives limited sensory information from a single modality as input, associates it with a different modality, and expresses it. The method applies the recurrent-neural-network model with parametric bias (RNNPB), which takes the current state vector as input and outputs the next state vector. The RNNPB model can self-organize the values that encode the input dynamics into special parametric-bias nodes to reproduce the multimodal sensory flow.
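As a concrete illustration of the mechanism the abstract describes, below is a minimal sketch of an RNNPB-style network in Python/PyTorch. This is not the authors' implementation: the dimensions (STATE_DIM, PB_DIM, HIDDEN_DIM), the mean-squared prediction loss, and the gradient-based PB search are illustrative assumptions. Only the overall scheme follows the abstract: a shared recurrent network predicts the next state vector, with per-sequence parametric-bias values learned jointly with the weights in the learning phase and searched alone, with weights frozen, in the interacting phase.

```python
# Minimal RNNPB-style sketch (illustrative assumptions, not the paper's code).
import torch
import torch.nn as nn

STATE_DIM = 8    # assumed size of the multimodal state vector (sound + motion)
PB_DIM = 2       # assumed size of the parametric-bias vector
HIDDEN_DIM = 16  # assumed recurrent hidden size

class RNNPB(nn.Module):
    """Predicts the next state vector from the current one, conditioned on a
    small parametric-bias (PB) vector held constant over a sequence."""
    def __init__(self):
        super().__init__()
        self.cell = nn.RNNCell(STATE_DIM + PB_DIM, HIDDEN_DIM)
        self.out = nn.Linear(HIDDEN_DIM, STATE_DIM)

    def forward(self, seq, pb):
        # seq: (T, STATE_DIM); pb: (PB_DIM,), fixed across time steps
        h = torch.zeros(1, HIDDEN_DIM)
        preds = []
        for x in seq[:-1]:                        # predict x[t+1] from x[t]
            inp = torch.cat([x, pb]).unsqueeze(0)
            h = self.cell(inp, h)
            preds.append(self.out(h))
        return torch.cat(preds)                   # (T-1, STATE_DIM)

model = RNNPB()
loss_fn = nn.MSELoss()

# Learning phase: observe multimodal sequences and adapt both the shared
# weights and one PB vector per sequence (two toy random sequences here).
sequences = [torch.randn(10, STATE_DIM) for _ in range(2)]
pbs = [torch.zeros(PB_DIM, requires_grad=True) for _ in sequences]
opt = torch.optim.Adam(list(model.parameters()) + pbs, lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = sum(loss_fn(model(s, pb), s[1:]) for s, pb in zip(sequences, pbs))
    loss.backward()
    opt.step()

# Interacting phase: freeze the weights and search only a new PB vector that
# minimizes prediction error on the observed input; the recovered PB then
# drives generation of the associated modality.
for p in model.parameters():
    p.requires_grad_(False)
query = sequences[0]
pb_new = torch.zeros(PB_DIM, requires_grad=True)
opt_pb = torch.optim.Adam([pb_new], lr=1e-2)
for _ in range(100):
    opt_pb.zero_grad()
    err = loss_fn(model(query, pb_new), query[1:])
    err.backward()
    opt_pb.step()
```

In a full intermodal setup, the interacting-phase error would be computed only over the observed modality's dimensions of the state vector (e.g., the sound part), and the network's predictions for the remaining dimensions would drive the robot's motion.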

Original language: English
Article number: 4475863
Pages (from-to): 76-78
Number of pages: 3
Journal: IEEE Intelligent Systems
Volume: 23
Issue number: 2
DOIs: 10.1109/MIS.2008.22
Publication status: Published - Mar 2008
Externally published: Yes

Fingerprint

Acoustic waves
Neural networks
Recurrent neural networks
Robots

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering
  • Artificial Intelligence

Cite this

Motion from sound: Intermodal neural network mapping. / Ogata, Tetsuya; Okuno, Hiroshi G.; Kozima, Hideki.

In: IEEE Intelligent Systems, Vol. 23, No. 2, 4475863, 03.2008, p. 76-78.

Research output: Contribution to journal › Article

@article{381c035323c14222b5dc5b1f2927d275,
title = "Motion from sound: Intermodal neural network mapping",
abstract = "A technological method has been developed for intermodal mapping that generates robot motion from various sounds and, conversely, sounds from motions. The procedure consists of two phases. In the learning phase, the robot observes events together with their associated sounds and memorizes those sounds along with the motions of the sound source. In the interacting phase, the robot receives limited sensory information from a single modality as input, associates it with a different modality, and expresses it. The method applies the recurrent-neural-network model with parametric bias (RNNPB), which takes the current state vector as input and outputs the next state vector. The RNNPB model can self-organize the values that encode the input dynamics into special parametric-bias nodes to reproduce the multimodal sensory flow.",
author = "Tetsuya Ogata and Okuno, {Hiroshi G.} and Hideki Kozima",
year = "2008",
month = "3",
doi = "10.1109/MIS.2008.22",
language = "English",
volume = "23",
pages = "76--78",
journal = "IEEE Intelligent Systems",
issn = "1541-1672",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "2",

}

TY - JOUR

T1 - Motion from sound

T2 - Intermodal neural network mapping

AU - Ogata, Tetsuya

AU - Okuno, Hiroshi G.

AU - Kozima, Hideki

PY - 2008/3

Y1 - 2008/3

N2 - A technological method has been developed for intermodal mapping that generates robot motion from various sounds and, conversely, sounds from motions. The procedure consists of two phases. In the learning phase, the robot observes events together with their associated sounds and memorizes those sounds along with the motions of the sound source. In the interacting phase, the robot receives limited sensory information from a single modality as input, associates it with a different modality, and expresses it. The method applies the recurrent-neural-network model with parametric bias (RNNPB), which takes the current state vector as input and outputs the next state vector. The RNNPB model can self-organize the values that encode the input dynamics into special parametric-bias nodes to reproduce the multimodal sensory flow.

AB - A technological method has been developed for intermodal mapping that generates robot motion from various sounds and, conversely, sounds from motions. The procedure consists of two phases. In the learning phase, the robot observes events together with their associated sounds and memorizes those sounds along with the motions of the sound source. In the interacting phase, the robot receives limited sensory information from a single modality as input, associates it with a different modality, and expresses it. The method applies the recurrent-neural-network model with parametric bias (RNNPB), which takes the current state vector as input and outputs the next state vector. The RNNPB model can self-organize the values that encode the input dynamics into special parametric-bias nodes to reproduce the multimodal sensory flow.

UR - http://www.scopus.com/inward/record.url?scp=41549145445&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=41549145445&partnerID=8YFLogxK

U2 - 10.1109/MIS.2008.22

DO - 10.1109/MIS.2008.22

M3 - Article

AN - SCOPUS:41549145445

VL - 23

SP - 76

EP - 78

JO - IEEE Intelligent Systems

JF - IEEE Intelligent Systems

SN - 1541-1672

IS - 2

M1 - 4475863

ER -