Intersensory causality modeling using deep neural networks

Kuniaki Noda, Hiroaki Arie, Yuki Suga, Tetsuya Ogata

    Research output: Conference contribution

    2 Citations (Scopus)

    Abstract

    Our brain is known to enhance perceptual precision and reduce ambiguity about the sensory environment by integrating multiple sources of sensory information acquired from different modalities, such as vision, audition, and somatic sensation. From an engineering perspective, building a computational model that replicates this ability to integrate multimodal information and to self-organize the causal dependencies among modalities represents one of the central challenges in robotics. In this study, we propose such a model based on a deep learning framework, and we evaluate it by conducting a bell-ringing task with a small humanoid robot. Our experimental results demonstrate that (1) the cross-modal memory retrieval function of the proposed method succeeds in generating a visual sequence from the corresponding sound and bell-ringing motion, and (2) the proposed method acquires accurate causal dependencies among the sensory-motor sequences.
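    The record does not describe the network architecture, so the following is a minimal, hypothetical sketch (in PyTorch) of the general idea behind the cross-modal retrieval described above: a multimodal autoencoder is trained to reconstruct vision, audio, and motor features from a shared latent code, and a missing modality is retrieved by feeding zeros in its place and reading its reconstruction from the decoder output. All layer sizes and feature dimensions are illustrative assumptions, not values from the paper, and the temporal (sequence) aspect of the task is omitted.

    # Hypothetical illustration only: a multimodal autoencoder with a shared
    # latent code. Architecture and dimensions are assumptions, not the paper's.
    import torch
    import torch.nn as nn

    class MultimodalAutoencoder(nn.Module):
        def __init__(self, dim_vision=30, dim_audio=10, dim_motor=13, dim_latent=16):
            super().__init__()
            dim_in = dim_vision + dim_audio + dim_motor
            self.dim_vision = dim_vision
            # Encoder and decoder share one latent space across all modalities,
            # which is what makes cross-modal retrieval possible.
            self.encoder = nn.Sequential(
                nn.Linear(dim_in, 64), nn.Tanh(),
                nn.Linear(64, dim_latent), nn.Tanh(),
            )
            self.decoder = nn.Sequential(
                nn.Linear(dim_latent, 64), nn.Tanh(),
                nn.Linear(64, dim_in),
            )

        def forward(self, vision, audio, motor):
            z = self.encoder(torch.cat([vision, audio, motor], dim=-1))
            return self.decoder(z)  # jointly reconstructs all three modalities

    # Cross-modal retrieval: blank out vision, reconstruct it from sound + motion.
    model = MultimodalAutoencoder()
    audio = torch.randn(1, 10)         # placeholder sound features
    motor = torch.randn(1, 13)         # placeholder joint-angle features
    vision_blank = torch.zeros(1, 30)  # the modality to be retrieved
    recon = model(vision_blank, audio, motor)
    vision_retrieved = recon[:, :model.dim_vision]  # estimated visual features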

    Original language: English
    Host publication title: Proceedings - 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013
    Pages: 1995-2000
    Number of pages: 6
    DOI: 10.1109/SMC.2013.342
    Publication status: Published - 2013
    Event: 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013 - Manchester, United Kingdom
    Duration: 13 Oct 2013 - 16 Oct 2013


    ASJC Scopus subject areas

    • Human-Computer Interaction

    Cite this

    Noda, K., Arie, H., Suga, Y., & Ogata, T. (2013). Intersensory causality modeling using deep neural networks. In Proceedings - 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013 (pp. 1995-2000). [6722095] https://doi.org/10.1109/SMC.2013.342

    @inproceedings{162e57152beb429aae73b166b9633b2f,
    title = "Intersensory causality modeling using deep neural networks",
    abstract = "Our brain is known to enhance perceptual precision and reduce ambiguity about the sensory environment by integrating multiple sources of sensory information acquired from different modalities, such as vision, audition, and somatic sensation. From an engineering perspective, building a computational model that replicates this ability to integrate multimodal information and to self-organize the causal dependencies among modalities represents one of the central challenges in robotics. In this study, we propose such a model based on a deep learning framework, and we evaluate it by conducting a bell-ringing task with a small humanoid robot. Our experimental results demonstrate that (1) the cross-modal memory retrieval function of the proposed method succeeds in generating a visual sequence from the corresponding sound and bell-ringing motion, and (2) the proposed method acquires accurate causal dependencies among the sensory-motor sequences.",
    keywords = "Deep learning, Multimodal integration, Robotics, Temporal sequence learning",
    author = "Kuniaki Noda and Hiroaki Arie and Yuki Suga and Tetsuya Ogata",
    year = "2013",
    doi = "10.1109/SMC.2013.342",
    language = "English",
    isbn = "9780769551548",
    pages = "1995--2000",
    booktitle = "Proceedings - 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013",

    }
