Acquisition of viewpoint representation in imitative learning from own sensory-motor experiences

Ryoichi Nakajo, Shingo Murata, Hiroaki Arie, Tetsuya Ogata

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    5 Citations (Scopus)

    Abstract

    This paper introduces an imitative model that enables a robot to acquire viewpoints of the self and others from its own sensory-motor experiences. This ability is important for recognizing and imitating actions generated from various directions. Existing methods require coordinate transformations supplied by human designers or complex learning modules to acquire a viewpoint. In the proposed model, several neurons dedicated to the generated actions and to the viewpoints of the self and others are added to a dynamic neural network model referred to as a continuous-time recurrent neural network (CTRNN). The training data are labeled with types of actions and viewpoints, and the labels are linked to the corresponding internal states. We implemented this model on a robot and trained it to perform object-manipulation actions. Representations of behavior and viewpoint were formed in the internal states of the CTRNN. In addition, we analyzed the initial values of the internal states that represent the viewpoint information, and confirmed that distinctions among observational perspectives of others' actions self-organized in the space of these initial values. By combining the initial values of the internal states that describe the behavior and the viewpoint, the system can generate unlearned data.
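
    To make the architecture described in the abstract concrete, below is a minimal sketch (not the authors' code) of a CTRNN whose initial internal states are partitioned so that a few units carry action-type and viewpoint codes, with the rest starting at zero. All names and sizes here (CTRNN, init_state, action_code, view_code, the layer widths, the time constant) are hypothetical choices for illustration only.

    import numpy as np

    class CTRNN:
        """Leaky-integrator recurrent network (continuous-time RNN)."""

        def __init__(self, n_in, n_hidden, n_out, tau=5.0, seed=0):
            rng = np.random.default_rng(seed)
            self.tau = tau  # time constant of the leaky units
            self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
            self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
            self.W_out = rng.normal(0.0, 0.1, (n_out, n_hidden))
            self.b = np.zeros(n_hidden)
            self.n_hidden = n_hidden

        def init_state(self, action_code, view_code):
            # Reserve the first internal states for the action and viewpoint
            # labels; remaining units start at zero. This partitioning is a
            # hypothetical stand-in for the paper's dedicated neurons.
            u0 = np.zeros(self.n_hidden)
            u0[:len(action_code)] = action_code
            u0[len(action_code):len(action_code) + len(view_code)] = view_code
            return u0

        def step(self, u, x_in):
            # Euler-discretized leaky integration:
            # du/dt = (-u + W_rec tanh(u) + W_in x + b) / tau
            h = np.tanh(u)
            u = u + (-u + self.W_rec @ h + self.W_in @ x_in + self.b) / self.tau
            return u, self.W_out @ np.tanh(u)

    # Pair an action code with a viewpoint code and unroll the network in
    # closed loop; after training, recombining codes would correspond to
    # generating an unlearned action/viewpoint combination.
    net = CTRNN(n_in=4, n_hidden=20, n_out=4)
    u = net.init_state(action_code=[1.0, -1.0], view_code=[-1.0])
    x = np.zeros(4)
    for _ in range(50):
        u, x = net.step(u, x)  # feed each prediction back as the next input

    The sketch omits training: in the setting the abstract describes, the initial states would be optimized jointly with the weights (e.g., by backpropagation through time) so that the action and viewpoint representations self-organize in that initial-state space.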

    Original language: English
    Title of host publication: 5th Joint International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2015
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Pages: 326-331
    Number of pages: 6
    ISBN (Print): 9781467393201
    DOI: 10.1109/DEVLRN.2015.7346166
    Publication status: Published - 2015 Dec 2
    Event: 5th Joint International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2015 - Providence, United States
    Duration: 2015 Aug 13 - 2015 Aug 16

    Other

    Other: 5th Joint International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2015
    Country: United States
    City: Providence
    Period: 15/8/13 - 15/8/16

    Fingerprint

    Recurrent neural networks
    Robots
    Neurons

    ASJC Scopus subject areas

    • Artificial Intelligence

    Cite this

    Nakajo, R., Murata, S., Arie, H., & Ogata, T. (2015). Acquisition of viewpoint representation in imitative learning from own sensory-motor experiences. In 5th Joint International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2015 (pp. 326-331). [7346166] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/DEVLRN.2015.7346166
