Attractor representations of language-behavior structure in a recurrent neural network for human-robot interaction

Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    4 Citations (Scopus)

    Abstract

    In recent years there has been increased interest in studies that explore integrative learning of language and other modalities using neural network models. For practical application to human-robot interaction, however, the acquired semantic structure linking language and meaning must be available immediately and repeatably whenever necessary, just as in everyday communication. As a solution to this problem, this study proposes a method in which a recurrent neural network self-organizes cyclic attractors that reflect semantic structure and represent interaction flows in its internal dynamics. To evaluate this method we design a simple task in which a human verbally directs a robot, which responds appropriately. When the network is trained with data representing the interaction series, cyclic attractors reflecting the semantic structure are self-organized. The network first receives a verbal direction, and its internal state moves along the first half of a cyclic attractor, whose branch structure corresponds to the semantics. The internal state then reaches a region from which the appropriate behavior can be generated. Finally, the internal state traverses the second half of the cycle and converges on the initial point of the cycle while generating the appropriate behavior. By self-organizing such an internal structure in its forward dynamics, the model achieves immediate and repeatable responses to linguistic directions. Furthermore, the network self-organizes a fixed-point attractor, and is thus able to wait for directions. It can therefore repeat the interaction flexibly without explicit turn-taking signals.
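    The abstract's core idea (a fixed-point attractor at which the system "waits", plus an input-triggered excursion that returns to the same rest state, making responses immediately repeatable) can be illustrated with a toy dynamical system. The sketch below is NOT the authors' trained RNN; it substitutes the classic FitzHugh-Nagumo equations (standard parameters a=0.7, b=0.8, eps=0.08) purely to demonstrate the wait-respond-return dynamics described in the paper.

```python
# Illustrative sketch only: not the paper's model. A FitzHugh-Nagumo system
# rests at a stable fixed point ("waiting for a direction"); a brief input
# pulse ("a verbal direction") triggers a large excursion ("generating the
# behavior") that converges back to the same rest state, so the response can
# be repeated without any explicit reset or turn-taking signal.
import numpy as np

A, B, EPS = 0.7, 0.8, 0.08          # standard FitzHugh-Nagumo parameters
V_REST, W_REST = -1.1994, -0.6243   # stable fixed point for zero input

def simulate(n_steps=4000, dt=0.1, pulse=(1000, 1020), amp=0.8):
    """Euler-integrate the system; the pulse plays the role of an
    incoming direction arriving at the waiting network."""
    v, w = V_REST, W_REST
    vs = np.empty(n_steps)
    for t in range(n_steps):
        i_ext = amp if pulse[0] <= t < pulse[1] else 0.0
        dv = v - v**3 / 3.0 - w + i_ext   # fast "activity" variable
        dw = EPS * (v + A - B * w)        # slow recovery variable
        v, w = v + dt * dv, w + dt * dw
        vs[t] = v
    return vs

vs = simulate()
# Before the pulse the state sits at the fixed point; the pulse triggers a
# large excursion, after which the state returns to rest, ready for the
# next direction.
```

    The analogy is loose: in the paper the attractor landscape is learned from interaction data and the excursion branches by semantics, whereas here the single excursion is hand-built into the equations.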

    Original language: English
    Title of host publication: IEEE International Conference on Intelligent Robots and Systems
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Pages: 4179-4184
    Number of pages: 6
    Volume: 2015-December
    ISBN (Print): 9781479999941
    DOI: 10.1109/IROS.2015.7353968
    Publication status: Published - 2015 Dec 11
    Event: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2015 - Hamburg, Germany
    Duration: 2015 Sep 28 - 2015 Oct 2



    Keywords

    • Hidden Markov models
    • Neurons
    • Pragmatics
    • Robots
    • Semantics
    • Training
    • Training data

    ASJC Scopus subject areas

    • Control and Systems Engineering
    • Software
    • Computer Vision and Pattern Recognition
    • Computer Science Applications

    Cite this

    Yamada, T., Murata, S., Arie, H., & Ogata, T. (2015). Attractor representations of language-behavior structure in a recurrent neural network for human-robot interaction. In IEEE International Conference on Intelligent Robots and Systems (Vol. 2015-December, pp. 4179-4184). [7353968] Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/IROS.2015.7353968
