Understanding natural language sentences with word embedding and multi-modal interaction

Junpei Zhong, Tetsuya Ogata, Angelo Cangelosi, Chenguang Yang

    Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

    3 Citations (Scopus)

    Abstract

    Understanding and grounding human commands given in natural language has been a fundamental requirement for service-robot applications. Although there have been several attempts toward this goal, storing and processing natural-language corpora within an interaction system remains a bottleneck. Currently, neural- and statistical-based (N&S) natural language processing has shown potential to solve this problem. With the large data-sets available nowadays, these methods can extract semantic relationships while parsing a corpus of natural language (NL) text with little manual design, compared with rule-based language processing methods. In this paper, we show how two N&S word embedding methods, Word2vec and GloVe, can be used for natural language understanding as pre-training tools in a multi-modal environment. Together with two different multiple time-scale recurrent neural models, they form hybrid neural language understanding models for a robot manipulation experiment.
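    The pipeline the abstract describes — pretrained word embeddings feeding a multiple time-scale recurrent network — can be sketched roughly as follows. This is a minimal illustrative toy, not the paper's implementation: the vocabulary, the random embedding table (a stand-in for real Word2vec/GloVe vectors), the layer sizes, and the time constants are all assumptions made up for the example.

    ```python
    import numpy as np

    # Toy "pretrained" embedding table (stand-in for Word2vec/GloVe vectors).
    EMB_DIM = 4
    vocab = {"pick": 0, "up": 1, "the": 2, "red": 3, "cube": 4}
    rng = np.random.default_rng(0)
    embeddings = rng.standard_normal((len(vocab), EMB_DIM))

    def embed(sentence):
        """Map a tokenized command to a sequence of embedding vectors."""
        return [embeddings[vocab[w]] for w in sentence.split()]

    class MTRNNSketch:
        """Minimal multiple-time-scale recurrent unit: a fast and a slow
        context layer updated as leaky integrators with different time
        constants (tau), in the spirit of MTRNN-style models."""
        def __init__(self, in_dim, fast_dim=8, slow_dim=4,
                     tau_fast=2.0, tau_slow=16.0, seed=1):
            rng = np.random.default_rng(seed)
            self.Wf = 0.1 * rng.standard_normal((fast_dim, in_dim + fast_dim + slow_dim))
            self.Ws = 0.1 * rng.standard_normal((slow_dim, fast_dim + slow_dim))
            self.tau_f, self.tau_s = tau_fast, tau_slow
            self.f = np.zeros(fast_dim)   # fast context state
            self.s = np.zeros(slow_dim)   # slow context state

        def step(self, x):
            # Leaky-integrator update: a larger tau means slower dynamics,
            # so the slow layer tracks longer-range sentence structure.
            pre_f = self.Wf @ np.concatenate([x, self.f, self.s])
            pre_s = self.Ws @ np.concatenate([self.f, self.s])
            self.f = self.f + (np.tanh(pre_f) - self.f) / self.tau_f
            self.s = self.s + (np.tanh(pre_s) - self.s) / self.tau_s
            return self.f, self.s

    # Feed a command word by word through the recurrent network.
    net = MTRNNSketch(in_dim=EMB_DIM)
    for vec in embed("pick up the red cube"):
        fast, slow = net.step(vec)
    ```

    In the paper's setting the slow context would additionally be coupled to other modalities (e.g., proprioception or vision) so that linguistic and sensorimotor sequences share representations; here the network is language-only for brevity.
    
    
    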

    Original language: English
    Title of host publication: 7th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, ICDL-EpiRob 2017
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Pages: 184-189
    Number of pages: 6
    Volume: 2018-January
    ISBN (Electronic): 9781538637159
    DOI: 10.1109/DEVLRN.2017.8329805
    Publication status: Published - 2018 Apr 2
    Event: 7th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, ICDL-EpiRob 2017 - Lisbon, Portugal
    Duration: 2017 Sep 18 - 2017 Sep 21



    ASJC Scopus subject areas

    • Artificial Intelligence
    • Mechanical Engineering
    • Control and Optimization
    • Developmental Neuroscience

    Cite this

    Zhong, J., Ogata, T., Cangelosi, A., & Yang, C. (2018). Understanding natural language sentences with word embedding and multi-modal interaction. In 7th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, ICDL-EpiRob 2017 (Vol. 2018-January, pp. 184-189). Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/DEVLRN.2017.8329805

