Representation learning of logic words by an RNN: From word sequences to robot actions

Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions, with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these words, the current study also deals with logic words such as “not,” “and,” and “or.” These words do not directly refer to the real world; rather, they are logical operators that contribute to the construction of sentence meaning. In human–robot communication, such words are likely to be used often. The current study builds a recurrent neural network model with long short-term memory units and trains it to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, mediating between sentences and robot actions, emerge as the network’s internal states through the learning process. Analysis after learning shows that referential words are merged with visual information and the robot’s own current state, and that the logic words are represented by the model in accordance with their functions as logical operators. Words such as “true,” “false,” and “not” work as non-linear transformations that encode orthogonal phrases into the same area of a memory cell state space. The word “and,” which required the robot to lift both its hands, worked as if it were a universal quantifier. The word “or,” which required action generation that appeared random, was represented as an unstable region of the network’s dynamical system.
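The abstract's core setup — an LSTM-based recurrent network that consumes a word sequence (including logic words) and whose memory-cell state then conditions action generation — can be sketched minimally as below. This is an illustrative toy, not the authors' implementation: the vocabulary, hidden size, and random untrained weights are assumptions purely for demonstrating the encoding mechanism.

```python
# Toy sketch (assumed, not the paper's code): an LSTM reads a word
# sequence one token at a time; its final memory-cell state is the
# internal representation that would drive robot action outputs.
import math
import random

HIDDEN = 4  # toy memory-cell size (assumption; the paper's size differs)

# One-hot vocabulary including logic words studied in the paper.
VOCAB = ['raise', 'left', 'right', 'hand', 'not', 'and', 'or']

def one_hot(word):
    v = [0.0] * len(VOCAB)
    v[VOCAB.index(word)] = 1.0
    return v

def init_weights(in_dim, hid, rng):
    """Random (untrained) weights for the four LSTM gates i, f, o, g."""
    def mat(rows, cols):
        return [[rng.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]
    return {g: {'x': mat(hid, in_dim), 'h': mat(hid, hid), 'b': [0.0] * hid}
            for g in 'ifog'}

def lstm_step(x, h, c, W):
    """One standard LSTM update: gates modulate the memory cell c."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    def pre(g):  # pre-activation for gate g
        return [W[g]['b'][k]
                + sum(W[g]['x'][k][j] * x[j] for j in range(len(x)))
                + sum(W[g]['h'][k][j] * h[j] for j in range(len(h)))
                for k in range(len(h))]
    i = [sig(v) for v in pre('i')]          # input gate
    f = [sig(v) for v in pre('f')]          # forget gate
    o = [sig(v) for v in pre('o')]          # output gate
    g = [math.tanh(v) for v in pre('g')]    # candidate cell values
    c_new = [f[k] * c[k] + i[k] * g[k] for k in range(len(c))]
    h_new = [o[k] * math.tanh(c_new[k]) for k in range(len(c))]
    return h_new, c_new

def encode(sentence, W):
    """Fold a word sequence into a memory-cell state vector."""
    h, c = [0.0] * HIDDEN, [0.0] * HIDDEN
    for word in sentence.split():
        h, c = lstm_step(one_hot(word), h, c, W)
    return c  # the cell state the paper analyzes after training

rng = random.Random(0)
W = init_weights(len(VOCAB), HIDDEN, rng)
state = encode('raise left hand and right hand', W)
print(len(state))  # dimensionality of the resulting cell state
```

In the paper this cell state is what the analysis inspects: after training, logic words act on it as operators (e.g., "not" transforming one phrase's region of the state space into another's), whereas here the untrained weights merely show the mechanics of how a word sequence is folded into a single state vector.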

Original language: English
Article number: 70
Journal: Frontiers in Neurorobotics
Volume: 11
DOIs
Publication status: Published - 2017 Jan 1

Keywords

  • Human–robot interaction
  • Language understanding
  • Logic words
  • Neural network
  • Sequence-to-sequence learning
  • Symbol grounding

ASJC Scopus subject areas

  • Biomedical Engineering
  • Artificial Intelligence

