Acquisition of motion primitives of robot in human-navigation task towards human-robot interaction based on "quasi-symbols"

Tetsuya Ogata*, Shigeki Sugano, Jun Tani

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

A novel approach to human-robot collaboration based on quasi-symbolic expressions is proposed. The target task is navigation, in which a person with his or her eyes covered and a humanoid robot collaborate in a context-dependent manner. The robot uses a recurrent neural network with parametric bias (RNNPB) model to acquire the behavioral primitives, i.e., the sensory-motor units that compose the whole task. The robot expresses the PB dynamics of these primitives as symbolic sounds, and the person influences the dynamics through tactile sensors attached to the robot. Experiments with six participants demonstrated that the degree to which the person influences the PB dynamics is strongly related to task performance, the person's subjective impressions, and the prediction error of the RNNPB model (task stability). Simulation experiments demonstrated that the subjective impressions of the correspondence between the utterance sounds (the PB values) and the motions were well reproduced by rehearsal of the RNNPB model.
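As a rough illustration of the RNNPB mechanism summarized above, the following sketch shows how per-sequence parametric-bias (PB) values can be recovered from a sensory-motor sequence by minimizing one-step prediction error. This is an assumption-laden reimplementation for illustration only, not the authors' code: the dimensionalities, learning rates, and function names are hypothetical, and PB recognition is done here with finite differences rather than the backpropagation-through-time used with the original RNNPB model.

    # Minimal sketch of an RNNPB-style predictor (illustrative assumptions,
    # not the paper's implementation). The network predicts the next
    # sensory-motor vector from the current one, context units, and a small
    # PB vector held constant over a sequence. With a trained network fixed,
    # "recognition" adapts only the PB vector to minimize prediction error.
    import numpy as np

    rng = np.random.default_rng(0)

    DIM_X = 4      # sensory-motor dimensionality (hypothetical)
    DIM_C = 10     # context units
    DIM_PB = 2     # parametric-bias units
    DIM_H = 20     # hidden units
    LR_PB = 0.1    # PB update rate (hypothetical)

    def init_weights():
        s = 0.1
        return {
            "W_in": rng.normal(0, s, (DIM_H, DIM_X + DIM_C + DIM_PB)),
            "b_h": np.zeros(DIM_H),
            "W_out": rng.normal(0, s, (DIM_X + DIM_C, DIM_H)),
            "b_o": np.zeros(DIM_X + DIM_C),
        }

    def forward(params, seq, pb):
        # One-step predictions over a sequence of sensory-motor vectors.
        c = np.zeros(DIM_C)
        preds = []
        for x in seq[:-1]:
            inp = np.concatenate([x, c, pb])
            h = np.tanh(params["W_in"] @ inp + params["b_h"])
            out = np.tanh(params["W_out"] @ h + params["b_o"])
            preds.append(out[:DIM_X])   # predicted next sensory-motor vector
            c = out[DIM_X:]             # context outputs feed back as input
        return np.array(preds)

    def prediction_error(params, seq, pb):
        preds = forward(params, seq, pb)
        return float(np.mean((preds - seq[1:]) ** 2))

    def recognize_pb(params, seq, steps=200, eps=1e-4):
        # Recognition mode: adapt only the PB vector of a fixed network by
        # gradient descent on prediction error (finite differences here to
        # keep the sketch short; the original model uses BPTT).
        pb = np.zeros(DIM_PB)
        for _ in range(steps):
            grad = np.zeros(DIM_PB)
            for i in range(DIM_PB):
                d = np.zeros(DIM_PB)
                d[i] = eps
                grad[i] = (prediction_error(params, seq, pb + d)
                           - prediction_error(params, seq, pb - d)) / (2 * eps)
            pb -= LR_PB * grad
        return pb

    if __name__ == "__main__":
        params = init_weights()
        # Toy sequence standing in for a recorded sensory-motor primitive.
        t = np.linspace(0, 2 * np.pi, 30)
        seq = np.stack([np.sin(t + k) for k in range(DIM_X)], axis=1)
        pb = recognize_pb(params, seq)
        print("estimated PB:", pb, "error:", prediction_error(params, seq, pb))

In the study itself, such PB values serve as the quasi-symbols: they are trained jointly with the network weights over the set of behavioral primitives, then read out (e.g., as symbolic sounds) or modulated during interaction.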

Original language: English
Pages (from-to): 188-196
Number of pages: 9
Journal: Transactions of the Japanese Society for Artificial Intelligence
Volume: 20
Issue number: 3
DOIs
Publication status: Published - 2005

Keywords

  • Human-Robot Interaction
  • Motion Primitive
  • Quasi-Symbol
  • RNNPB

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
