Self-organization of object features representing motion using Multiple Timescales Recurrent Neural Network

Shun Nishide, Jun Tani, Hiroshi G. Okuno, Tetsuya Ogata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Affordance theory suggests that humans recognize the environment based on invariants, features of the environment that offer behavioral information to humans. Two types of invariants exist: structural invariants and transformational invariants. In our previous paper, we developed a method that self-organizes transformational invariants, or motion features, from camera images based on the robot's experiences. The model used a bi-directional training technique combining a recurrent neural network for dynamics learning, namely the Recurrent Neural Network with Parametric Bias (RNNPB), with a hierarchical neural network for feature extraction. The bi-directional training method developed in that work was effective in clustering object motions, but the self-organized features (transformational invariants) were not well segregated among different motion types. In this paper, we present a refined model that integrates dynamics learning and feature extraction into a single model. The refined model is based on the Multiple Timescales Recurrent Neural Network (MTRNN), which possesses better learning capability than RNNPB. Self-organization results for four types of motion demonstrated the model's capability to create clusters of object motions. The analysis showed that the model extracted feature sequences with different characteristics for each of the four object motion types.
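The MTRNN named in the abstract is commonly described as a network of leaky-integrator neurons whose layers differ in their time constants: fast-context units track rapid changes while slow-context units retain longer-term structure. The following is a minimal illustrative sketch of that forward dynamics, not the authors' implementation; the layer sizes, time constants, and random weights are assumed for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and time constants (assumed, not from the paper)
n_in, n_fast, n_slow = 4, 8, 3
tau_fast, tau_slow = 2.0, 50.0

# Random connection weights: input + fast + slow feed the fast layer,
# while only fast + slow feed the slow layer (a common MTRNN topology).
W_fast = rng.normal(0.0, 0.1, (n_fast, n_in + n_fast + n_slow))
W_slow = rng.normal(0.0, 0.1, (n_slow, n_fast + n_slow))

def mtrnn_step(x, u_fast, u_slow):
    """One leaky-integrator update: u <- (1 - 1/tau) * u + (1/tau) * (W a)."""
    a_fast, a_slow = np.tanh(u_fast), np.tanh(u_slow)
    pre_fast = W_fast @ np.concatenate([x, a_fast, a_slow])
    pre_slow = W_slow @ np.concatenate([a_fast, a_slow])
    u_fast = (1.0 - 1.0 / tau_fast) * u_fast + pre_fast / tau_fast
    u_slow = (1.0 - 1.0 / tau_slow) * u_slow + pre_slow / tau_slow
    return u_fast, u_slow

# Run the dynamics on a toy periodic input sequence
u_fast, u_slow = np.zeros(n_fast), np.zeros(n_slow)
for t in range(100):
    x = np.sin(0.3 * t) * np.ones(n_in)
    u_fast, u_slow = mtrnn_step(x, u_fast, u_slow)
```

Because of the large time constant, the slow-context states integrate input history gradually; in the paper's setting such slow states are what cluster different object-motion types.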

Original language: English
Title of host publication: 2012 International Joint Conference on Neural Networks, IJCNN 2012
DOI: https://doi.org/10.1109/IJCNN.2012.6252714
Publication status: Published - 2012 Aug 22
Externally published: Yes
Event: 2012 Annual International Joint Conference on Neural Networks, IJCNN 2012, Part of the 2012 IEEE World Congress on Computational Intelligence, WCCI 2012 - Brisbane, QLD, Australia
Duration: 2012 Jun 10 - 2012 Jun 15

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks

Conference

Conference: 2012 Annual International Joint Conference on Neural Networks, IJCNN 2012, Part of the 2012 IEEE World Congress on Computational Intelligence, WCCI 2012
Country: Australia
City: Brisbane, QLD
Period: 12/6/10 - 12/6/15

Keywords

  • Affordance Theory
  • Feature Extraction
  • Recurrent Neural Network

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence


Cite this

Nishide, S., Tani, J., Okuno, H. G., & Ogata, T. (2012). Self-organization of object features representing motion using Multiple Timescales Recurrent Neural Network. In 2012 International Joint Conference on Neural Networks, IJCNN 2012 [6252714] (Proceedings of the International Joint Conference on Neural Networks). https://doi.org/10.1109/IJCNN.2012.6252714