Self-organization of object features representing motion using Multiple Timescales Recurrent Neural Network

Shun Nishide, Jun Tani, Hiroshi G. Okuno, Tetsuya Ogata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Affordance theory suggests that humans recognize the environment based on invariants. Invariants are features of the environment that offer behavioral information to humans. Two types of invariants exist: structural invariants and transformational invariants. In our previous paper, we developed a method that self-organizes transformational invariants, or motion features, from camera images based on the robot's experiences. The model used a bi-directional technique combining a recurrent neural network for dynamics learning, namely the Recurrent Neural Network with Parametric Bias (RNNPB), with a hierarchical neural network for feature extraction. The bi-directional training method developed in that work was effective for clustering object motions, but the self-organized features (transformational invariants) were not well segregated among different motion types. In this paper, we present a refined model that integrates dynamics learning and feature extraction in a single model. The refined model is based on the Multiple Timescales Recurrent Neural Network (MTRNN), which possesses better learning capability than RNNPB. Self-organization results for four types of motions demonstrate the model's capability to create clusters of object motions. The analysis shows that the model extracted feature sequences with different characteristics for the four object motion types.
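The abstract names the MTRNN but gives no equations. An MTRNN is commonly built from leaky-integrator neurons whose time constants differ between a "fast" and a "slow" group, so fast units track rapid input dynamics while slow units encode slowly varying context. The following is a minimal illustrative sketch of that update rule only; it is not the authors' implementation, and all layer sizes, time constants, and the class name `MTRNN` are assumptions for illustration.

```python
import numpy as np

class MTRNN:
    """Illustrative sketch of a Multiple Timescales RNN.

    Each unit i is a leaky integrator with time constant tau_i:
        u_i <- (1 - 1/tau_i) * u_i + (1/tau_i) * (sum_j w_ij * y_j + input_i)
        y_i = tanh(u_i)
    Fast units (small tau) react quickly; slow units (large tau) change
    gradually. Sizes and time constants here are arbitrary examples.
    """

    def __init__(self, n_in=4, n_fast=20, n_slow=5,
                 tau_fast=2.0, tau_slow=50.0, seed=0):
        rng = np.random.default_rng(seed)
        self.n_in = n_in
        self.n = n_in + n_fast + n_slow
        # one time constant per unit; input units are treated as fast
        self.tau = np.concatenate([
            np.full(n_in + n_fast, tau_fast),
            np.full(n_slow, tau_slow),
        ])
        # fully recurrent weight matrix with small random weights
        self.W = rng.normal(0.0, 0.1, (self.n, self.n))

    def run(self, inputs):
        """Roll the network over a sequence of input vectors (T, n_in)."""
        u = np.zeros(self.n)   # membrane potentials
        y = np.tanh(u)         # unit activations
        outputs = []
        for x in inputs:
            syn = self.W @ y
            syn[:self.n_in] += x   # external input drives the input units
            u = (1.0 - 1.0 / self.tau) * u + (1.0 / self.tau) * syn
            y = np.tanh(u)
            outputs.append(y.copy())
        return np.array(outputs)
```

Running the sketch on any input sequence shows the intended timescale separation: per-step activation changes are much smaller in the slow group than in the fast group, which is the property the paper exploits to separate slowly varying context (motion type) from fast sensory dynamics.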

Original language: English
Title of host publication: Proceedings of the International Joint Conference on Neural Networks
DOIs: 10.1109/IJCNN.2012.6252714
Publication status: Published - 2012
Externally published: Yes
Event: 2012 Annual International Joint Conference on Neural Networks, IJCNN 2012, Part of the 2012 IEEE World Congress on Computational Intelligence, WCCI 2012 - Brisbane, QLD
Duration: 2012 Jun 10 – 2012 Jun 15

Other

Other: 2012 Annual International Joint Conference on Neural Networks, IJCNN 2012, Part of the 2012 IEEE World Congress on Computational Intelligence, WCCI 2012
City: Brisbane, QLD
Period: 12/6/10 – 12/6/15

Fingerprint

  • Recurrent neural networks
  • Feature extraction
  • Cameras
  • Robots
  • Neural networks

Keywords

  • Affordance Theory
  • Feature Extraction
  • Recurrent Neural Network

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

Cite this

Self-organization of object features representing motion using Multiple Timescales Recurrent Neural Network. / Nishide, Shun; Tani, Jun; Okuno, Hiroshi G.; Ogata, Tetsuya.

Proceedings of the International Joint Conference on Neural Networks. 2012. 6252714.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Nishide, S, Tani, J, Okuno, HG & Ogata, T 2012, Self-organization of object features representing motion using Multiple Timescales Recurrent Neural Network. in Proceedings of the International Joint Conference on Neural Networks., 6252714, 2012 Annual International Joint Conference on Neural Networks, IJCNN 2012, Part of the 2012 IEEE World Congress on Computational Intelligence, WCCI 2012, Brisbane, QLD, 12/6/10. https://doi.org/10.1109/IJCNN.2012.6252714
Nishide, Shun; Tani, Jun; Okuno, Hiroshi G.; Ogata, Tetsuya. Self-organization of object features representing motion using Multiple Timescales Recurrent Neural Network. Proceedings of the International Joint Conference on Neural Networks. 2012.
@inproceedings{cf167458a34140d0bcdc8e8f2118533e,
title = "Self-organization of object features representing motion using Multiple Timescales Recurrent Neural Network",
abstract = "Affordance theory suggests that humans recognize the environment based on invariants. Invariants are features of the environment that offer behavioral information to humans. Two types of invariants exist: structural invariants and transformational invariants. In our previous paper, we developed a method that self-organizes transformational invariants, or motion features, from camera images based on the robot's experiences. The model used a bi-directional technique combining a recurrent neural network for dynamics learning, namely the Recurrent Neural Network with Parametric Bias (RNNPB), with a hierarchical neural network for feature extraction. The bi-directional training method developed in that work was effective for clustering object motions, but the self-organized features (transformational invariants) were not well segregated among different motion types. In this paper, we present a refined model that integrates dynamics learning and feature extraction in a single model. The refined model is based on the Multiple Timescales Recurrent Neural Network (MTRNN), which possesses better learning capability than RNNPB. Self-organization results for four types of motions demonstrate the model's capability to create clusters of object motions. The analysis shows that the model extracted feature sequences with different characteristics for the four object motion types.",
keywords = "Affordance Theory, Feature Extraction, Recurrent Neural Network",
author = "Shun Nishide and Jun Tani and Okuno, {Hiroshi G.} and Tetsuya Ogata",
year = "2012",
doi = "10.1109/IJCNN.2012.6252714",
language = "English",
isbn = "9781467314909",
booktitle = "Proceedings of the International Joint Conference on Neural Networks",

}

TY - GEN

T1 - Self-organization of object features representing motion using Multiple Timescales Recurrent Neural Network

AU - Nishide, Shun

AU - Tani, Jun

AU - Okuno, Hiroshi G.

AU - Ogata, Tetsuya

PY - 2012

Y1 - 2012

N2 - Affordance theory suggests that humans recognize the environment based on invariants. Invariants are features of the environment that offer behavioral information to humans. Two types of invariants exist: structural invariants and transformational invariants. In our previous paper, we developed a method that self-organizes transformational invariants, or motion features, from camera images based on the robot's experiences. The model used a bi-directional technique combining a recurrent neural network for dynamics learning, namely the Recurrent Neural Network with Parametric Bias (RNNPB), with a hierarchical neural network for feature extraction. The bi-directional training method developed in that work was effective for clustering object motions, but the self-organized features (transformational invariants) were not well segregated among different motion types. In this paper, we present a refined model that integrates dynamics learning and feature extraction in a single model. The refined model is based on the Multiple Timescales Recurrent Neural Network (MTRNN), which possesses better learning capability than RNNPB. Self-organization results for four types of motions demonstrate the model's capability to create clusters of object motions. The analysis shows that the model extracted feature sequences with different characteristics for the four object motion types.

AB - Affordance theory suggests that humans recognize the environment based on invariants. Invariants are features of the environment that offer behavioral information to humans. Two types of invariants exist: structural invariants and transformational invariants. In our previous paper, we developed a method that self-organizes transformational invariants, or motion features, from camera images based on the robot's experiences. The model used a bi-directional technique combining a recurrent neural network for dynamics learning, namely the Recurrent Neural Network with Parametric Bias (RNNPB), with a hierarchical neural network for feature extraction. The bi-directional training method developed in that work was effective for clustering object motions, but the self-organized features (transformational invariants) were not well segregated among different motion types. In this paper, we present a refined model that integrates dynamics learning and feature extraction in a single model. The refined model is based on the Multiple Timescales Recurrent Neural Network (MTRNN), which possesses better learning capability than RNNPB. Self-organization results for four types of motions demonstrate the model's capability to create clusters of object motions. The analysis shows that the model extracted feature sequences with different characteristics for the four object motion types.

KW - Affordance Theory

KW - Feature Extraction

KW - Recurrent Neural Network

UR - http://www.scopus.com/inward/record.url?scp=84865089295&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84865089295&partnerID=8YFLogxK

U2 - 10.1109/IJCNN.2012.6252714

DO - 10.1109/IJCNN.2012.6252714

M3 - Conference contribution

SN - 9781467314909

BT - Proceedings of the International Joint Conference on Neural Networks

ER -