Recurrent neural network architecture with pre-synaptic inhibition for incremental learning

Hiroyuki Ohta, Yukio Gunji

Research output: Contribution to journal › Article

6 Citations (Scopus)

Abstract

We propose a recurrent neural network architecture that is capable of incremental learning, and we test the performance of the network. In incremental learning, the consistency between the existing internal representation and a new sequence is unknown, so it is not appropriate to overwrite the existing internal representation with each new sequence. In the proposed model, the parallel pathways from input to output are preserved as much as possible, and a pathway that has emitted a wrong output is inhibited by the previously fired pathway. Accordingly, the network begins to try other pathways ad hoc. This modeling approach is based on the concept of parallel pathways from input to output, rather than on the view of the brain as an integration of state spaces. We discuss extending this approach to modeling higher functions such as decision making.
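The pathway-switching idea in the abstract can be illustrated with a toy sketch (a hypothetical simplification for intuition only, not the authors' network): each input keeps a bank of candidate pathways to the output; a pathway that emits a wrong output for an input is inhibited for that input, so learning a new mapping recruits other pathways instead of overwriting mappings learned earlier.

```python
class PathwaySelector:
    """Toy model of incremental learning via pathway inhibition.

    Each (input, pathway) pair that produces a wrong output is inhibited,
    and the next uninhibited pathway is tried ad hoc. Mappings learned for
    earlier inputs are never overwritten, only routed around.
    """

    def __init__(self, n_pathways=4):
        self.n_pathways = n_pathways
        self.inhibited = set()  # (input, pathway) pairs suppressed by feedback

    def respond(self, x):
        """Emit the output of the first uninhibited pathway for input x.

        For concreteness, pathway p deterministically emits the letter
        'abcd'[p]; a real network would compute this from its weights.
        """
        for p in range(self.n_pathways):
            if (x, p) not in self.inhibited:
                return p, "abcd"[p]
        return None, None  # every pathway for x is inhibited

    def feedback(self, x, pathway, correct):
        """Inhibit a pathway that emitted a wrong output for input x."""
        if not correct:
            self.inhibited.add((x, pathway))


def learn(selector, x, target, max_tries=10):
    """Try pathways for x until one emits the target, inhibiting failures."""
    for _ in range(max_tries):
        p, out = selector.respond(x)
        if p is None:
            return None  # no pathway left to try
        if out == target:
            return p  # this pathway now reliably maps x -> target
        selector.feedback(x, p, correct=False)
    return None
```

For example, after learning that input `x1` should map to `c` (inhibiting the pathways that emitted `a` and `b` for `x1`), learning a second mapping `x2 -> a` uses a fresh pathway and leaves the `x1` mapping intact. The point of the sketch is the preservation property, not the learning rule itself.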

Original language: English
Pages (from-to): 1106-1119
Number of pages: 14
Journal: Neural Networks
Volume: 19
Issue number: 8
DOI: 10.1016/j.neunet.2006.06.005
Publication status: Published - Oct 2006
Externally published: Yes

Keywords

  • Affection
  • Anticipation
  • Decision making
  • Incremental learning
  • Realization problem
  • Recurrent neural network

ASJC Scopus subject areas

  • Artificial Intelligence
  • Neuroscience (all)

Cite this

Recurrent neural network architecture with pre-synaptic inhibition for incremental learning. / Ohta, Hiroyuki; Gunji, Yukio.

In: Neural Networks, Vol. 19, No. 8, 10.2006, p. 1106-1119.

Research output: Contribution to journal › Article

@article{b4d3f25dd563413ca81c272821c6131a,
title = "Recurrent neural network architecture with pre-synaptic inhibition for incremental learning",
abstract = "We propose a recurrent neural network architecture that is capable of incremental learning and test the performance of the network. In incremental learning, the consistency between the existing internal representation and a new sequence is unknown, so it is not appropriate to overwrite the existing internal representation on each new sequence. In the proposed model, the parallel pathways from input to output are preserved as possible, and the pathway which has emitted the wrong output is inhibited by the previously fired pathway. Accordingly, the network begins to try other pathways ad hoc. This modeling approach is based on the concept of the parallel pathways from input to output, instead of the view of the brain as the integration of the state spaces. We discuss the extension of this approach to building a model of the higher functions such as decision making.",
keywords = "Affection, Anticipation, Decision making, Incremental learning, Realization problem, Recurrent neural network",
author = "Hiroyuki Ohta and Yukio Gunji",
year = "2006",
month = "10",
doi = "10.1016/j.neunet.2006.06.005",
language = "English",
volume = "19",
pages = "1106--1119",
journal = "Neural Networks",
issn = "0893-6080",
publisher = "Elsevier Limited",
number = "8",

}

TY - JOUR

T1 - Recurrent neural network architecture with pre-synaptic inhibition for incremental learning

AU - Ohta, Hiroyuki

AU - Gunji, Yukio

PY - 2006/10

Y1 - 2006/10

N2 - We propose a recurrent neural network architecture that is capable of incremental learning and test the performance of the network. In incremental learning, the consistency between the existing internal representation and a new sequence is unknown, so it is not appropriate to overwrite the existing internal representation on each new sequence. In the proposed model, the parallel pathways from input to output are preserved as possible, and the pathway which has emitted the wrong output is inhibited by the previously fired pathway. Accordingly, the network begins to try other pathways ad hoc. This modeling approach is based on the concept of the parallel pathways from input to output, instead of the view of the brain as the integration of the state spaces. We discuss the extension of this approach to building a model of the higher functions such as decision making.

AB - We propose a recurrent neural network architecture that is capable of incremental learning and test the performance of the network. In incremental learning, the consistency between the existing internal representation and a new sequence is unknown, so it is not appropriate to overwrite the existing internal representation on each new sequence. In the proposed model, the parallel pathways from input to output are preserved as possible, and the pathway which has emitted the wrong output is inhibited by the previously fired pathway. Accordingly, the network begins to try other pathways ad hoc. This modeling approach is based on the concept of the parallel pathways from input to output, instead of the view of the brain as the integration of the state spaces. We discuss the extension of this approach to building a model of the higher functions such as decision making.

KW - Affection

KW - Anticipation

KW - Decision making

KW - Incremental learning

KW - Realization problem

KW - Recurrent neural network

UR - http://www.scopus.com/inward/record.url?scp=33749041444&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=33749041444&partnerID=8YFLogxK

U2 - 10.1016/j.neunet.2006.06.005

DO - 10.1016/j.neunet.2006.06.005

M3 - Article

C2 - 16989983

AN - SCOPUS:33749041444

VL - 19

SP - 1106

EP - 1119

JO - Neural Networks

JF - Neural Networks

SN - 0893-6080

IS - 8

ER -