Recurrent neural network architecture with pre-synaptic inhibition for incremental learning

Hiroyuki Ohta, Yukio Pegio Gunji

Research output: Article › peer-review

7 Citations (Scopus)

Abstract

We propose a recurrent neural network architecture that is capable of incremental learning, and we test the performance of the network. In incremental learning, the consistency between the existing internal representation and a new sequence is unknown, so it is not appropriate to overwrite the existing internal representation with each new sequence. In the proposed model, the parallel pathways from input to output are preserved as far as possible, and a pathway that has emitted a wrong output is inhibited by the previously fired pathway. Accordingly, the network begins to try other pathways ad hoc. This modeling approach is based on the concept of parallel pathways from input to output, rather than the view of the brain as an integration of state spaces. We discuss the extension of this approach to building a model of higher functions such as decision making.
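The selection mechanism described above (keep pathways that worked, inhibit a pathway that emitted a wrong output, then try other pathways ad hoc) can be illustrated with a toy sketch. This is not the paper's actual recurrent network; the class and method names are hypothetical, and "pathways" are reduced to candidate input-to-output mappings purely to show the inhibition-and-retry idea.

```python
class PathwayPool:
    """Toy sketch of pathway selection with inhibition (illustrative only).

    Each 'pathway' is a candidate mapping from an input symbol to an output
    symbol. A pathway that emits a wrong output is inhibited and skipped
    thereafter, so the pool tries other pathways ad hoc instead of
    overwriting representations that worked for earlier sequences.
    """

    def __init__(self, outputs):
        self.outputs = list(outputs)  # possible output symbols
        self.inhibited = {}           # input -> set of inhibited outputs
        self.preferred = {}           # input -> last successful output

    def respond(self, x):
        # Reuse a previously successful pathway if one exists.
        if x in self.preferred:
            return self.preferred[x]
        # Otherwise fire any pathway not yet inhibited for this input.
        blocked = self.inhibited.setdefault(x, set())
        candidates = [o for o in self.outputs if o not in blocked]
        return candidates[0] if candidates else None

    def feedback(self, x, y, correct):
        if correct:
            self.preferred[x] = y  # preserve the successful pathway
        else:
            # Wrong output: inhibit this pathway for input x.
            self.inhibited.setdefault(x, set()).add(y)


# Teach input "a" -> "B" by trial, error, and inhibition.
pool = PathwayPool(outputs="ABC")
while True:
    y = pool.respond("a")
    pool.feedback("a", y, correct=(y == "B"))
    if y == "B":
        break
```

After the loop, the wrong pathway ("a" → "A") stays inhibited while the successful one is preserved, so later inputs reuse it without relearning.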

Original language: English
Pages (from-to): 1106-1119
Number of pages: 14
Journal: Neural Networks
Volume: 19
Issue number: 8
DOI
Publication status: Published - 1 Oct 2006
Externally published: Yes

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Artificial Intelligence

