A brainlike learning system with supervised, unsupervised, and reinforcement learning

Takafumi Sasakawa, Takayuki Furuzuki, Kotaro Hirasawa

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

According to Hebb's cell assembly theory, the brain has the capability of function localization. On the other hand, it is suggested that there are three different learning paradigms in the brain: supervised, unsupervised, and reinforcement learning, which are deeply related to three parts of the brain: the cerebellum, cerebral cortex, and basal ganglia, respectively. Inspired by this knowledge of the brain, in this paper we present a brainlike learning system consisting of three parts: a supervised learning (SL) part, an unsupervised learning (UL) part, and a reinforcement learning (RL) part. The SL part is the main part, learning the input-output mapping; the UL part is a competitive network that divides the input space into subspaces and realizes function localization by controlling the firing strength of neurons in the SL part according to input patterns; the RL part is a reinforcement learning scheme that optimizes system performance by adjusting the parameters in the UL part. Numerical simulations have been carried out, and the results confirm the effectiveness of the proposed brainlike learning system.
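The three-part architecture described in the abstract can be pictured as a gated network: a competitive UL layer partitions the input space and gates the neurons of a supervised network, while an outer RL loop tunes the UL parameters by overall performance. The sketch below is only an illustration of that idea, not the paper's implementation: the Gaussian gating, the least-squares SL readout, and the hill-climbing stand-in for the RL scheme are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task where function localization helps: a piecewise target whose
# behavior differs across subspaces of the input.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.where(X[:, 0] < 0, np.sin(3 * X[:, 0]), X[:, 0] ** 2)


class BrainlikeSketch:
    def __init__(self, n_units, dim):
        # UL part: competitive units with centers and widths that divide
        # the input space into subspaces (hypothetical parameterization).
        self.centers = rng.uniform(-1, 1, size=(n_units, dim))
        self.widths = np.full(n_units, 0.5)
        # SL part: one readout weight per gated neuron (RBF-like stand-in).
        self.w = np.zeros(n_units)

    def gate(self, X):
        # Firing strength of each SL neuron, controlled by the UL part
        # according to the input pattern.
        d2 = ((X[:, None, :] - self.centers[None]) ** 2).sum(-1)
        g = np.exp(-d2 / (2 * self.widths**2))
        return g / g.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.gate(X) @ self.w

    def fit_sl(self, X, y):
        # Supervised learning of the input-output mapping: least squares
        # on the gated activations.
        self.w, *_ = np.linalg.lstsq(self.gate(X), y, rcond=None)

    def mse(self, X, y):
        return float(np.mean((self.predict(X) - y) ** 2))


model = BrainlikeSketch(n_units=8, dim=1)
model.fit_sl(X, y)
err0 = model.mse(X, y)
err = err0

# "RL" part (stand-in): randomly perturb the UL parameters and keep the
# perturbation only when overall performance improves -- a simple hill
# climber in place of the paper's reinforcement learning scheme.
for step in range(100):
    old = model.centers.copy(), model.widths.copy()
    model.centers += rng.normal(0, 0.05, model.centers.shape)
    model.widths = np.clip(
        model.widths + rng.normal(0, 0.02, model.widths.shape), 0.1, 1.0
    )
    model.fit_sl(X, y)
    new_err = model.mse(X, y)
    if new_err < err:
        err = new_err
    else:
        model.centers, model.widths = old
        model.fit_sl(X, y)
```

The division of labor mirrors the abstract: `gate` plays the role of the UL part, `fit_sl` the SL part, and the accept/reject loop the RL part adjusting the UL parameters.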

Original language: English
Pages (from-to): 32-39
Number of pages: 8
Journal: Electrical Engineering in Japan (English translation of Denki Gakkai Ronbunshi)
Volume: 162
Issue number: 1
DOI: 10.1002/eej.20600
Publication status: Published - 2008 Jan 15

Keywords

  • Brainlike model
  • Neural networks
  • Reinforcement learning
  • Supervised learning
  • Unsupervised learning

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

Cite this

A brainlike learning system with supervised, unsupervised, and reinforcement learning. / Sasakawa, Takafumi; Furuzuki, Takayuki; Hirasawa, Kotaro.

In: Electrical Engineering in Japan (English translation of Denki Gakkai Ronbunshi), Vol. 162, No. 1, 15.01.2008, p. 32-39.

Research output: Contribution to journal › Article

@article{18e0e4fb2557461aacd7b1e83c9b1a6f,
title = "A brainlike learning system with supervised, unsupervised, and reinforcement learning",
abstract = "According to Hebb's cell assembly theory, the brain has the capability of function localization. On the other hand, it is suggested that there are three different learning paradigms in the brain: supervised, unsupervised, and reinforcement learning, which are deeply related to three parts of the brain: the cerebellum, cerebral cortex, and basal ganglia, respectively. Inspired by this knowledge of the brain, in this paper we present a brainlike learning system consisting of three parts: a supervised learning (SL) part, an unsupervised learning (UL) part, and a reinforcement learning (RL) part. The SL part is the main part, learning the input-output mapping; the UL part is a competitive network that divides the input space into subspaces and realizes function localization by controlling the firing strength of neurons in the SL part according to input patterns; the RL part is a reinforcement learning scheme that optimizes system performance by adjusting the parameters in the UL part. Numerical simulations have been carried out, and the results confirm the effectiveness of the proposed brainlike learning system.",
keywords = "Brainlike model, Neural networks, Reinforcement learning, Supervised learning, Unsupervised learning",
author = "Takafumi Sasakawa and Takayuki Furuzuki and Kotaro Hirasawa",
year = "2008",
month = "1",
day = "15",
doi = "10.1002/eej.20600",
language = "English",
volume = "162",
pages = "32--39",
journal = "Electrical Engineering in Japan (English translation of Denki Gakkai Ronbunshi)",
issn = "0424-7760",
publisher = "John Wiley and Sons Inc.",
number = "1",

}

TY - JOUR

T1 - A brainlike learning system with supervised, unsupervised, and reinforcement learning

AU - Sasakawa, Takafumi

AU - Furuzuki, Takayuki

AU - Hirasawa, Kotaro

PY - 2008/1/15

Y1 - 2008/1/15

N2 - According to Hebb's cell assembly theory, the brain has the capability of function localization. On the other hand, it is suggested that there are three different learning paradigms in the brain: supervised, unsupervised, and reinforcement learning, which are deeply related to three parts of the brain: the cerebellum, cerebral cortex, and basal ganglia, respectively. Inspired by this knowledge of the brain, in this paper we present a brainlike learning system consisting of three parts: a supervised learning (SL) part, an unsupervised learning (UL) part, and a reinforcement learning (RL) part. The SL part is the main part, learning the input-output mapping; the UL part is a competitive network that divides the input space into subspaces and realizes function localization by controlling the firing strength of neurons in the SL part according to input patterns; the RL part is a reinforcement learning scheme that optimizes system performance by adjusting the parameters in the UL part. Numerical simulations have been carried out, and the results confirm the effectiveness of the proposed brainlike learning system.

AB - According to Hebb's cell assembly theory, the brain has the capability of function localization. On the other hand, it is suggested that there are three different learning paradigms in the brain: supervised, unsupervised, and reinforcement learning, which are deeply related to three parts of the brain: the cerebellum, cerebral cortex, and basal ganglia, respectively. Inspired by this knowledge of the brain, in this paper we present a brainlike learning system consisting of three parts: a supervised learning (SL) part, an unsupervised learning (UL) part, and a reinforcement learning (RL) part. The SL part is the main part, learning the input-output mapping; the UL part is a competitive network that divides the input space into subspaces and realizes function localization by controlling the firing strength of neurons in the SL part according to input patterns; the RL part is a reinforcement learning scheme that optimizes system performance by adjusting the parameters in the UL part. Numerical simulations have been carried out, and the results confirm the effectiveness of the proposed brainlike learning system.

KW - Brainlike model

KW - Neural networks

KW - Reinforcement learning

KW - Supervised learning

KW - Unsupervised learning

UR - http://www.scopus.com/inward/record.url?scp=35348998582&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=35348998582&partnerID=8YFLogxK

U2 - 10.1002/eej.20600

DO - 10.1002/eej.20600

M3 - Article

VL - 162

SP - 32

EP - 39

JO - Electrical Engineering in Japan (English translation of Denki Gakkai Ronbunshi)

JF - Electrical Engineering in Japan (English translation of Denki Gakkai Ronbunshi)

SN - 0424-7760

IS - 1

ER -