Online Learning of Genetic Network Programming and its Application to Prisoner's Dilemma Game

Shingo Mabu, Takayuki Furuzuki, Junichi Murata, Kotaro Hirasawa

Research output: Contribution to journal › Article

5 Citations (Scopus)

Abstract

A new evolutionary model with a network structure, named Genetic Network Programming (GNP), has been proposed recently. GNP, an extension of GA and GP, represents solutions as a network structure and evolves them by "offline learning" (selection, mutation, crossover). GNP can memorize past action sequences in its network flow, so it can deal well with Partially Observable Markov Decision Processes (POMDPs). In this paper, in order to improve the ability of GNP, Q-learning, an off-policy TD control algorithm and one of the best-known online methods, is introduced for online learning of GNP. Q-learning is suitable for GNP because (1) in reinforcement learning, the rewards an agent will receive in the future can be estimated, (2) TD control does not need much memory and can learn quickly, and (3) an off-policy method can search for an optimal solution independently of the policy being followed. Finally, in the simulations, online learning of GNP is applied to a player for the Prisoner's Dilemma game, and its ability for online adaptation is confirmed.
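The paper integrates the Q-learning update into GNP's network nodes; that integration is not shown here. As a hedged illustration of the off-policy TD control the abstract refers to, the sketch below applies plain tabular Q-learning to an iterated Prisoner's Dilemma against a hypothetical tit-for-tat opponent, with the agent's previous move as the state. All names, payoffs, and parameters are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch only: tabular Q-learning on the iterated Prisoner's
# Dilemma. The paper embeds Q-learning inside GNP's node transitions; this
# stand-alone example shows just the off-policy TD update itself.
import random

ACTIONS = ("C", "D")                       # cooperate / defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,    # standard PD payoffs for the row player
          ("D", "C"): 5, ("D", "D"): 1}

def q_learn(steps=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # State = our previous move; a tit-for-tat opponent mirrors it this round.
    Q = {(s, a): 0.0 for s in ACTIONS for a in ACTIONS}
    s = "C"
    for _ in range(steps):
        # Epsilon-greedy behaviour policy.
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        opp = s                            # tit-for-tat repeats our last move
        r = PAYOFF[(a, opp)]
        s2 = a
        # Off-policy TD update: bootstrap with the greedy value of the
        # next state, independently of the epsilon-greedy behaviour policy.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2
    return Q

if __name__ == "__main__":
    Q = q_learn()
    for key in sorted(Q):
        print(key, round(Q[key], 2))
```

Against tit-for-tat with this discount factor, sustained cooperation outvalues defection, so the learned Q-values come to prefer "C" in the cooperative state; this illustrates points (2) and (3) of the abstract, since the table is tiny and the greedy target is independent of the exploration policy.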

Original language: English
Pages (from-to): 535-543
Number of pages: 9
Journal: IEEJ Transactions on Electronics, Information and Systems
Volume: 123
Issue number: 3
DOIs
Publication status: Published - 2003
Externally published: Yes

Keywords

  • Genetic Algorithm
  • Genetic Programming
  • Network Structure
  • Online learning
  • Prisoner's dilemma game
  • Q learning

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
