### Abstract

A novel graph-based Estimation of Distribution Algorithm (EDA) named Probabilistic Model Building Genetic Network Programming (PMBGNP) has been proposed. Like classical EDAs, PMBGNP memorizes the current best individuals and uses them to estimate a probability distribution from which the new population is generated. However, PMBGNP represents its solutions as graph structures, which allows it to evolve compact programs and to solve problems beyond those conventionally addressed in the EDA literature, such as data mining and Reinforcement Learning (RL) problems. This paper extends PMBGNP from discrete to continuous search spaces; the extended algorithm is named PMBGNP-AC. In addition to evolving the node connections that determine the optimal graph structures, as in conventional PMBGNP, a Gaussian distribution models each continuous node variable. The mean μ and standard deviation σ are constructed as in classical continuous Population-Based Incremental Learning (PBILc); however, an RL technique, Actor-Critic (AC), is designed to update these parameters. AC computes a Temporal-Difference (TD) error that evaluates whether the selected continuous value performed better or worse than expected. This scalar reinforcement signal decides whether the tendency to select that value should be strengthened or weakened, thereby shaping the probability density functions of the Gaussian distribution. The proposed algorithm is applied to an RL problem, autonomous robot control, where the robot's wheel speeds and sensor values are continuous. The experimental results show the superiority of PMBGNP-AC compared with conventional algorithms.
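The abstract's core mechanism — a Gaussian per continuous node variable whose μ and σ are nudged up or down by the critic's TD error — can be sketched roughly as follows. This is a minimal illustration, not the paper's exact update equations; the class name, update form, and learning rate are assumptions.

```python
import random

class GaussianNodeParam:
    """Illustrative sketch: one continuous node variable modeled as a
    Gaussian N(mu, sigma), updated by an actor-critic TD error."""

    def __init__(self, mu=0.0, sigma=1.0, alpha=0.1):
        self.mu = mu        # mean of the node's Gaussian
        self.sigma = sigma  # standard deviation (exploration width)
        self.alpha = alpha  # learning rate (assumed value)

    def sample(self):
        # Actor: draw a concrete continuous value for the node.
        return random.gauss(self.mu, self.sigma)

    def update(self, value, td_error):
        # The critic's TD error signals whether `value` turned out better
        # (positive) or worse (negative) than expected: shift mu toward or
        # away from the sampled value, and widen or narrow sigma, in
        # proportion to that scalar reinforcement signal.
        self.mu += self.alpha * td_error * (value - self.mu)
        self.sigma += self.alpha * td_error * (abs(value - self.mu) - self.sigma)
        self.sigma = max(self.sigma, 1e-3)  # keep a minimum exploration width
```

A positive TD error pulls μ toward the sampled value and tightens σ around it; a negative TD error pushes the distribution away, preserving exploration.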

| Original language | English |
| --- | --- |
| Title of host publication | 2012 IEEE Congress on Evolutionary Computation, CEC 2012 |
| DOIs | 10.1109/CEC.2012.6256481 |
| Publication status | Published - 2012 |
| Event | 2012 IEEE Congress on Evolutionary Computation, CEC 2012 - Brisbane, QLD. Duration: 2012 Jun 10 → 2012 Jun 15 |

### Other

| Other | 2012 IEEE Congress on Evolutionary Computation, CEC 2012 |
| --- | --- |
| City | Brisbane, QLD |
| Period | 2012 Jun 10 → 2012 Jun 15 |


### ASJC Scopus subject areas

- Computational Theory and Mathematics
- Theoretical Computer Science

### Cite this

Li, X., Li, B., Mabu, S., & Hirasawa, K. (2012). A continuous estimation of distribution algorithm by evolving graph structures using reinforcement learning. In *2012 IEEE Congress on Evolutionary Computation, CEC 2012* [6256481]. https://doi.org/10.1109/CEC.2012.6256481

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - A continuous estimation of distribution algorithm by evolving graph structures using reinforcement learning

AU - Li, Xianneng

AU - Li, Bing

AU - Mabu, Shingo

AU - Hirasawa, Kotaro

PY - 2012

Y1 - 2012

AB - A novel graph-based Estimation of Distribution Algorithm (EDA) named Probabilistic Model Building Genetic Network Programming (PMBGNP) has been proposed. Like classical EDAs, PMBGNP memorizes the current best individuals and uses them to estimate a probability distribution from which the new population is generated. However, PMBGNP represents its solutions as graph structures, which allows it to evolve compact programs and to solve problems beyond those conventionally addressed in the EDA literature, such as data mining and Reinforcement Learning (RL) problems. This paper extends PMBGNP from discrete to continuous search spaces; the extended algorithm is named PMBGNP-AC. In addition to evolving the node connections that determine the optimal graph structures, as in conventional PMBGNP, a Gaussian distribution models each continuous node variable. The mean μ and standard deviation σ are constructed as in classical continuous Population-Based Incremental Learning (PBILc); however, an RL technique, Actor-Critic (AC), is designed to update these parameters. AC computes a Temporal-Difference (TD) error that evaluates whether the selected continuous value performed better or worse than expected. This scalar reinforcement signal decides whether the tendency to select that value should be strengthened or weakened, thereby shaping the probability density functions of the Gaussian distribution. The proposed algorithm is applied to an RL problem, autonomous robot control, where the robot's wheel speeds and sensor values are continuous. The experimental results show the superiority of PMBGNP-AC compared with conventional algorithms.

UR - http://www.scopus.com/inward/record.url?scp=84866856884&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84866856884&partnerID=8YFLogxK

U2 - 10.1109/CEC.2012.6256481

DO - 10.1109/CEC.2012.6256481

M3 - Conference contribution

SN - 9781467315098

BT - 2012 IEEE Congress on Evolutionary Computation, CEC 2012

ER -