### Abstract

The learning process consists of observation and inference. It is generally understood that inference involves internal choice; yet, while this internal process is not explicitly expressed, many brain models write the internal choice down explicitly as a sorting of variants. Determining what the learning process is amounts to answering whether the origin of variants in variation and selection is a well-defined question. The issue is not whether a sorting process can be found in the brain, but whether internal choice can be replaced by a sorting of variants in programmable systems. Here we examine this type of question and formalize internal choice in another way. In our model, the learning process is communication among the elements of a system, in which each element learns the behavior of the other elements through observation. Observation, however, is incomplete, owing to the finite velocity with which observations propagate. Incomplete identification (observation) is formalized here not by "variation and selection" but by a posteriori decision change, which introduces backward time. In our model we demonstrate that a posteriori misreading generates information that can, in turn, generate novelty.
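The mechanism the abstract gestures at — elements observing one another with a finite propagation delay, making provisional decisions, and revising them a posteriori once delayed observations arrive — can be illustrated with a toy simulation. This is not the paper's formalism; the dynamics, the delay value, and all names below are illustrative assumptions.

```python
# Toy sketch (not the authors' model): element B keeps a revisable record
# of element A's states. Observations reach B only after a fixed delay,
# so B first guesses, then rewrites its past record when the delayed
# observation shows the guess was a "misreading".

DELAY = 2  # steps an observation takes to propagate (assumption)

def simulate(steps=8):
    state_a = [t % 3 for t in range(steps)]  # arbitrary dynamics for A
    record_b = []                            # B's revisable record of A
    revisions = 0
    for t in range(steps):
        # provisional decision: B guesses A repeats its last recorded state
        guess = record_b[-1] if record_b else 0
        record_b.append(guess)
        # the observation of time t - DELAY arrives only now
        if t >= DELAY:
            past = t - DELAY
            if record_b[past] != state_a[past]:
                record_b[past] = state_a[past]  # decision change a posteriori
                revisions += 1                  # a corrected misreading
    return state_a, record_b, revisions

states, record, revisions = simulate()
print(record)     # → [0, 1, 2, 0, 1, 2, 0, 0]
print(revisions)  # → 4
```

Note that the final `DELAY` entries of B's record are never checked against A's true states — the identification stays incomplete at the moving edge of the present, which is the situation the abstract attributes to finite observation velocity.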

Original language | English
---|---
Pages (from-to) | 219-253
Number of pages | 35
Journal | Applied Mathematics and Computation
Volume | 55
Issue number | 2-3
DOIs | https://doi.org/10.1016/0096-3003(93)90023-8
Publication status | Published - 1993
Externally published | Yes


### ASJC Scopus subject areas

- Applied Mathematics
- Computational Mathematics
- Numerical Analysis

### Cite this

Research output: Contribution to journal › Article

Gunji, Y., Shinohara, S., & Konno, N. (1993). Learning processes based on incomplete identification and information generation^{1}. *Applied Mathematics and Computation*, *55*(2-3), 219-253. https://doi.org/10.1016/0096-3003(93)90023-8

^{1} We thank K. Matsuno, K. Ito, and T. Nakamura for various discussions and suggestions. We also thank T. Hirabayashi for drawing some figures.

TY - JOUR

T1 - Learning processes based on incomplete identification and information generation

AU - Gunji, Yukio

AU - Shinohara, Shuji

AU - Konno, Norio

PY - 1993

Y1 - 1993

N2 - The learning process consists of observation and inference. It is generally understood that inference involves internal choice; yet, while this internal process is not explicitly expressed, many brain models write the internal choice down explicitly as a sorting of variants. Determining what the learning process is amounts to answering whether the origin of variants in variation and selection is a well-defined question. The issue is not whether a sorting process can be found in the brain, but whether internal choice can be replaced by a sorting of variants in programmable systems. Here we examine this type of question and formalize internal choice in another way. In our model, the learning process is communication among the elements of a system, in which each element learns the behavior of the other elements through observation. Observation, however, is incomplete, owing to the finite velocity with which observations propagate. Incomplete identification (observation) is formalized here not by "variation and selection" but by a posteriori decision change, which introduces backward time. In our model we demonstrate that a posteriori misreading generates information that can, in turn, generate novelty.

AB - The learning process consists of observation and inference. It is generally understood that inference involves internal choice; yet, while this internal process is not explicitly expressed, many brain models write the internal choice down explicitly as a sorting of variants. Determining what the learning process is amounts to answering whether the origin of variants in variation and selection is a well-defined question. The issue is not whether a sorting process can be found in the brain, but whether internal choice can be replaced by a sorting of variants in programmable systems. Here we examine this type of question and formalize internal choice in another way. In our model, the learning process is communication among the elements of a system, in which each element learns the behavior of the other elements through observation. Observation, however, is incomplete, owing to the finite velocity with which observations propagate. Incomplete identification (observation) is formalized here not by "variation and selection" but by a posteriori decision change, which introduces backward time. In our model we demonstrate that a posteriori misreading generates information that can, in turn, generate novelty.

UR - http://www.scopus.com/inward/record.url?scp=38249002612&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=38249002612&partnerID=8YFLogxK

U2 - 10.1016/0096-3003(93)90023-8

DO - 10.1016/0096-3003(93)90023-8

M3 - Article

AN - SCOPUS:38249002612

VL - 55

SP - 219

EP - 253

JO - Applied Mathematics and Computation

JF - Applied Mathematics and Computation

SN - 0096-3003

IS - 2-3

ER -