Encoding longer-term contextual multi-modal information in a predictive coding model

Junpei Zhong, Tetsuya Ogata, Angelo Cangelosi

Research output: Contribution to journal › Article › peer-review

Abstract

Studies suggest that, within a hierarchical architecture, the topologically higher levels may represent a conscious category of the current sensory events through slower-changing activities. These levels attempt to predict the activities of the lower levels by relaying predicted information downward. Conversely, incoming sensory information corrects the higher levels' prediction of events via novel or surprising signals. We propose a predictive hierarchical artificial neural network model, based on the AFA-PredNet model, that examines this hypothesis on neurorobotic platforms. In this neural network model, different temporal scales of prediction exist at different levels of the hierarchical predictive coding, defined by the temporal parameters of the neurons. Furthermore, both the fast- and the slow-changing neural activities are modulated by active motor activities. A neurorobotic experiment based on this architecture was conducted using data collected from the V-REP simulator.
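The layer-specific timescales described above are commonly realized with leaky-integrator (CTRNN/MTRNN-style) neurons, where a larger time constant yields slower-changing activity. The sketch below illustrates this mechanism only; the layer sizes, weights, time constants, and input are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrnn_step(u, inp, W, tau):
    """One leaky-integrator update: the internal state u moves toward the
    weighted input at a rate 1/tau, so a larger time constant tau gives
    slower-changing activity (the mechanism behind layer-wise timescales)."""
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ np.tanh(inp))

n = 4                                   # toy layer width (assumption)
W_fast = 0.1 * rng.normal(size=(n, n))  # lower (fast) layer weights
W_slow = 0.1 * rng.normal(size=(n, n))  # higher (slow) layer weights
u_fast, u_slow = np.zeros(n), np.zeros(n)
tau_fast, tau_slow = 2.0, 30.0          # illustrative time constants

fast_drift, slow_drift = 0.0, 0.0       # accumulated step-to-step change
for t in range(200):
    x = np.sin(0.3 * t) * np.ones(n)    # toy periodic sensory input
    u_fast_new = ctrnn_step(u_fast, x, W_fast, tau_fast)
    u_slow_new = ctrnn_step(u_slow, u_fast, W_slow, tau_slow)
    fast_drift += np.abs(u_fast_new - u_fast).sum()
    slow_drift += np.abs(u_slow_new - u_slow).sum()
    u_fast, u_slow = u_fast_new, u_slow_new
```

Running this, the slow layer accumulates far less step-to-step change than the fast layer, mirroring the paper's premise that higher levels encode longer-term context with slower-changing activities.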

Original language: English
Journal: Unknown Journal
Publication status: Published - 2018 Apr 17

ASJC Scopus subject areas

  • General

