Disentanglement in conceptual space during sensorimotor interaction

Junpei Zhong*, Tetsuya Ogata, Angelo Cangelosi, Chenguang Yang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)


The disentanglement of distinct object properties from the external world is a foundation of language development for agents. The basic aim of this process is to summarise common natural properties and then to name them so that those properties can be described in the future. To this end, a new learning model is introduced for the disentanglement of several sensorimotor concepts (e.g. the sizes, colours and shapes of objects) while their causal relationships are learnt during interaction, without much a priori experience or external instruction. The model links predictive deep neural models with the variational auto-encoder (VAE), making it possible for independent concepts to be extracted and disentangled from both perception and action. Such extractions are further learnt by the VAE, which memorises their common statistical features. The authors examine this model in an affordance-learning setting, where a robot learns to disentangle the shapes of tools and objects. The results show that this process is reflected in the neural activities of the β-VAE units, indicating that similar VAE models are a promising way to learn such concepts and thereby the causal relationships of sensorimotor interaction.
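As a rough illustration of the β-VAE mechanism the abstract refers to, the objective below is a minimal sketch: a reconstruction term plus a KL divergence between the diagonal-Gaussian posterior and a standard-normal prior, scaled by β. All function and variable names here are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Sketch of the beta-VAE objective: reconstruction error plus a
    beta-weighted KL divergence between the diagonal-Gaussian posterior
    N(mu, sigma^2) and the standard-normal prior. Setting beta > 1
    pressures the latent dimensions toward statistical independence,
    which is what yields disentangled concept units."""
    # Squared-error reconstruction term, summed over features,
    # averaged over the batch.
    recon = np.sum((x - x_recon) ** 2, axis=1).mean()
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent
    # dimensions, averaged over the batch.
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1).mean()
    return recon + beta * kl

# Toy usage: a perfect reconstruction with a posterior equal to the
# prior (mu = 0, log_var = 0) incurs zero loss.
x = np.zeros((2, 3))
loss = beta_vae_loss(x, x, np.zeros((2, 4)), np.zeros((2, 4)))
```

Raising β trades reconstruction fidelity for a stronger independence pressure on the latent code, which is the knob the β-VAE literature associates with disentanglement.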

Original language: English
Pages (from-to): 103-112
Number of pages: 10
Journal: Cognitive Computation and Systems
Issue number: 4
Publication status: Published - Dec 2019

ASJC Scopus subject areas

  • Experimental and Cognitive Psychology
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence


