TY - GEN
T1 - End-to-End Visuomotor Learning of Drawing Sequences using Recurrent Neural Networks
AU - Sasaki, Kazuma
AU - Ogata, Tetsuya
N1 - Funding Information:
This work was supported by JST CREST Grant Number: JPMJCR15E3, MEXT Grant-in-Aid for Scientific Research (A), No. 15H01710, and The Program for Leading Graduate Schools, “Graduate Program for Embodiment Informatics” of the Ministry of Education, Culture, Sports, Science, and Technology.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/10/10
Y1 - 2018/10/10
N2 - Drawing is one of the complex cognitive abilities of humans. Cognitive neuropsychological studies have attempted to develop models that explain observations of drawing behavior. These models are limited in reproducing drawing behaviors because of individual factors related to drawing style and the non-reproducibility of motions. A constructive approach provides another methodology for investigating such complex systems by building models that can reproducibly replicate the behaviors. In this study, we focus on the ability to reuse integrated visuomotor memory of drawing to associate a drawing motion from an image. Existing computational models of drawing have not considered the visual information in hand-drawn pictures. Therefore, we propose a dynamical model of the visuomotor process of drawing. The proposed model does not require any prior knowledge of the process, such as pre-designed shape primitives or image processing algorithms. The model is implemented with a recurrent neural network that learns the visuomotor transitions of the drawing process. The drawing motion is associated from a given image by reusing the acquired memory and minimizing the image prediction error. In simulator experiments, the proposed model demonstrates this association ability for pictures composed of multiple lines.
AB - Drawing is one of the complex cognitive abilities of humans. Cognitive neuropsychological studies have attempted to develop models that explain observations of drawing behavior. These models are limited in reproducing drawing behaviors because of individual factors related to drawing style and the non-reproducibility of motions. A constructive approach provides another methodology for investigating such complex systems by building models that can reproducibly replicate the behaviors. In this study, we focus on the ability to reuse integrated visuomotor memory of drawing to associate a drawing motion from an image. Existing computational models of drawing have not considered the visual information in hand-drawn pictures. Therefore, we propose a dynamical model of the visuomotor process of drawing. The proposed model does not require any prior knowledge of the process, such as pre-designed shape primitives or image processing algorithms. The model is implemented with a recurrent neural network that learns the visuomotor transitions of the drawing process. The drawing motion is associated from a given image by reusing the acquired memory and minimizing the image prediction error. In simulator experiments, the proposed model demonstrates this association ability for pictures composed of multiple lines.
UR - http://www.scopus.com/inward/record.url?scp=85056550133&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85056550133&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2018.8489744
DO - 10.1109/IJCNN.2018.8489744
M3 - Conference contribution
AN - SCOPUS:85056550133
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2018 International Joint Conference on Neural Networks, IJCNN 2018 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 International Joint Conference on Neural Networks, IJCNN 2018
Y2 - 8 July 2018 through 13 July 2018
ER -