Visual motor integration of robot's drawing behavior using recurrent neural network

Kazuma Sasaki, Kuniaki Noda, Tetsuya Ogata

    Research output: Contribution to journal › Article

    7 Citations (Scopus)

    Abstract

    Drawing is a way of visually expressing our feelings, knowledge, and situation. People draw pictures to share information with other human beings. This study investigates visuomotor memory (VM), a reusable memory that stores drawing behavioral data. We propose a neural-network-based model for acquiring a computational memory that can replicate VM through self-organized learning of a robot's actual drawing experiences. In designing the model, we assume that VM has two characteristics: (1) it is formed by bottom-up learning and integration of temporal sequences of drawn pictures and motion data, and (2) it allows observers to associate drawing motions from pictures. The proposed model comprises a deep neural network that dimensionally compresses temporal sequences of drawn images and a continuous-time recurrent neural network that jointly learns drawing motions and the drawn images. Two experiments on unicursal shape learning investigate whether the model can learn this function without any prior shape information for visual processing. In the first experiment, the model learns 15 drawing sequences for three types of pictures, acquiring associative memory for drawing motions through the bottom-up learning process; it can therefore associate drawing motions from untrained drawn images. In the second experiment, four types of pictures are trained, with four distorted variations per type. Here, the model organizes the different shapes by their distortions, utilizing both the image information and the drawing motions, even when visual characteristics are not shared.
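    The abstract describes a continuous-time recurrent neural network (CTRNN) integrating compressed image features with motion data. The sketch below is only a generic Euler-discretized CTRNN update step under assumed dimensions and weight initializations; all names and sizes are hypothetical and this is not the authors' implementation.

    ```python
    import numpy as np

    # Minimal CTRNN sketch: one Euler step of
    #   tau * dh/dt = -h + W_rec @ tanh(h) + W_in @ x
    # Sizes and weights are illustrative, not taken from the paper.
    rng = np.random.default_rng(0)

    n_hidden, n_input = 32, 10          # hypothetical dimensions
    tau, dt = 2.0, 1.0                  # time constant and integration step
    W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
    W_in = rng.normal(0.0, 0.1, (n_hidden, n_input))

    def ctrnn_step(h, x):
        """Advance the hidden state h by one Euler step given input x."""
        return h + (dt / tau) * (-h + W_rec @ np.tanh(h) + W_in @ x)

    h = np.zeros(n_hidden)
    x = rng.normal(size=n_input)        # e.g. an image feature + motion vector
    h = ctrnn_step(h, x)
    print(h.shape)                      # → (32,)
    ```

    In the paper's setting, x would be the concatenation of a dimensionally compressed drawn-image feature and the motion data at each time step, so the recurrent state integrates both modalities.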

    Original language: English
    Pages (from-to): 184-195
    Number of pages: 12
    Journal: Robotics and Autonomous Systems
    Volume: 86
    DOI: 10.1016/j.robot.2016.08.022
    Publication status: Published - 2016 Dec 1


    Keywords

    • Deep learning
    • Drawing ability
    • Drawing robot

    ASJC Scopus subject areas

    • Control and Systems Engineering
    • Software
    • Mathematics (all)
    • Computer Science Applications

    Cite this

    Visual motor integration of robot's drawing behavior using recurrent neural network. / Sasaki, Kazuma; Noda, Kuniaki; Ogata, Tetsuya.

    In: Robotics and Autonomous Systems, Vol. 86, 01.12.2016, p. 184-195.


    @article{b1dffa17992a448f8d52fc220ec94c7c,
      title     = "Visual motor integration of robot's drawing behavior using recurrent neural network",
      author    = "Kazuma Sasaki and Kuniaki Noda and Tetsuya Ogata",
      keywords  = "Deep learning, Drawing ability, Drawing robot",
      year      = "2016",
      month     = "12",
      day       = "1",
      doi       = "10.1016/j.robot.2016.08.022",
      language  = "English",
      volume    = "86",
      pages     = "184--195",
      journal   = "Robotics and Autonomous Systems",
      issn      = "0921-8890",
      publisher = "Elsevier",
    }

    TY  - JOUR
    T1  - Visual motor integration of robot's drawing behavior using recurrent neural network
    AU  - Sasaki, Kazuma
    AU  - Noda, Kuniaki
    AU  - Ogata, Tetsuya
    PY  - 2016/12/1
    Y1  - 2016/12/1
    KW  - Deep learning
    KW  - Drawing ability
    KW  - Drawing robot
    UR  - http://www.scopus.com/inward/record.url?scp=84992694601&partnerID=8YFLogxK
    UR  - http://www.scopus.com/inward/citedby.url?scp=84992694601&partnerID=8YFLogxK
    U2  - 10.1016/j.robot.2016.08.022
    DO  - 10.1016/j.robot.2016.08.022
    M3  - Article
    VL  - 86
    SP  - 184
    EP  - 195
    JO  - Robotics and Autonomous Systems
    JF  - Robotics and Autonomous Systems
    SN  - 0921-8890
    ER  -