Application of hybrid learning strategy for manipulator robot

Shingo Nakamura, Shuji Hashimoto

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    Generally, bottom-up learning approaches such as neural networks are used to obtain an optimal controller for a target task on a mechanical system. However, they require a huge number of trials, which takes much time and puts stress on the hardware. To avoid these issues, a simulator is often built and combined with a learning method, but this raises the further questions of how the simulator is constructed and how accurately it reproduces the real system. In this study, we consider constructing a simulator directly from data sampled on the actual robot. The constructed simulator is then used to learn the target task, and the resulting optimal controller is applied to the actual robot. In this work, we use a five-link manipulator robot and give it the task of tracking a ball. The simulator is constructed with neural networks trained by the back-propagation method, and the optimal controller is obtained by reinforcement learning. Both processes are carried out without the actual robot after the data sampling; therefore, the load on the hardware becomes much smaller, and the objective controller can be obtained faster than by using the actual robot alone. We consider that the proposed method can serve as a basic and versatile learning strategy for obtaining optimal controllers of mechanical systems.
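
    The sketch below is an illustrative reading of the abstract, not code from the paper: a small neural-network forward model is fitted to sampled state/action transitions by back-propagation, and a controller is then improved entirely inside that learned model. The state/action sizes, the surrogate dynamics, the reward, and the random-search policy update standing in for the paper's reinforcement learning method are all placeholder assumptions.

```python
# Hypothetical sketch of the two-stage "hybrid" pipeline the abstract describes:
#   1) learn a forward model (simulator) of the robot from sampled transitions
#      with a small neural network trained by back-propagation, then
#   2) obtain a controller by search/RL against that learned model,
#      without touching the real hardware again.
# Dimensions, dynamics, reward, and the search method are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# ----- Stage 0: data sampling on the (here: surrogate) real robot -----------
STATE_DIM, ACTION_DIM = 4, 2          # placeholder sizes, not the paper's

def real_robot_step(s, a):
    """Stand-in for the physical robot's unknown dynamics (demo only)."""
    return 0.9 * s + 0.1 * np.tanh(np.concatenate([a, a]))

states  = rng.normal(size=(2000, STATE_DIM))
actions = rng.normal(size=(2000, ACTION_DIM))
next_states = np.array([real_robot_step(s, a) for s, a in zip(states, actions)])

# ----- Stage 1: fit a neural-network simulator by back-propagation ----------
H = 32
W1 = rng.normal(scale=0.1, size=(STATE_DIM + ACTION_DIM, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, STATE_DIM));              b2 = np.zeros(STATE_DIM)
params = [W1, b1, W2, b2]

def model(x, params):
    """One-hidden-layer forward model: (state, action) -> predicted next state."""
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

X = np.concatenate([states, actions], axis=1)
lr = 1e-2
for epoch in range(200):                       # plain batch gradient descent
    pred, h = model(X, params)
    err = pred - next_states                   # gradient of squared-error loss
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ params[2].T) * (1 - h ** 2)    # back-propagate through tanh
    gW1 = X.T @ dh / len(X);  gb1 = dh.mean(0)
    for p, g in zip(params, [gW1, gb1, gW2, gb2]):
        p -= lr * g                            # in-place parameter update

# ----- Stage 2: controller search inside the learned simulator --------------
def rollout_return(K, horizon=50):
    """Return of a linear policy a = tanh(K s) rolled out in the learned model."""
    s, total = np.zeros(STATE_DIM), 0.0
    for _ in range(horizon):
        a = np.tanh(K @ s)
        s, _ = model(np.concatenate([s, a])[None, :], params)
        s = s[0]
        total += -np.sum(s ** 2)               # toy reward: stay near the origin
    return total

best_K, best_ret = rng.normal(size=(ACTION_DIM, STATE_DIM)), -np.inf
for _ in range(300):                           # crude random-search improvement
    K = best_K + 0.1 * rng.normal(size=best_K.shape)
    r = rollout_return(K)
    if r > best_ret:
        best_K, best_ret = K, r

print("return of best controller in learned simulator:", best_ret)
```

    In the paper's setting the learned model would stand in for the five-link manipulator, so the hardware is only needed for the initial data sampling; everything after that runs offline against the simulator.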

    Original language: English
    Title of host publication: Proceedings of the International Joint Conference on Neural Networks
    Pages: 2465-2470
    Number of pages: 6
    ISBN (Print): 9781457710865
    DOIs: https://doi.org/10.1109/IJCNN.2011.6033539
    Publication status: Published - 2011
    Event: 2011 International Joint Conference on Neural Networks, IJCNN 2011 - San Jose, CA
    Duration: 2011 Jul 31 - 2011 Aug 5

    Other

    Other: 2011 International Joint Conference on Neural Networks, IJCNN 2011
    City: San Jose, CA
    Period: 11/7/31 - 11/8/5

    Fingerprint

    Manipulators
    Simulators
    Robots
    Controllers
    Neural networks
    Hardware
    Reinforcement learning
    Backpropagation
    Sampling

    ASJC Scopus subject areas

    • Software
    • Artificial Intelligence

    Cite this

    Nakamura, S., & Hashimoto, S. (2011). Application of hybrid learning strategy for manipulator robot. In Proceedings of the International Joint Conference on Neural Networks (pp. 2465-2470). [6033539] https://doi.org/10.1109/IJCNN.2011.6033539
