Dynamic perception after visually guided grasping by a human-like autonomous robot

Mototaka Suzuki*, Kuniaki Noda, Yuki Suga, Tetsuya Ogata, Shigeki Sugano

*Corresponding author for this work

Research output: Article › peer-review

Abstract

We explore dynamic perception following the visually guided grasping of several objects by a human-like autonomous robot. This competency serves object categorization: physical interaction with the hand-held object provides the robot's neural network with rich, coherent and multi-modal sensory input. Multi-layered self-organizing maps are designed and examined under static and dynamic conditions. In the static condition, the tests show that the network categorizes robustly against noise and outperforms a single-layered map. In the dynamic condition we focus on shaking behavior produced by moving only the robot's forearm. For some combinations of grasping style and shaking radius the network categorizes two objects robustly, showing that the network's ability to perform the task depends largely on how the objects are grasped and how they are moved. These results, together with a preliminary simulation, are promising steps toward the self-organization of highly autonomous dynamic object categorization.
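For readers unfamiliar with the building block named in the abstract, the sketch below shows a minimal self-organizing map (SOM) update loop in Python. It is not the authors' multi-layered network; the grid size, learning-rate and neighborhood schedules, and the toy two-object data are arbitrary assumptions for illustration only.

```python
# Minimal SOM sketch (illustrative only; not the paper's multi-layered network).
import numpy as np

def train_som(data, grid_shape=(10, 10), epochs=50, lr0=0.5, sigma0=3.0):
    """Train a 2-D SOM on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(0)
    n_features = data.shape[1]
    weights = rng.random((*grid_shape, n_features))
    # Grid coordinates, used to compute neighborhood distances on the map.
    coords = np.stack(np.meshgrid(np.arange(grid_shape[0]),
                                  np.arange(grid_shape[1]),
                                  indexing="ij"), axis=-1).astype(float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data:
            # Linearly decay learning rate and neighborhood radius over time.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 0.5
            # Best-matching unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), grid_shape)
            # Gaussian neighborhood around the BMU on the map grid.
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2.0 * sigma ** 2))
            # Pull the BMU and its neighbors toward the input sample.
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

if __name__ == "__main__":
    # Toy usage: two synthetic "object" signatures drawn from noisy clusters.
    rng = np.random.default_rng(1)
    obj_a = rng.normal(0.2, 0.05, size=(100, 8))
    obj_b = rng.normal(0.8, 0.05, size=(100, 8))
    som = train_som(np.vstack([obj_a, obj_b]))
    print("trained SOM weight grid shape:", som.shape)
```

After training, samples from the two clusters map to separate regions of the grid, which is the basic mechanism by which a SOM supports categorization of multi-modal sensory input.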

Original language: English
Pages (from-to): 233-254
Number of pages: 22
Journal: Advanced Robotics
Volume: 20
Issue number: 2
DOI
Publication status: Published - 2006

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Human-Computer Interaction
  • Hardware and Architecture
  • Computer Science Applications
