Automatic Foreground Detection at 784 FPS for Ultra-High-Speed Human-Machine Interactions

Songlin Du, Peikun Cai, Tingting Hu, Takeshi Ikenaga

Research output: Article (peer-reviewed)

Abstract

Human-machine interactive systems show increasing demand for analysing fast-moving objects in high-frame-rate videos. Robust foreground detection, which removes the large amount of redundant background data from high-frame-rate video, is essential for achieving ultra-high-speed human-machine interactions. This paper proposes a local-spatial-propagation-based background model generation, a local-linear-illumination-correction-based background model update, and a foreground region reselection constrained by regional central coordinates and edge keypoints. Together, the three proposals make up a robust and hardware-friendly foreground detection method. Experimental results show that the proposed hardware-friendly algorithm achieves high accuracy and robustness on various challenging cases. Meanwhile, the hardware implementation uses few hardware resources and achieves real-time processing of high-frame-rate (784 frames/second) video with a delay of less than 1 ms/frame in the image-processing core. In addition, a practical system combining a PC, a high-speed camera, and a field-programmable gate array (FPGA) is implemented for real-world applications. This work will significantly promote the development and application of high-speed human-machine interaction. A demo of the proposed vision system working at 784 FPS is available at https://wcms.waseda.jp/em/5f84f75136a6.
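The abstract does not detail the three proposed components, so as a rough, hypothetical illustration of the general idea of background-model-based foreground detection (not the authors' FPGA pipeline), the following minimal sketch assumes a grayscale running-average background model with per-pixel differencing; the function names and parameters (alpha, threshold) are illustrative assumptions only.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background update (a generic stand-in for the
    paper's illumination-corrected background model update)."""
    return (1.0 - alpha) * bg + alpha * frame.astype(np.float32)

def detect_foreground(bg, frame, threshold=25.0):
    """Pixel-wise absolute difference against the background model,
    thresholded into a binary foreground mask."""
    diff = np.abs(frame.astype(np.float32) - bg)
    return (diff > threshold).astype(np.uint8)

# Toy usage on a synthetic 64x64 grayscale frame simulating a stream.
rng = np.random.default_rng(0)
bg = rng.integers(0, 50, size=(64, 64)).astype(np.float32)  # static scene
frame = bg.copy()
frame[20:30, 20:30] += 100.0                                 # moving object
mask = detect_foreground(bg, frame)
bg = update_background(bg, frame)
print("foreground pixels:", int(mask.sum()))
```

In the paper's setting, removing the redundant background this way is what allows the downstream interaction logic to keep up with a 784 FPS stream; the sketch above only conveys that data-reduction idea, not the hardware-friendly design itself.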

Original language: English
Journal: IEEE Transactions on Automation Science and Engineering
DOI
Publication status: Accepted/In press - 2021

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering
