Object detection oriented feature pooling for video semantic indexing

Kazuya Ueki, Tetsunori Kobayashi

Research output

1 citation (Scopus)

Abstract

We propose a new feature extraction method for video semantic indexing. Conventional methods extract features densely and uniformly across an entire image, whereas the proposed method uses an object detector to extract features from image windows with high objectness. Because this feature extraction focuses on "objects," it can discard unnecessary background information while retaining useful information such as the position, size, and aspect ratio of each object. Since these object-detection-oriented features are complementary to features extracted from entire images, combining the two further improves video semantic indexing. Experimental comparisons on a large-scale video dataset from the TRECVID benchmark demonstrated that the proposed method substantially improves indexing performance.
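The idea in the abstract — pool descriptors only from high-objectness windows and keep each window's geometry (position, size, aspect ratio) as part of the feature — can be sketched as follows. This is an illustrative sketch, not the paper's exact formulation: the function name, the top-k selection, and the choice of max-pooling across windows are assumptions.

```python
import numpy as np

def pool_object_windows(window_feats, boxes, scores, img_w, img_h, top_k=3):
    """Pool descriptors from the top-k high-objectness windows,
    each augmented with normalized position, size, and aspect ratio.

    window_feats : (N, D) array, one descriptor per detected window
    boxes        : (N, 4) array of (x, y, w, h) in pixels
    scores       : (N,) objectness scores
    """
    # Keep only the windows with the highest objectness (assumed selection rule).
    order = np.argsort(scores)[::-1][:top_k]
    augmented = []
    for i in order:
        x, y, w, h = boxes[i]
        geom = np.array([x / img_w, y / img_h,   # normalized position
                         w / img_w, h / img_h,   # normalized size
                         w / h])                 # aspect ratio
        augmented.append(np.concatenate([window_feats[i], geom]))
    # Max-pool over the selected windows into one fixed-length vector.
    return np.max(np.stack(augmented), axis=0)
```

The resulting vector can then be concatenated with a whole-image descriptor, reflecting the abstract's point that the two feature types are complementary.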

Original language: English
Host publication title: VISAPP
Editors: Francisco Imai, Alain Tremeau, Jose Braz
Publisher: SciTePress
Pages: 44-51
Number of pages: 8
ISBN (electronic): 9789897582264
DOI
Publication status: Published - 2017
Event: 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2017 - Porto, Portugal
Duration: 27 Feb 2017 - 1 Mar 2017

Publication series

Name: VISIGRAPP 2017 - Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
5

Other

Other: 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2017
Country/Territory: Portugal
City: Porto
Period: 27/2/17 - 1/3/17

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

