Object detection oriented feature pooling for video semantic indexing

Kazuya Ueki, Tetsunori Kobayashi

    Research output: Conference contribution

    1 Citation (Scopus)

    Abstract

    We propose a new feature extraction method for video semantic indexing. Conventional methods extract features densely and uniformly across an entire image, whereas the proposed method exploits an object detector to extract features from image windows with high objectness. Because this feature extraction focuses on "objects," it eliminates unnecessary background information and retains useful cues such as the position, size, and aspect ratio of each object. Since these object-detection-oriented features are complementary to features extracted from entire images, they can further improve the performance of video semantic indexing. Experimental comparisons on a large-scale video dataset from the TRECVID benchmark demonstrated that the proposed method substantially improves video semantic indexing performance.
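    The abstract gives no implementation details, so below is a minimal NumPy sketch of the idea under stated assumptions: pool convolutional features inside the windows an object detector scores as high-objectness, append each window's position, size, and aspect ratio, and concatenate the result with a conventional whole-image feature. The function name `pool_object_features`, the 0.5 objectness threshold, and the max-pooling across objects are illustrative choices, not the authors' design.

```python
import numpy as np

def pool_object_features(feature_map, boxes, scores, img_w, img_h, thr=0.5):
    """feature_map: (C, H, W) conv activations for one frame.
    boxes: (N, 4) windows as (x1, y1, x2, y2) in image coordinates.
    scores: (N,) objectness scores from a detector (hypothetical input)."""
    C, H, W = feature_map.shape
    descriptors = []
    for (x1, y1, x2, y2), s in zip(boxes, scores):
        if s < thr:  # keep only high-objectness windows
            continue
        # Project the image-space box onto the feature-map grid.
        gx1, gx2 = x1 / img_w * W, x2 / img_w * W
        gy1, gy2 = y1 / img_h * H, y2 / img_h * H
        c0, c1 = int(gx1), max(int(np.ceil(gx2)), int(gx1) + 1)
        r0, r1 = int(gy1), max(int(np.ceil(gy2)), int(gy1) + 1)
        pooled = feature_map[:, r0:r1, c0:c1].mean(axis=(1, 2))  # average pool
        # Geometry cues the abstract highlights: position, size, aspect ratio.
        geom = np.array([(x1 + x2) / (2 * img_w), (y1 + y2) / (2 * img_h),
                         (x2 - x1) / img_w, (y2 - y1) / img_h,
                         (x2 - x1) / max(y2 - y1, 1e-6)])
        descriptors.append(np.concatenate([pooled, geom]))
    return descriptors

# Toy usage: combine object features with a dense whole-image feature,
# since the two are described as complementary.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((256, 14, 14))           # stand-in conv features
boxes = np.array([[40.0, 60.0, 200.0, 220.0], [10.0, 10.0, 30.0, 25.0]])
scores = np.array([0.9, 0.3])                       # second window is dropped
obj = pool_object_features(fmap, boxes, scores, img_w=320, img_h=240)
global_feat = fmap.mean(axis=(1, 2))                # dense, uniform pooling
frame_feat = np.concatenate([global_feat, np.max(obj, axis=0)])
print(frame_feat.shape)                             # (517,) = 256 + 256 + 5
```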

    Original language: English
    Host publication title: VISAPP
    Publisher: SciTePress
    Pages: 44-51
    Number of pages: 8
    Volume: 5
    ISBN (electronic): 9789897582264
    Publication status: Published - 1 Jan 2017
    Event: 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2017 - Porto, Portugal
    Duration: 27 Feb 2017 - 1 Mar 2017



    ASJC Scopus subject areas

    • Computer Graphics and Computer-Aided Design
    • Computer Vision and Pattern Recognition
    • Artificial Intelligence

    Cite this

    Ueki, K., & Kobayashi, T. (2017). Object detection oriented feature pooling for video semantic indexing. In VISAPP (Vol. 5, pp. 44-51). SciTePress.