Waseda at TRECVID 2015: Semantic indexing

Kazuya Ueki, Tetsunori Kobayashi

Research output: Contribution to conference › Paper › peer-review

1 Citation (Scopus)

Abstract

Waseda participated in the TRECVID 2015 Semantic Indexing (SIN) task [6]. For the SIN task, our approach used the following processing pipeline: feature extraction using several deep convolutional neural networks (CNNs); classification of the presence or absence of a detection target by support vector machines (SVMs); and fusion of multiple score outputs. To improve the performance of semantic video indexing, we employed the following techniques: utilizing multiple pieces of evidence observed in each video and compressing them into a fixed-length vector; introducing gradient and motion features to CNNs; enriching variations of the training and testing sets; and extracting features from several CNNs trained on various large-scale datasets. Through these techniques, our best run achieved a mean Average Precision (mAP) of 30.9%, which ranked 2nd among all the participants.
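The pipeline described above can be sketched in miniature. This is an illustrative sketch, not the authors' actual code: per-frame CNN features are compressed into a fixed-length video vector by average pooling, scored by a linear function standing in for a per-concept SVM decision function, and the scores of multiple models are combined by weighted late fusion. All feature values, weights, and dimensions below are made up for illustration.

```python
# Hedged sketch of a CNN-feature -> SVM -> score-fusion SIN pipeline.
# Assumption: simple average pooling and linear scoring stand in for the
# paper's actual feature compression and SVM classifiers.
from typing import List, Sequence

def average_pool(frame_features: Sequence[Sequence[float]]) -> List[float]:
    """Compress a variable number of per-frame feature vectors into one
    fixed-length video-level vector by averaging each dimension."""
    n = len(frame_features)
    dim = len(frame_features[0])
    return [sum(f[d] for f in frame_features) / n for d in range(dim)]

def linear_score(weights: Sequence[float], bias: float,
                 x: Sequence[float]) -> float:
    """Stand-in for an SVM decision function: w . x + b."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def fuse(scores: Sequence[float], fusion_weights: Sequence[float]) -> float:
    """Weighted late fusion of several classifiers' scores."""
    total = sum(fusion_weights)
    return sum(w * s for w, s in zip(fusion_weights, scores)) / total

# Toy example: 3 frames with 4-dimensional features from one "CNN".
frames = [[1.0, 0.0, 2.0, 1.0],
          [3.0, 0.0, 0.0, 1.0],
          [2.0, 0.0, 1.0, 1.0]]
video_vec = average_pool(frames)                         # [2.0, 0.0, 1.0, 1.0]
s1 = linear_score([0.5, 1.0, -0.2, 0.1], 0.0, video_vec)  # "model 1" score
s2 = linear_score([0.1, 0.0, 0.3, 0.2], 0.1, video_vec)   # "model 2" score
final = fuse([s1, s2], [0.6, 0.4])
```

A final ranked concept list would then be produced by sorting test shots by `final` per concept; in the actual system the fused scores come from several distinct CNN/SVM combinations rather than two toy linear models.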

Original language: English
Publication status: Published - 2015
Event: 2015 TREC Video Retrieval Evaluation, TRECVID 2015 - Gaithersburg, United States
Duration: 2015 Nov 16 - 2015 Nov 18


ASJC Scopus subject areas

  • Information Systems
  • Electrical and Electronic Engineering
  • Signal Processing
