Identifying scenes with the same person in video content on the basis of scene continuity and face similarity measurement

Tatsunori Hirai*, Tomoyasu Nakano, Masataka Goto, Shigeo Morishima

*Corresponding author for this work

Research output: Article, peer-reviewed

2 citations (Scopus)

Abstract

We present a method that automatically annotates when and who appears in a video stream shot under unstaged conditions. Previous face recognition methods were not robust against varying shooting conditions in a video stream, such as changes in lighting and face direction, and had difficulty identifying a person and the scenes in which that person appears. To overcome these difficulties, our method groups consecutive video frames (scenes) into clusters that each contain the same person's face, which we call a "facial-temporal continuum," and identifies a person by using the many video frames in each cluster. In our experiments, the accuracy of our method was approximately two to three times higher than that of a previous method that recognizes a face in each frame independently.
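To illustrate the core idea described in the abstract, the sketch below shows one plausible way to group consecutive frames into facial-temporal continua and label each cluster by voting over all of its frames. This is not the authors' published implementation: the `embed_face`-style per-frame embeddings, the cosine-similarity measure, the `sim_threshold` value, and the gallery-voting step are all assumptions made for illustration only.

```python
import numpy as np


def cosine_similarity(a, b):
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def group_facial_temporal_continua(frame_embeddings, sim_threshold=0.7):
    """Group consecutive frames whose faces are similar enough to be treated
    as one continuous appearance of the same person (a hypothetical stand-in
    for the paper's "facial-temporal continuum").

    frame_embeddings: per-frame face embedding, or None if no face was detected.
    Returns a list of clusters, each a list of (frame_index, embedding) pairs.
    """
    clusters, current = [], []
    for idx, emb in enumerate(frame_embeddings):
        if emb is None:
            # A frame with no detectable face ends the current continuum.
            if current:
                clusters.append(current)
                current = []
            continue
        if current and cosine_similarity(current[-1][1], emb) < sim_threshold:
            # The face changed too much between consecutive frames: new cluster.
            clusters.append(current)
            current = []
        current.append((idx, emb))
    if current:
        clusters.append(current)
    return clusters


def identify_cluster(cluster, gallery):
    """Label a whole cluster by voting over all its frames against a gallery
    of known people (name -> reference embedding), so that a few badly lit
    or off-angle frames do not dominate the decision."""
    votes = {}
    for _, emb in cluster:
        best = max(gallery, key=lambda name: cosine_similarity(gallery[name], emb))
        votes[best] = votes.get(best, 0) + 1
    return max(votes, key=votes.get)
```

The intended benefit mirrors the abstract: identity is decided per cluster rather than per frame, so evidence accumulated over many frames compensates for individual frames with unfavorable lighting or face direction.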

Original language: English
Pages (from-to): J251-J259
Journal: Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers
Volume: 66
Issue: 7
DOI
Publication status: Published - 2012

ASJC Scopus subject areas

  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering

