Topic-based generation of keywords and caption for video content

Masanao Okamoto*, Kiichi Hasegawa, Sho Sobue, Akira Nakamura, Satoshi Tamura, Satoru Hayamizu

*Corresponding author for this work

Research output: Paper, peer-reviewed

1 Citation (Scopus)

Abstract

This paper studies the use of both keywords and captions within a single scene of video content. Captions show the spoken content and are updated sentence by sentence. A method is proposed to extract keywords automatically from transcribed texts. The method estimates topic boundaries, extracts keywords using Latent Dirichlet Allocation (LDA), and presents them in a speech-balloon captioning system. The proposed method is evaluated in experiments with respect to ease of viewing and helpfulness for understanding the video content. Presenting keywords together with captions obtained favorable scores in subjective assessments.
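To illustrate the extraction step described above, the following is a minimal sketch of LDA-based keyword selection over topic-segmented transcripts, assuming the Python gensim library; the example segments, topic count, and number of keywords are illustrative assumptions, not the parameters or data used in the paper.

# A minimal sketch of LDA keyword extraction from transcribed,
# topic-segmented text (assumes gensim; all values are illustrative).
from gensim import corpora, models

# Each element is one topic segment of the transcript, already tokenized.
segments = [
    ["weather", "rain", "temperature", "forecast", "umbrella"],
    ["election", "vote", "candidate", "party", "poll"],
]

# Build a vocabulary and a bag-of-words corpus over the segments.
dictionary = corpora.Dictionary(segments)
corpus = [dictionary.doc2bow(tokens) for tokens in segments]

# Train LDA with a small, illustrative number of topics.
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2, passes=10)

# For each segment, take its dominant topic and use the top words of that
# topic as the keywords shown alongside the caption for that scene.
for i, bow in enumerate(corpus):
    topic_id, _ = max(lda.get_document_topics(bow), key=lambda t: t[1])
    keywords = [word for word, _ in lda.show_topic(topic_id, topn=3)]
    print(f"segment {i}: keywords = {keywords}")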

Original language: English
Pages: 605-608
Number of pages: 4
Publication status: Published - 2010
Externally published: Yes
Event: 2nd Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2010 - Biopolis, Singapore
Duration: 14 Dec 2010 - 17 Dec 2010

Conference

Conference: 2nd Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2010
Country/Territory: Singapore
City: Biopolis
Period: 10/12/14 - 10/12/17

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
