In this paper, we propose a new time-reduction method for video skimming in which the focus is on the overall playback time. While fast-forwarding is a natural way to check whether items are of interest, the sound is not synchronized with the images, and the lack of comprehensible audio means the viewer must rely on the images alone. Previous work in video summarization has focused solely on video segmentation, i.e., building a structure that represents the parts and flow of meaning in the video. In our system, the user simply specifies the desired running time of the summarized video. We describe the current state of our prototype system and present test results that demonstrate its effectiveness.
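One way to realize a user-specified running time, sketched here as an illustration only (the paper does not specify the authors' actual selection algorithm, and the segment scores and the `summarize` helper are hypothetical), is to greedily pick the highest-scoring video segments until the target duration is filled:

```python
# Hypothetical sketch, not the authors' method: given per-segment
# importance scores and durations, greedily select segments until the
# user-specified target running time is filled.

def summarize(segments, target_seconds):
    """segments: list of (start, duration, score) tuples."""
    # Consider high-scoring segments first.
    ranked = sorted(segments, key=lambda s: s[2], reverse=True)
    chosen, total = [], 0.0
    for seg in ranked:
        if total + seg[1] <= target_seconds:
            chosen.append(seg)
            total += seg[1]
    # Replay the selected segments in their original temporal order.
    return sorted(chosen, key=lambda s: s[0])

# Example: four candidate segments, 30-second target summary.
clips = [(0, 10, 0.9), (10, 20, 0.2), (30, 15, 0.7), (45, 5, 0.8)]
summary = summarize(clips, target_seconds=30)
```

Greedy selection is a simple stand-in; a real system would also balance temporal coverage so the summary is not drawn from one part of the video.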
|Journal||Proceedings of SPIE - The International Society for Optical Engineering|
|Publication status||Published - 1 Dec 2004|
|Event||Multimedia Computing and Networking 2004 - San Jose, CA, United States|
Duration: 21 Jan 2004 → 22 Jan 2004
ASJC Scopus subject areas
- Computer Science Applications