Video retrieval based on user-specified appearance and application to animation synthesis

Makoto Okabe, Yuta Kawate, Ken Anjyo, Rikio Onai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In our research group, we investigate techniques for retrieving videos based on user-specified appearances. In this paper, we introduce two of our research activities. First, we present a user interface for quickly and easily retrieving scenes of a desired appearance from videos. Given an input image, our system allows the user to sketch a transformation of an object inside the image, and then retrieves scenes showing this object in the user-specified transformed pose. Our method employs two steps to retrieve the target scenes. We first apply a standard image-retrieval technique based on feature matching, and find scenes in which the same object appears in a similar pose. Then we find the target scene by automatically forwarding or rewinding the video, starting from the frame selected in the previous step. When the user-specified transformation is matched, we stop forwarding or rewinding, and thus the target scene is retrieved. We demonstrate that our method successfully retrieves scenes of a racing car, a running horse, and a flying airplane with user-specified poses and motions. Second, we present a method for synthesizing fluid animation from a single image, using a fluid video database. The user inputs a target painting or photograph of a fluid scene. Employing the database of fluid video examples, the core algorithm of our technique then automatically retrieves and assigns appropriate fluid videos for each part of the target image. The procedure can thus be used to handle various paintings and photographs of rivers, waterfalls, fire, and smoke, and the resulting animations demonstrate that it is more powerful and efficient than our prior work.
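The abstract outlines a two-step retrieval loop: standard feature-based image retrieval first finds a frame in which the object appears in a similar pose, and the video is then played forward or rewound from that frame until the user-sketched transformation is matched. The snippet below is a minimal illustrative sketch of that loop, assuming OpenCV ORB features, a RANSAC similarity fit, and a rotation-angle test; the function names, thresholds, and matching criterion are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the two-step retrieval described in the abstract;
# not the authors' implementation.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)


def estimate_similarity(query_img, frame):
    """Step 1 helper: match ORB features between the query image and a
    video frame and estimate a similarity transform of the object."""
    kp_q, des_q = orb.detectAndCompute(query_img, None)
    kp_f, des_f = orb.detectAndCompute(frame, None)
    if des_q is None or des_f is None:
        return None
    good = []
    for pair in matcher.knnMatch(des_q, des_f, k=2):
        # Lowe's ratio test to keep only distinctive matches
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 4:
        return None
    src = np.float32([kp_q[m.queryIdx].pt for m in good])
    dst = np.float32([kp_f[m.trainIdx].pt for m in good])
    # RANSAC-based partial affine fit (rotation, scale, translation)
    M, _ = cv2.estimateAffinePartial2D(src, dst)
    return M


def scan_for_target_pose(video_path, query_img, target_angle_deg,
                         start_frame, step=5, tol_deg=10.0):
    """Step 2: starting from the frame found by image retrieval, scan
    forward (step > 0) or rewind (step < 0) until the object's rotation
    roughly matches the user-sketched transformation."""
    cap = cv2.VideoCapture(video_path)
    idx = start_frame
    while idx >= 0:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            break                                    # ran past the end of the video
        M = estimate_similarity(query_img, frame)
        if M is not None:
            angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
            if abs(angle - target_angle_deg) < tol_deg:
                cap.release()
                return idx                           # user-specified pose reached
        idx += step
    cap.release()
    return None                                      # no frame matched the sketch
```

The same scaffolding could, in principle, test scale or translation instead of rotation against the sketched transformation; the paper itself should be consulted for the actual matching criterion.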

Original language: English
Title of host publication: Advances in Multimedia Modeling - 19th International Conference, MMM 2013, Proceedings
Pages: 110-120
Number of pages: 11
Volume: 7733 LNCS
Edition: PART 2
ISBN (Print): 9783642357275
DOI: 10.1007/978-3-642-35728-2_11
Publication status: Published - 2013
Externally published: Yes
Event: 19th International Conference on Advances in Multimedia Modeling, MMM 2013, Huangshan
Duration: 2013 Jan 7 - 2013 Jan 9

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 2
Volume: 7733 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 19th International Conference on Advances in Multimedia Modeling, MMM 2013
City: Huangshan
Period: 2013 Jan 7 - 2013 Jan 9

Fingerprint

Video Retrieval
Animation
Synthesis
Fluids
Target
Painting
Video Databases
Feature Matching
Two-step Method
Image Retrieval
Smoke
User Interfaces
Fires
Railroad cars
Rivers

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Okabe, M., Kawate, Y., Anjyo, K., & Onai, R. (2013). Video retrieval based on user-specified appearance and application to animation synthesis. In Advances in Multimedia Modeling - 19th International Conference, MMM 2013, Proceedings (PART 2 ed., Vol. 7733 LNCS, pp. 110-120). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 7733 LNCS, No. PART 2). https://doi.org/10.1007/978-3-642-35728-2_11
