Audio–visual object removal in 360-degree videos

Ryo Shimamura, Qi Feng, Yuki Koyama, Takayuki Nakatsuka, Satoru Fukayama, Masahiro Hamasaki, Masataka Goto, Shigeo Morishima

Research output: Contribution to journal › Article

Abstract

We present a novel concept, audio–visual object removal in 360-degree videos, in which a target object in a 360-degree video is removed synchronously in both the visual and auditory domains. Previous methods have focused solely on the visual aspect of object removal, using video inpainting techniques, resulting in videos in which sounds corresponding to the removed objects unreasonably remain. We propose a solution that incorporates directional information acquired during the video inpainting process into the audio removal process. More specifically, our method identifies the sound corresponding to the visually tracked target object and then synthesizes a three-dimensional sound field by subtracting the identified sound from the input 360-degree video. We conducted a user study showing that our multi-modal object removal, which supports both the visual and auditory domains, can significantly improve the virtual reality experience and that our method generates sufficiently synchronous, natural, and satisfactory 360-degree videos.
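The subtraction step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes the 360-degree audio is stored as first-order ambisonics (B-format W/X/Y/Z channels), and that the target's mono source signal and its direction (azimuth, elevation) have already been estimated. The function names `encode_foa` and `remove_source` are hypothetical.

```python
import numpy as np

def encode_foa(mono, azimuth, elevation):
    """Encode a mono signal into first-order ambisonics (classic B-format gains).

    azimuth/elevation are in radians; returns an array of shape (4, n_samples)
    ordered as (W, X, Y, Z).
    """
    w = mono * (1.0 / np.sqrt(2.0))               # omnidirectional component
    x = mono * np.cos(azimuth) * np.cos(elevation)  # front-back
    y = mono * np.sin(azimuth) * np.cos(elevation)  # left-right
    z = mono * np.sin(elevation)                    # up-down
    return np.stack([w, x, y, z])

def remove_source(foa_mix, source, azimuth, elevation):
    """Subtract an estimated source, re-encoded at its tracked direction,
    from the ambisonic sound field of the 360-degree video."""
    return foa_mix - encode_foa(source, azimuth, elevation)
```

For example, a sound field built from two encoded sources reduces to the remaining source alone after `remove_source` is applied with the target's signal and direction.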

Original language: English
Journal: Visual Computer
DOI: 10.1007/s00371-020-01918-1
Publication status: Accepted/In press - 2020

Keywords

  • 360-degree video
  • Audio–visual object removal
  • Human perception
  • Signal processing
  • Virtual reality

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Computer Graphics and Computer-Aided Design


Cite this

    Shimamura, R., Feng, Q., Koyama, Y., Nakatsuka, T., Fukayama, S., Hamasaki, M., Goto, M., & Morishima, S. (Accepted/In press). Audio–visual object removal in 360-degree videos. Visual Computer. https://doi.org/10.1007/s00371-020-01918-1