Abstract
We present a novel concept, audio–visual object removal in 360-degree videos, in which a target object in a 360-degree video is removed synchronously in both the visual and auditory domains. Previous methods have focused solely on the visual aspect of object removal using video inpainting techniques, resulting in videos in which the sounds of the removed objects implausibly remain. We propose a solution that incorporates the directional information acquired during the video inpainting process into the audio removal process. More specifically, our method identifies the sound corresponding to the visually tracked target object and then synthesizes a three-dimensional sound field by subtracting the identified sound from that of the input 360-degree video. We conducted a user study showing that our multi-modal object removal, supporting both the visual and auditory domains, can significantly improve the virtual reality experience, and that our method can generate sufficiently synchronized, natural, and satisfactory 360-degree videos.
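To illustrate the "subtract the identified sound from the sound field" step described above, the following is a minimal sketch, not the authors' implementation: it assumes the 360-degree video's audio is first-order ambisonics (B-format, channels W/X/Y/Z with a simple textbook encoding), that the target object's direction is supplied by visual tracking, and that `estimated_source` is a hypothetical mono estimate of the target's sound obtained by a separate identification step.

```python
import numpy as np


def encode_foa(mono: np.ndarray, azimuth: float, elevation: float) -> np.ndarray:
    """Encode a mono signal arriving from (azimuth, elevation), in radians,
    into a first-order ambisonics (B-format) array of shape (4, num_samples)."""
    w = mono / np.sqrt(2.0)                             # omnidirectional component
    x = mono * np.cos(azimuth) * np.cos(elevation)      # front-back
    y = mono * np.sin(azimuth) * np.cos(elevation)      # left-right
    z = mono * np.sin(elevation)                        # up-down
    return np.stack([w, x, y, z], axis=0)


def remove_source(bformat: np.ndarray,
                  estimated_source: np.ndarray,
                  azimuth: float,
                  elevation: float) -> np.ndarray:
    """Re-encode the estimated target sound at the tracked direction and
    subtract it from the input sound field."""
    return bformat - encode_foa(estimated_source, azimuth, elevation)


if __name__ == "__main__":
    # Hypothetical example: a tonal target plus diffuse-ish background noise.
    sr, dur = 48000, 1.0
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    target = 0.5 * np.sin(2 * np.pi * 440.0 * t)        # sound of the removed object
    ambient = 0.1 * np.random.randn(len(t))             # remaining scene audio

    az, el = np.deg2rad(30.0), np.deg2rad(10.0)          # direction from visual tracking
    scene = encode_foa(target, az, el) + encode_foa(ambient, np.pi, 0.0)

    cleaned = remove_source(scene, target, az, el)       # target ideally cancelled
```

In practice the estimated source and direction are imperfect, so the subtraction only attenuates rather than perfectly cancels the target; the paper's user study evaluates whether the resulting audio-visual removal is perceived as synchronous and natural.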
Original language | English |
---|---|
Pages (from-to) | 2117-2128 |
Number of pages | 12 |
Journal | Visual Computer |
Volume | 36 |
Issue number | 10-12 |
DOIs | |
Publication status | Published - 2020 Oct 1 |
Keywords
- 360-degree video
- Audio–visual object removal
- Human perception
- Signal processing
- Virtual reality
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition
- Computer Graphics and Computer-Aided Design