This paper presents a subtitle placement method that reduces unnecessary eye movements. Although a previous study discussed methods that vary the position of subtitles, the subtitles may still overlap the region of interest (ROI). We therefore propose a dynamic subtitling method that uses eye-tracking data to prevent subtitles from overlapping important regions. The proposed method calculates the ROI from the eye-tracking data of multiple viewers and positions subtitles immediately below the ROI so that the two do not overlap. Furthermore, we detect the speaker in a scene from audio and visual information and position subtitles near the speaker to help viewers recognize who is speaking. Experimental results show that the proposed method enables viewers to watch the ROI and the subtitles for a longer duration than with traditional subtitles, and that it enhances the comfort and utility of the viewing experience.
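The placement step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the ROI heuristic (mean gaze position plus/minus `k` pooled standard deviations) and the `margin` parameter are assumptions introduced here for clarity.

```python
from statistics import mean, pstdev

def roi_from_gaze(points, k=1.5):
    """Estimate a rectangular ROI (x0, y0, x1, y1) from gaze samples
    pooled over multiple viewers, as mean +/- k standard deviations.
    Illustrative heuristic only; the paper's ROI computation may differ."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = mean(xs), mean(ys)
    sx, sy = pstdev(xs), pstdev(ys)
    return (cx - k * sx, cy - k * sy, cx + k * sx, cy + k * sy)

def subtitle_top(roi, frame_height, subtitle_height, margin=10):
    """Place the subtitle's top edge immediately below the ROI,
    clamped so the subtitle stays inside the frame."""
    below_roi = roi[3] + margin
    return min(below_roi, frame_height - subtitle_height)
```

For example, with an ROI whose bottom edge is at y = 300 in a 480-pixel-high frame, a 60-pixel subtitle would be anchored at y = 310; if the ROI reached the bottom of the frame, the subtitle would be clamped to y = 420.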
Title of host publication: VISAPP
Number of pages: 8
Publication status: Published - 2017 Jan 1
Event: 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2017 - Porto, Portugal
Duration: 2017 Feb 27 → 2017 Mar 1
- Dynamic Subtitles
- Region of Interest
- Speaker Detection
- User Experience
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design
- Computer Vision and Pattern Recognition
- Artificial Intelligence