TY - GEN
T1 - VRMixer
T2 - 11th Advances in Computer Entertainment Technology Conference, ACE 2014
AU - Hirai, Tatsunori
AU - Nakamura, Satoshi
AU - Yumura, Tsubasa
AU - Morishima, Shigeo
PY - 2014/11/11
Y1 - 2014/11/11
N2 - This paper presents VRMixer, a system that mixes the real world with a video clip, letting a user enter the clip and take on a virtual co-starring role with the people appearing in it. Our system constructs a simple virtual space by allocating video frames and the people appearing in the clip within the user's 3D space. By measuring the user's 3D depth in real time, the video clip's time-space and the user's 3D space are mixed. VRMixer automatically extracts human images from a video clip by using a video segmentation technique based on 3D graph cut segmentation that employs face detection to separate the human area from the background. A virtual 3D space (i.e., 2.5D space) is constructed by positioning the background in the back and the people in the front. Using a depth camera, the user can stand in front of or behind the people in the video clip. Real objects that are closer than the distance of the clip's background become part of the constructed virtual 3D space. This synthesis creates a new image in which the user appears to be a part of the video clip, or in which people in the clip appear to enter the real world. With VRMixer, we aim to realize "video reality," i.e., a mixture of reality and video clips.
AB - This paper presents VRMixer, a system that mixes the real world with a video clip, letting a user enter the clip and take on a virtual co-starring role with the people appearing in it.
KW - 2.5D
KW - 3D graph cut segmentation
KW - Human extraction
KW - Mixed media
KW - Visual interaction
UR - http://www.scopus.com/inward/record.url?scp=84938350891&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84938350891&partnerID=8YFLogxK
U2 - 10.1145/2663806.2663834
DO - 10.1145/2663806.2663834
M3 - Conference contribution
AN - SCOPUS:84938350891
T3 - ACM International Conference Proceeding Series
BT - ACE 2014 - 11th Advances in Computer Entertainment Technology Conference, Proceedings
PB - Association for Computing Machinery
Y2 - 11 November 2014 through 14 November 2014
ER -