TY - GEN
T1 - Active 3D modeling by recursive viewpoint selection based on symmetry
AU - Yoshida, Kazunori
AU - Tanaka, Hiromi T.
AU - Ohya, Jun
AU - Kishino, Fumio
PY - 1995/12/1
Y1 - 1995/12/1
N2 - This paper proposes a new method for efficiently creating 3D models of objects from their silhouettes in images acquired by an active camera whose viewpoints are selected recursively based on the symmetry planes of the observed silhouettes. To obtain the initial viewpoint, we use the assumption that an object takes a stable pose under gravity and therefore has a symmetry plane to which the direction of gravity is constrained; a point along the direction of gravity is chosen as the initial viewpoint. New viewpoints are then determined from the symmetry plane, which is estimated from the center of gravity and the axis of inertia of the observed silhouette. This process is repeated until no new viewpoint is selected. The 3D shape of the object is then reconstructed by processing voxel data based on the silhouette information acquired at the selected viewpoints. Finally, the textures acquired during the observations are mapped onto the reconstructed 3D shape. We present experimental results that demonstrate the effectiveness of the proposed method.
AB - This paper proposes a new method for efficiently creating 3D models of objects from their silhouettes in images acquired by an active camera whose viewpoints are selected recursively based on the symmetry planes of the observed silhouettes. To obtain the initial viewpoint, we use the assumption that an object takes a stable pose under gravity and therefore has a symmetry plane to which the direction of gravity is constrained; a point along the direction of gravity is chosen as the initial viewpoint. New viewpoints are then determined from the symmetry plane, which is estimated from the center of gravity and the axis of inertia of the observed silhouette. This process is repeated until no new viewpoint is selected. The 3D shape of the object is then reconstructed by processing voxel data based on the silhouette information acquired at the selected viewpoints. Finally, the textures acquired during the observations are mapped onto the reconstructed 3D shape. We present experimental results that demonstrate the effectiveness of the proposed method.
UR - http://www.scopus.com/inward/record.url?scp=0029504820&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0029504820&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:0029504820
SN - 0819419524
SN - 9780819419521
T3 - Proceedings of SPIE - The International Society for Optical Engineering
SP - 326
EP - 336
BT - Proceedings of SPIE - The International Society for Optical Engineering
A2 - Casasent, David P.
T2 - Intelligent Robots and Computer Vision XIV: Algorithms, Techniques, Active Vision, and Materials Handling
Y2 - 23 October 1995 through 26 October 1995
ER -
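
The abstract above derives each new viewpoint from the center of gravity and the axis of inertia of the observed silhouette. The following is a minimal, hypothetical sketch (not code from the paper) of how those two quantities can be estimated from a binary silhouette mask using second-order image moments; the helper name silhouette_axis and the NumPy-based formulation are illustrative assumptions only.

    # Hypothetical sketch: centroid and principal axis of inertia of a binary silhouette.
    import numpy as np

    def silhouette_axis(mask: np.ndarray):
        """Return (centroid, unit axis) of a binary silhouette mask (H x W)."""
        ys, xs = np.nonzero(mask)
        cx, cy = xs.mean(), ys.mean()                 # center of gravity of the silhouette
        x, y = xs - cx, ys - cy
        # second central moments (planar moments of inertia of the silhouette region)
        mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
        cov = np.array([[mu20, mu11], [mu11, mu02]])
        # principal axis = eigenvector of the moment matrix with the larger eigenvalue
        eigvals, eigvecs = np.linalg.eigh(cov)
        axis = eigvecs[:, np.argmax(eigvals)]         # unit-length (dx, dy)
        return (cx, cy), axis

    # Toy usage: a rectangle whose long side lies along x.
    mask = np.zeros((100, 100), dtype=bool)
    mask[40:60, 10:90] = True
    centroid, axis = silhouette_axis(mask)
    print(centroid, axis)   # centroid near (49.5, 49.5), axis close to (1, 0)

In the paper's setting, the line through the centroid along this axis would be a 2D cue for hypothesizing the object's 3D symmetry plane, from which the next camera viewpoint is selected.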