We have studied the generation of realistic computer-graphics facial actions synchronized with a person's actual facial actions. This paper describes a method for extracting facial feature points and reproducing facial actions in a virtual space teleconferencing system that achieves a realistic virtual presence. First, we construct an individual facial wire-frame model, using either a 3D digitizer or front and side images of the face. Second, we trace the feature points, i.e., the points around the eyes and the mouth. For this purpose, we monitor the eye and mouth regions: when they move, the image intensity changes, which allows us to locate the eyes and the mouth. Facial action images alone do not allow us to extract the deformation of the facial skin; tracing the eye and mouth regions in the front view of the face yields only the 2D movement of these regions. We therefore propose a new hierarchical wire-frame model that can represent facial actions, including wrinkles. The lower layer of the wire frame moves according to the movement of the feature points; the upper layer slides over the lower layer and is deformed based on the lower layer's movement. By applying this method to a telecommunication system, we confirm very realistic facial action in virtual space.
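The intensity-change cue described above can be sketched as simple frame differencing: pixels whose grayscale intensity changes between consecutive frames are taken to belong to a moving region (an eye or the mouth), and its 2D position is recovered as a bounding box. This is a minimal illustrative sketch, not the paper's implementation; the function name, the threshold value, and the synthetic frames are assumptions for the example.

```python
import numpy as np

def detect_moving_region(prev_frame, curr_frame, threshold=30):
    """Locate a moving facial region (e.g. an eye or the mouth) by frame
    differencing between two consecutive grayscale frames. Returns the
    bounding box (top, left, bottom, right) of the changed pixels, or
    None when no pixel changed by more than `threshold`."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    moved = diff > threshold            # boolean mask of changed pixels
    if not moved.any():
        return None                     # no motion detected
    ys, xs = np.nonzero(moved)
    return ys.min(), xs.min(), ys.max(), xs.max()

# Synthetic example: a dark frame, then a bright patch simulating
# a mouth movement in the lower part of the face image.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[80:100, 60:100] = 200              # simulated mouth region movement
print(detect_moving_region(prev, curr))
```

In practice the eye and mouth search windows would be restricted to regions predicted from the fitted wire-frame model rather than scanned over the whole frame.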