Augmented Cross-modality: Translating the Physiological Responses, Knowledge and Impression to Audio-visual Information in Virtual Reality

Yutaro Hirao, Takashi Kawai

    Research output: Contribution to journal › Article

    2 Citations (Scopus)

    Abstract

    This paper proposes a method of interaction design for presenting an intended haptic experience in virtual reality (VR). The method, which we named “Augmented Cross-Modality,” translates the physiological responses, knowledge, and impressions associated with an experience in the real world into audio-visual stimuli and adds them to the interaction in VR. In this study, as expressions for presenting the haptic experience of gripping an object strongly and lifting a heavy object, we design hand tremor, strong gripping, and an increasing heart rate in VR. The objective is first to enhance the sense of bodily strain with these augmented cross-modal expressions, and thereby change the quality of the overall haptic experience so that it comes closer to the experience of lifting a heavy object. The method is evaluated with several rating scales, interviews, and force sensors attached to a VR controller. The results suggest that the method's expressions enhanced the haptic experience of strong gripping in almost all participants, confirming its effectiveness. © 2018 Society for Imaging Science and Technology.

    Original language: English
    Article number: 060402
    Journal: Journal of Imaging Science and Technology
    Volume: 62
    Issue number: 6
    DOIs
    Publication status: Published - 2018 Nov 1

    ASJC Scopus subject areas

    • Electronic, Optical and Magnetic Materials
    • Chemistry (all)
    • Atomic and Molecular Physics, and Optics
    • Computer Science Applications

