Lifelog image analysis based on activity situation models using contexts from wearable multi sensors

Katsuhiro Takata, Jianhua Ma, Bernady O. Apduhan, Runhe Huang, Norio Shiratori

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    5 Citations (Scopus)

    Abstract

    A lifelog is a set of continuously captured data records of our daily activities. The lifelog in this study includes several types of media data acquired from wearable multi-sensors, which capture video images, the individual's body motions, biological information, location information, and so on. We propose an integrated technique to process the lifelog, which is composed of both captured video (called lifelog images) and other sensed data. Our technique is based on two models: the space-oriented model and the action-oriented model. Using these two models, we analyze the lifelog images to find representative images in video scenes based on both pictorial visual features and the individual's context information, and likewise represent the individual's life experiences in semantic, structured ways for efficient future retrieval and exploitation. The resulting structured lifelog images were evaluated against a vision-based-only approach; our integrated technique exhibited better results.
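The abstract above describes selecting representative frames from lifelog video by fusing pictorial visual features with wearable-sensor context. A minimal sketch of that idea follows; the scoring function, weights, and field names are illustrative assumptions, not the authors' actual space-oriented and action-oriented models.

```python
# Hypothetical sketch: pick representative lifelog frames by fusing a
# visual-change score with a sensor-context-change score. Weights and
# inputs are assumptions for illustration only.

def frame_score(visual_change, context_change, w_visual=0.6, w_context=0.4):
    """Weighted fusion of a visual-difference score (e.g. color-histogram
    distance between frames) and a context-change score (e.g. motion or
    location change from wearable sensors), both normalized to [0, 1]."""
    return w_visual * visual_change + w_context * context_change

def pick_representative(frames, top_k=2):
    """frames: list of (frame_id, visual_change, context_change) tuples.
    Returns the top_k frame ids by fused score, highest first."""
    scored = sorted(
        ((frame_score(v, c), fid) for fid, v, c in frames), reverse=True
    )
    return [fid for _, fid in scored[:top_k]]

frames = [
    ("f01", 0.1, 0.0),  # static scene, no activity change
    ("f02", 0.9, 0.8),  # scene cut plus a motion-sensor spike
    ("f03", 0.2, 0.9),  # similar image, but the wearer changed location
    ("f04", 0.4, 0.1),
]
print(pick_representative(frames))
```

Note how f03 outranks f04 despite looking less visually novel: the sensor context (a location change) raises its fused score, which is the kind of case a vision-only approach would miss.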

    Original language: English
    Title of host publication: Proceedings - 2008 International Conference on Multimedia and Ubiquitous Engineering, MUE 2008
    Pages: 160-163
    Number of pages: 4
    DOI: 10.1109/MUE.2008.69
    Publication status: Published - 2008
    Event: 2008 International Conference on Multimedia and Ubiquitous Engineering, MUE 2008 - Busan
    Duration: 2008 Apr 24 - 2008 Apr 26

    Other

    Other: 2008 International Conference on Multimedia and Ubiquitous Engineering, MUE 2008
    City: Busan
    Period: 08/4/24 - 08/4/26

    Fingerprint

    Image analysis
    Sensors
    Semantics

    ASJC Scopus subject areas

    • Computer Graphics and Computer-Aided Design
    • Computer Science Applications
    • Software

    Cite this

    Takata, K., Ma, J., Apduhan, B. O., Huang, R., & Shiratori, N. (2008). Lifelog image analysis based on activity situation models using contexts from wearable multi sensors. In Proceedings - 2008 International Conference on Multimedia and Ubiquitous Engineering, MUE 2008 (pp. 160-163). [4505713] https://doi.org/10.1109/MUE.2008.69

    @inproceedings{b4a4c5a3c27c49f2968d4719570924e6,
    title = "Lifelog image analysis based on activity situation models using contexts from wearable multi sensors",
    abstract = "Lifelog is a set of continuously captured data records of our daily activities. The lifelog in this study includes several types of media data/information acquired from wearable multi sensors which capture video images, individual's body motions, biological information, location information, and so on. We propose an integrated technique to process the lifelog which is composed of both captured video (called lifelog images) and other sensed data. Our proposed technique is based on two models; i.e., the space-oriented model and the action-oriented model. By using the two modeling techniques, we can analyze the lifelog images to find representative images in video scenes using both the pictorial visual features and the individual's context information, and likewise represent the individual's life experiences in some semantic and structured ways for future efficient retrievals and exploitations. The resulting structured lifelog images were evaluated using the vision-based only approach and the proposed technique. Our proposed integrated technique exhibited better results.",
    author = "Katsuhiro Takata and Jianhua Ma and Apduhan, {Bernady O.} and Runhe Huang and Norio Shiratori",
    year = "2008",
    doi = "10.1109/MUE.2008.69",
    language = "English",
    isbn = "0769531342",
    pages = "160--163",
    booktitle = "Proceedings - 2008 International Conference on Multimedia and Ubiquitous Engineering, MUE 2008",

    }
