Joint equal contribution of global and local features for image annotation

Supheakmungkol Sarin, Wataru Kameyama

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    2 Citations (Scopus)

    Abstract

    Image annotation has become a critical task as the number of photographs grows rapidly. This paper describes our participation in the ImageCLEF Large Scale Visual Concept Detection and Annotation Task 2009. We present the method used for our best run. Our approach is inspired by a recently proposed method in which the joint equal contribution (JEC) of simple global color and texture features can outperform state-of-the-art annotation techniques [10]. Our idea is that if such simple features perform so well, then a combination of higher-level features should do even better. Studies have shown that the concurrent use of saliency and the gist of the scene is a major trait of the human visual system. Therefore, in this preliminary study, we explore combinations of visual features at the global, local, and scene levels, including global and local color, texture, and the gist of the scene. The experiments confirm that higher-level features lead to better performance. Through the experiments, we also found that using 40 nearest neighbors with HSV, HSV (at saliency regions), HAAR, GIST (full scene), and GIST (scene at the center) as features produces the best result. We finally identify the weaknesses of our approach and ways in which the system could be optimized and improved.
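    The JEC scheme the abstract refers to can be sketched roughly as follows: scale each feature's distances to a common range so no single feature dominates, average them with equal weight, then transfer labels by voting among the k nearest training images. This is a minimal illustration of the general technique, not the authors' code; all function and variable names here are hypothetical.

    ```python
    def jec_distances(feature_dists):
        # feature_dists: one distance list per feature type (e.g. HSV,
        # HAAR, GIST), each giving the query's distance to every training
        # image. Min-max-scale each list to [0, 1], then average with
        # equal weight: the "joint equal contribution".
        n = len(feature_dists[0])
        combined = [0.0] * n
        for d in feature_dists:
            lo, hi = min(d), max(d)
            rng = (hi - lo) or 1.0
            for j in range(n):
                combined[j] += (d[j] - lo) / rng
        return [c / len(feature_dists) for c in combined]

    def transfer_labels(combined, train_labels, k=40, top=5):
        # Rank training images by combined distance and vote labels
        # from the k nearest neighbors (the paper's best run used k=40).
        order = sorted(range(len(combined)), key=combined.__getitem__)[:k]
        votes = {}
        for i in order:
            for lab in train_labels[i]:
                votes[lab] = votes.get(lab, 0) + 1
        return sorted(votes, key=votes.get, reverse=True)[:top]
    ```

    In this sketch, adding a feature only requires appending another distance list to `feature_dists`; the equal weighting means no per-feature tuning is needed.
    
    
    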

    Original language: English
    Title of host publication: CEUR Workshop Proceedings
    Publisher: CEUR-WS
    Volume: 1175
    Publication status: Published - 2009
    Event: 2009 Working Notes for CLEF Workshop, CLEF 2009 - Co-located with the 13th European Conference on Digital Libraries, ECDL 2009 - Corfu, Greece
    Duration: 2009 Sep 30 - 2009 Oct 2



    Keywords

    • Automatic image annotation
    • Color
    • Gist of scene
    • Joint equal contribution
    • K nearest neighbors
    • Saliency
    • Texture

    ASJC Scopus subject areas

    • Computer Science (all)

    Cite this

    Sarin, S., & Kameyama, W. (2009). Joint equal contribution of global and local features for image annotation. In CEUR Workshop Proceedings (Vol. 1175). CEUR-WS.


    Scopus record: http://www.scopus.com/inward/record.url?scp=84922051553&partnerID=8YFLogxK
    Cited by (Scopus): http://www.scopus.com/inward/citedby.url?scp=84922051553&partnerID=8YFLogxK