Fashion style in 128 floats: Joint ranking and classification using weak data for feature extraction

Edgar Simo-Serra, Hiroshi Ishikawa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

75 Citations (Scopus)

Abstract

We propose a novel approach for learning features from weakly-supervised data by joint ranking and classification. In order to exploit data with weak labels, we jointly train a feature extraction network with a ranking loss and a classification network with a cross-entropy loss. We obtain high-quality compact discriminative features with few parameters, learned on relatively small datasets without additional annotations. This enables us to tackle tasks with specialized images not very similar to the more generic ones in existing fully-supervised datasets. We show that the resulting features in combination with a linear classifier surpass the state-of-the-art on the Hipster Wars dataset despite using features only 0.3% of the size. Our proposed features significantly outperform those obtained from networks trained on ImageNet, despite being 32 times smaller (128 single-precision floats), trained on noisy and weakly-labeled data, and using only 1.5% of the number of parameters.
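
Below is a minimal sketch of the joint training objective described in the abstract, assuming a PyTorch-style setup: a shared feature extraction network produces the compact 128-dimensional descriptor, a ranking (triplet) loss is applied to the descriptor, and a small classification head is trained with cross-entropy on weak labels. The backbone layers, margin, number of weak classes, and loss weighting are illustrative assumptions, not the authors' exact architecture.

# Sketch of joint ranking + classification training on a shared feature extractor.
# Layer sizes, margin, class count, and loss weight are assumptions for illustration.
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Maps an image to a compact 128-dimensional style descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(          # placeholder convolutional trunk
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.backbone(x)

feature_net = FeatureNet()
classifier = nn.Linear(128, 14)                 # 14 weak style labels (assumed)
rank_loss = nn.TripletMarginLoss(margin=0.2)    # ranking loss on the embedding
cls_loss = nn.CrossEntropyLoss()                # classification loss on weak labels

def joint_loss(anchor, positive, negative, labels, alpha=1.0):
    """Ranking loss over (anchor, positive, negative) embeddings plus
    cross-entropy on the anchor's weak label, combined with weight alpha."""
    f_a = feature_net(anchor)
    f_p = feature_net(positive)
    f_n = feature_net(negative)
    ranking = rank_loss(f_a, f_p, f_n)
    classification = cls_loss(classifier(f_a), labels)
    return ranking + alpha * classification

Because both losses back-propagate through the same feature extractor, the weak classification signal and the ranking constraints jointly shape the 128-float embedding, which is the core idea the abstract describes.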

Original language: English
Title of host publication: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016
Publisher: IEEE Computer Society
Pages: 298-307
Number of pages: 10
Volume: 2016-January
ISBN (Electronic): 9781467388511
Publication status: Published - 2016
Event: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016 - Las Vegas, United States
Duration: 2016 Jun 26 – 2016 Jul 1

Other

Other: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016
Country: United States
City: Las Vegas
Period: 16/6/26 – 16/7/1

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition

Cite this

Simo-Serra, E., & Ishikawa, H. (2016). Fashion style in 128 floats: Joint ranking and classification using weak data for feature extraction. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016 (Vol. 2016-January, pp. 298-307). IEEE Computer Society.