Robust Semantic Segmentation for Street Fashion Photos

Anh H. Dang*, Wataru Kameyama

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we aim to achieve state-of-the-art semantic segmentation of street fashion photos with three contributions. Firstly, we propose a high-performance semantic segmentation network that follows the encoder-decoder structure. Secondly, we propose a guided training process that uses multiple auxiliary losses. Thirdly, we propose a 2D max-pooling-based scaling operation that produces the segmentation feature maps used in the aforementioned guided training process. We also propose the mIoU+ metric, which takes noise into account for better evaluation. Evaluations on the ModaNet data set show that the proposed network achieves high benchmark results at a lower computational cost than previously proposed methods.
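The abstract does not detail the max-pooling-based scaling operation, but a minimal sketch of how 2D max pooling could downscale ground-truth label maps into coarser targets for auxiliary losses might look as follows. This is an illustrative assumption in PyTorch, not the paper's implementation: the function name `pool_labels`, the class count, and the scale factors are all hypothetical.

```python
import torch
import torch.nn.functional as F

def pool_labels(labels: torch.Tensor, num_classes: int, scale: int) -> torch.Tensor:
    """Downscale an integer label map (N, H, W) by max pooling one-hot channels.

    Hypothetical sketch: the paper's exact operator and tie-breaking rules
    are not specified in this abstract.
    """
    # One-hot encode (N, H, W) -> (N, C, H, W) so pooling acts per class.
    one_hot = F.one_hot(labels.long(), num_classes).permute(0, 3, 1, 2).float()
    # Max pooling keeps any class present in a window present in the output,
    # unlike nearest-neighbour subsampling, which can drop thin structures.
    pooled = F.max_pool2d(one_hot, kernel_size=scale, stride=scale)
    # Collapse back to an index map; argmax breaks ties toward lower class ids.
    return pooled.argmax(dim=1)

# Hypothetical usage: coarser targets for auxiliary loss heads at 1/2, 1/4, 1/8.
labels = torch.randint(0, 14, (2, 256, 256))  # assuming 13 fashion classes + background
aux_targets = [pool_labels(labels, num_classes=14, scale=s) for s in (2, 4, 8)]
```

Max pooling over one-hot channels keeps a class that appears anywhere in a window visible at the coarser scale, which plausibly suits targets for guided training, since plain subsampling can erase thin items such as belts or sunglasses.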

Original language: English
Title of host publication: 22nd International Conference on Advanced Communications Technology
Subtitle of host publication: Digital Security Global Agenda for Safe Society, ICACT 2020 - Proceeding
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1248-1257
Number of pages: 10
ISBN (Electronic): 9791188428045
DOIs
Publication status: Published - 2020 Feb
Event: 22nd International Conference on Advanced Communications Technology, ICACT 2020 - Pyeongchang, Korea, Republic of
Duration: 2020 Feb 16 - 2020 Feb 19

Publication series

Name: International Conference on Advanced Communication Technology, ICACT
Volume: 2020
ISSN (Print): 1738-9445

Conference

Conference: 22nd International Conference on Advanced Communications Technology, ICACT 2020
Country/Territory: Korea, Republic of
City: Pyeongchang
Period: 20/2/16 - 20/2/19

Keywords

  • label pooling
  • mIoU+
  • semantic segmentation
  • street fashion photos

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
