Multi-label image recognition has attracted considerable research attention and achieved great success in recent years. Capturing label correlations is an effective way to advance multi-label image recognition. Two types of label correlations have principally been studied: spatial correlations and semantic correlations. In the literature, however, previous methods considered only one of the two. In this work, inspired by the great success of the Transformer, we propose a plug-and-play module, named Spatial and Semantic Transformers (SST), to simultaneously capture spatial and semantic correlations in multi-label images. Our proposal comprises two independent transformers that capture the spatial and semantic correlations, respectively. Specifically, the Spatial Transformer models the correlations between features at different spatial positions, while the Semantic Transformer captures the co-existence of labels without manually defined rules. Beyond these methodological contributions, we also show that spatial and semantic correlations complement each other and deserve to be leveraged simultaneously in multi-label image recognition. Benefiting from the Transformer's ability to capture long-range correlations, our method remarkably outperforms state-of-the-art methods on four popular multi-label benchmark datasets. In addition, extensive ablation studies and visualizations validate the essential components of our method.
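The core operation both transformers rely on is self-attention, applied in one case over spatial feature positions and in the other over label embeddings. The following is a minimal illustrative sketch of scaled dot-product self-attention, not the authors' implementation; the query/key/value projection matrices are assumed to be identity for brevity, and the input vectors are made up for the example.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(tokens):
    """tokens: list of d-dim vectors (e.g., spatial positions of a feature
    map for the Spatial Transformer, or label embeddings for the Semantic
    Transformer). Returns attention-weighted combinations, i.e., each output
    vector mixes in every other token according to pairwise similarity.
    The learned W_q, W_k, W_v projections are omitted (identity) here."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        # Scaled dot-product scores of this query against all keys.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        w = softmax(scores)
        # Convex combination of value vectors.
        out.append([sum(wj * tokens[j][i] for j, wj in enumerate(w))
                    for i in range(d)])
    return out

# Hypothetical "spatial" view: 4 feature-map positions with 3-dim features.
spatial = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
           [1.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
attended = self_attention(spatial)
print(len(attended), len(attended[0]))  # prints: 4 3
```

Because the attention weights are a softmax, each output vector is a convex combination of the inputs, which is how long-range pairwise correlations (between distant spatial positions, or between co-occurring labels) enter every position's representation.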
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design