TY - GEN
T1 - Selective Multi-Convolutional Region Feature Extraction based Iterative Discrimination CNN for Fine-Grained Vehicle Model Recognition
AU - Tian, Yanling
AU - Zhang, Weitong
AU - Zhang, Qieshi
AU - Lu, Gang
AU - Wu, Xiaojun
PY - 2018/11/26
Y1 - 2018/11/26
N2 - With the rapid rise of computer vision and driverless technology, vehicle model recognition plays an important role in both everyday applications and industry. However, fine-grained vehicle model recognition is often affected by multi-level information, such as image perspective, inter-feature similarity, and vehicle details. Moreover, pivotal region extraction and fine-grained feature learning remain major obstacles to fine-grained vehicle model recognition. In this paper, we propose an iterative discrimination CNN (ID-CNN) based on selective multi-convolutional region (SMCR) feature extraction. The SMCR features, consisting of global and local SMCR features, are extracted from regions of the original image with higher activation response values. In the ID-CNN, the global and local SMCR features are used iteratively to localize deep pivotal features, which are then concatenated into a fully-connected fusion layer to predict the vehicle category. Our method improves accuracy to 91.8% on the Stanford Cars-196 dataset and 96.2% on the CompCars dataset.
AB - With the rapid rise of computer vision and driverless technology, vehicle model recognition plays an important role in both everyday applications and industry. However, fine-grained vehicle model recognition is often affected by multi-level information, such as image perspective, inter-feature similarity, and vehicle details. Moreover, pivotal region extraction and fine-grained feature learning remain major obstacles to fine-grained vehicle model recognition. In this paper, we propose an iterative discrimination CNN (ID-CNN) based on selective multi-convolutional region (SMCR) feature extraction. The SMCR features, consisting of global and local SMCR features, are extracted from regions of the original image with higher activation response values. In the ID-CNN, the global and local SMCR features are used iteratively to localize deep pivotal features, which are then concatenated into a fully-connected fusion layer to predict the vehicle category. Our method improves accuracy to 91.8% on the Stanford Cars-196 dataset and 96.2% on the CompCars dataset.
UR - http://www.scopus.com/inward/record.url?scp=85059065985&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85059065985&partnerID=8YFLogxK
U2 - 10.1109/ICPR.2018.8545375
DO - 10.1109/ICPR.2018.8545375
M3 - Conference contribution
AN - SCOPUS:85059065985
T3 - Proceedings - International Conference on Pattern Recognition
SP - 3279
EP - 3284
BT - 2018 24th International Conference on Pattern Recognition, ICPR 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 24th International Conference on Pattern Recognition, ICPR 2018
Y2 - 20 August 2018 through 24 August 2018
ER -