Perceptual Enhancement for Autonomous Vehicles: Restoring Visually Degraded Images for Context Prediction via Adversarial Training

Feng Ding, Keping Yu, Zonghua Gu, Xiangjun Li, Yunqing Shi

Research output: Article (peer-reviewed)

17 Citations (Scopus)

Abstract

Realizing autonomous vehicles is one of humanity's ultimate dreams. However, perceptual information collected by sensors in dynamic and complicated environments, vision information in particular, may exhibit various types of degradation. This can lead to context mispredictions and, in turn, more severe consequences. It is therefore necessary to improve degraded images before employing them for context prediction. To this end, we propose a generative adversarial network that restores images from common types of degradation. The proposed model features a novel architecture with an inverse and a reverse module to address additional attributes between image styles. With this supplementary information, the decoding for restoration can be more precise. In addition, we develop a loss function that stabilizes adversarial training and improves the training efficiency of the proposed model. Compared with several state-of-the-art methods, the proposed method achieves better restoration performance with high efficiency, and it is highly reliable for assisting context prediction in autonomous vehicles.
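For orientation, the adversarial-training setup referenced in the abstract pairs a generator (here, the restoration network) against a discriminator. The abstract does not specify the paper's stabilized loss, so the sketch below shows only the standard non-saturating GAN objective in NumPy as a generic illustration; the function name `adversarial_losses` and the logit inputs are assumptions for this example, not the authors' formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adversarial_losses(d_real_logits, d_fake_logits):
    """Generic non-saturating GAN losses (binary cross-entropy form).

    d_real_logits: discriminator logits on clean reference images
    d_fake_logits: discriminator logits on restored (generated) images
    """
    eps = 1e-12  # numerical guard for log(0)
    p_real = sigmoid(d_real_logits)
    p_fake = sigmoid(d_fake_logits)
    # Discriminator: classify real as 1, generated as 0
    d_loss = -np.mean(np.log(p_real + eps) + np.log(1.0 - p_fake + eps))
    # Generator (non-saturating): push generated samples toward "real"
    g_loss = -np.mean(np.log(p_fake + eps))
    return d_loss, g_loss

# When the discriminator separates real from fake confidently,
# its loss is small and the generator's loss is large.
d, g = adversarial_losses(np.array([10.0]), np.array([-10.0]))
```

In a restoration setting this adversarial term is typically combined with a pixel-wise or perceptual reconstruction loss; the paper's contribution is a variant designed to stabilize this min-max training.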

Original language: English
Journal: IEEE Transactions on Intelligent Transportation Systems
DOI
Publication status: Accepted/In press - 2021

ASJC Scopus subject areas

  • Automotive Engineering
  • Mechanical Engineering
  • Computer Science Applications
