Image defogging framework using segmentation and the dark channel prior

Sabiha Anan, Mohammad Ibrahim Khan, Mir Md Saki Kowsar, Kaushik Deb, Pranab Kumar Dhar, Takeshi Koshiba

Research output: Contribution to journal › Article › peer-review

Abstract

Foggy images suffer from low contrast, poor visibility, and a loss of the scene's color information. Removing fog from images is therefore an important pre-processing step in computer vision. The Dark Channel Prior (DCP) technique is a promising defogging method because of its excellent restoration results on images that contain no large homogeneous regions. However, when an image contains a large homogeneous region such as the sky, the restored image suffers from color distortion and block effects. To overcome this limitation of the DCP method, we introduce a framework that segments the image into sky and non-sky regions and restores each part separately. The sky and non-sky parts are isolated using a binary mask formulated by the floodfill algorithm. The foggy sky part is restored using Contrast Limited Adaptive Histogram Equalization (CLAHE) and the non-sky part using a modified DCP. The restored parts are then blended to produce the resultant image. The proposed method is evaluated on both synthetic and real-world foggy images against state-of-the-art techniques. Experimental results show that our proposed method achieves higher entropy values than the other stated techniques, produces more natural visual effects, and requires much less processing time.
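The pipeline described in the abstract (segment sky from non-sky, enhance the sky, dehaze the rest with DCP, then blend) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: for self-containment it substitutes a simple brightness/saturation threshold for the paper's floodfill segmentation and global histogram equalization for CLAHE, and it uses a straightforward (unmodified) DCP restoration. All function names and parameter values are hypothetical.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel channel minimum followed by a local minimum filter."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze_dcp(img, omega=0.95, t0=0.1, patch=15):
    """Standard DCP restoration for a float RGB image in [0, 1]."""
    dc = dark_channel(img, patch)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dc.size * 0.001))
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate, clipped below by t0 to avoid over-amplification.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

def sky_mask(img, bright=0.75, sat=0.1):
    """Hypothetical stand-in for the paper's floodfill-based binary mask:
    treat bright, low-saturation pixels as sky."""
    brightness = img.mean(axis=2)
    saturation = img.max(axis=2) - img.min(axis=2)
    return (brightness > bright) & (saturation < sat)

def equalize(channel):
    """Global histogram equalization (stand-in for CLAHE), values in [0, 1]."""
    hist, bins = np.histogram(channel, bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]
    return np.interp(channel, bins[:-1], cdf)

def defog(img):
    """Restore sky and non-sky separately, then blend via the binary mask."""
    mask = sky_mask(img)
    non_sky = dehaze_dcp(img)
    sky = np.stack([equalize(img[..., c]) for c in range(3)], axis=-1)
    return np.where(mask[..., None], sky, non_sky)
```

A real implementation would replace `sky_mask` with floodfill seeded from the top of the image and `equalize` with tiled, clip-limited CLAHE (e.g. OpenCV's `cv2.createCLAHE`), and would typically smooth the mask boundary before blending to avoid seams.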

Original language: English
Article number: 285
Pages (from-to): 1-21
Number of pages: 21
Journal: Entropy
Volume: 23
Issue number: 3
DOIs
Publication status: Published - 2021 Mar

Keywords

  • Dark channel prior
  • Floodfill algorithm
  • Image blending
  • Image enhancement
  • Segmentation

ASJC Scopus subject areas

  • Physics and Astronomy (all)
