Text Image Super Resolution Using Deep Attention Neural Network

Yun Liu, Remina Yano, Hiroshi Watanabe, Takuya Suzuki, Takeshi Chujoh, Tomohiro Ikai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we propose a super-resolution (SR) method for text images to improve the accuracy of optical character recognition (OCR). OCR accuracy is closely tied to image resolution, and when OCR is applied to low-resolution text images, satisfactory results are often not obtained. In the proposed method, we extract more representative feature information from text images by combining channel and spatial attention. Furthermore, we propose a new loss function called 'edge loss'. Experimental results show that the recognition accuracy on text images super-resolved by our method is 5.87% higher than on the original low-resolution images, and also exceeds the results of bicubic interpolation and the baseline model.
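The abstract does not define the 'edge loss'. As one plausible interpretation, a minimal sketch is given below: the loss compares Sobel edge maps of the super-resolved and ground-truth high-resolution images, penalizing blurred character strokes. The function names `sobel_edges` and `edge_loss` are hypothetical, not from the paper.

```python
import numpy as np

def sobel_edges(img):
    """Compute an edge-magnitude map via Sobel filtering (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)  # horizontal gradient
            gy[i, j] = np.sum(patch * ky)  # vertical gradient
    return np.sqrt(gx ** 2 + gy ** 2)

def edge_loss(sr, hr):
    """Mean absolute difference between the edge maps of the
    super-resolved (sr) and high-resolution ground-truth (hr) images."""
    return np.mean(np.abs(sobel_edges(sr) - sobel_edges(hr)))
```

In training, such a term would typically be added to a pixel-wise loss (e.g. L1) with a weighting coefficient, so that sharp character edges are rewarded alongside overall fidelity.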

Original language: English
Title of host publication: 2021 IEEE 10th Global Conference on Consumer Electronics, GCCE 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 280-282
Number of pages: 3
ISBN (Electronic): 9781665436762
DOIs
Publication status: Published - 2021
Event: 10th IEEE Global Conference on Consumer Electronics, GCCE 2021 - Kyoto, Japan
Duration: 2021 Oct 12 - 2021 Oct 15

Publication series

Name: 2021 IEEE 10th Global Conference on Consumer Electronics, GCCE 2021

Conference

Conference: 10th IEEE Global Conference on Consumer Electronics, GCCE 2021
Country/Territory: Japan
City: Kyoto
Period: 21/10/12 - 21/10/15

Keywords

  • Optical Character Recognition (OCR)
  • attention
  • convolutional neural network
  • super resolution
  • text image

ASJC Scopus subject areas

  • Computer Science Applications
  • Signal Processing
  • Biomedical Engineering
  • Electrical and Electronic Engineering
  • Media Technology
  • Instrumentation
