A Transformer-Based Model for Super-Resolution of Anime Image

Shizhuo Xu*, Vibekananda Dutta, Xin He, Takafumi Matsumaru

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Image super-resolution (ISR) technology aims to enhance resolution and improve image quality. It is widely applied in real-world image-processing applications, especially to medical images, but has seen relatively little use in anime image production. Furthermore, contemporary ISR tools are often based on convolutional neural networks (CNNs), while few methods attempt to use transformers, which perform well in other advanced vision tasks. In this work, we propose an anime image super-resolution (AISR) method based on the Swin Transformer. The work was carried out in several stages. First, shallow feature extraction was employed to obtain a feature map of the input image's low-frequency information, which mainly approximates the spatial distribution of detailed information (shallow feature). Next, we applied deep feature extraction to extract the image's semantic information (deep feature). Finally, the image reconstruction stage combines the shallow and deep features, upsamples the feature maps, and performs sub-pixel convolution to rearrange the many feature-map channels into a higher-resolution output. The novelty of the proposal lies in enhancing the low-frequency information with a Gaussian filter and introducing different window sizes to replace the patch-merging operations in the Swin Transformer. A high-quality anime dataset was constructed to improve the robustness of the model. We trained our model on this dataset and evaluated it on anime image super-resolution tasks at different magnifications (2×, 4×, 8×). The results were compared numerically and graphically with those delivered by conventional CNN-based and transformer-based methods, using the standard peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics.
A series of experiments and an ablation study show that our proposal outperforms the compared methods.
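The sub-pixel convolution step described above rearranges a stack of C·r² low-resolution feature channels into a C-channel output that is r× larger in each spatial dimension; PSNR is the primary metric used for evaluation. As a minimal illustrative sketch (not the authors' implementation; function names and the NumPy formulation are ours), both operations can be written as:

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r^2, H, W) feature map into (C, H*r, W*r).

    Each group of r*r channels is scattered into an r-by-r spatial block,
    the channel-to-space mapping used by sub-pixel convolution.
    """
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (c, dy, dx)
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (c, h, dy, w, dx)
    return x.reshape(c, h * r, w * r)

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)
```

For a 2× task, a reconstruction head would emit 4·C channels per pixel and `pixel_shuffle(features, 2)` would fold them into the upscaled image, which is then scored against the ground truth with `psnr`.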

Original language: English
Article number: 8126
Journal: Sensors
Volume: 22
Issue number: 21
DOIs
Publication status: Published - 2022 Nov

Keywords

  • anime dataset
  • anime image
  • deep feature
  • high-frequency information
  • image reconstruction
  • low-frequency information
  • shallow feature
  • super-resolution
  • Swin Transformer

ASJC Scopus subject areas

  • Analytical Chemistry
  • Information Systems
  • Biochemistry
  • Atomic and Molecular Physics, and Optics
  • Instrumentation
  • Electrical and Electronic Engineering
