Learned Image Compression with Fixed-point Arithmetic

Research output: Conference contribution

Abstract

Learned image compression (LIC) has achieved coding performance superior to traditional image compression standards such as HEVC intra in terms of both PSNR and MS-SSIM. However, most LIC frameworks are based on floating-point arithmetic, which has two potential problems. First, traditional 32-bit floating-point incurs large memory and computational costs. Second, decoding may fail because of floating-point errors arising from different encoding/decoding platforms. To solve these two problems: 1) we linearly quantize the weights in the main path to 8-bit fixed-point arithmetic and propose a fine-tuning scheme to reduce the coding loss caused by quantization; the analysis transform and synthesis transform are fine-tuned layer by layer; 2) we exploit a look-up table (LUT) for the cumulative distribution function (CDF) to avoid floating-point errors. When a latent node follows a non-zero-mean Gaussian distribution, we restrict its value to a fixed range around the mean so that the CDF LUT can be shared across different mean values. As a result, 8-bit weight quantization achieves negligible coding loss compared with the 32-bit floating-point anchor. In addition, the proposed CDF LUT ensures correct coding on various CPU and GPU hardware platforms.
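As a rough illustration of the first technique, linear 8-bit weight quantization could be sketched as follows. The per-layer symmetric scale and the function names are assumptions for illustration; the paper's exact quantizer parameters and layer-by-layer fine-tuning procedure are not reproduced here.

```python
import numpy as np

def quantize_weights_int8(w):
    """Linearly quantize a float32 weight tensor to 8-bit fixed point.

    Per-layer symmetric scheme (an illustrative choice): the largest
    weight magnitude maps to the int8 extreme 127.
    """
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize_weights(q, scale):
    """Recover an approximate float32 tensor, e.g. for fine-tuning
    the remaining layers against the quantized ones."""
    return q.astype(np.float32) * scale

# Illustrative convolution-weight tensor (shape is hypothetical).
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(3, 3, 64, 64)).astype(np.float32)
q, scale = quantize_weights_int8(w)
w_hat = dequantize_weights(q, scale)
# Rounding error is bounded by half a quantization step.
err = float(np.max(np.abs(w - w_hat)))
assert err <= scale / 2 + 1e-7
```

The per-layer scale keeps the quantization step small for layers with small weight magnitudes, which is one reason 8-bit weights can retain near-floating-point coding performance.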
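The second technique, sharing one integer CDF table across different Gaussian means by clipping each latent to a window around its mean, might look roughly like this. The window size, 16-bit table precision, and function names are illustrative assumptions, not the paper's exact configuration.

```python
import math
import numpy as np

def build_cdf_lut(sigma, half_range=16, precision=16):
    """Integer CDF table for a zero-mean Gaussian over the clipped
    symbol window [-half_range, half_range]. Because every latent is
    first shifted by its mean and clipped to this window, one table
    per sigma serves all mean values, and the entropy coder never
    evaluates a floating-point CDF at decode time."""
    def gauss_cdf(x):
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))
    # CDF at the half-integer boundaries between the 2*half_range+1 symbols.
    bounds = np.arange(-half_range, half_range + 2) - 0.5
    cdf = np.array([gauss_cdf(b) for b in bounds])
    # Renormalize the clipped window to [0, 1], then scale to integers.
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
    return np.round(cdf * (1 << precision)).astype(np.int64)

def symbol_interval(latent, mean, lut, half_range=16):
    """Clip the mean-shifted latent to the window and return its
    integer cumulative-frequency interval [low, high); encoder and
    decoder use only these integers, so they agree on any platform."""
    s = int(np.clip(round(latent - mean), -half_range, half_range))
    return int(lut[s + half_range]), int(lut[s + half_range + 1])

lut = build_cdf_lut(sigma=1.0)
low, high = symbol_interval(5.2, mean=5.0, lut=lut)
```

A production table would additionally have to guarantee a strictly positive interval width for every symbol (far-tail entries can round to equal integers); this sketch omits that safeguard.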

Original language: English
Title of host publication: 2021 Picture Coding Symposium, PCS 2021 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665425452
DOI
Publication status: Published - Jun 2021
Event: 35th Picture Coding Symposium, PCS 2021 - Virtual, Online
Duration: 29 Jun 2021 - 2 Jul 2021

Publication series

Name: 2021 Picture Coding Symposium, PCS 2021 - Proceedings

Conference

Conference: 35th Picture Coding Symposium, PCS 2021
City: Virtual, Online
Period: 21/6/29 - 21/7/2

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
