Quad-multiplier packing based on customized floating point for convolutional neural networks on FPGA

Zhifeng Zhang, Dajiang Zhou, Shihao Wang, Shinji Kimura

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deep convolutional neural networks (CNNs) are widely used in many computer vision tasks. Since CNNs involve billions of computations, it is critical to reduce resource/power consumption and improve parallelism. Compared with the extensive research on fixed-point conversion for cost reduction, floating-point customization has received little attention due to its higher cost than fixed point. This paper explores customized floating point for both the training and inference of CNNs. A 9-bit customized floating-point format is found sufficient for the training of ResNet-20 on the CIFAR-10 dataset with less than 1% accuracy loss, and can also be applied to the inference of CNNs. With the reduced bit-width, a computational unit (CU) based on Quad-Multiplier Packing is proposed to improve the resource efficiency of CNNs on FPGA. This design saves 87.5% of DSP slices and 62.5% of LUTs on a Xilinx Kintex-7 platform compared to a CU using 32-bit floating point. More CUs can therefore be placed on the FPGA, and higher throughput can be expected accordingly.
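The key trick named in the abstract, quad-multiplier packing, relies on the fact that several small products can share one wide hardware multiplier when the operands are packed into disjoint bit fields. The sketch below illustrates the arithmetic in Python for 4-bit unsigned operands; the field widths and offsets here are illustrative assumptions chosen so the fields cannot overlap, not the paper's actual parameters (which are not given in this record).

```python
def quad_mul_packed(a, b, c, d):
    """Compute a*c, a*d, b*c, b*d with a single wide multiplication.

    a, b, c, d are 4-bit unsigned values, so each product fits in
    8 bits. Packing the operands as (a << 16 | b) and (c << 8 | d)
    places the four partial products in disjoint 8-bit fields of the
    32-bit result, so no carries cross between them:

        p = a*c << 24 | a*d << 16 | b*c << 8 | b*d
    """
    assert all(0 <= x < 16 for x in (a, b, c, d))
    p = ((a << 16) | b) * ((c << 8) | d)   # one wide multiply
    return (p >> 24) & 0xFF, (p >> 16) & 0xFF, (p >> 8) & 0xFF, p & 0xFF

# Example: all four 4-bit products recovered from one multiplication.
print(quad_mul_packed(15, 3, 15, 5))  # -> (225, 75, 45, 15)
```

In this sketch the packed operands are 20 and 12 bits wide, which would fit within the 25×18-bit multiplier of a Kintex-7 DSP48 slice; presumably the exponent and sign handling of the customized floating-point format is done in surrounding LUT logic, which the abstract's DSP/LUT savings figures suggest.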

Original language: English
Title of host publication: ASP-DAC 2018 - 23rd Asia and South Pacific Design Automation Conference, Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 184-189
Number of pages: 6
Volume: 2018-January
ISBN (Electronic): 9781509006021
DOIs: 10.1109/ASPDAC.2018.8297303
Publication status: Published - 2018 Feb 20
Event: 23rd Asia and South Pacific Design Automation Conference, ASP-DAC 2018 - Jeju, Korea, Republic of
Duration: 2018 Jan 22 - 2018 Jan 25


Fingerprint

  • Field programmable gate arrays (FPGA)
  • Neural networks
  • Cost reduction
  • Computer vision
  • Electric power utilization
  • Throughput
  • Costs

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Computer Science Applications
  • Computer Graphics and Computer-Aided Design

Cite this

Zhang, Z., Zhou, D., Wang, S., & Kimura, S. (2018). Quad-multiplier packing based on customized floating point for convolutional neural networks on FPGA. In ASP-DAC 2018 - 23rd Asia and South Pacific Design Automation Conference, Proceedings (Vol. 2018-January, pp. 184-189). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ASPDAC.2018.8297303

@inproceedings{5e2e6b4cde3c4caebb65a91abe189abb,
title = "Quad-multiplier packing based on customized floating point for convolutional neural networks on FPGA",
abstract = "Deep convolutional neural networks (CNNs) are widely used in many computer vision tasks. Since CNNs involve billions of computations, it is critical to reduce resource/power consumption and improve parallelism. Compared with the extensive research on fixed-point conversion for cost reduction, floating-point customization has received little attention due to its higher cost than fixed point. This paper explores customized floating point for both the training and inference of CNNs. A 9-bit customized floating-point format is found sufficient for the training of ResNet-20 on the CIFAR-10 dataset with less than 1{\%} accuracy loss, and can also be applied to the inference of CNNs. With the reduced bit-width, a computational unit (CU) based on Quad-Multiplier Packing is proposed to improve the resource efficiency of CNNs on FPGA. This design saves 87.5{\%} of DSP slices and 62.5{\%} of LUTs on a Xilinx Kintex-7 platform compared to a CU using 32-bit floating point. More CUs can therefore be placed on the FPGA, and higher throughput can be expected accordingly.",
author = "Zhifeng Zhang and Dajiang Zhou and Shihao Wang and Shinji Kimura",
year = "2018",
month = "2",
day = "20",
doi = "10.1109/ASPDAC.2018.8297303",
language = "English",
volume = "2018-January",
isbn = "9781509006021",
pages = "184--189",
booktitle = "ASP-DAC 2018 - 23rd Asia and South Pacific Design Automation Conference, Proceedings",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

TY - GEN

T1 - Quad-multiplier packing based on customized floating point for convolutional neural networks on FPGA

AU - Zhang, Zhifeng

AU - Zhou, Dajiang

AU - Wang, Shihao

AU - Kimura, Shinji

PY - 2018/2/20

Y1 - 2018/2/20

N2 - Deep convolutional neural networks (CNNs) are widely used in many computer vision tasks. Since CNNs involve billions of computations, it is critical to reduce resource/power consumption and improve parallelism. Compared with the extensive research on fixed-point conversion for cost reduction, floating-point customization has received little attention due to its higher cost than fixed point. This paper explores customized floating point for both the training and inference of CNNs. A 9-bit customized floating-point format is found sufficient for the training of ResNet-20 on the CIFAR-10 dataset with less than 1% accuracy loss, and can also be applied to the inference of CNNs. With the reduced bit-width, a computational unit (CU) based on Quad-Multiplier Packing is proposed to improve the resource efficiency of CNNs on FPGA. This design saves 87.5% of DSP slices and 62.5% of LUTs on a Xilinx Kintex-7 platform compared to a CU using 32-bit floating point. More CUs can therefore be placed on the FPGA, and higher throughput can be expected accordingly.

AB - Deep convolutional neural networks (CNNs) are widely used in many computer vision tasks. Since CNNs involve billions of computations, it is critical to reduce resource/power consumption and improve parallelism. Compared with the extensive research on fixed-point conversion for cost reduction, floating-point customization has received little attention due to its higher cost than fixed point. This paper explores customized floating point for both the training and inference of CNNs. A 9-bit customized floating-point format is found sufficient for the training of ResNet-20 on the CIFAR-10 dataset with less than 1% accuracy loss, and can also be applied to the inference of CNNs. With the reduced bit-width, a computational unit (CU) based on Quad-Multiplier Packing is proposed to improve the resource efficiency of CNNs on FPGA. This design saves 87.5% of DSP slices and 62.5% of LUTs on a Xilinx Kintex-7 platform compared to a CU using 32-bit floating point. More CUs can therefore be placed on the FPGA, and higher throughput can be expected accordingly.

UR - http://www.scopus.com/inward/record.url?scp=85045323738&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85045323738&partnerID=8YFLogxK

U2 - 10.1109/ASPDAC.2018.8297303

DO - 10.1109/ASPDAC.2018.8297303

M3 - Conference contribution

VL - 2018-January

SP - 184

EP - 189

BT - ASP-DAC 2018 - 23rd Asia and South Pacific Design Automation Conference, Proceedings

PB - Institute of Electrical and Electronics Engineers Inc.

ER -