TY - GEN
T1 - Optimizing CABAC architecture with prediction based context model prefetching
AU - Fu, Chen
AU - Sun, Heming
AU - Xu, Jiayao
AU - Zhang, Zhiqiang
AU - Zhou, Jinjia
N1 - Funding Information:
This work was supported through the activities of VDEC, The University of Tokyo, in collaboration with NIHON SYNOPSYS G.K., and by the Japan Society for the Promotion of Science (JSPS) under Grant 21K17770.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Context Adaptive Binary Arithmetic Coding (CABAC) is the entropy coding module widely used in recent video coding standards such as HEVC/H.265 and VVC/H.266. CABAC is a well-known throughput bottleneck due to its strong data dependencies: because the context model required by the current bin often depends on the result of the previous bin, the context model cannot be prefetched early enough, which causes pipeline stalls. To solve this problem, we propose a prediction-based context model prefetching strategy. If the prediction is correct, pipeline stalls are eliminated; if it is wrong, the number of stall cycles does not increase. Moreover, the data interaction between CABAC modules and the multi-stage pipeline structure are optimized to maximize the working frequency. The proposed pipeline architecture reduces pipeline stalls and saves up to 45.66% of encoding time. The results show that the gains are most significant in the All Intra (AI) configuration under low-QP test conditions, exceeding those of the Random Access (RA) and Low Delay (LD) configurations. The peak hardware efficiency (Mbins/s per k gates) is higher than that of existing advanced pipeline architectures.
AB - Context Adaptive Binary Arithmetic Coding (CABAC) is the entropy coding module widely used in recent video coding standards such as HEVC/H.265 and VVC/H.266. CABAC is a well-known throughput bottleneck due to its strong data dependencies: because the context model required by the current bin often depends on the result of the previous bin, the context model cannot be prefetched early enough, which causes pipeline stalls. To solve this problem, we propose a prediction-based context model prefetching strategy. If the prediction is correct, pipeline stalls are eliminated; if it is wrong, the number of stall cycles does not increase. Moreover, the data interaction between CABAC modules and the multi-stage pipeline structure are optimized to maximize the working frequency. The proposed pipeline architecture reduces pipeline stalls and saves up to 45.66% of encoding time. The results show that the gains are most significant in the All Intra (AI) configuration under low-QP test conditions, exceeding those of the Random Access (RA) and Low Delay (LD) configurations. The peak hardware efficiency (Mbins/s per k gates) is higher than that of existing advanced pipeline architectures.
KW - CABAC
KW - Entropy coding
KW - Hardware Design
KW - HEVC
KW - Video Encoder
UR - http://www.scopus.com/inward/record.url?scp=85143609598&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85143609598&partnerID=8YFLogxK
U2 - 10.1109/MMSP55362.2022.9949499
DO - 10.1109/MMSP55362.2022.9949499
M3 - Conference contribution
AN - SCOPUS:85143609598
T3 - 2022 IEEE 24th International Workshop on Multimedia Signal Processing, MMSP 2022
BT - 2022 IEEE 24th International Workshop on Multimedia Signal Processing, MMSP 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 24th IEEE International Workshop on Multimedia Signal Processing, MMSP 2022
Y2 - 26 September 2022 through 28 September 2022
ER -