Fully utilized and low design effort architecture for H.264/AVC intra predictor generation

Yiqing Huang*, Qin Liu, Takeshi Ikenaga

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Fully exploiting the spatial features of an image makes the H.264/AVC standard superior in its intra prediction part. However, when hardware is considered, full support of all intra modes incurs high design effort, especially for large image sizes. In this paper, we propose a low-design-effort solution for intra predictor generation, the most significant part of the intra engine. First, a parallel processing flow is presented, which achieves a 37.5% reduction in processing time. Second, a fully utilized predictor generation architecture is presented, which saves 77.5% of the cycles of the original design. With 30.11k gates at 200 MHz, our design can support full-mode intra prediction for real-time processing of 4k×2k@60fps video.
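The paper's hardware architecture is not reproduced in this record, but as background on what "intra predictor generation" computes, a minimal software sketch of two standard H.264 4×4 luma intra modes (vertical and DC, as defined in the H.264/AVC specification) might look like the following; function names and the list-based block representation are illustrative, not from the paper:

```python
def intra4x4_vertical(top):
    """Vertical mode: every row copies the 4 reconstructed pixels above the block."""
    return [list(top) for _ in range(4)]

def intra4x4_dc(top, left):
    """DC mode: every pixel is the rounded mean of the 8 neighbouring pixels
    (4 above + 4 to the left), when both neighbours are available."""
    dc = (sum(top) + sum(left) + 4) >> 3  # rounding average per the standard
    return [[dc] * 4 for _ in range(4)]

# Example: predict a 4x4 block from its reconstructed neighbours.
top = [100, 102, 104, 106]
left = [98, 99, 101, 103]
pred_v = intra4x4_vertical(top)
pred_dc = intra4x4_dc(top, left)
```

A full-mode engine evaluates nine such 4×4 modes (plus 16×16 and chroma modes) per block, which is why generating all predictors in parallel with high hardware utilization, as the paper proposes, matters for large frame sizes.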

Original language: English
Title of host publication: Advances in Multimedia Modeling - 16th International Multimedia Modeling Conference, MMM 2010, Proceedings
Pages: 737-742
Number of pages: 6
DOIs
Publication status: Published - 2009 Dec 1
Event: 16th International Multimedia Modeling Conference on Advances in Multimedia Modeling, MMM 2010 - Chongqing, China
Duration: 2010 Oct 6 - 2010 Oct 8

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 5916 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 16th International Multimedia Modeling Conference on Advances in Multimedia Modeling, MMM 2010
Country/Territory: China
City: Chongqing
Period: 10/10/6 - 10/10/8

Keywords

  • H.264/AVC
  • Hardware Architecture
  • Intra Prediction

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
