A Memory Hierarchy Model Based on Data Reuse for Full-Search Motion Estimation on High-Definition Digital Videos

DOI: 10.1155/2012/473725


Abstract:

Motion estimation is the most complex module in a video encoder, requiring a high processing throughput and a high memory bandwidth, especially when the focus is on high-definition videos. The throughput problem can be solved by increasing the parallelism of the internal operations, while the external memory bandwidth can be reduced by using a memory hierarchy. This work presents a memory hierarchy model for a full-search motion estimation core. The proposed model is based on a data reuse scheme that exploits the features of the full-search algorithm. The proposed memory hierarchy significantly reduces the external memory bandwidth required by the motion estimation process, and it provides the very high data throughput that the ME core needs to reach real-time performance when processing high-definition videos. Considering the worst-case bandwidth scenario, this memory hierarchy reduces the external memory bandwidth by a factor of 578. A case study of the proposed hierarchy, using a fixed search window and block size, was implemented and prototyped on a Virtex 4 FPGA. The results show that it is possible to reach 38 frames per second when processing full HD frames (1920×1080 pixels) using nearly 299 Mbytes per second of external memory bandwidth.

1. Introduction

Nowadays, several electronic devices support high-definition digital videos. Applications such as internet streaming and digital television broadcasting also massively use this kind of media. In this scenario, video coding becomes essential to make possible the storage and, principally, the transmission of these videos, mainly when the focus is on high definition. The most recent and advanced video coding standard is H.264/AVC (Advanced Video Coding) [1]. This standard includes highly complex modules, aiming to achieve high compression rates. This high complexity makes it difficult to achieve real time (e.g., 30 frames per second) through software implementations, especially when high-definition videos are considered. A digital video is a sequence of still images, called frames, typically sampled at a rate of 30 frames per second. In a video sequence there is a considerable amount of redundant elements, such as background scenes or objects that do not move from one frame to another, which are not really essential for the construction of new images. These elements are usually called redundant information [2]. There are three types of redundancy: spatial redundancy (similarity within homogeneous texture areas), temporal redundancy (similarity between sequential frames), and entropic redundancy.
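As a hypothetical illustration of the algorithm the core implements (a sketch, not the authors' hardware design), the C code below performs full-search block matching with the sum of absolute differences (SAD) criterion and copies the candidate region of the reference frame into a local search-window buffer, so that each reference pixel is fetched from "external memory" only once per block. The block size, search range, and frame dimensions are illustrative assumptions, not the paper's exact configuration.

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative parameters (assumptions, not the paper's configuration). */
#define BLK    16      /* block size: 16x16 samples                    */
#define RANGE  32      /* search range: +/-RANGE samples in x and y   */
#define W      1920    /* full HD frame width                          */
#define H      1080    /* full HD frame height                         */

/* SAD between the current block and one candidate block taken from the
 * on-chip search-window buffer.                                        */
static uint32_t sad(const uint8_t *cur, int cur_stride,
                    const uint8_t *win, int win_stride)
{
    uint32_t acc = 0;
    for (int y = 0; y < BLK; y++)
        for (int x = 0; x < BLK; x++)
            acc += abs((int)cur[y * cur_stride + x] -
                       (int)win[y * win_stride + x]);
    return acc;
}

/* Full-search motion estimation for the block at (bx, by). The search
 * window is transferred once from the reference frame ("external memory")
 * into a local buffer; every candidate SAD then reuses that local copy. */
static void full_search(const uint8_t *cur_frame, const uint8_t *ref_frame,
                        int bx, int by, int *best_mx, int *best_my)
{
    static uint8_t window[(2 * RANGE + BLK) * (2 * RANGE + BLK)];
    const int wdim = 2 * RANGE + BLK;

    /* Single transfer of the search window, clamped to the frame borders. */
    for (int y = 0; y < wdim; y++) {
        int sy = by + y - RANGE;
        if (sy < 0) sy = 0;
        if (sy > H - 1) sy = H - 1;
        for (int x = 0; x < wdim; x++) {
            int sx = bx + x - RANGE;
            if (sx < 0) sx = 0;
            if (sx > W - 1) sx = W - 1;
            window[y * wdim + x] = ref_frame[sy * W + sx];
        }
    }

    /* Exhaustive evaluation of every candidate vector inside the window. */
    uint32_t best = UINT32_MAX;
    for (int my = 0; my <= 2 * RANGE; my++) {
        for (int mx = 0; mx <= 2 * RANGE; mx++) {
            uint32_t cost = sad(&cur_frame[by * W + bx], W,
                                &window[my * wdim + mx], wdim);
            if (cost < best) {
                best = cost;
                *best_mx = mx - RANGE;
                *best_my = my - RANGE;
            }
        }
    }
}
```

Without the local buffer, every one of the (2·RANGE+1)² candidates would re-read its BLK×BLK reference pixels from external memory; with it, the window is transferred only once per block. The data reuse scheme proposed in the paper builds on this kind of reuse, also exploiting the overlap between the search windows of neighbouring blocks, to reach the worst-case bandwidth reduction reported above.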

References

[1]  Joint Video Team (JVT), Draft ITU-T Recommendation and Final Draft International Standard of Joint Video Specification, ITU-T Rec. H.264 and ISO/IEC 14496-10 AVC, May 2003.
[2]  A. Bovik, Handbook of Image and Video Processing, Academic Press, 2000.
[3]  J. C. Tuan, T. S. Chang, and C. W. Jen, “On the data reuse and memory bandwidth analysis for full-search block-matching VLSI architecture,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, no. 1, pp. 61–72, 2002.
[4]  Y. Q. Shi and H. Sun, Image and Video Compression for Multimedia Engineering, CRC Press, 2nd edition, 2008.
[5]  P. A. Kuhn, Complexity Analysis and VLSI Architectures for MPEG-4 Motion Estimation, Kluwer Academic Publishers, Boston, Mass, USA, 1999.
[6]  I. Richardson, H.264 and MPEG-4 Video Compression: Video Coding for Next-Generation Multimedia, John Wiley & Sons, Chichester, UK, 2003.
[7]  Y. H. Hu and S.-Y. Kung, Handbook of Signal Processing Systems, Springer, New York, NY, USA, 2010.
[8]  A. S. B. Lopes, I. S. Silva, and L. V. Agostini, “An efficient memory hierarchy for full search motion estimation on high definition digital videos,” in Proceedings of the 24th Symposium on Integrated Circuits and Systems Design, pp. 131–136, Joao Pessoa, Brazil, September 2011.
[9]  JM15.1, “H.264/AVC JM Reference Software,” 2011, http://iphome.hhi.de/suehring/tml/.
[10]  Xilinx, “FPGA and CPLD Solutions from Xilinx, Inc,” http://www.xilinx.com/.
[11]  R. S. S. Dornelles, F. M. Sampaio, and L. V. Agostini, “Variable block size motion estimation architecture with a fast bottom-up Decision Mode and an integrated motion compensation targeting the H.264/AVC video coding standard,” in Proceedings of the 23rd Symposium on Integrated Circuits and Systems Design (SBCCI '10), pp. 186–191, September 2010.
[12]  R. Porto, L. Agostini, and S. Bampi, “Hardware design of the H.264/AVC variable block size motion estimation for real-time 1080HD video encoding,” in Proceedings of the IEEE Computer Society Annual Symposium on VLSI (ISVLSI '09), pp. 115–120, May 2009.
[13]  L. Deng, W. Gao, M. Z. Hu, and Z. Z. Ji, “An efficient hardware implementation for motion estimation of AVC standard,” IEEE Transactions on Consumer Electronics, vol. 51, no. 4, pp. 1360–1366, 2005.
