Anisotropic Diffusion for Details Enhancement in Multiexposure Image Fusion

DOI: 10.1155/2013/928971


Abstract:

We develop a multiexposure image fusion method based on texture features, which exploits the edge-preserving and intraregion-smoothing properties of nonlinear diffusion filters based on partial differential equations (PDEs). Given a captured multiexposure image series, we first decompose the images into base layers and detail layers to extract sharp details and fine details, respectively. The magnitude of the image intensity gradient is used to encourage smoothing in homogeneous regions in preference to inhomogeneous regions. We then use texture features of the base layer to generate a decision mask that guides the fusion of the base layers in a multiresolution fashion. Finally, a well-exposed fused image is obtained by combining the fused base layer with the detail layers at each scale across all input exposures. The proposed algorithm skips the complex High Dynamic Range Image (HDRI) generation and tone mapping steps and produces a detail-preserving image for display on standard dynamic range display devices. Moreover, our technique is effective for blending flash/no-flash image pairs and multifocus images, that is, images focused on different targets.

1. Introduction

It is impossible to capture the entire dynamic range of a real-world scene with a single exposure. The human eye is sensitive to relative rather than absolute luminance values [1] and can observe both indoor and outdoor details simultaneously. This is because the eye adapts locally as we scan the different regions of a scene and can accommodate some 10 orders of magnitude of intensity variation [2], whereas standard digital cameras are unable to record the luminance variation of an entire scene. Many current applications involve variable exposure photography to determine which details in the photographed scene are captured optimally. The purpose of exposure setting determination is to control the charge capacity of the Charge Coupled Device (CCD). An example is shown in Figure 1(a): a long exposure yields detail in poorly illuminated areas, while a short exposure provides detail in brightly illuminated areas. Each exposure therefore gives us trustworthy information about certain pixels, namely, the optimally exposed pixels of that image. In such images, the relative contribution of noise is high for dark pixels, and for bright pixels the sensor may have been saturated. It is therefore desirable to ignore very dark and very bright pixels to achieve suprathreshold viewing conditions [2]. Consequently, the scene contains very dark and
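The decomposition described above can be sketched concretely. Below is a minimal NumPy implementation of Perona-Malik anisotropic diffusion [12], the kind of PDE-based nonlinear filter the method builds on: diffusing an exposure gives its base layer, and the residual gives its detail layer. The function names, parameter values (kappa, step, n_iter), and the wrap-around border handling are illustrative assumptions rather than the authors' implementation, and the sketch stops short of the texture-feature decision mask and multiresolution blending the paper uses to fuse the layers.

    import numpy as np

    def anisotropic_diffusion(img, n_iter=15, kappa=30.0, step=0.2):
        # Perona-Malik nonlinear diffusion [12]: the conductance
        # g(d) = exp(-(d / kappa)^2) is near 1 in homogeneous regions
        # and near 0 at strong edges, so smoothing stops at edges.
        u = img.astype(np.float64)
        for _ in range(n_iter):
            # Finite differences toward the four neighbors (np.roll
            # wraps at the border, an illustrative simplification).
            d_n = np.roll(u, -1, axis=0) - u
            d_s = np.roll(u, 1, axis=0) - u
            d_e = np.roll(u, -1, axis=1) - u
            d_w = np.roll(u, 1, axis=1) - u
            # Gradient-magnitude-dependent conductances: smoothing is
            # encouraged in homogeneous regions, inhibited at edges.
            c_n = np.exp(-(d_n / kappa) ** 2)
            c_s = np.exp(-(d_s / kappa) ** 2)
            c_e = np.exp(-(d_e / kappa) ** 2)
            c_w = np.exp(-(d_w / kappa) ** 2)
            # Explicit update; step <= 0.25 keeps the scheme stable.
            u += step * (c_n * d_n + c_s * d_s + c_e * d_e + c_w * d_w)
        return u

    def decompose(exposure):
        # Base/detail split performed before fusion: the diffused image
        # is the base layer; the residual carries the fine details.
        base = anisotropic_diffusion(exposure)
        return base, exposure - base

In the full method, each input exposure would be decomposed this way; the base layers are then fused under the texture-feature decision mask in a multiresolution fashion, and the detail layers from all exposures are added back at each scale.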

References

[1]  E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging: Acquisition, Manipulation, and Display, Morgan Kaufmann, 2005.
[2]  J. A. Ferwerda, S. N. Pattanaik, P. Shirley, and D. P. Greenberg, “A model of visual adaptation for realistic image synthesis,” in Proceedings of the Computer Graphics Conference (SIGGRAPH '96), pp. 249–258, August 1996.
[3]  P. E. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” in Proceedings of the 24th ACM Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '97), pp. 369–378, Los Angeles, Calif, USA, August 1997.
[4]  S. Mann and R. W. Picard, “Being ‘undigital’ with digital cameras: extending dynamic range by combining differently exposed pictures,” in Proceedings of the IS&T's 48th Annual Conference, pp. 442–448, May 1995.
[5]  K. Jacobs, C. Loscos, and G. Ward, “Automatic high-dynamic range image generation for dynamic scenes,” IEEE Computer Graphics and Applications, vol. 28, no. 2, pp. 84–93, 2008.
[6]  G. Ward, “Fast, robust image registration for compositing high dynamic range photographs from hand-held exposures,” Journal of Graphics Tools, vol. 8, no. 2, pp. 17–30, 2003.
[7]  A. Tomaszewska and R. Mantiuk, “Image registration for multi-exposure high dynamic range image acquisition,” in Proceedings of the International Conference on Computer Graphics, Visualization and Computer Vision, Plzen, Czech Republic, 2007.
[8]  E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda, “Photographic tone reproduction for digital images,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 267–276, 2002.
[9]  H. Seetzen, W. Heidrich, W. Stuerzlinger et al., “High dynamic range display system,” ACM Transactions on Graphics, vol. 23, no. 3, pp. 760–768, 2004.
[10]  H. Seetzen, L. A. Whitehead, and G. Ward, “A high dynamic range display using low and high resolution modulators,” in Proceedings of the Society for Information Display International Symposium, vol. 34, pp. 1450–1453, 2003.
[11]  P. J. Burt and E. H. Adelson, “The Laplacian pyramid as a compact image code,” IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.
[12]  P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
[13]  T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion: a simple and practical alternative to high dynamic range photography,” Computer Graphics Forum, vol. 28, no. 1, pp. 161–171, 2009.
[14]  R. Fattal, M. Agrawala, and S. Rusinkiewicz, “Multiscale shape and detail enhancement from multi-light image collections,” in Proceedings of the International Conference on Computer Graphics and Interactive Techniques (ACM SIGGRAPH '07), vol. 51, August 2007.
[15]  M. I. Smith and J. P. Heather, “Review of image fusion technology,” in Proceedings of the Defense and Security Symposium, vol. 5782, pp. 29–45, Orlando, Fla, USA, 2005.
[16]  G. W. Larson, H. Rushmeier, and C. Piatko, “A visibility matching tone reproduction operator for high dynamic range scenes,” IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 4, pp. 291–306, 1997.
[17]  F. Drago, K. Myszkowski, T. Annen, and N. Chiba, “Adaptive logarithmic mapping for displaying high contrast scenes,” Computer Graphics Forum, vol. 22, no. 3, pp. 419–426, 2003.
[18]  E. Reinhard and K. Devlin, “Dynamic range reduction inspired by photoreceptor physiology,” IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 1, pp. 13–24, 2005.
[19]  Y. Li, L. Sharan, and E. H. Adelson, “Compressing and Companding high dynamic range images with subband architectures,” ACM Transactions on Graphics, vol. 24, no. 3, pp. 836–844, 2005.
[20]  R. Fattal, D. Lischinski, and M. Werman, “Gradient domain high dynamic range compression,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 249–256, 2002.
[21]  F. Durand and J. Dorsey, “Fast bilateral filtering for the display of high dynamic range images,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 257–266, 2002.
[22]  W. F. Lee, T. Y. Lin, M. L. Chu, T. H. Huang, and H. H. Chen, “Perception-based high dynamic range compression in gradient domain,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '09), pp. 1805–1808, November 2009.
[23]  J. M. Ogden, E. H. Adelson, J. R. Bergen, and P. J. Burt, “Pyramid based computer graphics,” RCA Engineer, vol. 30, no. 5, pp. 4–15, 1985.
[24]  A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li, “Removing photography artifacts using gradient projection and flash exposure sampling,” ACM Transactions on Graphics, vol. 24, no. 3, pp. 828–835, 2005.
[25]  G. Petschnigg, R. Szeliski, M. Agrawala, M. F. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Transactions on Graphics, vol. 23, no. 3, pp. 664–672, 2004.
[26]  S. Li and B. Yang, “Multifocus image fusion using region segmentation and spatial frequency,” Image and Vision Computing, vol. 26, no. 7, pp. 971–979, 2008.
[27]  J. H. Adu and M. Wang, “Multi-focus image fusion based on WNMF and focal point analysis,” Journal of Convergence Information Technology, vol. 6, no. 7, pp. 109–117, 2011.
[28]  S. Raman and S. Chaudhuri, “Bilateral filter based compositing for variable exposure photography,” in Proceedings of Eurographics, Munich, Germany, 2009.
[29]  A. Goshtasby, “Fusion of multi-exposure images,” Image and Vision Computing, vol. 23, pp. 611–618, 2005.
[30]  R. Szeliski, “System and process for improving the uniformity of the exposure and tone of a digital image,” U. S. Patent No. 6687400, 2004.
[31]  Y. Zhao, J. Shen, and Y. He, “Subband architecture based exposure fusion,” in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 501–506, Singapore, November 2010.
[32]  S. G. Mallat, “A theory for multiresolution signal decomposition: the wavelet representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 4, pp. 674–693, 1989.
[33]  A. L. da Cunha, J. Zhou, and M. N. Do, “The nonsubsampled contourlet transform: theory, design, and applications,” IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.
[34]  M. N. Do and M. Vetterli, “The contourlet transform: an efficient directional multiresolution image representation,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.
[35]  M. J. Black, G. Sapiro, D. H. Marimont, and D. Heeger, “Robust anisotropic diffusion,” IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 421–432, 1998.
[36]  Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, “Edge-preserving decompositions for multi-scale tone and detail manipulation,” ACM Transactions on Graphics, vol. 27, no. 3, article 67, 2008.
[37]  C. Wen, G. Gao, and Z. Chen, “Multiresolution model for image denoising based on total least squares,” in Proceedings of the 4th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '07), pp. 622–626, August 2007.
[38]  S. Liu, “Adaptive scalar and vector median filtering of noisy colour images based on noise estimation,” IET Image Processing, vol. 5, no. 6, pp. 541–553, 2011.
[39]  D. N. Vizireanu, S. Halunga, and G. Marghescu, “Morphological skeleton decomposition interframe interpolation method,” Journal of Electronic Imaging, vol. 19, no. 2, Article ID 023018, pp. 1–3, 2010.
[40]  K. He, J. Sun, and X. Tang, “Guided image filtering,” in Proceedings of the ECCV, vol. 6311 of Lecture Notes in Computer Science, pp. 1–14, Springer, 2010.
[41]  S. Paris, P. Kornprobst, J. Tumblin, and F. Durand, “Bilateral filtering: theory and applications,” Foundations and Trends in Computer Graphics and Vision, vol. 4, no. 1, pp. 1–73, 2008.
[42]  C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of the ICCV, pp. 839–846, IEEE Computer Society, 1998.
[43]  J. Canny, “Finding edges and lines in images,” Tech. Rep. 720, MIT, Artificial Intelligence Laboratory, 1983.
[44]  J. Shen, Y. Zhao, and Y. He, “Detail-preserving exposure fusion using subband architecture,” The Visual Computer, vol. 28, no. 5, pp. 463–473, 2012.
[45]  A. A. Minai and R. D. Williams, “On the derivatives of the sigmoid,” Neural Networks, vol. 6, no. 6, pp. 845–853, 1993.
[46]  R. Shen, I. Cheng, J. Shi, and A. Basu, “Generalized random walks for fusion of multi-exposure images,” IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3634–3646, 2011.
[47]  W. Zhang and W.-K. Cham, “Gradient-directed multiexposure composition,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2318–2323, 2012.
[48]  J. Tumblin and G. Turk, “LCIS: a boundary hierarchy for detail-preserving contrast reduction,” in Proceedings of the ACM SIGGRAPH, A. Rockwood, Ed., pp. 83–90, 1999.
[49]  P. Hodáková, I. Perfilieva, M. Daňková, and M. Vajgl, “F-transform based image fusion,” in Image Fusion, O. Ukimura, Ed., pp. 3–22, InTech, Rijeka, Croatia, 2011.
[50]  Z. Wang, H. R. Sheikh, and A. C. Bovik, “No reference perceptual quality assessment of JPEG compressed images,” in Proceedings of the International Conference on Image Processing (ICIP '02), pp. 477–480, September 2002.
