Image Mosaic Method Based on SIFT Features of Line Segment

DOI: 10.1155/2014/926312


Abstract:

This paper proposes a novel image mosaic method based on SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle the scaling, rotation, and lighting changes that arise between two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. It then constructs directed line segments between key points, describes them with SIFT features, and matches the directed segments to obtain rough point correspondences. Finally, the RANSAC method is used to eliminate wrong pairs and complete the image mosaic. Experiments on four pairs of images show that the method is robust to changes in resolution, lighting, rotation, and scale.

1. Introduction

Image mosaic [1–4] has recently become an important subject in image processing research. Image mosaic technologies have extensive potential applications in remote sensing image processing, computer recognition, medical image analysis, artificial intelligence, and other fields. There are also a number of techniques for capturing panoramic images of real-world scenes [5]. Since, in real-world applications, the input images are taken at varying orientations and exposures, a feature-based registration technique similar to those in [2, 6] is used to automatically align the input images. The accuracy of image matching has a direct influence on the quality of the panoramic image. Currently, there are two types of image matching methods. The first is grayscale-based: it uses the correlation of grayscale values in the overlapping regions of two images and obtains the optimal match by maximizing that correlation. Grayscale-based methods are easy to implement but relatively sensitive to grayscale changes, especially under variable lighting. The second type is feature-based: features are extracted from the image pixel values, and because these features are partially invariant to lighting changes, matching ambiguity is largely resolved during the matching process. For the extraction of image feature points, many proven methods already exist, for example, the Harris method [3], the SUSAN method [7], and the Shi-Tomasi method [8]. Feature-based image mosaic methods have two main advantages: (1) the computational complexity of image matching is significantly reduced because the number of feature points is far smaller than the number of pixels; (2) feature points are robust to uneven lighting and noise, so the quality of the image mosaic is improved.
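
The pipeline described above (Harris key points, SIFT-style descriptors, rough matching, and RANSAC filtering of wrong pairs) can be illustrated with standard tools. The following Python/OpenCV sketch is not the authors' implementation: the paper's directed line-segment SIFT descriptor has no off-the-shelf equivalent, so ordinary SIFT descriptors computed at Harris corners stand in for it, and the input file names are placeholders.

# Minimal sketch of a feature-based mosaic pipeline: Harris corners,
# SIFT descriptors (standing in for the paper's line-segment descriptor),
# ratio-test matching, and RANSAC homography estimation.
import cv2
import numpy as np

def detect_harris_keypoints(gray, max_corners=500):
    # Harris corners wrapped as cv2.KeyPoint objects so SIFT can describe them.
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=max_corners, qualityLevel=0.01,
        minDistance=8, useHarrisDetector=True, k=0.04)
    return [cv2.KeyPoint(float(x), float(y), 8) for [[x, y]] in corners]

def describe(gray, keypoints):
    # Compute SIFT descriptors at the given key points.
    sift = cv2.SIFT_create()
    return sift.compute(gray, keypoints)

def mosaic(img_left, img_right):
    gray_l = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)

    kp_l, des_l = describe(gray_l, detect_harris_keypoints(gray_l))
    kp_r, des_r = describe(gray_r, detect_harris_keypoints(gray_r))

    # Rough matching: nearest neighbours with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    rough = [m for m, n in matcher.knnMatch(des_r, des_l, k=2)
             if m.distance < 0.75 * n.distance]

    # RANSAC rejects wrong pairs while estimating the homography.
    src = np.float32([kp_r[m.queryIdx].pt for m in rough]).reshape(-1, 1, 2)
    dst = np.float32([kp_l[m.trainIdx].pt for m in rough]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right image into the left image's frame and paste the left image.
    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))
    canvas[0:h, 0:w] = img_left
    return canvas

if __name__ == "__main__":
    left = cv2.imread("left.jpg")    # placeholder input images
    right = cv2.imread("right.jpg")
    cv2.imwrite("mosaic.jpg", mosaic(left, right))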

References

[1]  P. J. Burt and E. H. Adelson, “A multiresolution spline with application to image mosaics,” ACM Transactions on Graphics, vol. 2, no. 4, pp. 217–236, 1983.
[2]  M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” International Journal of Computer Vision, vol. 74, no. 1, pp. 59–73, 2007.
[3]  J. Zhu, M. W. Ren, Z. J. Yang, and W. Zhao, “Fast matching algorithm based on corner detection,” Journal of Nanjing University of Science and Technology, vol. 35, no. 6, pp. 755–758, 2011.
[4]  H. S. Sawhney, “True multi-image alignment and its application to mosaicing and lens distortion correction,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 3, pp. 235–243, 1999.
[5]  N. Greene, “Environment mapping and other applications of world projections,” IEEE Computer Graphics and Applications, vol. 6, no. 11, pp. 21–29, 1986.
[6]  P. F. McLauchlan and A. Jaenicke, “Image mosaicing using sequential bundle adjustment,” Image and Vision Computing, vol. 20, no. 9-10, pp. 751–759, 2002.
[7]  K. Y. Chae, W. P. Dong, and C. S. Jeong, “SUSAN window based cost calculation for fast stereo matching,” Computational Intelligence and Security, vol. 3802, pp. 947–952, 2005.
[8]  J. Shi and C. Tomasi, “Good features to track,” in Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 593–600, June 1994.
[9]  K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615–1630, 2005.
[10]  D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[11]  H. Bay, A. Ess, T. Tuytelaars, and L. van Gool, “Speeded-up robust features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
[12]  S. Leutenegger, M. Chli, and R. Y. Siegwart, “BRISK: binary robust invariant scalable keypoints,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, November 2011.
[13]  M. Ozuysal, M. Calonder, V. Lepetit, and P. Fua, “Fast keypoint recognition using random ferns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 3, pp. 448–461, 2010.
[14]  M. Brown, R. Szeliski, and S. Winder, “Multi-image matching using multi-scale oriented patches,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), pp. 510–517, June 2005.
[15]  M. Calonder, V. Lepetit, C. Strecha, and P. Fua, “BRIEF: binary robust independent elementary features,” in Proceedings of the 11th European Conference on Computer Vision, pp. 778–792, September 2010.
[16]  M. Leordeanu, M. Hebert, and R. Sukthankar, “Beyond local appearance: category recognition from pairwise interactions of simple features,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–8, June 2007.
[17]  F. von Hundelshausen, “D-Nets: beyond patch-based image descriptors,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 2941–2948, June 2012.
[18]  H. Bay, V. Ferrari, and L. van Gool, “Wide-baseline stereo matching with line segments,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), pp. 329–336, June 2005.
[19]  M. Trajković and M. Hedley, “Fast corner detection,” Image and Vision Computing, vol. 16, no. 2, pp. 75–87, 1998.
[20]  S. Yang, M. Chen, D. Pomerleau, and R. Sukthankar, “Food recognition using statistics of pairwise local features,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 2249–2256, June 2010.
[21]  C. Harris and M. Stephens, “A combined corner and edge detector,” in Proceedings of the 4th Alvey Vision Conference, pp. 147–152, 1988.
