Learned Shrinkage Approach for Low-Dose Reconstruction in Computed Tomography

DOI: 10.1155/2013/609274

Abstract:

We propose a direct nonlinear reconstruction algorithm for Computed Tomography (CT), designed to handle low-dose measurements. It combines the filtered back-projection (FBP) with adaptive nonlinear filtering in both the projection and the image domains. The filter is an extension of the learned shrinkage method of Hel-Or and Shaked to the case of indirect observations. The shrinkage functions are learned from a training set of reference CT images. The optimization is performed with respect to an error functional in the image domain that combines the mean square error with a gradient-based penalty promoting image sharpness. Our numerical simulations indicate that the proposed algorithm copes well with noisy measurements, allowing a dose reduction by a factor of 4 while suppressing the noise and streak artifacts of the FBP reconstruction, with performance comparable to that of a statistically based iterative algorithm.

1. Introduction

1.1. Problem Statement

Computed tomography (CT) imaging produces a 3D map of the scanned object, in which different materials are distinguished by their X-ray attenuation properties. In medicine, such a map has great diagnostic value, making the CT scan one of the most frequent noninvasive exploration procedures, practiced in almost every hospital. The attenuation of biological tissues is measured by comparing the intensity of the X-rays entering and leaving the body. The main problem precluding pervasive use of the CT scan for diagnosis and monitoring is the damage caused to the tissues by the X-ray radiation. CT manufacturers make great efforts to reduce the X-ray dose required for images of diagnostic quality. In this work we propose an algorithm that enables high-quality reconstruction from low-dose (and thus noisy) measurements.

In ideal conditions, the information obtained in the scan suffices to build an exact attenuation map, called the CT image. In practice, the measurements are degraded by a number of physical phenomena. The main factors are off-focal radiation, afterglow and crosstalk in the detectors, beam hardening, and Compton scattering (see [1] for a detailed overview). These introduce a structured error into the measurements, mostly of the type modeled by a convolution with some kernel. Another source of deterioration, dominant in the low-dose scenario, is stochastic noise. One type of such noise stems from the low photon counts that occur when the X-rays pass through high-attenuation areas. This phenomenon is similar to the shot noise encountered in photographic cameras under poor lighting conditions.
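To make the core filtering step concrete, the following is a minimal sketch of learned shrinkage in the spirit of Hel-Or and Shaked [12]: coefficients of a unitary transform pass through scalar shrinkage functions parameterized as piecewise-linear maps on a fixed grid of knots, and the knot values are fit by least squares against coefficients of reference (clean) images. The DCT transform, the knot grid, the single-patch "training set", and all function names below are illustrative assumptions, not the authors' implementation, which also adds a gradient-based sharpness penalty and applies the filter to indirect (projection-domain) observations.

```python
# Sketch of discriminative (learned) shrinkage: each transform coefficient is
# mapped through a scalar piecewise-linear function whose values at fixed knots
# are learned by least squares against clean targets. Illustrative only.

import numpy as np
from scipy.fftpack import dct, idct


def slice_coeffs(x, knots):
    """Write each coefficient as a convex combination of its two neighboring
    knots, which makes the shrinkage map linear in the learned knot values."""
    x = np.clip(x.ravel(), knots[0], knots[-1])
    idx = np.clip(np.searchsorted(knots, x) - 1, 0, knots.size - 2)
    t = (x - knots[idx]) / (knots[idx + 1] - knots[idx])
    S = np.zeros((x.size, knots.size))
    S[np.arange(x.size), idx] = 1.0 - t
    S[np.arange(x.size), idx + 1] = t
    return S


def learn_shrinkage(noisy_coeffs, clean_coeffs, knots, reg=1e-3):
    """Fit the shrinkage values at the knots so that shrunk noisy coefficients
    approximate the clean ones (MSE criterion only; the paper's error
    functional also includes a gradient-based sharpness penalty)."""
    S = slice_coeffs(noisy_coeffs, knots)
    A = S.T @ S + reg * np.eye(knots.size)
    b = S.T @ clean_coeffs.ravel()
    return np.linalg.solve(A, b)


def apply_shrinkage(coeffs, values, knots):
    """Apply the learned scalar shrinkage function to every coefficient."""
    return (slice_coeffs(coeffs, knots) @ values).reshape(coeffs.shape)


# Toy usage: denoise the DCT coefficients of a single 8x8 patch.
rng = np.random.default_rng(0)
clean = rng.standard_normal((8, 8))
noisy = clean + 0.3 * rng.standard_normal((8, 8))

to_dct = lambda p: dct(dct(p, axis=0, norm='ortho'), axis=1, norm='ortho')
from_dct = lambda c: idct(idct(c, axis=0, norm='ortho'), axis=1, norm='ortho')

knots = np.linspace(-4.0, 4.0, 33)
values = learn_shrinkage(to_dct(noisy), to_dct(clean), knots)
denoised = from_dct(apply_shrinkage(to_dct(noisy), values, knots))
```

Because the shrinkage map is linear in the knot values, the training reduces to a regularized least-squares problem; in the paper the same idea is trained through the reconstruction operator, so the learned functions account for the noise statistics of the projections rather than of the image directly.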

References

[1]  P. J. La Rivière, J. Bian, and P. A. Vargas, “Penalized-likelihood sinogram restoration for computed tomography,” IEEE Transactions on Medical Imaging, vol. 25, no. 8, pp. 1022–1036, 2006.
[2]  M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, Springer, Berlin, Germany, 2010.
[3]  F. Natterer and F. Wübbeling, “Mathematical methods in image reconstruction,” SIAM Monographs on Mathematical Modeling and Computation, pp. 1–207, 2001.
[4]  P. J. La Rivière, J. Bian, and P. A. Vargas, “Comparison of quadratic- and median-based roughness penalties for penalized-likelihood sinogram restoration in computed tomography,” International Journal of Biomedical Imaging, vol. 2006, Article ID 41380, 7 pages, 2006.
[5]  T. Li, X. Li, J. Wang et al., “Nonlinear sinogram smoothing for low-dose X-ray CT,” IEEE Transactions on Nuclear Science, vol. 51, no. 5, pp. 2505–2513, 2004.
[6]  I. A. Elbakri and J. A. Fessler, “Statistical image reconstruction for polyenergetic X-ray computed tomography,” IEEE Transactions on Medical Imaging, vol. 21, no. 2, pp. 89–99, 2002.
[7]  J. Wang, T. Li, H. Lu, and Z. Liang, “Penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose X-ray computed tomography,” IEEE Transactions on Medical Imaging, vol. 25, no. 10, pp. 1272–1283, 2006.
[8]  J. Hsieh, “Adaptive streak artifact reduction in computed tomography resulting from excessive x-ray photon noise,” Medical Physics, vol. 25, no. 11, pp. 2139–2147, 1998.
[9]  M. Kachelrieß, O. Watzke, and W. A. Kalender, “Generalized multi-dimensional adaptive filtering for conventional and spiral single-slice, multi-slice, and cone-beam CT,” Medical Physics, vol. 28, no. 4, pp. 475–490, 2001.
[10]  H. Lu, X. Li, I. Hsiao, and Z. Liang, “Analytical noise treatment for low-dose CT projection data by penalized weighted least-square smoothing in the K-L domain,” in Medical Imaging 2002: Physics of Medical Imaging, Proceedings of SPIE, pp. 146–152, San Diego, Calif, USA, February 2002.
[11]  B. I. Andía, K. D. Sauer, and C. A. Bouman, “Nonlinear backprojection for tomographic reconstruction,” IEEE Transactions on Nuclear Science, vol. 49, no. 1, pp. 61–68, 2002.
[12]  Y. Hel-Or and D. Shaked, “A discriminative approach for wavelet denoising,” IEEE Transactions on Image Processing, vol. 17, no. 4, pp. 443–457, 2008.
[13]  J. Thibault, C. A. Bouman, K. D. Sauer, and J. Hsieh, “A recursive filter for noise reduction in statistical iterative tomographic imaging,” in Computational Imaging IV, vol. 6065 of Proceedings of SPIE/IS&T, San Jose, Calif, USA, January 2006.
[14]  G. N. Ramachandran and A. V. Lakshminarayanan, “Three-dimensional reconstruction from radiographs and electron micrographs: application of convolutions instead of Fourier transforms,” Proceedings of the National Academy of Sciences of the United States of America, vol. 68, no. 9, pp. 2236–2240, 1971.
[15]  J. H. Kim, K. I. Kim, and C. E. Kwark, “Filter design for optimization of lesion detection in SPECT,” in Proceedings of the 1996 IEEE Nuclear Science Symposium, vol. 3, pp. 1683–1687, November 1996.
[16]  M. Elad, “Why simple shrinkage is still relevant for redundant representations?” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5559–5569, 2006.
[17]  A. M. Bruckstein, D. L. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM Review, vol. 51, no. 1, pp. 34–81, 2009.
[18]  M. Elad, B. Matalon, J. Shtok, and M. Zibulevsky, “A wide-angle view at iterated shrinkage algorithms,” in Wavelets XII, Proceedings of SPIE, pp. 26–29, San Diego, Calif, USA, August 2007.
[19]  D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika, vol. 81, no. 3, pp. 425–455, 1994.
[20]  Y. Hel-Or, A. Adler, and M. Elad, “A shrinkage learning approach for single image super-resolution with overcomplete representations,” in Computer Vision—ECCV 2010, vol. 6312, pp. 622–635, Springer, Berlin, Germany, 2010.
[21]  P. J. Huber, “Robust estimation of a location parameter,” The Annals of Mathematical Statistics, vol. 35, no. 1, pp. 73–101, 1964.
[22]  F. J. Anscombe, “The transformation of Poisson, binomial and negative-binomial data,” Biometrika, vol. 35, no. 3-4, pp. 246–254, 1948.
[23]  D. C. Liu and J. Nocedal, “On the limited memory BFGS method for large scale optimization,” Mathematical Programming B, vol. 45, no. 3, pp. 503–528, 1989.
[24]  F. Wübbeling and F. Natterer, Mathematical Methods in Image Reconstruction, SIAM, Philadelphia, Pa, USA, 2001.
[25]  T. H. Yoon and E. K. Joo, “Butterworth window for power spectral density estimation,” ETRI Journal, vol. 31, no. 3, pp. 292–297, 2009.
[26]  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
