一种基于贝叶斯网络的微波雷达和图像融合与分类算法
A Microwave Radar and Image Fusion and Classification Algorithm Based on Bayesian Networks

DOI: 10.12677/JISP.2024.131005, PP. 47-58

Keywords: 合成孔径雷达(SAR), 斑点噪声, 贝叶斯超参数优化算法; Synthetic Aperture Radar (SAR), Speckle Noise, Bayesian Hyperparameter Optimization Algorithm


Abstract:

近年来,深度学习技术的进步使合成孔径雷达(SAR)自动目标识别(ATR)取得了出色的表现。然而,由于斑点噪声的干扰,SAR图像的分类任务仍然具有挑战性。为了解决这个问题,本研究提出了一种集成卷积神经网络(CNN)和Transformer网络的多尺度局部–全局特征融合网络(MFN)。所提出的网络包括三个分支:ConvNeXt-SimAM分支,Swin Transformer分支和多尺度特征融合分支。ConvNeXt-SimAM分支在不同的尺度上提取SAR图像的局部纹理细节特征;通过将SimAM注意力机制结合到CNN块中,从空间和通道注意力角度增强了模型的特征提取能力。此外,Swin Transformer分支用于提取不同尺度下的SAR图像全局语义信息。最后,多尺度特征融合分支用于融合局部特征和全局语义信息。此外,为了解决因凭经验确定模型超参数而导致的模型精度和效率较低的问题,采用贝叶斯超参数优化算法确定了最佳的模型超参数。该研究提出的模型在MSTAR数据集上,在标准工作条件(SOCs)和扩展工作条件(EOCs)下,对SAR车辆目标分别取得了99.26%和94.27%的平均识别准确率,与基准模型相比分别提高了12.74%和25.26%。结果表明,贝叶斯-MFN降低了SAR图像之间的类间距离,使分类特征更加紧凑,受斑点噪声干扰更少。与其他主流模型相比,贝叶斯-MFN模型展现出最佳的分类性能。
In recent years, advances in deep learning have led to excellent performance in synthetic aperture radar (SAR) automatic target recognition (ATR). However, owing to the interference of speckle noise, classifying SAR images remains challenging. To address this issue, this study proposes a multi-scale local-global feature fusion network (MFN) that integrates a convolutional neural network (CNN) and a Transformer network. The proposed network comprises three branches: a ConvNeXt-SimAM branch, a Swin Transformer branch, and a multi-scale feature fusion branch. The ConvNeXt-SimAM branch extracts local texture detail features of SAR images at different scales; by incorporating the SimAM attention mechanism into the CNN blocks, the feature extraction capability of the model is enhanced from the perspectives of spatial and channel attention. The Swin Transformer branch extracts global semantic information from SAR images at different scales, and the multi-scale feature fusion branch fuses the local features with the global semantic information. Moreover, to overcome the low accuracy and efficiency caused by empirically chosen hyperparameters, a Bayesian hyperparameter optimization algorithm is used to determine the optimal model hyperparameters. On the MSTAR dataset, the proposed model achieves average recognition accuracies of 99.26% and 94.27% for SAR vehicle targets under standard operating conditions (SOCs) and extended operating conditions (EOCs), respectively, improvements of 12.74% and 25.26% over the baseline model. The results show that Bayes-MFN reduces the inter-class distance between SAR images, yielding more compact classification features and less interference from speckle noise. Compared with other mainstream models, the Bayes-MFN model exhibits the best classification performance.
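To make the three-branch design described above concrete, the following is a minimal PyTorch sketch reconstructed from the abstract alone: the parameter-free SimAM attention follows its commonly published formulation, a small convolutional stack stands in for the ConvNeXt-SimAM branch, and a single patch embedding plus self-attention layer stands in for the Swin Transformer branch. All layer widths, depths, the patch size, the class count, and the pooling-plus-concatenation fusion are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SimAM(nn.Module):
    """Parameter-free SimAM attention (common public formulation)."""
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w - 1
        # Energy of each neuron relative to its per-channel spatial statistics.
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        v = d.sum(dim=(2, 3), keepdim=True) / n
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * torch.sigmoid(e_inv)


class LocalBranch(nn.Module):
    """Stand-in for the ConvNeXt-SimAM branch: conv blocks + SimAM at two scales."""
    def __init__(self, in_ch: int = 1, dim: int = 64):
        super().__init__()
        self.stage1 = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.GELU(), SimAM())
        self.stage2 = nn.Sequential(
            nn.Conv2d(dim, dim * 2, 3, stride=2, padding=1), nn.GELU(), SimAM())

    def forward(self, x):
        f1 = self.stage1(x)   # finer-scale local texture features
        f2 = self.stage2(f1)  # coarser-scale local features
        return f1.mean(dim=(2, 3)), f2.mean(dim=(2, 3))  # pooled descriptors


class GlobalBranch(nn.Module):
    """Simplified global branch: patch embedding + one self-attention layer
    (a stand-in for the Swin Transformer branch)."""
    def __init__(self, in_ch: int = 1, dim: int = 128, patch: int = 8):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.attn = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.attn(tokens).mean(dim=1)               # pooled global descriptor


class FusionClassifier(nn.Module):
    """Concatenate multi-scale local and global descriptors and classify."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.local = LocalBranch()
        self.global_ = GlobalBranch()
        self.head = nn.Sequential(nn.Linear(64 + 128 + 128, 256), nn.GELU(),
                                  nn.Linear(256, num_classes))

    def forward(self, x):
        l1, l2 = self.local(x)
        g = self.global_(x)
        return self.head(torch.cat([l1, l2, g], dim=1))


# Smoke test on a batch of 128x128 single-channel SAR-like chips.
logits = FusionClassifier()(torch.randn(2, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 10])
```

The point illustrated is the division of labor: the convolutional branch keeps fine local texture cues at two scales, the attention branch summarizes image-wide context, and the fusion head learns how to combine the two.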
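The abstract also credits part of the accuracy gain to Bayesian hyperparameter optimization rather than hand-tuning. One common way to set this up is a Tree-structured Parzen Estimator (TPE) search with the Hyperopt library; the snippet below is a generic sketch in which the search space and the `train_and_evaluate` helper are hypothetical placeholders, not the configuration used in the paper.

```python
from hyperopt import fmin, tpe, hp, Trials

# Hypothetical search space; the paper's actual hyperparameters and ranges
# are not given in the abstract.
space = {
    "lr": hp.loguniform("lr", -9, -3),                    # ~1.2e-4 .. 5e-2
    "weight_decay": hp.loguniform("weight_decay", -10, -4),
    "batch_size": hp.choice("batch_size", [16, 32, 64]),
    "dropout": hp.uniform("dropout", 0.0, 0.5),
}


def objective(params):
    """Train once with the sampled hyperparameters and return a loss to minimize.
    `train_and_evaluate` is a placeholder for a user-supplied training loop that
    returns validation accuracy."""
    val_acc = train_and_evaluate(**params)  # hypothetical helper
    return 1.0 - val_acc


trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials)
print("Best hyperparameters found:", best)
```

Each trial trains the model with one sampled configuration and reports 1 minus validation accuracy, so the TPE sampler steers later trials toward the better-performing regions of the space.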

