Fake News Detection Based on Multi-Head Attention Convolution Transformer

DOI: 10.12677/HJDM.2023.134029, pp. 288-289

Keywords: Fake News Detection, Attention Convolution, Transformer


Abstract:

With the rapid development of communication technology and social media, the widespread dissemination of fake news has become a serious problem, causing huge losses to countries and society. Detecting fake news has therefore become a research area attracting much attention. Although convolutional neural networks (CNNs) excel at local feature extraction, they handle sequential and long-distance dependencies poorly. This paper therefore proposes an attention convolution Transformer model that combines the Transformer architecture with the local feature extraction of CNNs to achieve efficient fake news detection. The paper introduces a new attention mechanism, the multi-head attention convolution mechanism, which uses convolution filters to transform the complex word space into a more informative convolution-filter space, thereby capturing important n-gram information. The model not only captures local and global dependencies, but also preserves the sequential relationships between words. Experimental results on two real-world datasets show that the accuracy, recall, and F1 score of the multi-head attention convolution Transformer on fake news detection tasks are significantly higher than those of TextCNN, BiGRU, and the traditional Transformer model.
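
The abstract describes the core idea, replacing the word-level projections of self-attention with convolution filters, but does not give its exact formulation. The PyTorch sketch below shows one plausible reading under that assumption: the query/key/value projections are same-padded 1-D convolutions, so each attention head compares n-gram features rather than single word vectors. The class name, kernel size, and all hyperparameters here are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class MultiHeadAttentionConvolution(nn.Module):
    """Hypothetical sketch of a multi-head attention convolution layer:
    the usual pointwise Q/K/V projections are replaced by same-padded
    1-D convolutions, so each head attends over n-gram (convolution-
    filter) features while token positions, and hence word order, are
    preserved."""

    def __init__(self, d_model: int, num_heads: int, kernel_size: int = 3):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        pad = kernel_size // 2  # 'same' padding (odd kernel assumed) keeps seq length
        # Convolutional projections: word space -> convolution-filter space.
        self.q_conv = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)
        self.k_conv = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)
        self.v_conv = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b, t, d = x.shape

        def project(conv: nn.Conv1d) -> torch.Tensor:
            # Conv1d expects (batch, channels, length); split the result into heads.
            h = conv(x.transpose(1, 2)).transpose(1, 2)            # (b, t, d)
            return h.reshape(b, t, self.num_heads, self.d_head).transpose(1, 2)

        q, k, v = project(self.q_conv), project(self.k_conv), project(self.v_conv)
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5      # (b, heads, t, t)
        ctx = scores.softmax(dim=-1) @ v                           # (b, heads, t, d_head)
        return self.out(ctx.transpose(1, 2).reshape(b, t, d))

# Usage: a drop-in replacement for self-attention in a Transformer encoder block.
layer = MultiHeadAttentionConvolution(d_model=256, num_heads=8, kernel_size=3)
out = layer(torch.randn(2, 50, 256))   # -> shape (2, 50, 256)
```

With kernel_size = 3, each query/key/value vector summarizes a trigram window, which is one way to realize the abstract's claim of capturing n-gram information locally while the attention matrix still models long-distance dependencies globally.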

