%0 Journal Article
%T Fake News Detection Based on Multi-Head Attention Convolution Transformer
%A 张亚立
%A 李征宇
%A 孙平
%J Hans Journal of Data Mining
%P 288-289
%@ 2163-1468
%D 2023
%I Hans Publishing
%R 10.12677/HJDM.2023.134029
%X With the rapid development of communication technology and social media, the widespread dissemination of fake news has become a serious problem, causing substantial harm to the state and society. Detecting fake news has therefore become a research area of considerable attention. Although convolutional neural networks (CNNs) excel at local feature extraction, they handle sequential and long-distance dependencies poorly. This paper therefore proposes an attention convolution Transformer model that combines the Transformer architecture with the CNN's strength in local feature extraction to achieve efficient fake news detection. The paper introduces a new attention mechanism, the multi-head attention convolution mechanism, which uses convolution filters to transform the complex word space into a more informative convolution-filter space, thereby capturing important n-gram information. The model captures both local and global dependencies while preserving the sequential relationships between words. Experimental results on two real-world datasets show that the accuracy, recall, and F1 score of the multi-head attention convolution Transformer on the fake news detection task are significantly higher than those of TextCNN, BiGRU, and the traditional Transformer.
%K Fake News Detection
%K Attention Convolution
%K Transformer
%U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=73105
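The core idea described in the abstract, an attention layer whose query/key/value projections are convolution filters so that attention is computed over n-gram (filter-space) features rather than single-token projections, can be illustrated with a minimal sketch. This is not the authors' code: the class name ConvMultiHeadAttention, the kernel_size parameter, and the choice of PyTorch are all illustrative assumptions.

# Minimal sketch (illustrative, not the paper's implementation) of a multi-head
# attention layer with convolutional Q/K/V projections: each 1D filter mixes a
# window of kernel_size tokens, mapping the word space into a convolution-filter
# (n-gram) space before standard scaled dot-product attention is applied.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvMultiHeadAttention(nn.Module):
    def __init__(self, d_model: int, num_heads: int, kernel_size: int = 3):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        pad = kernel_size // 2  # "same" padding keeps the sequence length unchanged
        # Convolutional projections capture n-gram context around each position.
        self.q_conv = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)
        self.k_conv = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)
        self.v_conv = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        B, L, D = x.shape
        xc = x.transpose(1, 2)                    # (B, D, L) layout for Conv1d
        q = self.q_conv(xc).transpose(1, 2)       # (B, L, D) n-gram-aware queries
        k = self.k_conv(xc).transpose(1, 2)
        v = self.v_conv(xc).transpose(1, 2)
        # Split into heads: (B, num_heads, L, d_head)
        def split(t):
            return t.reshape(B, L, self.num_heads, self.d_head).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        # Scaled dot-product attention over the convolution-filter-space features.
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, L, D)
        return self.out_proj(out)

# Usage: a batch of 8 news texts, 128 tokens each, 256-dimensional embeddings.
layer = ConvMultiHeadAttention(d_model=256, num_heads=8, kernel_size=3)
print(layer(torch.randn(8, 128, 256)).shape)  # torch.Size([8, 128, 256])

Because the projections are convolutions rather than per-token linear maps, the attention scores compare local n-gram features while the softmax mixing still links arbitrarily distant positions, which is how a layer of this kind can keep both local and long-range dependencies as the abstract claims.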