Motor Imagery EEG Recognition Model Based on Multi-Dimensional Dynamic Convolution

DOI: 10.12677/CSA.2024.143052, PP. 1-9

Keywords: Multidimensional Dynamic Convolution, Motor Imagery, EEG Signal Decoding, Attention Weights


Abstract:

The motor imagery brain-computer interface (BCI) allows people with impaired mobility to control external devices such as robotic arms, and decoding the electroencephalography (EEG) signals is the key step. However, EEG signals vary substantially between individuals, which makes it difficult for the static convolutions used in conventional deep learning models to extract EEG features adaptively. To address this problem, we propose a deep learning model based on multi-dimensional dynamic convolution (Multidimensional Dynamic Convolution Net, MDConvnet). The model extracts features with three layers of multi-dimensional dynamic convolution and feeds them into a fully connected layer for classification. For each input, the multi-dimensional dynamic convolution generates attention weights over multiple dimensions of the convolution kernel and multiplies them with the convolution parameters, dynamically adjusting the kernel so that the spatial-temporal structure of the data is captured more effectively. We evaluated MDConvnet on the 2023 motor imagery datasets RankA and RankB and compared it with several classical motor imagery recognition models (FBCSP, EEGNet, EEGTCN, FBCNet, TSception, STASCNN, DeepConvNet, and ViT). MDConvnet achieved average accuracies of 64.20% on RankA and 67.04% on RankB, outperforming all of the other models. These results demonstrate its strong performance on motor imagery EEG recognition and provide practical support for disabled users controlling external devices through brain-computer interfaces.
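The paper does not include source code, so the following is only a minimal PyTorch sketch of the kind of multi-dimensional dynamic convolution the abstract describes: a small attention branch inspects the input and produces weights over several dimensions of the kernel (the set of candidate kernels, the output channels, the input channels, and the spatial taps), and these weights rescale the convolution parameters per sample before the convolution is applied, in the spirit of omni-dimensional dynamic convolution. All layer sizes, kernel shapes, and the electrode/sample counts in the usage lines are illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiDimDynamicConv2d(nn.Module):
    """Dynamic 2-D convolution whose kernel is re-weighted per input sample."""

    def __init__(self, in_ch, out_ch, kernel_size, num_kernels=4, reduction=4):
        super().__init__()
        kh, kw = kernel_size
        self.in_ch, self.out_ch = in_ch, out_ch
        self.kh, self.kw, self.num_kernels = kh, kw, num_kernels
        # K candidate kernels shared by all samples: (K, out_ch, in_ch, kh, kw).
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, kh, kw) * 0.01)
        hidden = max(in_ch // reduction, 4)
        # Squeeze the input into a small context vector for the attention heads.
        self.squeeze = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, hidden), nn.ReLU())
        # One attention head per kernel dimension.
        self.attn_kernel = nn.Linear(hidden, num_kernels)   # over the K kernels
        self.attn_out = nn.Linear(hidden, out_ch)           # over output channels
        self.attn_in = nn.Linear(hidden, in_ch)             # over input channels
        self.attn_spatial = nn.Linear(hidden, kh * kw)      # over spatial taps

    def forward(self, x):
        b = x.size(0)
        ctx = self.squeeze(x)                                # (b, hidden)
        a_k = F.softmax(self.attn_kernel(ctx), dim=1)        # (b, K)
        a_o = torch.sigmoid(self.attn_out(ctx))              # (b, out_ch)
        a_i = torch.sigmoid(self.attn_in(ctx))               # (b, in_ch)
        a_s = torch.sigmoid(self.attn_spatial(ctx))          # (b, kh*kw)
        # Mix the K kernels, then modulate every remaining kernel dimension.
        w = (a_k.view(b, -1, 1, 1, 1, 1) * self.weight).sum(dim=1)
        w = w * a_o.view(b, -1, 1, 1, 1) * a_i.view(b, 1, -1, 1, 1) \
              * a_s.view(b, 1, 1, self.kh, self.kw)
        # Grouped-conv trick: fold the batch into the channel axis so every
        # sample is convolved with its own dynamically weighted kernel.
        x = x.reshape(1, b * self.in_ch, x.size(2), x.size(3))
        w = w.reshape(b * self.out_ch, self.in_ch, self.kh, self.kw)
        y = F.conv2d(x, w, padding=(self.kh // 2, self.kw // 2), groups=b)
        return y.reshape(b, self.out_ch, y.size(2), y.size(3))


# Illustrative use on a fake trial: 22 electrodes, 1000 time samples, batch of 2.
layer = MultiDimDynamicConv2d(in_ch=1, out_ch=8, kernel_size=(1, 25))
x = torch.randn(2, 1, 22, 1000)      # (batch, 1, electrodes, time)
print(layer(x).shape)                # torch.Size([2, 8, 22, 1000])

Stacking three such layers and flattening into a fully connected classification head would give the overall structure the abstract outlines; the grouped-convolution reshape is a standard way to apply a different kernel to each sample in a batch without looping.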

