Title:
MSAttNet: Multi-scale attention convolutional neural network for motor imagery classification.
Authors:
Zhao R; School of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China. Electronic address: zhaoruiyu@mail.ecust.edu.cn., Daly I; the Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, Essex, UK., Chen Y; School of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China., Wu W; School of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China., Liu L; School of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China., Wang X; School of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China., Cichocki A; Laboratory for Advanced Brain Signal Processing, RIKEN Brain Science Institute, Wako-shi, 351-0198, Japan; Systems Research Institute, Polish Academy of Sciences, Warsaw, 01-447, Poland; Department of Informatics, Nicolaus Copernicus University, Torun, 87-100, Poland., Jin J; School of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China; Center of Intelligent Computing, School of Math, East China University of Science and Technology, Shanghai, 200237, China; The Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, 200237, China. Electronic address: jinjing@ecust.edu.cn.
Source:
Journal of neuroscience methods [J Neurosci Methods] 2025 Dec; Vol. 424, pp. 110578. Date of Electronic Publication: 2025 Sep 12.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Elsevier/North-Holland Biomedical Press Country of Publication: Netherlands NLM ID: 7905558 Publication Model: Print-Electronic Cited Medium: Internet ISSN: 1872-678X (Electronic) Linking ISSN: 01650270 NLM ISO Abbreviation: J Neurosci Methods Subsets: MEDLINE
Imprint Name(s):
Original Publication: Amsterdam, Elsevier/North-Holland Biomedical Press.
Contributed Indexing:
Keywords: Attention convolution; Brain-Computer Interfaces; Convolutional neural network; Motor imagery
Entry Date(s):
Date Created: 20250914 Date Completed: 20251028 Latest Revision: 20251028
Update Code:
20251029
DOI:
10.1016/j.jneumeth.2025.110578
PMID:
40946865
Database:
MEDLINE

Further Information

Background: Convolutional neural networks (CNNs) are widely employed in motor imagery (MI) classification. However, because data collection experiments are cumbersome and EEG signals are limited, noisy, and non-stationary, small MI datasets pose considerable challenges for the design of decoding algorithms.
New Method: To capture more feature information from inadequately sized data, we propose a new method, a multi-scale attention convolutional neural network (MSAttNet). Our method includes three main components: a multi-band segmentation module, an attention spatial convolution module, and a multi-scale temporal convolution module. First, the multi-band segmentation module applies a filter bank with overlapping frequency bands to enhance features in the frequency domain. Then, the attention spatial convolution module uses an attention mechanism to adaptively adjust convolutional kernel parameters according to the input, allowing it to capture the characteristics of different datasets. The outputs of the attention spatial convolution module are grouped and processed by multi-scale temporal convolution. Finally, the output of the multi-scale temporal convolution module is passed through a bilinear pooling layer to extract temporal features and suppress noise. The extracted features are then classified.
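The abstract does not give implementation details. As an illustration only, the overlapping filter-bank idea behind the multi-band segmentation module could be sketched as below; the band edges (4-40 Hz, 8 Hz width, 4 Hz step), the FFT-mask band-pass filter, and the 250 Hz sampling rate are assumptions for the sketch, not the paper's actual parameters:

```python
import numpy as np

def overlapping_bands(low=4.0, high=40.0, width=8.0, step=4.0):
    """Generate overlapping (low, high) frequency bands.
    Band layout is hypothetical; the paper's filter bank is not
    specified in the abstract."""
    bands = []
    lo = low
    while lo + width <= high:
        bands.append((lo, lo + width))
        lo += step
    return bands

def bandpass_fft(x, fs, band):
    """Crude FFT-mask band-pass filter for a 1-D signal
    (a stand-in for a proper filter-bank design)."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.fft.irfft(spectrum * mask, n=x.size)

# Toy 1-second "EEG" trace: a 10 Hz and a 30 Hz component.
fs = 250  # assumed sampling rate
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 30 * t)

bands = overlapping_bands()
# One band-limited copy of the signal per sub-band; in MSAttNet these
# sub-band signals would feed the attention spatial convolution module.
sub_signals = np.stack([bandpass_fft(x, fs, b) for b in bands])
```

With these assumed parameters the filter bank yields eight overlapping sub-bands, and each sub-band copy retains only the spectral content inside its band (e.g. the 4-12 Hz band keeps the 10 Hz component and drops the 30 Hz one).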
Results: We use four datasets, including BCI Competition IV Dataset IIa, BCI Competition IV Dataset IIb, the OpenBMI dataset and the ECUST-MI dataset, to test our proposed method. MSAttNet achieves accuracies of 78.20%, 84.52%, 75.94% and 78.60% in cross-session experiments, respectively.
Comparison With Existing Methods: Compared with state-of-the-art algorithms, MSAttNet achieves higher decoding performance on MI tasks.
Conclusion: MSAttNet effectively addresses the challenges of MI-EEG datasets, improving decoding performance through robust feature extraction.
(Copyright © 2025 Elsevier B.V. All rights reserved.)

Declaration of competing interest We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.