Title:
Explainable AI for pain perception: subject-independent EEG decoding using DeepSHAP and CNNs.
Authors:
Aktaş FA; Department of Biomedical Engineering, TOBB University of Economics and Technology, Ankara, Turkey.; Department of Biomedical Engineering, Samsun University, Samsun, Turkey., Eken A; Department of Biomedical Engineering, TOBB University of Economics and Technology, Ankara, Turkey.; Neurocognition and Functional Neurorehabilitation group, Neuropsychology lab, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany., Eroğul O; Department of Biomedical Engineering, TOBB University of Economics and Technology, Ankara, Turkey.
Source:
Biomedical physics & engineering express [Biomed Phys Eng Express] 2026 Jan 15; Vol. 12 (1). Date of Electronic Publication: 2026 Jan 15.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: IOP Publishing Ltd Country of Publication: England NLM ID: 101675002 Publication Model: Electronic Cited Medium: Internet ISSN: 2057-1976 (Electronic) Linking ISSN: 20571976 NLM ISO Abbreviation: Biomed Phys Eng Express Subsets: MEDLINE
Imprint Name(s):
Original Publication: Bristol : IOP Publishing Ltd., [2015]-
Contributed Indexing:
Keywords: BCI; EEG; LOSO; deep learning; explainable AI; machine learning; pain decoding
Entry Date(s):
Date Created: 20260107 Date Completed: 20260115 Latest Revision: 20260115
Update Code:
20260119
DOI:
10.1088/2057-1976/ae34b4
PMID:
41499809
Database:
MEDLINE

Further Information

Objective. Accurate classification of pain levels is essential for clinical monitoring, particularly in populations with limited verbal communication. This study explores the feasibility of decoding pain from EEG using explainable deep learning. Approach. EEG signals from 50 subjects exposed to low and high pain stimuli were analyzed. A 1D convolutional neural network (CNN) was trained using leave-one-subject-out (LOSO) cross-validation. To enhance interpretability, DeepSHAP was applied to identify frequency-specific contributions of EEG features to the model's decisions. Main Results. The CNN achieved a classification accuracy of 95.85%, outperforming traditional classifiers (SVM, LDA, RF, etc.) on the same dataset. Explainability analysis showed that increased beta activity (14-15 Hz) was associated with high pain, while alpha (11-12 Hz), theta, and delta band activity correlated with lower pain states. Significance. This work demonstrates the potential of explainable deep learning for real-time, subject-independent pain decoding. The results support the integration of XAI techniques into EEG-based brain-computer interface (BCI) systems for objective pain monitoring.
(Creative Commons Attribution license.)
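
Illustrative sketch: the abstract describes three technical steps: leave-one-subject-out (LOSO) cross-validation, a 1D CNN classifier, and DeepSHAP attribution of frequency-specific features. The Python sketch below shows one way such a pipeline can be assembled with scikit-learn's LeaveOneGroupOut, Keras, and the shap package. The array shapes, network layers, training settings, and feature representation are assumptions for illustration only; the abstract does not specify the authors' actual architecture or preprocessing, and SHAP's DeepExplainer support varies across framework versions.

# Minimal LOSO pain-decoding sketch with a 1D CNN and DeepSHAP.
# Data shapes, layers, and hyperparameters are illustrative assumptions,
# not the published configuration.
import numpy as np
import shap
import tensorflow as tf
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical data: X holds per-trial EEG feature vectors (e.g. band-power
# features), y the binary pain label (0 = low, 1 = high), groups the subject ID.
n_trials, n_features = 5000, 64
rng = np.random.default_rng(0)
X = rng.standard_normal((n_trials, n_features, 1)).astype("float32")
y = rng.integers(0, 2, n_trials)
groups = rng.integers(0, 50, n_trials)  # 50 subjects, as in the study

def build_cnn(n_features):
    # Small 1D CNN; the real architecture is not given in the abstract.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features, 1)),
        tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# LOSO: each subject is held out once, so accuracy reflects
# subject-independent generalization.
logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups):
    model = build_cnn(n_features)
    model.fit(X[train_idx], y[train_idx], epochs=5, batch_size=64, verbose=0)
    _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
    accuracies.append(acc)

    # DeepSHAP: attribute the held-out subject's predictions to input features
    # using a small background sample drawn from the training folds.
    background = X[train_idx][:100]
    explainer = shap.DeepExplainer(model, background)
    shap_values = explainer.shap_values(X[test_idx][:50])

print(f"Mean LOSO accuracy: {np.mean(accuracies):.3f}")

In such a setup, averaging the SHAP values per feature (e.g. per frequency bin) across held-out subjects is one way to obtain the kind of band-level attributions the abstract reports for beta, alpha, theta, and delta activity.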