Minimizing EEG Human Interference: A Study of an Adaptive EEG Spatial Feature Extraction with Deep Convolutional Neural Networks

Haojin Deng, Shiqi Wang, Yimin Yang, W.G.Will Zhao, Hui Zhang, Ruizhong Wei, Q.M.Jonathan Wu, Bao-Liang Lu

Abstract: Emotion is one of the main psychological factors that affect human behaviour. Neural network models trained with Electroencephalography (EEG)-based frequency features have been widely used to recognize human emotions accurately. However, utilizing EEG-based spatial information with popular two-dimensional kernels of convolutional neural networks (CNNs) has rarely been explored in the extant literature. This paper addresses these challenges by proposing an EEG-based spatial-frequency framework for recognizing human emotion, resulting in fewer human-interference parameters and better generalization performance. Specifically, we propose a two-stream hierarchical network framework that learns features from two networks, one trained in the frequency domain and the other in the spatial domain. Our approach is extensively validated on the SEED, SEED-V, and DREAMER datasets. The proposed method achieved an accuracy of 94.84\% on the SEED dataset and 68.61\% on the SEED-V dataset with EEG data only. On the DREAMER dataset, the average accuracy is 93.01\%, 92.04\%, and 91.74\% in the valence, arousal, and dominance dimensions, respectively. The experiments directly support our motivation: utilizing the two-stream domain features significantly improves the final recognition performance. The experimental results show that the proposed framework improves over state-of-the-art methods on these three differently scaled datasets. Furthermore, they also indicate the potential of the proposed framework, in conjunction with current ImageNet-pretrained models, for improving performance on one-dimensional psychological signals.
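The abstract mentions two kinds of inputs: frequency-domain features and a spatial arrangement of electrodes that 2-D CNN kernels can consume. The snippet below is a minimal, hypothetical sketch of that general idea (it is not the paper's actual preprocessing, which is in the linked repo): it computes per-channel band power with an FFT and scatters the resulting feature vectors onto an assumed 2-D electrode grid, producing an image-like tensor with one channel per frequency band. The sampling rate, band edges, grid size, and electrode positions are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch only: the paper's exact pipeline lives in the linked
# repository. This illustrates turning 1-D EEG signals into frequency-band
# features laid out on a 2-D electrode grid, suitable for 2-D CNN kernels.

FS = 200  # assumed sampling rate in Hz (not taken from the paper)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}  # common EEG band edges

def band_power(signal, fs=FS):
    """Mean spectral power of one channel in each frequency band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])

def to_spatial_grid(channel_feats, positions, grid=(9, 9)):
    """Scatter per-channel band features onto a 2-D electrode layout.

    positions: one (row, col) grid coordinate per channel -- an assumed
    stand-in for a real montage such as SEED's 62-electrode layout.
    """
    out = np.zeros(grid + (channel_feats.shape[1],))
    for (r, c), feats in zip(positions, channel_feats):
        out[r, c] = feats
    return out

# Toy example: 4 channels, 2 seconds of simulated EEG.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 2 * FS))
feats = np.stack([band_power(ch) for ch in eeg])          # shape (4, 5)
grid = to_spatial_grid(feats, [(0, 0), (0, 8), (8, 0), (8, 8)])
print(grid.shape)  # (9, 9, 5): one 2-D "image" per frequency band
```

In a two-stream setup, the flat `feats` array would feed the frequency-domain network while `grid` would feed the spatial network with 2-D convolutions; how the two streams are fused is specific to the paper's architecture.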


Framework Structure:


Code Link:

Here is a link to the preprocessing code and a demo: EEG Demo