Classification of Positive and Negative Stimuli Using EEG Data Functional Connectivity and Machine Learning
Bhamidipati, Sai Jaya Sasanka
Electroencephalography (EEG) provides electrical measures of brain activity by monitoring voltage fluctuations produced by collective neural activity in different parts of the cerebral cortex. Recently, machine learning techniques have been widely applied in the biomedical field to classify events or participants based on EEG data. EEG data are rich in the sense that many features can be extracted from them, which makes feature selection and reduction an important step in EEG-based classification. Feature selection and correlations between features for classification of EEG data typically depend on time-frequency characteristics of the EEG channels, which represent data from different regions of the cortex. In this work, we computed functional connectivity (FC) between EEG channels and used these FC measures as features for classifying positive versus negative visual stimuli. EEG data had previously been collected from 12 participants (6 female and 6 male) while they observed positive and negative images in random order; the data were completely de-identified. After filtering noise from the data, we extracted FC features for each stimulus and reduced their number using correlation-based and principal-component-based methods. Once the features were selected, we classified positive versus negative stimuli using support vector machines, decision trees, random forests, k-nearest neighbors, Gaussian processes, AdaBoost, quadratic discriminant analysis, and logistic regression, and compared their classification accuracies. Support vector machines and logistic regression achieved the highest accuracy in predicting whether a participant was viewing a positive or negative image, reaching up to 71.9% and 71.4% per participant, respectively.
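The pipeline described above (FC feature extraction, dimensionality reduction, and classifier comparison) can be sketched as follows. This is a minimal illustrative sketch, not the thesis code: the channel count, trial count, epoch length, Pearson correlation as the FC measure, and the PCA component count are all assumptions for demonstration, and synthetic data stand in for the de-identified EEG recordings.

```python
# Hypothetical sketch of an FC-based stimulus classification pipeline.
# All dimensions and the synthetic data below are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 80, 19, 256  # assumed, not from the study

# Synthetic pre-filtered EEG epochs: (trials, channels, time samples)
eeg = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, n_trials)  # 0 = negative, 1 = positive stimulus

def fc_features(epoch):
    """Functional connectivity as pairwise Pearson correlation between
    channels; the upper triangle (diagonal excluded) is flattened into
    one feature vector per trial."""
    corr = np.corrcoef(epoch)                 # (channels, channels)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

X = np.array([fc_features(e) for e in eeg])   # (trials, C*(C-1)/2) features

# Reduce the FC features with PCA, then evaluate two of the classifiers
# compared in the study (SVM and logistic regression) by cross-validation.
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("LogReg", LogisticRegression(max_iter=1000))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    acc = cross_val_score(pipe, X, labels, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```

With random labels the cross-validated accuracy hovers near chance; on real stimulus-locked epochs this same structure would yield the per-participant accuracies the classifiers are compared on.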