Abstract
The purpose of this study is to improve human emotion classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method for classifying emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through a CNN; therefore, we propose a suitable CNN model for feature extraction by tuning the hyperparameters of the convolution filters. The EEG signal is preprocessed prior to convolution with a wavelet transform, which considers time and frequency simultaneously. We use the Database for Emotion Analysis using Physiological Signals (DEAP) open dataset to verify the proposed process, achieving 73.4% accuracy, a significant performance improvement over current best-practice models.
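The abstract names two concrete preprocessing steps: zero-crossing-rate features for the GSR channel and a time-frequency (wavelet) representation of the EEG channel that is then fed to a CNN. The following Python sketch illustrates only those two steps under stated assumptions (128 Hz sampling, a one-second analysis frame, a Morlet wavelet, and PyWavelets for the transform); the authors' actual frame lengths, wavelet choice, and CNN architecture are not specified in this abstract.

```python
# Minimal sketch of the two preprocessing steps named in the abstract.
# Assumptions (not from the paper): 128 Hz sampling, one-second frames with
# 50% hop for GSR, a Morlet wavelet for the EEG scalogram.
import numpy as np
import pywt  # PyWavelets

FS = 128  # assumed sampling rate in Hz

def gsr_zero_crossing_rate(gsr, frame_len=FS, hop=FS // 2):
    """Frame-wise zero-crossing rate of a mean-removed GSR signal."""
    x = np.asarray(gsr, dtype=float) - np.mean(gsr)
    rates = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len]
        signs = np.signbit(frame).astype(np.int8)  # 1 where negative, 0 otherwise
        rates.append(np.count_nonzero(np.diff(signs)) / frame_len)
    return np.asarray(rates)

def eeg_scalogram(eeg, scales=np.arange(1, 65), wavelet="morl"):
    """Continuous wavelet transform of one EEG channel.

    Returns a (scales x time) magnitude image that could serve as a
    single-channel, image-like input to a CNN.
    """
    coeffs, _freqs = pywt.cwt(np.asarray(eeg, dtype=float), scales, wavelet,
                              sampling_period=1.0 / FS)
    return np.abs(coeffs)

if __name__ == "__main__":
    t = np.arange(0, 10, 1.0 / FS)
    eeg = np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.randn(t.size)  # toy 10 Hz EEG
    gsr = np.cumsum(0.01 * np.random.randn(t.size))                     # toy drifting GSR
    print("GSR zero-crossing rates (first frames):", gsr_zero_crossing_rate(gsr)[:5])
    print("EEG scalogram shape (scales x samples):", eeg_scalogram(eeg).shape)
```

The resulting (scales x time) scalogram is the kind of image-like input that convolution filters can operate on, which is where the hyperparameter tuning of the CNN mentioned above would apply.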
Original language | English
---|---
Article number | 1383
Journal | Sensors (Switzerland)
Volume | 18
Issue number | 5
DOIs |
Publication status | Published - 2018 May
Bibliographical note
Funding Information: This work was supported by The Institute for Information & Communications Technology Promotion funded by the Korean Government (MSIP) (R0124-16-0002, Emotional Intelligence Technology to Infer Human Emotion and Carry on Dialogue Accordingly).
Publisher Copyright:
© 2018 by the authors. Licensee MDPI, Basel, Switzerland.
All Science Journal Classification (ASJC) codes
- Analytical Chemistry
- Information Systems
- Instrumentation
- Atomic and Molecular Physics, and Optics
- Electrical and Electronic Engineering
- Biochemistry