Total results: 7
    • Emotion recognition based on multi-modal lightweight hybrid model

      2024, 47(3):9-18.

      Keywords:emotion recognition;multimode signal fusion;EEG;EMG;TEM;support vector machine

      Abstract:Achieving more accurate emotion recognition is a challenging and meaningful task. Because emotions are complex and diverse, a single modality such as the EEG signal cannot measure emotion comprehensively and objectively. Therefore, this paper proposes a multi-modal lightweight hybrid model, PCA-MWReliefF-GAPSO-SVM, consisting of a PCA-MWReliefF feature-channel selector and a GAPSO-SVM classifier. Electroencephalogram (EEG), electromyogram (EMG) and temperature (TEM) signals were used for emotion recognition. In extensive experiments on the public DEAP dataset, classification accuracies of 97.5000%, 95.8333% and 95.8333% were obtained on the valence dimension, the arousal dimension and the four-class task, respectively. The experimental results show that the proposed hybrid model improves emotion recognition accuracy and is significantly better than single-modality emotion recognition. Compared with recent similar work, the proposed hybrid model offers higher accuracy, less computation and fewer channels, and is easier to apply in practice.
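The abstract names the selector stage after ReliefF. As a hedged illustration only, the core ReliefF idea behind it can be sketched in a few numpy lines; the paper's PCA step, MWReliefF multi-weight variant, and GA/PSO-tuned SVM are omitted, and the sampling count and toy data below are assumptions for the example, not the paper's configuration:

```python
import numpy as np

def relieff_weights(X, y, n_iter=50, seed=0):
    """Simplified ReliefF-style feature weighting (binary labels).

    For each sampled instance, a feature's weight grows when it
    differs on the nearest miss (other class) and shrinks when it
    differs on the nearest hit (same class)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    span = X.max(axis=0) - X.min(axis=0) + 1e-12  # per-feature range
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)       # L1 distance to all samples
        dist[i] = np.inf
        same = (y == y[i])
        same[i] = False
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(~same, dist, np.inf))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iter

# toy data: feature 0 carries the label, feature 1 is pure noise
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
X = np.c_[y + 0.1 * rng.standard_normal(200), rng.standard_normal(200)]
w = relieff_weights(X, y)
```

On such data the informative feature receives a clearly larger weight, which is the signal a channel selector would threshold on.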

    • Research on EEG emotion recognition method based on deep graph convolutional networks

      2024, 47(4):18-22.

      Keywords:EEG;emotion recognition;deep graph convolutional neural networks;global brain region

      Abstract:To address the insufficient spatial-correlation information in emotional EEG representations extracted by shallow graph convolution, this paper proposes a deep graph convolutional network model. The model uses deep graph convolution to learn the intrinsic relationships among the global channels of emotional EEG, applying residual connections and weight self-mapping during convolutional propagation to counter the tendency of node features in deep graph convolutional networks to converge to a fixed space and fail to learn effective features. In addition, PN regularization is added after the convolutional layers to enlarge the distance between different emotional features and improve recognition performance. Experimental results on the SEED dataset show that, compared with shallow graph convolutional networks, the accuracy of the proposed model increases by 0.7% while the standard deviation decreases by 3.15. These results demonstrate that the global spatial-correlation information across brain regions extracted by this model is effective for emotion recognition.
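The propagation scheme described (residual connections plus a PairNorm-style PN step after each layer) can be sketched without a deep-learning framework. The following numpy-only forward pass is a minimal sketch, assuming a dense adjacency, ReLU activations, and a simple additive residual; layer widths, the exact residual weighting, and the toy graph are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def pairnorm(H, s=1.0):
    """PN regularization sketch: center node features, then rescale
    so the mean squared row norm is constant, keeping deep-layer
    node features from collapsing toward a single point."""
    H = H - H.mean(axis=0, keepdims=True)
    scale = np.sqrt((H ** 2).sum(axis=1).mean()) + 1e-12
    return s * H / scale

def deep_gcn_forward(A, X, weights):
    """Residual deep graph convolution: propagate over the
    symmetrically normalized adjacency, add the layer input back
    (residual connection), apply ReLU, then PN."""
    deg = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(deg, deg))     # symmetric normalization
    H = X
    for W in weights:
        H = pairnorm(np.maximum(A_hat @ H @ W + H, 0))  # residual + ReLU + PN
    return H

# toy graph: 4 EEG channels, fully connected with self-loops
A = np.ones((4, 4))
X = np.arange(12.0).reshape(4, 3)
rng = np.random.default_rng(0)
Ws = [0.1 * rng.standard_normal((3, 3)) for _ in range(8)]
H = deep_gcn_forward(A, X, Ws)
```

Even after 8 layers the node features stay centered and finite rather than converging to a fixed point, which is the failure mode the residual and PN components are meant to prevent.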

    • Research on EEG emotion recognition based on multi-domain information fusion

      2024, 47(2):168-175.

      Keywords:EEG;multi-domain information fusion;emotion recognition;parallel convolutional neural network

      Abstract:EEG signal recognition methods rarely integrate spatial, temporal and frequency information. To fully exploit the rich information contained in EEG signals, this paper proposes a multi-domain information fusion method for EEG emotion recognition. The method uses a parallel convolutional neural network (PCNN) model that combines a two-dimensional convolutional neural network (2D-CNN) and a one-dimensional convolutional neural network (1D-CNN) to learn the spatial, temporal and frequency features of EEG signals and classify human emotional states. The 2D-CNN mines spatial and frequency information between neighboring EEG channels, while the 1D-CNN mines temporal and frequency information of the EEG. Finally, the features extracted by the two parallel CNN modules are fused for emotion recognition. Three-class emotion experiments on the SEED dataset show that the PCNN fusing spatial, temporal and frequency features reaches an overall classification accuracy of 98.04%, an improvement of 1.97% and 0.60% over the 2D-CNN extracting only spatial-frequency information and the 1D-CNN extracting only temporal-frequency information, respectively. Compared with recent similar work, the proposed method is superior for EEG emotion classification.
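The parallel-branch idea (a 2D branch over a channel-by-band map and a 1D branch over the time course, concatenated into one fused feature vector) can be shown with plain numpy convolutions. This is a minimal sketch, assuming single-kernel branches, ReLU activations, and toy input sizes; the paper's layer counts, kernel sizes, and classifier head are not specified here:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation (single channel, single kernel)."""
    H, W = x.shape
    h, w = k.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + h, j:j + w] * k).sum()
    return out

def conv1d(x, k):
    """Valid 1-D cross-correlation."""
    n, m = len(x), len(k)
    return np.array([(x[i:i + m] * k).sum() for i in range(n - m + 1)])

def parallel_branches(spatial_map, time_series, k2, k1):
    """PCNN-style fusion sketch: flatten the 2D-branch and 1D-branch
    activations and concatenate them into one feature vector that a
    classifier would consume."""
    f2 = np.maximum(conv2d(spatial_map, k2), 0).ravel()  # spatial-frequency branch
    f1 = np.maximum(conv1d(time_series, k1), 0)          # temporal-frequency branch
    return np.concatenate([f2, f1])

# toy inputs: 6x5 channel-band map and a 20-point time series
rng = np.random.default_rng(0)
fused = parallel_branches(rng.standard_normal((6, 5)),
                          rng.standard_normal(20),
                          rng.standard_normal((3, 3)),
                          rng.standard_normal(4))
```

The fused vector simply stacks both branches (here 12 + 17 = 29 values), so each branch contributes its own domain's features to the final classifier.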

    • EEG emotion recognition by 4DC-BGRU based on multi-level attention mechanism

      2023, 46(8):134-141.

      Keywords:EEG emotion recognition;two-way convolution neural network;multi-scale features;multi-level attention mechanism;bidirectional gated recurrent unit

      Abstract:To improve the accuracy of EEG emotion recognition, extract richer feature information and improve the stability of the network model, an improved EEG emotion recognition model based on a multi-level attention mechanism is proposed. For feature extraction, the raw EEG signal is transformed into a four-dimensional space-spectrum-time structure to capture rich EEG information. For the network model, a two-way convolutional neural network is constructed to learn spatial and frequency information; it effectively extracts multi-scale features and increases the network width to learn richer feature information. A batch normalization layer is integrated after the convolution and pooling layers to prevent overfitting. Finally, a multi-level attention mechanism-bidirectional gated recurrent unit module is constructed to process the temporal features, followed by Softmax classification. The bidirectional gated recurrent unit learns more comprehensive contextual feature information, while the multi-level attention mechanism correlates individual time slices with the overall time slices in the four-dimensional features. Evaluation experiments were carried out on the arousal and valence dimensions of the DEAP dataset: the average two-class accuracies were 96.38% and 96.73%, respectively, and the average four-class accuracy was 93.78%. The experimental results show that the average accuracy of this algorithm is higher than that of single-channel convolutional neural networks and algorithms from other literature, demonstrating that it can effectively improve EEG emotion recognition performance.
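The attention stage described (correlating individual time slices with the overall time-slice representation) can be sketched as a single attention-pooling step. This is a hedged illustration only: the BiGRU encoder and the second attention level are omitted, and the dot-product scoring against the mean slice is an assumption about how "correlate with the overall time slices" might be realized:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def slice_attention(slices):
    """Temporal attention sketch: score each time slice against the
    overall (mean) slice representation, normalize the scores into
    attention weights, and pool the slices with those weights."""
    context = slices.mean(axis=0)    # overall time-slice summary
    scores = slices @ context        # one relevance score per slice
    alpha = softmax(scores)          # attention weights, sum to 1
    return alpha @ slices, alpha     # weighted pooling over time

# toy sequence: 8 time slices of 16-dimensional features
rng = np.random.default_rng(0)
pooled, alpha = slice_attention(rng.standard_normal((8, 16)))
```

The weights form a proper distribution over slices, so slices that align with the overall representation dominate the pooled feature passed on to classification.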

    • Emotion Classification Method of EEG Signal Based on Convolutional Neural Network

      2022, 45(1):1-7.

      Keywords:electroencephalogram signal;emotion recognition;deep learning;convolutional neural network

      Abstract:As an advanced function of the human brain, emotion has a great impact on people's mental health and personality characteristics. Classifying EEG emotion datasets can provide a further theoretical and practical basis for real-time monitoring of the emotions of healthy people and depressed patients in the future. This article uses differential entropy features extracted from a public EEG emotion dataset, smoothed with traditional moving-average and linear dynamical system methods. Taking the convolutional neural network in deep learning as its foundation, an EEG signal emotion classification model is designed comprising 4 convolutional layers, 4 max-pooling layers, 2 fully connected layers and 1 Softmax layer, with batch normalization used to ease the parameter search and suppress model overfitting. The experimental results show that the average accuracy of three-class emotion recognition on the SEED dataset with this model reaches 98.73%; the precision, recall and F1 score are 99.69%, 98.12% and 98.86%, respectively; and the area under the ROC curve reaches 0.998. Compared with recent similar work, the proposed convolutional neural network structure has clear advantages for EEG signal emotion classification.
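The batch normalization step the abstract credits with easing parameter search and curbing overfitting is easy to show in isolation. The following is a minimal inference-time sketch, assuming per-feature statistics over the batch axis and fixed scale/shift parameters; the surrounding 4-conv/4-pool architecture is not reproduced:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization over the batch axis: standardize each
    feature to zero mean and unit variance, then apply a scale
    (gamma) and shift (beta) that would be learned in training."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

# toy activations with a large offset and spread, as a conv layer
# might produce before normalization
rng = np.random.default_rng(0)
acts = 5.0 + 3.0 * rng.standard_normal((64, 10))
normed = batch_norm(acts)
```

After the transform every feature column is centered with unit spread regardless of the incoming offset, which is what keeps deeper layers' inputs in a well-conditioned range.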

    • Research on Robot Interaction of Facial Expression Recognition Based on Channel Attention Mechanism

      2021, 44(11):169-174.

      Keywords:deep learning;facial expression recognition;NAO robot;human-computer interaction;channel attention mechanism;emotion recognition;expression classification;RAF-DB

      Abstract:To enhance the interaction between robots and humans and make human-computer interaction more intelligent and natural, a facial expression recognition method based on a channel attention mechanism is proposed. Using the biped humanoid robot NAO as the experimental platform, a human-computer interaction system capable of facial expression recognition is designed. First, the facial expression recognition algorithm with the attention mechanism is trained on the RAF-DB dataset; the trained model recognizes 7 basic expressions (happy, angry, disgusted, fearful, sad, surprised and neutral) with an accuracy of 76.7%. Second, the voice and actions of the NAO robot in response to different expressions are designed. Finally, the entire human-computer interaction system is tested. The test results show that when the NAO robot receives the emotion recognized by the computer, it speaks and acts in a human-like way.
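The abstract does not detail its channel attention module, but the generic squeeze-and-excitation pattern such modules typically follow can be sketched in numpy. This is an illustrative sketch only; the bottleneck width, random weights, and toy feature maps below are assumptions, and the paper's actual module may differ:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, W1, W2):
    """Channel attention sketch: global-average-pool each channel
    (squeeze), pass the channel descriptor through a small
    bottleneck (excitation), and rescale every channel's feature
    map by its learned gate in (0, 1)."""
    C = feat.shape[0]
    squeeze = feat.reshape(C, -1).mean(axis=1)          # (C,) channel summary
    gates = sigmoid(W2 @ np.maximum(W1 @ squeeze, 0))   # (C,) channel gates
    return feat * gates[:, None, None], gates

# toy input: 8 feature maps of size 6x6, bottleneck ratio 4
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 6, 6))
W1 = 0.5 * rng.standard_normal((2, 8))
W2 = 0.5 * rng.standard_normal((8, 2))
out, gates = channel_attention(feat, W1, W2)
```

Each channel is multiplied by a gate strictly between 0 and 1, so the network can emphasize expression-relevant channels and suppress the rest without changing the feature-map shape.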

    • Multi-layer SVM speech emotion recognition based on genetic optimization

      2017, 40(10):122-126.

      Keywords:speech emotion recognition;genetic algorithm (GA);feature dimension reduction;SVM

      Abstract:Aiming at the problems of high feature dimensionality and low recognition rate in speech emotion recognition, this paper proposes a genetic algorithm for feature dimension reduction and constructs a multi-layer SVM classifier with a binary-tree structure to recognize speech emotion. First, common emotional features are extracted after preprocessing the speech signal. Because the features are numerous and contain redundant data, the genetic algorithm is used to optimize the extracted features. Then, the hierarchical SVM classification model with the binary-tree structure is trained using the most discriminative features. The experimental results demonstrate the effectiveness of the proposed speech emotion recognition scheme on the Berlin emotion corpus containing 7 emotions.
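The GA-based feature reduction step can be illustrated with a toy bit-mask genetic algorithm. This is a hedged sketch: the class-separability fitness below stands in for the paper's SVM-accuracy fitness, the binary-tree SVM itself is omitted, and all population/mutation settings and data are assumptions for the example:

```python
import numpy as np

def fitness(mask, X, y):
    """Separability proxy: distance between the two class means over
    the selected features, minus a small penalty per feature kept."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    d = np.linalg.norm(Xs[y == 0].mean(axis=0) - Xs[y == 1].mean(axis=0))
    return d - 0.01 * mask.sum()

def ga_select(X, y, pop=20, gens=30, seed=0):
    """Toy GA for feature-mask selection: keep the fitter half each
    generation, refill with uniform crossover plus bit-flip mutation."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    P = rng.integers(0, 2, (pop, d))                     # random bit masks
    for _ in range(gens):
        f = np.array([fitness(m, X, y) for m in P])
        parents = P[np.argsort(f)[-pop // 2:]]           # elitist selection
        kids = []
        while len(kids) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(d) < 0.5, a, b)  # uniform crossover
            flip = rng.random(d) < 0.05                  # bit-flip mutation
            kids.append(np.where(flip, 1 - child, child))
        P = np.vstack([parents, kids])
    f = np.array([fitness(m, X, y) for m in P])
    return P[np.argmax(f)]

# toy data: only the first 3 of 10 features separate the classes
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 120)
X = rng.standard_normal((120, 10))
X[:, :3] += 3.0 * y[:, None]
best = ga_select(X, y)
```

The evolved mask concentrates on the discriminative features while the per-feature penalty discourages redundant ones, mirroring the dimension-reduction role the GA plays ahead of the SVM tree.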
