Video Analysis Based Improved Multi-Facial Emotion Recognition and Classification Framework Using GCRCNN
International Journal of Electrical and Electronics Engineering
© 2024 by SSRG - IJEEE Journal
Volume 11, Issue 7
Year of Publication: 2024
Authors : Jyoti S. Bedre, P. Lakshmi Prasanna |
How to Cite?
Jyoti S. Bedre, P. Lakshmi Prasanna, "Video Analysis Based Improved Multi-Facial Emotion Recognition and Classification Framework Using GCRCNN," SSRG International Journal of Electrical and Electronics Engineering, vol. 11, no. 7, pp. 317-331, 2024. Crossref, https://doi.org/10.14445/23488379/IJEEE-V11I7P129
Abstract:
Facial emotions are the varying expressions of a person’s face that communicate one’s feelings and moods. Facial emotion in videos can be detected by analyzing keyframes for facial muscle movements and patterns; however, detection is challenging when multiple expressions occur simultaneously or when camera angles vary. To overcome these pitfalls, this paper provides a practical framework for detecting facial emotions in videos. First, the input keyframes are pre-processed with the MF and IN algorithms to obtain enhanced images. Second, human detection and tracking are performed using YOLOv7 and the BYTE tracking algorithm, after which the T-SNEVJ algorithm is used for face detection. Third, facial landmarks are extracted using the HC technique, followed by mesh generation and feature extraction, with ED-SVR utilized for mesh generation. Meanwhile, feature point tracking followed by motion analysis is carried out using CC_OF. Finally, the GCRCNN algorithm classifies multi-facial emotions. The proposed system achieves an accuracy of 99.34% and a recall of 99.20%, outperforming existing FER techniques.
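The MF/IN pre-processing and CC_OF tracking stages named in the abstract are the authors' own pipeline components and are not specified on this page; the sketch below is only a generic illustration of the building blocks those names suggest — median filtering, min-max intensity normalization, and cross-correlation block matching for feature point tracking — written with NumPy/SciPy. All function names and parameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_frame(frame, kernel=3):
    """MF + IN sketch: median-filter the keyframe, then min-max normalize to [0, 1]."""
    denoised = median_filter(frame.astype(np.float64), size=kernel)
    lo, hi = denoised.min(), denoised.max()
    if hi == lo:  # flat frame: nothing to normalize
        return np.zeros_like(denoised)
    return (denoised - lo) / (hi - lo)

def track_point(prev, curr, y, x, win=2, search=3):
    """Cross-correlation sketch: estimate the (dy, dx) motion of the feature at
    (y, x) by matching a zero-mean template against shifted windows in curr."""
    tpl = prev[y - win:y + win + 1, x - win:x + win + 1]
    tpl = tpl - tpl.mean()
    best_score, best_d = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy - win:y + dy + win + 1,
                        x + dx - win:x + dx + win + 1]
            score = float(np.sum(tpl * (cand - cand.mean())))
            if score > best_score:
                best_score, best_d = score, (dy, dx)
    return best_d

# Toy grayscale keyframe: a horizontal gradient with one salt-noise spike.
frame = np.tile(np.arange(8, dtype=float), (8, 1)) * 10.0
frame[2, 2] += 200.0
enhanced = preprocess_frame(frame)  # spike suppressed, values scaled to [0, 1]

# Toy feature-tracking pair: a small pattern shifted down 1 px and right 2 px.
prev = np.zeros((20, 20))
prev[9:12, 9:12] = np.arange(1, 10, dtype=float).reshape(3, 3)
curr = np.roll(np.roll(prev, 1, axis=0), 2, axis=1)
shift = track_point(prev, curr, 10, 10)  # → (1, 2)
```

A real pipeline would normalize the correlation score and run this per tracked landmark per frame pair; the brute-force search here is only meant to make the matching idea concrete.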
Keywords:
Graph Convolution and Regular 1-D convolutional based Convolutional Neural Network (GCRCNN), You Only Look Once Version 7 (YOLOv7), Intensity Normalization (IN), Median Filter (MF), Facial expressions, Human detection, Facial Emotion Recognition (FER).
References:
[1] Hugo Carneiro, Cornelius Weber, and Stefan Wermter, “Whose Emotion Matters? Speaking Activity Localization without Prior Knowledge,” Neurocomputing, vol. 545, pp. 1-13, 2023.
[2] Sunsern Cheamanunkul, and Sanchit Chawla, “Drowsiness Detection Using Facial Emotions and Eye Aspect Ratios,” 24th International Computer Science and Engineering Conference (ICSEC), Bangkok, Thailand, pp. 1-4, 2020.
[3] Mateusz Piwowarski, and Patryk Wlekly, “Factors Disrupting the Effectiveness of Facial Expression Analysis in Automated Emotion Detection,” Procedia Computer Science, vol. 207, pp. 4296-4305, 2022.
[4] Hanyu Liu, Jiabei Zeng, and Shiguang Shan, “Facial Expression Recognition for in-the-Wild Videos,” 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), Buenos Aires, Argentina, pp. 615-618, 2020.
[5] Shruti Japee et al., “Inability to Move One’s Face Dampens Facial Expression Perception,” Cortex, vol. 169, pp. 35-49, 2023.
[6] Surya Teja Chavali et al., “Smart Facial Emotion Recognition with Gender and Age Factor Estimation,” Procedia Computer Science, vol. 218, pp. 113-123, 2023.
[7] Prameela Naga, Swamy Das Marri, and Raiza Borreo, “Facial Emotion Recognition Methods, Datasets, and Technologies: A Literature Survey,” Materials Today: Proceedings, vol. 80, part 3, pp. 2824-2828, 2023.
[8] Meaad Hussein Abdul-Hadi, and Jumana Waleed, “Human Speech and Facial Emotion Recognition Technique Using SVM,” International Conference on Computer Science and Software Engineering (CSASE), pp. 191-196, 2020.
[9] Carmen Bisogni et al., “Emotion Recognition at a Distance: The Robustness of Machine Learning Based on Handcrafted Facial Features vs. Deep Learning Models,” Image and Vision Computing, vol. 136, pp. 1-15, 2023.
[10] Akriti Jaiswal, A. Krishnama Raju, and Suman Deb, “Facial Emotion Detection Using Deep Learning,” International Conference for Emerging Technology (INCET), India, pp. 1-5, 2020.
[11] P. Kaviya, and T. Arumugaprakash, “Group Facial Emotion Analysis System Using Convolutional Neural Network,” International Conference on Trends in Electronics and Informatics (ICOEI), India, pp. 643-647, 2020.
[12] Chirag Dalvi et al., “A Survey of AI-Based Facial Emotion Recognition: Features, ML & DL Techniques, Age-Wise Datasets, and Future Directions,” IEEE Access, vol. 9, pp. 165806-165840, 2021.
[13] Chahak Gautam, and K.R. Seeja, “Facial Emotion Recognition Using Handcrafted Features and CNN,” Procedia Computer Science, vol. 218, pp. 1295-1303, 2023.
[14] Muhammad Abdullah, Mobeen Ahmad, and Dongil Han, “Facial Expression Recognition in Videos: An CNN-LSTM Based Model for Video Classification,” International Conference on Electronics, Information, and Communication (ICEIC), Spain, pp. 1-3, 2020.
[15] Divina Lawrance, and Suja Palaniswamy, “Emotion Recognition from Facial Expressions for 3D Videos Using Siamese Network,” International Conference on Communication, Control and Information Sciences (ICCISC), India, pp. 1-6, 2021.
[16] Mingjing Yu et al., “Facial Expression Recognition Based on a Multi-Task Global-Local Network,” Pattern Recognition Letters, vol. 131, pp. 166-171, 2020.
[17] Ninad Mehendale, “Facial Emotion Recognition Using Convolutional Neural Networks (FERC),” SN Applied Sciences, vol. 2, pp. 1-8, 2020.
[18] Ke Zhang et al., “Real-Time Video Emotion Recognition Based on Reinforcement Learning and Domain Knowledge,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 3, pp. 1034-1047, 2021.
[19] Haipeng Zeng et al., “EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, pp. 927-937, 2020.
[20] Juan Miguel López-Gil, and Nestor Garay-Vitoria, “Photogram Classification-Based Emotion Recognition,” IEEE Access, vol. 9, pp. 136974-136984, 2021.
[21] Wenbo Li et al., “A Spontaneous Driver Emotion Facial Expression (DEFE) Dataset for Intelligent Vehicles: Emotions Triggered by Video-Audio Clips in Driving Scenarios,” IEEE Transactions on Affective Computing, vol. 14, no. 1, pp. 747-760, 2023.
[22] Min Hu et al., “A Two-Stage Spatiotemporal Attention Convolution Network for Continuous Dimensional Emotion Recognition from Facial Video,” IEEE Signal Processing Letters, vol. 28, pp. 698-702, 2021.
[23] Karnati Mohan et al., “Facial Expression Recognition Using Local Gravitational Force Descriptor-Based Deep Convolution Neural Networks,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1-12, 2021.
[24] Najmeh Samadiani et al., “Happy Emotion Recognition from Unconstrained Videos Using 3D Hybrid Deep Features,” IEEE Access, vol. 9, pp. 35524-35538, 2021.
[25] Jiyoung Lee et al., “Multi-Modal Recurrent Attention Networks for Facial Expression Recognition,” IEEE Transactions on Image Processing, vol. 29, pp. 6977-6991, 2020.
[26] Xi Zhang, Feifei Zhang, and Changsheng Xu, “Joint Expression Synthesis and Representation Learning for Facial Expression Recognition,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 3, pp. 1681-1695, 2022.
[27] Noushin Hajarolasvadi, Enver Bashirov, and Hasan Demirel, “Video-Based Person-Dependent and Person-Independent Facial Emotion Recognition,” Signal, Image and Video Processing, vol. 15, pp. 1049-1056, 2021.
[28] Moises Garcia Villanueva, and Salvador Ramirez Zavala, “Deep Neural Network Architecture: Application for Facial Expression Recognition,” IEEE Latin America Transactions, vol. 18, no. 7, pp. 1311-1319, 2020.
[29] ByungOk Han et al., “Deep Emotion Change Detection via Facial Expression Analysis,” Neurocomputing, vol. 549, pp. 1-16, 2023.
[30] Sanghyun Lee, David K. Han, and Hanseok Ko, “Multimodal Emotion Recognition Fusion Analysis Adapting BERT with Heterogeneous Feature Unification,” IEEE Access, vol. 9, pp. 94557-94572, 2021.