Recognizing Facial Emotions for Educational Learning Settings

Akputu Oryina Kingsley

Abstract


Educational learning settings exploit cognitive factors as the ultimate feedback for personalizing teaching and learning. Beyond cognition, however, the learner's emotions, which reflect the affective dimension of learning, also play an important role in the learning process. These emotions can be recognized either by asking the learner directly or by tracking implicit cues such as facial expressions. Despite considerable effort on emotion recognition, the research community is currently constrained by two issues: (i) the lack of efficient feature descriptors that accurately represent, and thereby allow recognition (detection) of, the learner's emotions; and (ii) although affect has contextual antecedents, most existing systems do not use contextual datasets to benchmark emotion recognizers in learning-specific scenarios, resulting in poor generalization. This paper presents a Facial Emotion Recognition Technique (FERT). The FERT is realized from the results of a preliminary analysis across several facial feature descriptors, and emotions are classified using the Multiple Kernel Learning (MKL) method, which is reported to possess good merits. A contextually relevant Simulated Learning Emotion (SLE) dataset is introduced to ground-truth the FERT scheme. The recognition performance of the FERT scheme generalizes to 90.3% on the SLE dataset, while on the more popular but non-contextual Extended Cohn-Kanade (CK+) and AFEW datasets, accuracies of 90.0% and 82.8% are reported, respectively. A test of the null hypothesis that there is no significant difference in the accuracies of the descriptors rejected it (df = 5, p = 0.01212) at the 95% confidence level.
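To make the classification step concrete, the sketch below shows one minimal way to combine per-descriptor kernels in the spirit of Multiple Kernel Learning: an RBF kernel is computed for each facial descriptor, the kernels are summed with fixed weights, and the combined Gram matrix is fed to a precomputed-kernel SVM. This is an illustrative assumption rather than the paper's actual pipeline; the LBP and Gabor feature matrices, the kernel weights, and the labels are all synthetic stand-ins.

```python
# Minimal MKL-style sketch (NOT the paper's implementation): fixed-weight
# combination of per-descriptor RBF kernels, classified with a
# precomputed-kernel SVM from scikit-learn.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, n_classes = 300, 6                      # e.g., six emotion categories
X_lbp = rng.normal(size=(n, 59))           # hypothetical LBP histograms
X_gabor = rng.normal(size=(n, 40))         # hypothetical Gabor magnitudes
y = rng.integers(0, n_classes, size=n)     # synthetic labels

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

def combined_kernel(rows, cols, w=(0.5, 0.5)):
    """Weighted sum of per-descriptor RBF kernels (the MKL combination)."""
    return (w[0] * rbf_kernel(X_lbp[rows], X_lbp[cols])
            + w[1] * rbf_kernel(X_gabor[rows], X_gabor[cols]))

clf = SVC(kernel="precomputed")
clf.fit(combined_kernel(idx_tr, idx_tr), y[idx_tr])   # train Gram matrix
print("held-out accuracy:",
      clf.score(combined_kernel(idx_te, idx_tr), y[idx_te]))
```

In a full MKL formulation the kernel weights are learned jointly with the classifier rather than fixed; the fixed-weight sum above is only the simplest stand-in. Similarly, the reported significance test (df = 5, p = 0.01212) is consistent with a chi-square-distributed statistic over six descriptors; one plausible reading, not confirmed by the abstract, is a Friedman-type test on per-fold descriptor accuracies, sketched below with invented numbers.

```python
# Hedged sketch of a Friedman test over k = 6 descriptors (chi-square
# statistic with df = k - 1 = 5); the per-fold accuracy table is invented.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
# rows: 10 cross-validation folds; columns: 6 hypothetical descriptors
acc = np.clip(rng.normal(loc=[0.78, 0.81, 0.84, 0.86, 0.88, 0.90],
                         scale=0.03, size=(10, 6)), 0.0, 1.0)
stat, p = friedmanchisquare(*acc.T)        # one sample per descriptor
print(f"chi-square = {stat:.2f}, df = 5, p = {p:.5f}")
# p < 0.05 rejects the null of equal descriptor accuracies.
```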

Keywords


emotion recognition, affect recognition, education, learning settings, descriptors

DOI: http://doi.org/10.11591/ijra.v11i1.pp%p



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
