Automatic Facial Emotion Recognition Method Based on Eye Region Changes
Subject Area: Image Processing
Authors: Mina Navraan 1, N. Moghaddam Charkari 2, Muharram Mansoorizadeh 3
1 - Tarbiat Modares University
2 - Tarbiat Modares University
3 - Bu-Ali Sina University
Keywords: Facial emotion recognition, Gabor filter, Support Vector Machine (SVM), Eye region
Abstract:
Emotion is expressed via facial muscle movements, speech, body and hand gestures, and various biological signals such as the heartbeat. However, the most natural way humans display emotion is through facial expression. Facial expression recognition has been a major challenge in computer vision for the last two decades. This paper focuses on facial expression to identify the seven universal human emotions: anger, disgust, fear, happiness, sadness, surprise, and neutral. Unlike the majority of approaches, which use the whole face or selected regions of the face, our facial emotion recognition (FER) method analyzes human emotional states based solely on eye region changes. We use this region because it is one of the most informative for representing facial expression; moreover, it leads to a lower feature dimension and lower computational complexity. Facial expressions are described by appearance features obtained from texture encoded with Gabor filters, together with geometric features. A Support Vector Machine with RBF and polynomial kernel functions is used to classify the different types of emotion. The Facial Expressions and Emotion Database (FG-Net), which contains spontaneous emotions, and the Cohn-Kanade (CK) database, with posed emotions, were used in the experiments. The proposed method was trained on the two databases separately and achieved accuracy rates of 96.63% for spontaneous emotion recognition and 96.6% for posed expression recognition, respectively.
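The pipeline the abstract outlines can be illustrated with a minimal sketch: Gabor-filter responses over an eye-region patch are pooled into a feature vector and fed to an RBF-kernel SVM. This is not the authors' implementation; the kernel sizes, filter-bank parameters, pooling choice, and the synthetic two-class data are all assumptions for illustration, and the paper's geometric features are omitted.

```python
# Illustrative sketch only -- NOT the paper's implementation. All parameter
# choices and the synthetic data below are assumptions.
import numpy as np
from scipy.ndimage import convolve
from sklearn.svm import SVC

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    """Real part of a Gabor kernel (orientation theta, wavelength lam)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / lam)

def eye_region_features(patch, n_orientations=4):
    """Mean |response| of a small Gabor bank over an eye-region patch --
    a stand-in for the appearance features described in the abstract."""
    feats = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        resp = convolve(patch.astype(float), kern)
        feats.append(np.abs(resp).mean())
    return np.array(feats)

# Toy demo: synthetic 24x48 "eye region" patches from two fake classes,
# classified with an RBF-kernel SVM as in the paper's setup.
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        patch = rng.normal(size=(24, 48))
        if label:  # class 1 gets an added oriented stripe pattern
            patch += 2 * np.sin(np.arange(48) * 2 * np.pi / 4)[None, :]
        X.append(eye_region_features(patch))
        y.append(label)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.score(X, y))
```

In a real system the pooled magnitudes would typically be replaced by a denser sampling of the filter-bank responses, and the SVM would be evaluated with cross-validation rather than on its training data as in this toy demo.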