eCamera: a real-time facial expression recognition system
According to a report from the United Nations, approximately 20 percent of youth worldwide experience a mental health condition each year. The young generation is at great risk for a variety of mental-health conditions: problems such as depression, anxiety, and conduct disorder affect about 1 in 10 children and young people, and are often a direct response to what is happening in their lives. Facial expression is one of the most direct everyday reflections of emotion, which in turn is a key indicator of mental status. Designing a real-time facial expression recognition system to look after the young generation is therefore both urgent and important.

This project builds the eCamera, a facial expression recognition system based on a Raspberry Pi that processes a live Pi Camera feed and returns results in real time. Although computer vision and facial expression recognition technology have made significant progress in recent years, with many professional systems available for real-world applications, there is still strong interest in implementing such a system on a smaller, reasonably priced device such as a single-board computer. The proposed system combines image pre-processing and a Convolutional Neural Network (CNN) to build the facial expression recognition model. In pre-processing, a Haar-Cascade classifier is used for face detection, and 68 facial landmarks are collected for expression feature extraction. A CNN is then trained and tested for facial expression classification, and the trained model is saved on the Raspberry Pi for real-time recognition. All computing is performed on the eCamera itself; only the expression recognition results are delivered to users. Two public datasets, JAFFE and CK+, are used in simulations to evaluate the proposed recognition procedures, and initial real-time experimental results are also provided.
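The landmark-based feature extraction step described above can be illustrated with a small sketch. The snippet below is a minimal illustration, not the project's actual implementation: it assumes the 68 facial landmarks have already been obtained from the pre-processing stage, and shows one common way to normalize them into a translation- and scale-invariant feature vector before classification into seven expression categories. The label names and the linear stand-in for the trained CNN are placeholders for illustration only.

```python
import numpy as np

# Seven expression categories as mentioned in the abstract
# (the exact label names are an assumption).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def landmark_features(landmarks: np.ndarray) -> np.ndarray:
    """Turn 68 (x, y) facial landmarks into a 136-dim feature vector.

    Centers the points on their mean and scales by the largest
    distance from the center, making the features invariant to
    where the face appears in the frame and how large it is.
    """
    assert landmarks.shape == (68, 2)
    centered = landmarks - landmarks.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).max()
    return (centered / scale).ravel()

def classify(features: np.ndarray, weights: np.ndarray) -> str:
    """Stand-in for the trained CNN: a simple linear scorer over the features."""
    scores = weights @ features
    return EMOTIONS[int(np.argmax(scores))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_landmarks = rng.random((68, 2)) * 100            # placeholder detector output
    feats = landmark_features(fake_landmarks)             # 136-dim feature vector
    w = rng.standard_normal((len(EMOTIONS), feats.size))  # placeholder weights
    print(classify(feats, w))                             # prints one of the seven labels
```

In the real pipeline, the feature extraction would feed a trained CNN rather than random weights, and the landmarks would come from a face detected by the Haar-Cascade stage.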
Compared with previous models built upon OpenCV, the proposed model shows a clear improvement in accuracy with robust computing. Overall, compared with the above methods, our work presents higher accuracy on the JAFFE and CK+ databases, a more robust evaluation methodology, and recognition of seven expression categories instead of only the five or six recognized by Song et al.
Institutional Repository URI: https://hdl.handle.net/10657.1/1413