Deep Convolutional Neural Network for Facial Expression Recognition

Date

2017-12-07

Abstract

This research designs a Facial Expression Recognition (FER) system based on a deep convolutional neural network using facial parts. Facial expression is one of the most important nonverbal channels through which Human Machine Interaction (HMI) systems can recognize humans’ internal emotions and intent. It is a type of biometric authentication that focuses on uniquely recognizing human facial appearance based on one or more physical or behavioral traits and on the internal emotions portrayed on one’s face. Facial expression recognition has attracted considerable attention because of its use in numerous fields such as behavioral science, education, entertainment, medicine, and security surveillance. Although humans recognize facial expressions virtually without effort, reliable expression recognition by machine remains a challenge. Recently, several works on FER have successfully used Convolutional Neural Networks (CNNs) for feature extraction and classification. A CNN is a type of deep neural network that operates on data representations: the input is decomposed into features, and each deeper layer builds a more complex representation upon the previous one. The final feature representations are then used for classification. The proposed method uses a two-channel convolutional neural network in which Facial Parts (FPs) are the input to the first convolutional layer: the extracted eye region is the input to the first channel, while the mouth region is the input to the second. Information from both channels converges in a fully connected layer, which learns global information from these local features and performs the classification. Experiments are carried out on the Japanese Female Facial Expression (JAFFE) and the Extended Cohn-Kanade (CK+) datasets to determine the recognition accuracy of the proposed FER system. The results achieved show that the system provides improved classification accuracy compared to other methods.
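
For a concrete sense of the two-channel architecture described above, the following is a minimal PyTorch sketch. It is an illustration only: the layer counts, channel widths, kernel sizes, grayscale-input assumption, crop resolutions, and seven-class output are hypothetical choices not specified in the abstract.

# Hypothetical sketch of the two-channel CNN described in the abstract.
# All layer sizes and input resolutions are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelCNN(nn.Module):
    """One convolutional channel: feature extractor for a single facial part."""
    def __init__(self, out_features: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # grayscale crop assumed
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed-size features for any crop size
        )
        self.fc = nn.Linear(64 * 4 * 4, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(torch.flatten(self.features(x), 1))

class TwoChannelFER(nn.Module):
    """Eyes feed one channel, mouth feeds the other; a shared fully
    connected stage fuses the local features and classifies."""
    def __init__(self, num_classes: int = 7):  # seven basic expressions assumed
        super().__init__()
        self.eye_channel = ChannelCNN()
        self.mouth_channel = ChannelCNN()
        self.classifier = nn.Sequential(
            nn.Linear(128 * 2, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, eyes: torch.Tensor, mouth: torch.Tensor) -> torch.Tensor:
        # Concatenate the per-part embeddings, then classify globally.
        fused = torch.cat([self.eye_channel(eyes), self.mouth_channel(mouth)], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    model = TwoChannelFER()
    eyes = torch.randn(8, 1, 32, 64)   # batch of cropped eye regions (assumed size)
    mouth = torch.randn(8, 1, 32, 64)  # batch of cropped mouth regions (assumed size)
    print(model(eyes, mouth).shape)    # -> torch.Size([8, 7])

Concatenating the two channel embeddings before the shared fully connected layers mirrors the fusion step described in the abstract, where local eye and mouth features are combined into a global representation for classification.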
