Deep Convolutional Neural Network for Facial Expression

dc.contributor.advisor: Lu, Jiang
dc.contributor.committeeMember: Unwala, Ishaq
dc.contributor.committeeMember: Yang, Xiaokun
dc.creator: Nwosu, Lucy
dc.creator.orcid: 0000-0003-1401-5396
dc.date.accessioned: 2018-08-02T14:57:06Z
dc.date.available: 2018-08-02T14:57:06Z
dc.date.created: 2017-12
dc.date.issued: 2017-12-07
dc.date.submitted: December 2017
dc.date.updated: 2018-08-02T14:57:07Z
dc.description.abstract: This research designs a Facial Expression Recognition (FER) system based on a deep convolutional neural network using facial parts. Facial expression is one of the most important nonverbal channels through which Human-Machine Interaction (HMI) systems can recognize humans' internal emotions and intent. FER is a type of biometric authentication that focuses on uniquely recognizing human facial appearance based on one or more physical or behavioral traits and the inner emotions portrayed on one's face. Facial expression recognition has attracted considerable attention because of its use in numerous fields such as behavioral science, education, entertainment, medicine, and security surveillance. Although humans recognize facial expressions virtually without effort, reliable expression recognition by machine is still a challenge. Recently, several works on FER have successfully used Convolutional Neural Networks (CNNs) for feature extraction and classification. A CNN is a type of deep neural network that operates on data representations: the input is decomposed into features, and each deeper layer builds a more complex representation upon the previous one. The final feature representations are then used for classification. The proposed method uses a two-channel convolutional neural network in which Facial Parts (FPs) are used as input to the first convolutional layer: the extracted eyes are the input to the first channel, while the mouth is the input to the second channel. Information from both channels converges in a fully connected layer, which learns global information from these local features and is then used for classification. Experiments are carried out on the Japanese Female Facial Expression (JAFFE) and the Extended Cohn-Kanade (CK+) datasets to determine the recognition accuracy of the proposed FER system. The results achieved show that the system provides improved classification accuracy when compared to other methods.
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/10657.1/1027
dc.language.iso: en
dc.subject.lcsh: Human face recognition (Computer science)
dc.subject.lcsh: Face--Identification
dc.title: Deep Convolutional Neural Network for Facial Expression
dc.type: Thesis
dc.type.material: text
thesis.degree.grantor: University of Houston-Clear Lake
thesis.degree.level: Masters
thesis.degree.name: Master of Science
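
The two-channel architecture summarized in the abstract (eye and mouth patches feeding separate convolutional channels whose features converge in a fully connected layer) can be sketched as follows. This is a minimal illustration in PyTorch; the 32x32 grayscale patch size, layer widths, and seven expression classes are assumptions for the sketch, not the configuration reported in the thesis.

import torch
import torch.nn as nn

class TwoChannelFER(nn.Module):
    # Minimal sketch of a two-channel CNN for facial expression recognition:
    # one channel for the extracted eye region, one for the mouth region.
    # Patch size, layer widths, and the 7-class output are assumptions.
    def __init__(self, num_classes=7):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
        self.eye_branch = branch()    # channel 1: eye patch
        self.mouth_branch = branch()  # channel 2: mouth patch
        # 32x32 input halved twice -> 8x8 maps with 64 channels per branch
        self.classifier = nn.Sequential(
            nn.Linear(2 * 64 * 8 * 8, 256), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, eyes, mouth):
        e = self.eye_branch(eyes).flatten(1)
        m = self.mouth_branch(mouth).flatten(1)
        # local features from both channels converge in the fully connected layers
        return self.classifier(torch.cat([e, m], dim=1))

# Shape check with random 32x32 grayscale patches (batch of 4)
model = TwoChannelFER()
eyes = torch.randn(4, 1, 32, 32)
mouth = torch.randn(4, 1, 32, 32)
print(model(eyes, mouth).shape)  # torch.Size([4, 7])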

Files

Original bundle

Name: NWOSU-MASTERSTHESIS-2017.pdf
Size: 1.59 MB
Format: Adobe Portable Document Format

License bundle

Name: LICENSE.txt
Size: 1.86 KB
Format: Plain Text

Name: PROQUEST_LICENSE.txt
Size: 4.45 KB
Format: Plain Text