- Authors: 西銘 大喜, 遠藤 聡志, 當間 愛晃, 山田 孝治, 赤嶺 有平
- Publisher: 一般社団法人 人工知能学会 (The Japanese Society for Artificial Intelligence)
- Journal: 人工知能学会論文誌 (Transactions of the Japanese Society for Artificial Intelligence) (ISSN: 1346-0714)
- Volume, issue, pages, publication date: vol. 32, no. 5, pp. F-H34_1-8, 2017-09-01 (Released: 2017-09-01)
- Number of references: 20
- Number of citations: 6
Facial expressions play as important a role in communication as words. Facial expression recognition by humans is difficult to judge uniquely, because the recognition of an expression varies with individual differences and subjective interpretation. It is therefore difficult to evaluate the reliability of a result from recognition accuracy alone, and analysis that explains the results and the features learned by a Convolutional Neural Network (CNN) is considered important. In this study, we carried out facial expression recognition from facial expression images using a CNN, and we analysed the CNN to understand the learned features and the prediction results. The emotions we focused on are "happiness", "sadness", "surprise", "anger", "disgust", "fear" and "neutral". Using 32,286 facial expression images, we obtained an emotion recognition score of about 57%; for two emotions (happiness, surprise) the recognition score exceeded 70%, but for anger and fear it was less than 50%. In the analysis of the CNN, we focused on the learning process and on the input and intermediate layers. Analysis of the learning progress confirmed that, as the amount of training data increases, the emotions become recognisable in the following order: "happiness", "surprise", "neutral", "anger", "disgust", "sadness" and "fear". From the analysis of the input and intermediate layers, we confirmed that features of the eyes and mouth strongly influence facial expression recognition, that intermediate-layer neurons have activation patterns corresponding to facial expressions, and that these activation patterns do not respond to partial features of facial expressions. From these results, we conclude that the CNN learns partial features of the eyes and mouth from the input and recognises facial expressions using hidden-layer units that have regions corresponding to each facial expression.
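For illustration only, the sketch below shows one way a seven-class emotion CNN and the kind of intermediate-layer activation inspection described in the abstract could be set up. The architecture, the 48x48 grayscale input size, the hyperparameters, and the hook-based activation capture are assumptions made for this example; they are not taken from the paper.

```python
# A minimal sketch (PyTorch) of a CNN classifier for the seven emotion classes
# named in the abstract, plus a forward hook for inspecting intermediate-layer
# activations. Architecture and input size are illustrative assumptions, not
# the authors' actual model.
import torch
import torch.nn as nn

EMOTIONS = ["happiness", "sadness", "surprise", "anger", "disgust", "fear", "neutral"]

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        # Two convolution/pooling stages followed by a small classifier head.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 24x24 -> 12x12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = EmotionCNN()

# Capture intermediate-layer activations with a forward hook, mirroring the
# hidden-layer analysis described in the abstract (which units respond to
# which facial expressions).
activations = {}
def save_activation(_module, _inp, out):
    activations["features"] = out.detach()

model.features.register_forward_hook(save_activation)

dummy_batch = torch.randn(8, 1, 48, 48)   # stand-in for 8 grayscale face images
logits = model(dummy_batch)               # shape: (8, 7), one score per emotion
print(logits.shape, activations["features"].shape)
```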