Authors
野村 光江 布井 雅人 吉川 左紀子
Publisher
日本認知科学会
Journal
認知科学 (ISSN:13417924)
Volume, issue, pages, and publication date
vol.18, no.3, pp.441-452, 2011 (Released:2012-03-09)
Number of references
35

The ability to recognize the emotional states of others is a fundamental social skill. In this study, we investigated the extent to which complex emotions can be inferred from facial or vocal cues in speech. Several sentences were prepared that were intended to express appreciation, blame, apology, or congratulation toward others. Japanese university students uttered these sentences in congruent or incongruent emotional states and were recorded with a video camera. The speakers' friends and strangers were shown these recordings in a single modality (face only or voice only) and were asked to rate the perceived emotional states of the speakers. The results showed that the raters discriminated congruent message conditions from incongruent message conditions, and that this discrimination depended largely on vocal cues rather than facial cues. The results also showed that familiarity with the target person modulated how emotional states were inferred. These findings suggest that we can detect subtle emotional nuances of others in spoken interaction, and that we use facial and vocal information in somewhat different ways.