Authors
角森 唯子 Graham Neubig Sakriani Sakti 平岡 拓也 水上 雅博 戸田 智基 中村 哲
Publisher
一般社団法人 人工知能学会
Journal
人工知能学会研究会資料 言語・音声理解と対話処理研究会, 75th meeting (2015/10) (ISSN:09185682)
Volume/Issue/Pages, Publication Date
pp.04, 2015-10-26 (Released:2021-06-28)

When humans attempt to detect deception, they perform two actions: looking for telltale signs of deception and asking questions that attempt to unveil a deceptive conversational partner. A significant amount of prior work on automatic deception detection focuses on the former. We instead focus on the latter, constructing a dialog system for an interview task that acts as an interviewer, asking questions to catch a potentially deceptive interviewee. We propose several dialog strategies for this system and measure the utterance-level deception detection accuracy of each, finding that a more intelligent dialog strategy results in slightly better deception detection accuracy.
Authors
角森 唯子 東中 竜一郎 吉村 健 礒田 佳徳
Publisher
一般社団法人 人工知能学会
Journal
人工知能学会論文誌 (ISSN:13460714)
Volume/Issue/Pages, Publication Date
vol.35, no.1, pp.DSI-B_1-10, 2020-01-01 (Released:2020-01-01)
Number of references
23

A chat-oriented dialogue system can become more likeable if it can remember information about users and use that information during a dialogue. We propose a chat-oriented dialogue system that can use user information acquired during a dialogue, and we examine its effectiveness in interactions over multiple days. In a subjective evaluation over five consecutive days, we compared three systems: a system that remembers and uses user information across multiple days (the proposed system), one that remembers user information only within a single dialogue session, and one that does not remember any user information. Users were significantly more satisfied with the proposed system than with the other two. This paper is the first to verify, with a fully automated chat-oriented dialogue system, the effectiveness of remembering user information in interactions over multiple days.
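For illustration, the key architectural difference between the three compared systems is whether user information acquired in a dialogue persists beyond a single session. Below is a minimal sketch of such cross-session memory, assuming a simple per-user JSON store; the class name UserMemoryStore and the storage format are hypothetical, not the authors' implementation, and the session-only baseline would simply skip the load/save steps.

import json
from pathlib import Path

class UserMemoryStore:
    """Hypothetical cross-session user memory (illustration only).

    Memory is keyed by user ID and persisted to disk so that information
    acquired on one day can be reused in a later day's dialogue.
    """

    def __init__(self, storage_dir: str = "user_memory"):
        self.storage_dir = Path(storage_dir)
        self.storage_dir.mkdir(exist_ok=True)

    def load(self, user_id: str) -> dict:
        # Return previously stored facts about the user, or an empty memory.
        path = self.storage_dir / f"{user_id}.json"
        if path.exists():
            return json.loads(path.read_text(encoding="utf-8"))
        return {}

    def save(self, user_id: str, memory: dict) -> None:
        # Persist the (possibly updated) memory for future sessions.
        path = self.storage_dir / f"{user_id}.json"
        path.write_text(json.dumps(memory, ensure_ascii=False, indent=2),
                        encoding="utf-8")

# Usage sketch: load memory at the start of a session, update it with facts
# extracted during the dialogue, and persist it for the next day's session.
store = UserMemoryStore()
memory = store.load("user42")
memory["favorite_food"] = "ramen"   # e.g. extracted from a user utterance
store.save("user42", memory)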
Authors
角森 唯子 東中 竜一郎 高橋 哲朗 稲葉 通将
Publisher
一般社団法人 人工知能学会
Journal
人工知能学会論文誌 (ISSN:13460714)
Volume/Issue/Pages, Publication Date
vol.35, no.1, pp.DSI-G_1-10, 2020-01-01 (Released:2020-01-01)
Number of references
26
Number of citations
1

The task of dialogue breakdown detection, which aims to detect whether a system utterance causes a dialogue breakdown in a given dialogue context, has been actively researched in recent years. However, it is currently unclear which evaluation metrics should be used to evaluate dialogue breakdown detectors, which hinders progress in dialogue breakdown detection. In this paper, we aim to find appropriate metrics for evaluating detectors in Dialogue Breakdown Detection Challenge 3 (DBDC3). In our approach, we first enumerate possible evaluation metrics and then rank them on the basis of system ranking stability and discriminative power. Using the submitted runs (the participants' dialogue breakdown detection results) of DBDC3, we experimentally found that RSNOD(NB,PB,B) is an appropriate metric for both English and Japanese, while NMD(NB,PB,B) and MSE(NB,PB,B) were found appropriate specifically for English and Japanese, respectively.
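For reference, the metrics named above compare a detector's predicted probability distribution over the three labels NB (not a breakdown), PB (possible breakdown), and B (breakdown) against the gold distribution of annotator votes. The sketch below shows MSE and NMD under my understanding of their standard definitions (mean squared error over the label probabilities, and match distance over cumulative distributions normalised by the number of label boundaries); RSNOD is omitted, and the challenge's official evaluation scripts remain authoritative.

from itertools import accumulate

# Ordered breakdown labels: not a breakdown / possible breakdown / breakdown.
LABELS = ("NB", "PB", "B")

def mse(pred, gold):
    """Mean squared error between predicted and gold label distributions."""
    return sum((p - g) ** 2 for p, g in zip(pred, gold)) / len(LABELS)

def nmd(pred, gold):
    """Normalised Match Distance: earth mover's distance over the ordered
    labels (computed via cumulative distributions), normalised by the number
    of label boundaries (len(LABELS) - 1)."""
    cum_pred = list(accumulate(pred))
    cum_gold = list(accumulate(gold))
    md = sum(abs(cp - cg) for cp, cg in zip(cum_pred, cum_gold))
    return md / (len(LABELS) - 1)

# Example: 10 annotators voted 2/3/5 for NB/PB/B, detector predicts 0.1/0.2/0.7.
gold = (0.2, 0.3, 0.5)
pred = (0.1, 0.2, 0.7)
print(f"MSE = {mse(pred, gold):.4f}, NMD = {nmd(pred, gold):.4f}")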