Authors
勝見 久央 吉野 幸一郎 平岡 拓也 秋元 康佑 山本 風人 本浦 庄太 定政 邦彦 中村 哲
Publisher
一般社団法人 人工知能学会
Journal
人工知能学会論文誌 (ISSN:13460714)
Volume, issue, pages, and publication date
Vol. 35, No. 1, pp. DSI-D_1-12, 2020-01-01 (Released: 2020-01-01)
Number of references
28
Number of citations
1

Argumentation-based dialogue systems, which can handle and exchange arguments through dialogue, have been widely researched. These systems are required to have sufficient supporting information to argue their claims rationally; however, in realistic situations they often lack such information. One way to fill this gap is to acquire the missing information from dialogue partners (information-seeking dialogue). Existing information-seeking dialogue systems relied on handcrafted dialogue strategies that exhaustively examine missing information. However, these strategies are not specialized in collecting information for constructing rational arguments. Moreover, the number of the system's inquiry candidates grows with the size of the argument set that the system deals with. In this paper, we formalize the process of information-seeking dialogue as a Markov decision process (MDP) and apply deep reinforcement learning (DRL) to automatically optimize the dialogue strategy. By utilizing DRL, our dialogue strategy successfully minimizes the objective function: the number of turns the system takes to collect the necessary information in a dialogue. We also propose a further dialogue strategy optimization based on the existence of knowledge, modeling the dialogue partner's knowledge with a Bernoulli mixture distribution. We conducted dialogue experiments using two datasets from different argumentative dialogue domains. Experimental results show that the proposed dialogue strategy optimization outperforms existing heuristic dialogue strategies.
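
As an illustration of the partner-knowledge modeling described in the abstract, the following is a minimal sketch (not the authors' implementation) of a Bernoulli mixture over partner "types": each type has a probability of knowing each fact, answers update the posterior over types, and the posterior predicts which unasked facts the partner is likely to know. All class, parameter, and variable names are illustrative assumptions.

```python
# Minimal sketch of a Bernoulli-mixture model of a dialogue partner's knowledge.
# Each partner "type" k has probability mu[k, j] of knowing fact j; observed
# answers update the posterior over types (names here are illustrative only).
import numpy as np

class BernoulliMixtureKnowledgeModel:
    def __init__(self, mixing, mu):
        # mixing: (K,) prior over partner types; mu: (K, J) P(knows fact j | type k)
        self.mixing = np.asarray(mixing, dtype=float)
        self.mu = np.asarray(mu, dtype=float)
        self.log_post = np.log(self.mixing)  # unnormalized log posterior over types

    def update(self, fact_idx, knows):
        # Bayesian update of the type posterior after asking about one fact.
        p = self.mu[:, fact_idx]
        likelihood = p if knows else 1.0 - p
        self.log_post = self.log_post + np.log(likelihood + 1e-12)
        self.log_post -= np.max(self.log_post)  # keep values numerically stable

    def prob_knows(self, fact_idx):
        # Predictive probability that the partner knows a not-yet-asked fact.
        post = np.exp(self.log_post)
        post /= post.sum()
        return float(post @ self.mu[:, fact_idx])

if __name__ == "__main__":
    # Two toy partner types over four facts.
    model = BernoulliMixtureKnowledgeModel(
        mixing=[0.5, 0.5],
        mu=[[0.9, 0.9, 0.1, 0.1],   # type 0: tends to know facts 0-1
            [0.1, 0.1, 0.9, 0.9]],  # type 1: tends to know facts 2-3
    )
    model.update(fact_idx=0, knows=True)   # partner confirmed knowing fact 0
    print(round(model.prob_knows(1), 3))   # fact 1 now looks likely to be known (0.82)
```

A dialogue strategy can use such predictive probabilities to decide which fact to ask about next, which is one way the abstract's knowledge-existence-based optimization could reduce the number of turns.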
Authors
角森 唯子 Graham Neubig Sakriani Sakti 平岡 拓也 水上 雅博 戸田 智基 中村 哲
Publisher
一般社団法人 人工知能学会
Journal
人工知能学会研究会資料 言語・音声理解と対話処理研究会, 75th meeting (2015/10) (ISSN:09185682)
Volume, issue, pages, and publication date
pp. 04, 2015-10-26 (Released: 2021-06-28)

When humans attempt to detect deception, they perform two actions: looking for telltale signs of deception, and asking questions intended to unveil a deceptive conversational partner. A significant amount of prior work on automatic deception detection focuses on the former. We focus on the latter, constructing a dialog system for an interview task that acts as an interviewer, asking questions in an attempt to catch a potentially deceptive interviewee. We propose several dialog strategies for this system and measure the utterance-level deception detection accuracy of each, finding that a more intelligent dialog strategy results in slightly better deception detection accuracy.
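
As a rough illustration of the evaluation metric named in this abstract, the sketch below computes utterance-level deception detection accuracy over a toy dialog produced under some interviewer strategy. The detector, the dialog data, and all names are placeholders, not the components or strategies used in the paper.

```python
# Illustrative sketch: score a dialog strategy by the fraction of interviewee
# utterances whose deception label is predicted correctly (toy data only).
from typing import Callable, List, Tuple

Utterance = Tuple[str, bool]  # (text, ground-truth: True if deceptive)

def utterance_level_accuracy(dialog: List[Utterance],
                             detector: Callable[[str], bool]) -> float:
    """Fraction of utterances for which the detector matches the true label."""
    correct = sum(detector(text) == label for text, label in dialog)
    return correct / len(dialog)

if __name__ == "__main__":
    # Hypothetical interview turns collected under one dialog strategy.
    dialog = [("I was at home all evening.", True),
              ("I took the 7pm train.", False),
              ("I never met him.", True)]
    # Placeholder detector: flag utterances containing absolute words.
    def detector(text: str) -> bool:
        return any(word in text.lower() for word in ("all", "never"))
    print(utterance_level_accuracy(dialog, detector))  # 1.0 on this toy data
```

Comparing strategies then amounts to collecting dialogs under each strategy and comparing these per-utterance accuracies.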