Authors
勝島 修平, 穴田 一, 江上 周作, 福田 賢一郎
Publisher
The Japanese Society for Artificial Intelligence (JSAI)
Journal
JSAI Type 2 SIG Technical Reports (ISSN: 2436-5556)
Volume, issue, pages, and date
vol. 2022, no. SWO-056, pp. 17, 2022-03-11 (Released: 2022-03-24)

In recent years, machine learning has faced an interpretability problem: even experts cannot explain its reasoning process. A contest focusing on interpretability, the "First Knowledge Graph Reasoning Challenge 2018," was held in Tokyo. A previous study presented a method based on triple embeddings for learning the meanings of words. However, this approach loses information about the co-occurrence of objects, such as location and time, which should be learned together. Therefore, we propose an inference method that learns the graph structure with a graph convolutional network (GCN) and explains the important connections on the graph with layer-wise relevance propagation (LRP). The experimental results show that the proposed approach can indicate the reasoning process by using additional knowledge and the relevance propagated by LRP.
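The abstract names two components, a GCN that learns the graph structure and LRP that highlights important connections on the graph. The following is a minimal sketch, not the authors' implementation: it illustrates, under assumptions, how epsilon-rule LRP (one common LRP variant) can redistribute a GCN prediction back onto the nodes of a graph. The 4-node graph, the untrained random weights, and all variable names are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes on a path, with self-loops already included.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = d_inv_sqrt @ A @ d_inv_sqrt          # symmetrically normalized adjacency

X = rng.normal(size=(4, 5))                  # node features: 4 nodes, 5 dims
W1 = rng.normal(size=(5, 8))                 # layer-1 weights (untrained, illustrative)
W2 = rng.normal(size=(8, 3))                 # layer-2 weights, 3 output classes

# Two-layer GCN forward pass: H1 = ReLU(A_hat X W1), logits = A_hat H1 W2.
M1 = A_hat @ X                               # neighborhood aggregation, layer 1
Z1 = M1 @ W1
H1 = np.maximum(Z1, 0.0)
M2 = A_hat @ H1                              # neighborhood aggregation, layer 2
logits = M2 @ W2

def lrp_through_weights(inputs, weights, relevance, eps=1e-6):
    """Epsilon-rule LRP through outputs = inputs @ weights; returns input relevance."""
    outputs = inputs @ weights
    s = relevance / (outputs + eps * np.where(outputs >= 0, 1.0, -1.0))
    return inputs * (s @ weights.T)

def lrp_through_adjacency(a_hat, h, relevance, eps=1e-6):
    """Epsilon-rule LRP through outputs = a_hat @ h, i.e. back across graph edges."""
    outputs = a_hat @ h
    s = relevance / (outputs + eps * np.where(outputs >= 0, 1.0, -1.0))
    return h * (a_hat.T @ s)

# Explain the top-scoring class of node 2: seed relevance with that logit only.
node = 2
cls = int(np.argmax(logits[node]))
R_logits = np.zeros_like(logits)
R_logits[node, cls] = logits[node, cls]

# Backward relevance pass, mirroring the forward pass layer by layer.
R_M2 = lrp_through_weights(M2, W2, R_logits)    # back through W2
R_H1 = lrp_through_adjacency(A_hat, H1, R_M2)   # back across the edges of layer 2
R_M1 = lrp_through_weights(M1, W1, R_H1)        # back through W1 (ReLU passes relevance unchanged)
R_X = lrp_through_adjacency(A_hat, X, R_M1)     # back across the edges of layer 1

# Per-node relevance indicates which neighbours the prediction for node 2 relied on.
print("node relevance:", R_X.sum(axis=1).round(3))
```

In this sketch the relevance that ends up on each node plays the role of the "important connections" mentioned in the abstract: edges and neighbours whose features contributed most to the explained logit receive the largest share of the propagated relevance.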