Author(s)
石下 円香 佐藤 充 森 辰則
Publisher
一般社団法人 人工知能学会
Journal
人工知能学会論文誌 (ISSN 1346-0714)
Volume, issue, pages, and date
vol.24, no.4, pp.339-350, 2009 (Released:2009-05-22)
Number of references
15
Cited by
2

In this paper, we propose a method of non-factoid Web question answering that can uniformly handle any class of Japanese non-factoid questions by using a large number of example Q&A pairs. Instead of preparing classes of questions beforehand, the method retrieves previously asked question examples similar to a submitted question from a set of Q&A pairs. Then, instead of preparing clue expressions for the writing style of answers for each question class beforehand, it dynamically extracts clue expressions from the answer examples corresponding to the retrieved question examples. This clue-expression information is combined with topical content information from the question to extract appropriate answer candidates. The score of an answer candidate is measured by the density of the submitted question's keywords, words associated with the question, and the clue expressions. Note that we utilize the set of Q&A pairs not to find answers in them, but to obtain clue expressions about the writing style of their answers. The information source for question answering is the set of Web documents retrieved via a Web search engine API. Experimental results showed that the clue expressions obtained from the set of examples improved the accuracy of answer candidate extraction.
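The density-based candidate scoring described in the abstract might be sketched roughly as follows. The sliding window, whitespace tokenization, and equal weighting of keywords, associated words, and clue expressions are illustrative assumptions, not the paper's actual parameters.

```python
def density_score(tokens, keywords, associated_words, clue_expressions,
                  window=20):
    """Score a candidate passage by the highest density of question
    keywords, question-associated words, and clue expressions found
    in any fixed-size token window (a hypothetical sketch)."""
    # Pool all target terms; the paper may weight these differently.
    targets = set(keywords) | set(associated_words) | set(clue_expressions)
    best = 0.0
    for start in range(max(1, len(tokens) - window + 1)):
        span = tokens[start:start + window]
        hits = sum(1 for t in span if t in targets)
        best = max(best, hits / len(span))
    return best
```

For instance, a passage of ten tokens containing four target terms scores 0.4 under this sketch; taking the maximum over windows favors passages where question content and answer-style clues cluster together.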
Author(s)
中野 正寛 渋木 英潔 宮崎 林太郎 石下 円香 森 辰則
Publisher
一般社団法人情報処理学会
Journal
情報処理学会研究報告自然言語処理(NL) (ISSN 0919-6072)
Volume, issue, pages, and date
vol.2008, no.90, pp.107-114, 2008-09-17
Cited by
3

In this paper, we investigate text summarization that supports the judgment of information credibility. With the spread of broadband access and blogs, the explosively growing volume of available information includes information harmful to users, so realizing technology that supports credibility judgment is an important issue. Aiming at automated generation of such summaries, we conducted experiments in which multiple human subjects manually created summaries from the viewpoint of judging information credibility. We then analyzed the stability of the resulting summaries and the degree of agreement among subjects, and investigated what information the subjects considered important when creating summaries useful for credibility judgment, using questionnaires administered after the experiments. The experimental results show that moderate agreement can be expected across the several stages of narrowing down from the collected source documents to the descriptions needed for the summary. According to the questionnaires, the subjects considered about 1,000 to 3,000 characters an appropriate summary length, and regarded well-organized documents and information about information holders as important. Toward automated summarization, we also prepared a data set in which the subject-created summaries serve as reference summaries, together with the corresponding sentences extracted from the original documents.
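The abstract reports "moderate agreement" among subjects on which sentences to extract. One standard way to quantify such pairwise agreement on binary extract/not-extract decisions is Cohen's kappa, sketched below; the abstract does not name the coefficient actually used, so this is an illustrative assumption.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' binary extract (1) /
    not-extract (0) decisions over the same set of sentences.
    Illustrative; the paper's agreement measure is not specified."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of sentences labeled identically.
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each annotator's marginal extract rate.
    pa1 = sum(labels_a) / n
    pb1 = sum(labels_b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (po - pe) / (1 - pe)
```

On the commonly used Landis and Koch scale, kappa values between 0.41 and 0.60 correspond to the "moderate agreement" reported in the abstract.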