- Authors
- 陳 嘉修, 田中 健一, 小寺 正明, 船津 公人
- Publisher
- Division of Chemical Information and Computer Sciences, The Chemical Society of Japan
- Journal
- Proceedings of the Symposium on Chemoinformatics (ケモインフォマティクス討論会予稿集)
- Volume/Pages, Date
- pp. 2B10, 2018 (Released: 2018-10-26)
- Number of references
- 9
In the chemical industry, designing novel compounds with desired properties is a bottleneck in chemical manufacturing development. Quantitative structure–property relationship (QSPR) modeling with machine learning techniques can make chemical design more efficient. A challenge for current QSPR models is the lack of interpretability when operating black-box models. Hence, interpretable machine learning methods are essential for researchers to understand, trust, and effectively manage a QSPR model. Global interpretability and local interpretability are the two typical scopes of model interpretation. Global interpretation provides information on structure–property relationships across a series of compounds, helping shed light on the mechanisms underlying compound properties. Local interpretation provides information about how different structural motifs of a single compound influence the property. In this presentation, we focus on the design of interpretable frameworks for typical machine learning models. Two approaches to interpretable modeling, based on ensemble learning and deep learning, will be presented to achieve global and local interpretation respectively, with predictive performance equal to or better than that of typical trusted models. We believe that trust in QSPR models can be enhanced by interpretable machine learning methods that conform to human knowledge and expectations.
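To make the global/local distinction concrete, here is a minimal sketch (not the authors' method) of both scopes on a toy QSPR setting: global interpretation via permutation importance, and local interpretation via per-descriptor contributions for one compound. The descriptors, data, and the ordinary least-squares "QSPR model" are all invented for illustration; a real study would use computed molecular descriptors and the ensemble or deep models discussed in the presentation.

```python
import numpy as np

# --- Hypothetical toy data: 2 "descriptors", only the first drives the property
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=n)

# --- Stand-in QSPR model: ordinary least squares with an intercept column
Xb = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def mse(X, y):
    Xb = np.column_stack([X, np.ones(len(X))])
    return float(np.mean((Xb @ coef - y) ** 2))

baseline = mse(X, y)

# --- Global interpretation: permutation importance.
# Mean increase in MSE when one descriptor column is shuffled;
# descriptors that matter across the whole series score high.
def permutation_importance(X, y, feature, n_repeats=10):
    scores = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])
        scores.append(mse(Xp, y) - baseline)
    return float(np.mean(scores))

importances = [permutation_importance(X, y, j) for j in range(2)]

# --- Local interpretation: for a linear model, each descriptor's
# contribution to a single compound's prediction is coef_j * x_j.
x_single = X[0]
local_contributions = coef[:2] * x_single
```

In this sketch the informative descriptor receives a much larger global importance than the noise descriptor, and `local_contributions` shows how each descriptor pushes one compound's predicted property up or down, mirroring the global/local split described in the abstract.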