- Authors
- 今津 拓哉, 半田 久志, 阿部 匡伸
- Publisher
- 一般社団法人日本機械学会 (The Japan Society of Mechanical Engineers)
- Journal
- インテリジェントシステム・シンポジウム講演論文集 (Proceedings of the Intelligent Systems Symposium)
- Volume, Issue, Pages, Publication Date
- vol.2011, no.21, pp.215-218, 2011-09-01
Recently, computer-based evaluation functions for Shogi have attracted much attention due to Bonanza, which is based on machine learning. Bonanza has become one of the strongest computer Shogi players and often defeats human players. To learn its evaluation function, Bonanza utilizes a large number of game records. Meanwhile, reinforcement learning can learn evaluation values from experience; it has not, however, succeeded in learning with a large number of fine-grained feature values. In this paper, we investigate how the state representations used in the evaluation function affect the learning results, where the state representations are derived from those of Bonanza.
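The abstract contrasts supervised learning from game records with reinforcement learning of evaluation values from experience. As a minimal sketch of the latter setting, the snippet below shows a TD(0)-style update of a linear evaluation function over binary board features; the feature size, learning rate, and all names are illustrative assumptions, not the authors' method or Bonanza's actual feature set.

```python
# Illustrative sketch: TD(0) update of a linear evaluation function over
# binary board features. All constants and names are hypothetical.
import numpy as np

NUM_FEATURES = 10_000   # assumed size of the feature vector
ALPHA = 0.01            # learning rate (illustrative)
GAMMA = 1.0             # no discounting within a game (assumption)

weights = np.zeros(NUM_FEATURES)


def evaluate(features: np.ndarray) -> float:
    """Linear evaluation value: dot product of weights and features."""
    return float(weights @ features)


def td_update(features: np.ndarray,
              next_features: np.ndarray,
              reward: float) -> None:
    """One TD(0) step: move the value of the current position toward
    reward + GAMMA * value(next position)."""
    global weights
    target = reward + GAMMA * evaluate(next_features)
    error = target - evaluate(features)
    weights += ALPHA * error * features
```

With a fine-grained representation such as Bonanza's, the feature vector becomes very large and sparse, which is exactly the regime the abstract says has been difficult for reinforcement learning.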