Authors
Ching-Nung Lin, Shi-Jim Yen
Proceedings
Game Programming Workshop 2016 (ゲームプログラミングワークショップ2016論文集)
Volume, pages, publication date
vol. 2016, pp. 65-68, 2016-10-28

The inference performance of deep learning is a serious issue when it is combined with speed-sensitive Monte Carlo Tree Search (MCTS). The traditional hybrid CPU/GPU solution is bounded by frequent, heavy data transfers between host and device. This paper proposes a method that runs Deep Convolutional Neural Network (DCNN) prediction and MCTS execution simultaneously on the Intel Xeon Phi, outperforming all present solutions. With our methodology, high-quality simulation with a pure DCNN can be done in a reasonable time.
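To illustrate the co-location idea the abstract describes, here is a minimal toy sketch (not the paper's code) of PUCT-style MCTS in which the "DCNN" prior is an ordinary in-process function call. Because selection, expansion, and evaluation share one address space, no position data crosses a host-device boundary, which is the pattern a many-core chip like the Xeon Phi enables; the game, reward, and `dcnn_prior_stub` are all hypothetical placeholders.

```python
import math

def dcnn_prior_stub(state, moves):
    # Hypothetical stand-in for a policy network: uniform move priors.
    # On Xeon Phi this call would run on the same cores as the search,
    # rather than being shipped to a discrete GPU.
    return {m: 1.0 / len(moves) for m in moves}

class Node:
    def __init__(self, prior):
        self.prior = prior
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}  # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def legal_moves(state):
    # Toy single-player game: choose a digit 0..2, three plies total.
    return [0, 1, 2] if len(state) < 3 else []

def value_estimate(state):
    # Toy value: larger digits are better (max reward 1.0 at [2, 2, 2]).
    return sum(state) / 6.0

def puct_select(node, c=1.4):
    # Standard PUCT rule: exploit Q, explore by prior and visit counts.
    total = math.sqrt(node.visits + 1)
    return max(node.children.items(),
               key=lambda kv: kv[1].q()
               + c * kv[1].prior * total / (1 + kv[1].visits))

def mcts(root_state, n_sims=400):
    root = Node(prior=1.0)
    for _ in range(n_sims):
        node, state, path = root, list(root_state), []
        # Selection: descend by PUCT until a leaf.
        while node.children:
            move, node = puct_select(node)
            state.append(move)
            path.append(node)
        # Expansion: priors come from the in-process "network" call.
        moves = legal_moves(state)
        if moves:
            priors = dcnn_prior_stub(state, moves)  # no data transfer
            for m in moves:
                node.children[m] = Node(priors[m])
        # Backup the value estimate along the visited path.
        value = value_estimate(state)
        for n in [root] + path:
            n.visits += 1
            n.value_sum += value
    # Return the most-visited root move.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

For example, `mcts([])` converges on move 2, since the toy reward favors larger digits. In a hybrid CPU/GPU design, the `dcnn_prior_stub` call would instead serialize the position, copy it over PCIe, and wait for the result, which is the transfer cost the paper's single-device approach avoids.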