- Authors: 横井 創磨, 佐藤 一誠, 中川 裕志
- Publisher: 一般社団法人 人工知能学会 (The Japanese Society for Artificial Intelligence)
- Journal: 人工知能学会論文誌 (Transactions of the Japanese Society for Artificial Intelligence) (ISSN: 1346-0714)
- Volume/issue/pages, publication date: vol.31, no.6, pp.AI30-C_1-9, 2016-11-01 (Released: 2016-11-02)
- Number of references: 16
Topic models are generative models of documents that automatically cluster frequently co-occurring words (topics) from corpora. Because topics serve as stable features representing the substance of documents, topic models have been studied extensively as a technology for extracting latent information behind large data. Unfortunately, the typical time complexity of topic model computation is the product of the data size and the number of topics, so the traditional Markov chain Monte Carlo (MCMC) method cannot estimate many topics on large corpora within a realistic time. Data size is a common concern in Bayesian learning, and there are general approaches to addressing it, such as variational Bayes and stochastic gradient MCMC. The number of topics, on the other hand, is a problem specific to topic models, and most solutions have been proposed for the traditional Gibbs sampler. It is natural, however, to solve both problems at once, because as the data size grows, so does the number of topics in a corpus. Accordingly, we propose new methods that cope with both data and topic scalability by applying fast computing techniques for the Gibbs sampler to stochastic gradient MCMC. Our experiments demonstrate that the proposed method outperforms the state of the art in traditional MCMC in the mini-batch setting, showing a better mixing rate and faster updates.
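For context on the complexity claim in the abstract: in a standard collapsed Gibbs sampler for latent Dirichlet allocation, each token's full conditional must be evaluated over all K topics, so one sweep over N tokens costs O(N·K). The sketch below is a minimal, generic illustration of that per-token O(K) step, assuming symmetric Dirichlet priors alpha and beta; all function and variable names are hypothetical, and this is not the paper's proposed method.

```python
import numpy as np

def gibbs_sweep(docs, z, n_dk, n_kw, n_k, alpha=0.1, beta=0.01):
    """One collapsed Gibbs sweep for LDA (illustrative only).

    docs: list of documents, each a list of word ids
    z:    current topic assignment z[d][i] of each token
    n_dk, n_kw, n_k: count arrays kept consistent with z
    """
    K = n_k.shape[0]
    V = n_kw.shape[1]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k_old = z[d][i]
            # Remove the token's current assignment from the counts.
            n_dk[d, k_old] -= 1
            n_kw[k_old, w] -= 1
            n_k[k_old] -= 1
            # Full conditional p(z = k | rest) over all K topics:
            # this O(K) inner computation per token is why a sweep
            # costs O(N * K), the data-size x topic-count product
            # the abstract refers to.
            p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
            k_new = np.random.choice(K, p=p / p.sum())
            # Record the new assignment.
            z[d][i] = k_new
            n_dk[d, k_new] += 1
            n_kw[k_new, w] += 1
            n_k[k_new] += 1
```

Known speedups in the literature (e.g., sparsity-exploiting or alias-table samplers) target exactly this inner loop, though the abstract does not specify which fast computing techniques the authors adopt.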
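Likewise, the stochastic gradient MCMC the abstract contrasts with traditional MCMC removes the data-size factor by updating parameters from mini-batches. Below is a minimal sketch of the generic stochastic gradient Langevin dynamics (SGLD) update in the style of Welling and Teh, assuming an unconstrained parameter; it does not reproduce the paper's sampler, and all names are illustrative.

```python
import numpy as np

def sgld_step(theta, grad_log_prior, grad_log_lik, minibatch, N, eps, rng):
    """One SGLD step: a gradient step on the log-posterior plus Gaussian
    noise, with the likelihood gradient estimated from a mini-batch and
    rescaled by N / |minibatch| so the estimate stays unbiased."""
    g = grad_log_prior(theta) + (N / len(minibatch)) * grad_log_lik(theta, minibatch)
    noise = rng.normal(0.0, np.sqrt(eps), size=theta.shape)
    return theta + 0.5 * eps * g + noise

# Usage sketch: rng = np.random.default_rng(0), then call sgld_step
# repeatedly on fresh mini-batches with a small (possibly decaying) eps.
```

Because each step touches only a mini-batch, the cost per update is independent of the corpus size, which is what makes combining such updates with fast Gibbs techniques attractive when both the data and the number of topics grow.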