- Authors
- 林 勝悟, 谷本 啓, 鹿島 久嗣
- Publisher
- The Japanese Society for Artificial Intelligence
- Journal
- Transactions of the Japanese Society for Artificial Intelligence (ISSN: 1346-0714)
- Volume, issue, pages, and publication date
- Vol. 35, No. 5, pp. B-K33_1-9, 2020-09-01 (Released: 2020-09-01)
- Number of references
- 26
The recent rapid and significant increase of big data in our society has led to major impacts of machine learning and data mining technologies in various fields ranging from marketing to science. On the other hand, there still exist areas where only small-sized data are available for various reasons, for example, high data acquisition costs or the rarity of target events. Machine learning tasks using such small data are usually difficult because of the lack of information available for training accurate prediction models. In particular, for long-term time-series prediction, the data size tends to be small because of the unavailability of the data between the input and output times in training. Such limitations on the size of time-series data make long-term prediction tasks even more difficult, in addition to the inherent difficulty that the far future is more uncertain than the near future.

In this paper, we propose a novel method for long-term prediction of small time-series data designed in the framework of generalized distillation. The key idea of the proposed method is to utilize the middle-time data between the input and output times as “privileged information,” which is available only in the training phase and not in the test phase. We demonstrate the effectiveness of the proposed method on both synthetic data and real-world data. The experimental results show that the proposed method performs well, particularly when the task is difficult and has high input dimensions.
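The generalized-distillation idea described above can be illustrated with a small regression sketch: a teacher model is fitted on the privileged middle-time observations, and a student model that sees only the input-time features is trained to match a blend of the true targets and the teacher's predictions. The sketch below is a minimal illustration under assumed squared loss and linear (Ridge) models; the synthetic data, the blending weight `lmbda`, and all variable names are illustrative and are not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy long-term prediction setup: predict y (far future) from input-time features X.
# X_mid stands in for the privileged middle-time observations, which are
# available only for the training windows and never at test time.
n_train, n_test, d, d_mid = 50, 200, 20, 5
X_train = rng.normal(size=(n_train, d))
X_mid = X_train[:, :d_mid] + 0.1 * rng.normal(size=(n_train, d_mid))
w_true = rng.normal(size=d_mid)
y_train = X_mid @ w_true + 0.1 * rng.normal(size=n_train)

X_test = rng.normal(size=(n_test, d))
y_test = X_test[:, :d_mid] @ w_true + 0.1 * rng.normal(size=n_test)

# Teacher: fitted on the privileged middle-time data (training phase only).
teacher = Ridge(alpha=1.0).fit(X_mid, y_train)
soft_targets = teacher.predict(X_mid)

# Student: sees only the input-time features. For squared loss, minimizing
# (1 - lmbda) * ||y - f(X)||^2 + lmbda * ||soft_targets - f(X)||^2
# is equivalent to regressing on the blended target below.
lmbda = 0.5
blended = (1.0 - lmbda) * y_train + lmbda * soft_targets
student = Ridge(alpha=1.0).fit(X_train, blended)

# Plain student trained on the hard targets only, for comparison.
baseline = Ridge(alpha=1.0).fit(X_train, y_train)

print("distilled student MSE:", np.mean((student.predict(X_test) - y_test) ** 2))
print("baseline  student MSE:", np.mean((baseline.predict(X_test) - y_test) ** 2))
```

With squared loss, regressing on the blended target is equivalent to minimizing the weighted sum of the original loss and the teacher-imitation loss, which is how generalized distillation is commonly instantiated for regression; the paper's actual models, losses, and weighting may differ.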