- Meteorological Society of Japan
- SOLA (ISSN: 1349-6476)
- vol.13, pp.235-239, 2017 (Released:2017-12-21)
We have proposed a deep convolutional neural network (CNN) approach for the accurate estimation of cloud coverage (CC) from images captured by a consumer camera, i.e., snapshot pictures. This CNN can estimate the CC to within the level of the inherent error in the training dataset. A segmentation-based method using a linear support vector machine (SVM) is shown to be unable to distinguish between water surfaces and the sky, whereas the present CNN distinguishes them correctly, possibly because the CNN learns the spatial arrangement of components in the images (the sky lies above a water surface). The present CNN can also be applied to photo-realistic computer-graphic (CG) images from numerical simulations. Comparisons between the CNN estimates for camera images and for CG images can provide useful information for data assimilation, and thus contribute to numerical weather forecasting. CC is a form of far-field (remote) information, so the present CNN has the potential to allow consumer cameras to be used as remote weather sensors.
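The abstract does not specify the network architecture, so the following is only a toy sketch of the general idea: convolutional features extracted from a snapshot image, pooled globally, then mapped through a sigmoid to a cloud-coverage fraction in (0, 1). All weights here are random and every function name is hypothetical; the paper's actual CNN is deeper and trained on labeled sky images.

```python
import numpy as np

def conv2d_relu(image, kernels, stride=2):
    """Valid cross-correlation of an HxWxC image with K kernels, then ReLU.

    `kernels` has shape (K, kh, kw, C). Purely illustrative: a real CNN
    would use an optimized library (e.g., PyTorch or TensorFlow).
    """
    kh, kw, _ = kernels.shape[1:]
    H, W = image.shape[:2]
    out_h = (H - kh) // stride + 1
    out_w = (W - kw) // stride + 1
    out = np.zeros((out_h, out_w, len(kernels)))
    for k, kern in enumerate(kernels):
        for i in range(out_h):
            for j in range(out_w):
                patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw, :]
                out[i, j, k] = np.sum(patch * kern)
    return np.maximum(out, 0.0)

def estimate_cc(image, kernels, w, b):
    """Map an RGB image to a scalar cloud-coverage estimate in (0, 1)."""
    feat = conv2d_relu(image, kernels)   # convolutional features
    pooled = feat.mean(axis=(0, 1))      # global average pooling
    logit = pooled @ w + b               # linear regression head
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> CC fraction

# Demo with random weights (untrained, so the output is meaningless
# except to show the shape of the computation).
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))                      # stand-in snapshot
kernels = rng.normal(0, 0.1, size=(8, 3, 3, 3))    # 8 random 3x3 filters
w = rng.normal(0, 0.1, size=8)
cc = estimate_cc(img, kernels, w, 0.0)
```

Because the head ends in a sigmoid, the estimate is naturally bounded as a coverage fraction; in the paper's setting the network would be trained by regression against human-annotated CC labels.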