- Authors
- 岩渕 弘信 (Hironobu Iwabuchi); 岡村 凜太郎 (Rintaro Okamura); Sebastian Schmidt
- Publisher
- Japan Geoscience Union
- Meeting
- JpGU-AGU Joint Meeting 2017
- Date of issue
- 2017-03-10
Estimation of cloud properties such as cloud optical thickness and effective droplet radius is usually based on the independent pixel approximation (IPA), which assumes a plane-parallel, homogeneous cloud for each pixel of a satellite image. Prior studies have pointed out that horizontal and vertical inhomogeneities produce significant errors in the retrieved cloud properties. The observed reflectance at each pixel is influenced by the spatial arrangement of cloud water in adjacent pixels, so adjacent-cloud effects must be taken into account when estimating the cloud properties of a target pixel. We study the feasibility of a multi-spectral, multi-pixel approach for estimating cloud optical thickness and effective droplet radius using a deep neural network (DNN), a machine-learning technique capable of multi-variable estimation, automatic characterization of data, and non-linear approximation. A Monte Carlo three-dimensional (3-D) radiative transfer model is used to simulate reflectances at a resolution of 280 m for large-eddy-simulation cloud fields representing boundary layer clouds. Two retrieval methods are constructed: 1) DNN-2r, which corrects IPA retrievals using the reflectances (from 3-D simulations) at 0.86 and 2.13 µm, and 2) DNN-4w, which uses a so-called convolution layer and retrieves cloud properties directly from the reflectances at 0.86, 1.64, 2.13, and 3.75 µm. Both DNNs efficiently derive the spatial distribution of cloud properties over about 6 × 6 pixels at once from reflectances at multiple pixels, and both estimate cloud optical thickness and effective droplet radius more accurately than the IPA-based retrieval. DNN-4w robustly estimates cloud properties even for optically thick clouds, and the use of a convolution layer in the DNN appears adequate for representing three-dimensional radiative transfer effects.
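
To make the multi-pixel, multi-spectral idea concrete, the following is a minimal sketch, not the authors' implementation, of a DNN-4w-style network: a convolution layer lets each retrieved pixel use reflectances from neighbouring pixels (the adjacency effects discussed above), and a fully connected head maps the features to a 6 × 6 map of two cloud properties. The framework (PyTorch), the class name, the patch size, and all layer widths and kernel sizes are illustrative assumptions, not values from the abstract.

```python
# Illustrative sketch of a multi-pixel retrieval network (assumptions:
# input = reflectances at the four bands 0.86, 1.64, 2.13, 3.75 um on a
# small pixel patch; output = 6x6 maps of optical thickness and
# effective droplet radius; architecture details are not the authors').

import torch
import torch.nn as nn

class MultiPixelRetrievalNet(nn.Module):
    def __init__(self, in_bands=4, out_props=2, out_size=6):
        super().__init__()
        self.out_props = out_props
        self.out_size = out_size
        # Convolution layers: each output location sees reflectances of
        # neighbouring pixels, mimicking 3-D radiative-transfer coupling.
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Pool to the target grid and map features to the property maps.
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(out_size),
            nn.Flatten(),
            nn.Linear(64 * out_size * out_size, out_props * out_size * out_size),
        )

    def forward(self, reflectance_patch):
        # reflectance_patch: (batch, 4 bands, H, W), e.g. a 10x10 pixel patch
        x = self.features(reflectance_patch)
        x = self.head(x)
        # (batch, 2 properties, 6, 6): optical thickness and effective
        # droplet radius for each of the 6x6 target pixels.
        return x.view(-1, self.out_props, self.out_size, self.out_size)

if __name__ == "__main__":
    net = MultiPixelRetrievalNet()
    patch = torch.rand(8, 4, 10, 10)  # synthetic 4-band reflectances in [0, 1]
    props = net(patch)
    print(props.shape)                # torch.Size([8, 2, 6, 6])
```

In this sketch the network would be trained on pairs of 3-D radiative transfer simulations (inputs) and the known cloud fields (targets), which is the general strategy described in the abstract; a DNN-2r-style variant would instead take the two-band reflectances plus IPA retrievals as input and output corrected values.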