情報幾何学 (Information Geometry) [OA]

Author(s)
甘利 俊一
Publisher
一般社団法人 日本応用数理学会
Journal
応用数理 (ISSN:24321982)
Volume/issue/pages, date
vol.2, no.1, pp.37-56, 1992-03-16 (Released:2017-04-08)
Number of references
26

Information geometry is a new theoretical method for elucidating the intrinsic geometrical structures underlying information systems. It is applicable to wide areas of the information sciences, including statistics, information theory, and systems theory. More concretely, information geometry studies the intrinsic geometrical structure of the manifold of probability distributions, and this manifold turns out to give rise to a new and rich differential-geometric theory. Since most of the information sciences are closely related to probability distributions, information geometry provides a powerful method for studying their intrinsic structures. A manifold consisting of a smooth family of probability distributions has a unique invariant Riemannian metric, given by the Fisher information. It also admits a one-parameter family of invariant affine connections, the α-connections, in which the α- and (−α)-connections are dually coupled with respect to the Riemannian metric. This duality of affine connections is a new concept in differential geometry. When a manifold is dually flat, it admits an invariant divergence measure for which a generalized Pythagorean theorem and a projection theorem hold. The dual structure of such manifolds can be applied to statistical inference, multiterminal information theory, control systems theory, neural network manifolds, and more, and it has the potential to be applied to other disciplines, including the physical and engineering sciences.
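To make the objects named in the abstract concrete, here is a minimal sketch in standard textbook notation (the coordinates θ and symbols below are not taken from the paper itself). For a smooth parametric family p(x; θ), θ = (θ^1, …, θ^n), with log-likelihood ℓ(x; θ) = log p(x; θ) and ∂_i = ∂/∂θ^i, the Fisher metric and the α-connections are

\[
  g_{ij}(\theta) = \mathbb{E}_\theta\!\left[\,\partial_i \ell(x;\theta)\,\partial_j \ell(x;\theta)\,\right],
  \qquad
  \Gamma^{(\alpha)}_{ij,k}(\theta) = \mathbb{E}_\theta\!\left[\left(\partial_i\partial_j \ell
    + \tfrac{1-\alpha}{2}\,\partial_i \ell\,\partial_j \ell\right)\partial_k \ell\right].
\]

The duality referred to in the abstract is the relation \(\partial_k g_{ij} = \Gamma^{(\alpha)}_{ki,j} + \Gamma^{(-\alpha)}_{kj,i}\): parallel transport by the α-connection and by the (−α)-connection jointly preserve the metric. The cases α = ±1 give the exponential (e-) and mixture (m-) connections; a manifold flat with respect to both is dually flat, and for an exponential family the associated canonical divergence coincides with the Kullback–Leibler divergence, for which the generalized Pythagorean and projection theorems are stated.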
Author(s)
甘利 俊一
Publisher
一般社団法人 日本数学会
Journal
数学 (ISSN:0039470X)
Volume/issue/pages, date
vol.35, no.3, pp.229-246, 1983-07-26 (Released:2008-12-25)
Number of references
37

Author(s)
甘利 俊一
Journal
情報処理
Volume/issue/pages, date
vol.41, no.10, pp.1090-1095, 2000-10-15
Author(s)
甘利 俊一, 尾関 智子, 朴 慧暎
Publisher
日本神経回路学会
Journal
日本神経回路学会誌 (ISSN:1340766X)
Volume/issue/pages, date
vol.10, no.4, pp.189-200, 2003-12-05 (Released:2011-03-14)
Number of references
40
Number of citations
2

When the whole set of neural networks such as multilayer perceptrons is considered geometrically as a manifold, it turns out that this manifold inherently contains singular points arising from the hierarchical structure. These singularities are related to many practical problems, such as slow learning and degraded accuracy. This article deals with statistical inference and the dynamics of learning for models containing singular structures, a topic developed mainly in Japan, presents the underlying ideas, and reviews the results and research plans the authors have obtained so far.
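As a concrete, standard illustration of where such singularities come from (not taken from the article itself): consider a one-hidden-layer perceptron

\[
  f(x;\theta) = \sum_{i=1}^{h} v_i\,\varphi(w_i^{\top} x),
  \qquad \theta = (w_1,\dots,w_h,\,v_1,\dots,v_h).
\]

On the subsets \(\{v_i = 0\}\) (the incoming weights \(w_i\) then have no effect on f) and \(\{w_i = w_j\}\) (two hidden units can be merged into one), the map from parameters to functions is non-injective and the Fisher information matrix degenerates. Near these singular regions the parameter space is not a regular statistical manifold, classical regular asymptotics break down, and gradient learning typically stagnates on plateaus, which is the phenomenon the article analyzes.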
Author(s)
甘利 俊一
Publisher
日本認知科学会
Journal
認知科学 (ISSN:13417924)
Volume/issue/pages, date
vol.29, no.1, pp.5-13, 2022-03-01 (Released:2022-03-15)
Number of references
1

Deep learning makes it possible to recognize patterns, play games, process and translate sentences, and perform other tasks by learning from examples, and it sometimes outperforms humans on specific problems. A fundamental question therefore naturally arises: how different are the ways in which deep learning and humans process information? To answer this question, we briefly recapitulate the history of AI and deep learning. We then show that deep learning generates very high-dimensional empirical formulae for interpolation and extrapolation. Humans do something similar, but after finding such empirical formulae they search for the reasons why the formulae work well: humans look for the fundamental principles underlying phenomena in their environment, whereas deep learning does not. Humans cognize and understand the world they live in with consciousness; furthermore, humans have a mind. Humans have acquired mind and consciousness through a long history of evolution, which deep learning has not. What, then, is the role of mind and consciousness in cognition and understanding? The human brain, like that of other animals, has an excellent ability of prediction, which is fundamental for surviving in a harsh environment. Humans, however, have also developed the ability of postdiction, which reviews an action plan based on a prediction before it is executed, by integrating various pieces of evidence. This is an important function of consciousness, one that deep learning does not have.
Author(s)
甘利 俊一
Publisher
一般社団法人 日本生物物理学会
Journal
生物物理 (ISSN:05824052)
Volume/issue/pages, date
vol.21, no.4, pp.210-218, 1981-07-25 (Released:2009-05-25)
Number of references
5

The neural system is believed to have such a capacity for self-organization that it can modify its structure or behavior to adapt to the information structure of the environment. We have constructed a mathematical theory of self-organizing nerve nets, with the aim of elucidating the modes and capabilities of this peculiar kind of information processing in nerve nets. We first present a unified theoretical framework for analyzing learning and self-organization in a system of neurons with modifiable synapses that receives signals from a stationary information source. We consider the dynamics of self-organization, which describes how the synaptic weights are modified, together with the dynamics of neural excitation patterns. It is proved that a neural system has the ability to form automatically, by self-organization, detectors or processors for every signal included in the information source of the environment. A model of self-organization in nerve fields is then presented, and the dynamics of pattern formation in nerve fields is analyzed. The theory is applied to the formation of topographic maps between two nerve fields. It is shown that, under certain conditions, columnar microstructures are formed in nerve fields by self-organization.
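As a rough indication of the kind of dynamics referred to, here is a generic sketch of Hebbian self-organization with decay, in the spirit of this line of work but not the article's exact equations:

\[
  \tau\,\frac{d w_i}{dt} = -\,w_i + c\,\big\langle\, r_i(x; w)\, x \,\big\rangle ,
\]

where \(w_i\) is the synaptic weight vector of neuron i, x an input signal drawn from the stationary information source, \(r_i\) the neuron's excitation (output), ⟨·⟩ the average over the source, and τ, c positive constants. A weight decays unless it is reinforced by correlated pre- and post-synaptic activity, so each neuron tends to become a detector of a signal that actually occurs in the environment; coupling many such neurons through lateral excitation and inhibition in a nerve field is what produces topographic maps and, under suitable conditions, columnar microstructures.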

人工知能 (Artificial Intelligence) [OA]

Author(s)
南雲 仁一, 甘利 俊一, 中野 馨
Publisher
公益社団法人 計測自動制御学会
Journal
計測と制御 (ISSN:04534662)
Volume/issue/pages, date
vol.11, no.1, pp.58-68, 1972 (Released:2009-11-26)
Number of references
76
Number of citations
1