Authors
堀内 靖雄 中野 有紀子 小磯 花絵 石崎 雅人 鈴木 浩之 岡田 美智男 仲 真紀子 土屋 俊 市川 熹
Publisher
The Japanese Society for Artificial Intelligence
Journal
人工知能学会誌 = Journal of Japanese Society for Artificial Intelligence (ISSN:09128085)
Volume, Pages, and Date of Publication
vol.14, no.2, pp.261-272, 1999-03-01
Number of citations
16

The Japanese Map Task Corpus was created between 1994 and 1998 and contains 23 hours of digital recordings, digitized maps, and orthographic transcriptions of 128 dialogues by 64 native Japanese speakers. Map task dialogues are dialogues between two speakers: an instruction giver, who has a map with a route marked on it, and an instruction follower, who has a map without a route. The giver verbally instructs the follower to draw the route on his or her map. The two maps differ slightly so that a natural interaction can emerge even though the flow of task-internal information is basically one-way. The principles and design of the recordings are described with special reference to the augmentations of and improvements over the original HCRC Map Task Corpus. Annotations to the orthographic transcriptions are viewed as "tags" that provide the start and end times of utterances, the duration of pauses, non-verbal events, and the synchronization of overlapping utterances, in a format intended to serve as a basis for further tagging of linguistic and discourse phenomena in an interchangeable and sharable manner. Discourse and linguistic phenomena peculiar to spontaneous spoken dialogue, such as overlapping, are analyzed, and a method of recording such phenomena in the transcription is discussed and proposed, with the implication that each dialogue should be represented in a single digitized sound file so that timing information is preserved and can be referenced. The tags employed in the corpus also provide an easy way of characterizing it in terms of the number and duration of utterances and pauses. The statistical figures thus obtained are relatively independent of design factors such as the kind of map, but familiarity between the speakers does correlate significantly with the number and duration of utterances.
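As a minimal sketch of how such counts can be derived from time-aligned tags, the snippet below assumes a simplified record format of (speaker, start_seconds, end_seconds) per utterance; this format and the function names utterance_stats and pause_durations are illustrative assumptions, not the actual tag set of the Japanese Map Task Corpus.

# Hypothetical, simplified utterance records: (speaker, start_seconds, end_seconds).
utterances = [
    ("giver", 0.00, 2.35),
    ("follower", 2.10, 2.80),   # overlaps the giver's first utterance
    ("giver", 3.40, 6.10),
    ("follower", 6.90, 7.45),
]

def utterance_stats(utts):
    """Number of utterances and total speech duration per speaker."""
    stats = {}
    for speaker, start, end in utts:
        count, total = stats.get(speaker, (0, 0.0))
        stats[speaker] = (count + 1, total + (end - start))
    return stats

def pause_durations(utts, speaker):
    """Silent gaps between one speaker's consecutive utterances."""
    spans = sorted((s, e) for sp, s, e in utts if sp == speaker)
    return [start - prev_end
            for (_, prev_end), (start, _) in zip(spans, spans[1:])
            if start > prev_end]

print(utterance_stats(utterances))           # utterance count and total speech time per speaker
print(pause_durations(utterances, "giver"))  # within-speaker pause durations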
Authors
山村 雅幸 小野 貴久 小林 重信 Masayuki Yamamura Takahisa Ono Shigenobu Kobayashi
Journal
人工知能学会誌 = Journal of Japanese Society for Artificial Intelligence (ISSN:09128085)
Volume, Pages, and Date of Publication
vol.7, no.6, pp.1049-1059, 1992-11-01

Genetic Algorithms (GAs) are a learning paradigm that models the mechanism of natural evolution. The framework of a GA corresponds directly to an optimization problem. Optimization problems are classified into functional and combinatorial ones and have been studied in different manners; GAs can be applied to both types of problem and to their combinations. Over generations, a GA discovers and accumulates building blocks in the form of schemata and finds the global solution as a combination of them. It is said that a GA can find the global solution rapidly if the population holds sufficient variety; however, this expectation has not been confirmed rigorously. Indeed, several problems have been pointed out, such as premature convergence in functional optimization and the encode/decode-crossover problem in combinatorial optimization. In this paper, we give a solution to the encode/decode-crossover problem for the traveling salesman problem (TSP) with a character-preserving GA. In section 2, we define the encode/decode-crossover problem. The encode/decode problem is to define a correspondence between the GA space and the problem space; the crossover problem is to define a crossover method in the GA space. Both are closely related to the performance of a GA. We point out some problems with conventional approaches for the TSP. We propose three criteria for defining a better encoding/decoding, namely completeness, soundness, and non-redundancy, and a criterion for defining a better crossover, namely character-preservingness. In section 3, we propose a character-preserving GA. In the TSP, good subtours are worth preserving for descendants, so we propose a subtour exchange crossover that avoids breaking subtours as far as possible. We also propose a compression method to improve efficiency. In section 4, we design an experiment to confirm the usefulness of our character-preserving GA. We use a double-circled TSP in which the same number of cities is placed on each of two concentric circles. There are two kinds of local solutions, a "C"-type and an "O"-type, and the ratio between the outer and inner radii determines which is the optimum. We vary the radius ratio and observe how often optimal solutions are obtained. The results show that the character-preserving GA finds optimal solutions effectively.
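The abstract describes the subtour exchange crossover only at a high level, so the Python sketch below is an assumption-based illustration of the underlying idea: a common set of cities is exchanged between two parent tours so that both children remain complete and sound tours. The function name subtour_exchange and the concrete exchange rule are illustrative assumptions, not the operator proposed in the paper.

import random

def subtour_exchange(parent1, parent2, rng=random):
    """Simplified subtour-exchange-style crossover (illustration only).

    A random segment of parent1 is chosen; the cities in that segment are
    reordered in each child according to the order in which they appear in
    the other parent, so orderings shared by both parents are preserved.
    """
    n = len(parent1)
    i, j = sorted(rng.sample(range(n), 2))
    segment = set(parent1[i:j + 1])

    # Order of the chosen cities as they appear in each parent.
    order_in_p2 = [city for city in parent2 if city in segment]
    order_in_p1 = [city for city in parent1 if city in segment]

    # Child 1 keeps parent1's tour outside the segment, parent2's ordering inside it.
    child1 = parent1[:i] + order_in_p2 + parent1[j + 1:]

    # Child 2 keeps parent2's tour, with the same cities reordered as in parent1.
    it = iter(order_in_p1)
    child2 = [next(it) if city in segment else city for city in parent2]
    return child1, child2

p1 = [0, 1, 2, 3, 4, 5, 6, 7]
p2 = [3, 7, 1, 0, 5, 2, 6, 4]
c1, c2 = subtour_exchange(p1, p2, random.Random(42))
print(c1, c2)

Because each child contains every city exactly once, this kind of operator satisfies the completeness and soundness criteria mentioned in the abstract while passing on shared orderings intact.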
Authors
渡辺 澄夫 Sumio Watanabe 東京工業大学精密工学研究所 Precision and Intelligence Laboratory Tokyo Institute of Technology
Journal
人工知能学会誌 = Journal of Japanese Society for Artificial Intelligence (ISSN:09128085)
Volume, Pages, and Date of Publication
vol.16, no.2, pp.308-315, 2001-03-01

The parameter space of a hierarchical learning machine is not a Riemannian manifold, since the rank of the Fisher information metric depends on the parameter. In a previous paper, we proved that the stochastic complexity is asymptotically equal to λ log n − (m−1) log log n, where λ is a rational number, m is a natural number, and n is the number of empirical samples, and that both λ and m can be calculated by resolution of singularities. However, both λ and m depend on the parameter representation and on the size of the true distribution. In this paper, we study Jeffreys' prior distribution, which is coordinate-free, and prove that 2λ is equal to the dimension of the parameter set and that m = 1, independently of the parameter representation and of the singularities. This fact indicates that Jeffreys' prior is useful in model selection and knowledge discovery, even though it makes the prediction error larger than that of positive distributions.
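For reference, the asymptotic form quoted above and the standard definition of Jeffreys' prior (the latter stated from general knowledge rather than taken from the paper itself) can be written as

F(n) \simeq \lambda \log n - (m-1) \log\log n,
\qquad
\pi_{\mathrm{Jeffreys}}(\theta) \propto \sqrt{\det I(\theta)},

where I(\theta) is the Fisher information matrix; for Jeffreys' prior the paper shows 2\lambda = d, the dimension of the parameter set, and m = 1.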