Authors
Daisuke SAITO Nobuaki MINEMATSU Keikichi HIROSE
Publisher
The Institute of Electronics, Information and Communication Engineers
Journal
IEICE Transactions on Information and Systems (ISSN:09168532)
Volume, issue, pages, and publication date
vol.E103.D, no.6, pp.1395-1405, 2020-06-01 (Released:2020-06-01)
Number of references
28

This paper describes a novel approach to flexible control of speaker characteristics using a tensor representation of multiple Gaussian mixture models (GMMs). In voice conversion studies, realizing conversion from/to an arbitrary speaker's voice is an important objective. For this purpose, eigenvoice conversion (EVC) based on an eigenvoice GMM (EV-GMM) was proposed. In EVC, a speaker space is constructed from GMM supervectors, which are high-dimensional vectors obtained by concatenating the mean vectors of each speaker GMM. In this space, each speaker is represented by a small number of weight parameters for the eigen-supervectors. In this paper, we revisit construction of the speaker space by introducing tensor factor analysis of the training data set. In our approach, each speaker is represented as a matrix whose rows and columns correspond to the dimensions of the mean vectors and the Gaussian components, respectively. The speaker space is derived by tensor factor analysis of the set of these matrices. Our approach solves an inherent problem of the supervector representation and improves voice conversion performance. In addition, the effects of speaker adaptive training before factorization are investigated. Experimental results of one-to-many voice conversion demonstrate the effectiveness of the proposed approach.
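As a rough illustration of the matrix representation described above, the sketch below stacks per-speaker mean matrices (feature dimension x Gaussian component) into a 3-way tensor and factorizes it with a truncated higher-order SVD, a Tucker-style stand-in for the tensor factor analysis; the array shapes, ranks, and function names are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' implementation) of representing each
# training speaker by the D x M matrix of GMM mean vectors and factorizing
# the stacked 3-way tensor with a truncated higher-order SVD.
import numpy as np

def speaker_mean_matrix(gmm_means):
    """gmm_means: (M, D) array of component means -> (D, M) speaker matrix."""
    return np.asarray(gmm_means).T

def mode_unfold(tensor, mode):
    """Unfold a 3-way tensor along the given mode into a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker_speaker_space(speaker_matrices, rank_d, rank_m):
    """Stack speaker matrices into an S x D x M tensor and return
    per-speaker weight matrices in a low-rank Tucker-style space."""
    T = np.stack(speaker_matrices)            # (S, D, M)
    # Leading left singular vectors of the mode-1 and mode-2 unfoldings
    U_d = np.linalg.svd(mode_unfold(T, 1), full_matrices=False)[0][:, :rank_d]
    U_m = np.linalg.svd(mode_unfold(T, 2), full_matrices=False)[0][:, :rank_m]
    # Project each speaker matrix onto the two factor bases
    weights = np.einsum('sdm,dr,mq->srq', T, U_d, U_m)   # (S, rank_d, rank_m)
    return U_d, U_m, weights

def reconstruct_speaker(U_d, U_m, w):
    """Rebuild a D x M mean matrix from a speaker's weight matrix."""
    return U_d @ w @ U_m.T

# Toy usage: 10 speakers, 64 mixture components, 24-dimensional features
rng = np.random.default_rng(0)
mats = [speaker_mean_matrix(rng.normal(size=(64, 24))) for _ in range(10)]
U_d, U_m, W = tucker_speaker_space(mats, rank_d=8, rank_m=16)
approx = reconstruct_speaker(U_d, U_m, W[0])
print(approx.shape)  # (24, 64)
```

Unlike a concatenated supervector, the per-speaker weight matrix keeps the feature-dimension axis and the Gaussian-component axis separate, which is the structural point of the matrix representation.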
Authors
Xinyi Zhao Nobuaki Minematsu Daisuke Saito
Journal
IPSJ SIG Technical Reports: Spoken Language Processing (SLP) (ISSN:21888663)
Volume, issue, pages, and publication date
vol.2018-SLP-125, no.17, pp.1-4, 2018-12-03

In English education, speech synthesis technologies can be used effectively to develop a reading tutor that shows students how to read given sentences in a natural, native-like way. The tutor can not only provide native-like audio of the input sentences but also visualize the prosodic structure required to read those sentences aloud naturally. As the first step toward such a reading tutor, prosodic events that imply the intonation of a sentence need to be predicted from plain text. In this research, phrase boundaries and 4-level stress, instead of the traditional binary stress level, are taken as the prosodic events. The 4-level stress labels not only categorize syllables into stressed and unstressed ones but also indicate where phrase stress and sentence stress should appear in a sentence. Conditional Random Fields (CRFs), a popular sequence labeling method, are employed for the prediction. Experiments showed that the proposed method improves the performance of prosody prediction compared to previous studies.
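For concreteness, here is a minimal sketch of syllable-level sequence labeling with a linear-chain CRF using the sklearn-crfsuite toolkit; the feature set, the combined stress/boundary label scheme (e.g. "S3+B"), and the toolkit choice are illustrative assumptions and are not taken from the paper.

```python
# A minimal sketch of CRF-based prediction of prosodic events per syllable,
# assuming sklearn-crfsuite and an illustrative feature/label design.
import sklearn_crfsuite

def syllable_features(sent, i):
    """Simple contextual features for one syllable (assumed feature set)."""
    syl = sent[i]
    feats = {
        'syllable': syl['text'],
        'word': syl['word'],
        'pos': syl['pos'],                            # word's part-of-speech tag
        'pos_in_word': str(syl['pos_in_word']),       # syllable index within word
        'is_last_in_word': syl['is_last_in_word'],
    }
    if i > 0:
        feats['prev_word'] = sent[i - 1]['word']
    else:
        feats['BOS'] = True
    if i < len(sent) - 1:
        feats['next_word'] = sent[i + 1]['word']
    else:
        feats['EOS'] = True
    return feats

def sent2features(sent):
    return [syllable_features(sent, i) for i in range(len(sent))]

crf = sklearn_crfsuite.CRF(
    algorithm='lbfgs',
    c1=0.1, c2=0.1,               # L1/L2 regularization strengths
    max_iterations=100,
    all_possible_transitions=True,
)

# Toy training data; real input would be a prosodically labeled corpus.
toy_sent = [
    {'text': 'what', 'word': 'what', 'pos': 'WP', 'pos_in_word': 0, 'is_last_in_word': True},
    {'text': 'a',    'word': 'a',    'pos': 'DT', 'pos_in_word': 0, 'is_last_in_word': True},
    {'text': 'day',  'word': 'day',  'pos': 'NN', 'pos_in_word': 0, 'is_last_in_word': True},
]
X_train = [sent2features(toy_sent)]
y_train = [['S1', 'S0', 'S3+B']]   # assumed 4-level stress + boundary labels
crf.fit(X_train, y_train)
print(crf.predict(X_train))        # in practice: predict on held-out sentences
```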
Authors
Jianwu Dang Aijun Li Donna Erickson Atsuo Suemitsu Masato Akagi Kyoko Sakuraba Nobuaki Minematsu Keikichi Hirose
Publisher
Acoustical Society of Japan
Journal
Acoustical Science and Technology (ISSN:13463969)
Volume, issue, pages, and publication date
vol.31, no.6, pp.394-402, 2010-11-01 (Released:2010-11-01)
Number of references
16
Number of citations
1 11

In this study, we conducted a comparative experiment on emotion perception across cultures. Emotional components were perceived by subjects from Japan, the United States, and China, all of whom had no experience living abroad. An emotional speech database without linguistic information was used and evaluated along three- and/or six-emotion dimensions. Principal component analysis (PCA) indicates that common factors can explain about 60% of the variance of the data among the three cultures with the three-emotion description and about 50% of the variance between the Japanese and Chinese cultures with the six-emotion description. The effects of the emotion categories on the perception results were investigated. The emotions of anger, joy, and sadness (group 1) have consistent structures in the PCA-based spaces when switching from three-emotion to six-emotion categories. Disgust, surprise, and fear (group 2) appeared as paired counterparts of anger, joy, and sadness, respectively. When investigating the subspaces constructed by these two groups, the similarity between the two emotion groups was found to be fairly high in the two-dimensional space. The similarity becomes lower in three- or higher-dimensional spaces, though not significantly so. The results of this study suggest that a wide range of human emotions might fall into a small subspace of basic emotions.
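The variance figures above come from PCA of the pooled perception data; the sketch below shows, on synthetic ratings, how such cumulative explained-variance percentages are computed with scikit-learn. The data layout (120 stimuli x 9 rating columns, i.e. 3 cultures x 3 emotion scales) is an illustrative assumption, not the study's actual design.

```python
# A minimal sketch (illustrative random data, not the study's) of checking how
# much variance the leading principal components explain in rating data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Assumed layout: rows are speech stimuli, columns are per-culture rating
# dimensions, e.g. 3 cultures x 3 emotion scales (anger, joy, sadness).
ratings = rng.normal(size=(120, 9))

pca = PCA()
pca.fit(ratings)
cum = np.cumsum(pca.explained_variance_ratio_)
for k, c in enumerate(cum, start=1):
    print(f"first {k} components explain {c:.1%} of the variance")
```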
Authors
Nobuaki MINEMATSU Ryuji KITA Keikichi HIROSE
Publisher
The Institute of Electronics, Information and Communication Engineers
Journal
IEICE Transactions on Information and Systems (ISSN:09168532)
Volume, issue, pages, and publication date
vol.E86-D, no.3, pp.550-557, 2003-03-01

Accurate estimation of the accentual attribute values of words, which is required to apply the rules of Japanese word accent sandhi to prosody generation, is an important factor in realizing high-quality text-to-speech (TTS) conversion. The rules were formulated by Sagisaka et al. and are widely used in Japanese TTS conversion systems. Applying these rules, however, requires the values of a few accentual attributes for each constituent word of the input text. These attribute values cannot be found in any public database or Japanese accent dictionary. Further, they are difficult even for native speakers of Japanese to estimate through introspection about their mother tongue alone. In this paper, an algorithm is proposed that automatically estimates these values from a large amount of accent-type data of accentual phrases collected through a long series of listening experiments. The proposed algorithm explicitly considers inter-speaker differences in knowledge of accent sandhi. To improve the coverage of the estimated values over the obtained data, the rules were tentatively modified. Evaluation experiments using two-mora accentual phrases showed the high validity of the estimated values and the modified rules, and also revealed some defects caused by the variety of linguistic expressions in Japanese.
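One way to picture the estimation problem is as a search over candidate attribute values that best explains the accent types observed in the listening experiments. The sketch below is a hypothetical brute-force version, with `apply_sandhi_rules` standing in for Sagisaka et al.'s rules and majority voting over listeners as a crude proxy for handling inter-speaker differences; it is not the paper's algorithm.

```python
# A hypothetical sketch: pick the attribute values for a word that make a
# given accent-sandhi rule function agree best with observed accent types.
from collections import Counter
from itertools import product

def majority_vote(answers):
    """Reduce listeners' answers for one accentual phrase to a single type."""
    return Counter(answers).most_common(1)[0][0]

def estimate_attributes(word, phrases, apply_sandhi_rules,
                        candidate_values=range(4)):
    """phrases: list of (context_attrs, answers), where context_attrs are the
    known attributes of the other word(s) in the accentual phrase and answers
    are the accent types reported by the listeners."""
    best_attrs, best_score = None, -1
    for attrs in product(candidate_values, repeat=2):   # e.g. two attributes
        score = 0
        for context_attrs, answers in phrases:
            predicted = apply_sandhi_rules(attrs, context_attrs)
            if predicted == majority_vote(answers):
                score += 1
        if score > best_score:
            best_attrs, best_score = attrs, score
    return best_attrs, best_score / max(len(phrases), 1)

# Toy usage with a dummy rule (sum of attribute values modulo 3).
dummy_rule = lambda attrs, ctx: (attrs[0] + attrs[1] + ctx) % 3
data = [(0, [1, 1, 2]), (1, [2, 2, 2])]
print(estimate_attributes('hana', data, dummy_rule))   # ((0, 1), 1.0)
```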
Authors
Greg Short Keikichi Hirose Nobuaki Minematsu
Publisher
Acoustical Society of Japan
Journal
Acoustical Science and Technology (ISSN:13463969)
Volume, issue, pages, and publication date
vol.35, no.2, pp.73-85, 2014-02-01 (Released:2014-03-01)
Number of references
34

For Japanese speech processing, being able to automatically distinguish between geminate and singleton consonants has many benefits. In standard recognition methods, hidden Markov models (HMMs) are used. However, HMMs are not good at differentiating between items that are distinguished primarily by temporal rather than spectral differences. Also, gemination depends on the length of the sounds surrounding the consonant. Because of this, we propose a method that automatically distinguishes geminates from singletons and takes these factors into account. To do so, it is necessary to determine which surrounding sounds serve as cues and what the mechanism of human recognition is. For this, we conduct perceptual experiments to examine the relationship between surrounding sounds and primary cues. Then, using these results, we design a method that can automatically recognize gemination. We test this method on two datasets, including a speaking-rate database. The results clearly outperform the HMM-based method, generally outperform the case in which only the primary cue is used for recognition, and show greater robustness against speaking rate.
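As a toy illustration of duration-based geminate/singleton classification, the sketch below builds rate-normalized duration ratios (consonant duration relative to neighboring sounds) and fits a logistic-regression classifier; the feature choices and toy values are assumptions and do not reproduce the paper's method.

```python
# A minimal sketch (illustrative features, not the paper's method) of
# classifying geminate vs. singleton consonants from durational cues,
# normalizing by neighboring-sound durations to reduce speaking-rate effects.
import numpy as np
from sklearn.linear_model import LogisticRegression

def duration_features(cons_dur, prev_vowel_dur, next_vowel_dur, word_dur):
    """Ratios of the consonant duration to surrounding-sound durations."""
    return [
        cons_dur / prev_vowel_dur,
        cons_dur / next_vowel_dur,
        cons_dur / word_dur,
    ]

# Toy data in seconds: (consonant, preceding vowel, following vowel, word)
singletons = [(0.06, 0.09, 0.08, 0.45), (0.07, 0.10, 0.09, 0.50)]
geminates  = [(0.16, 0.08, 0.07, 0.52), (0.18, 0.09, 0.08, 0.55)]

X = np.array([duration_features(*d) for d in singletons + geminates])
y = np.array([0, 0, 1, 1])   # 0 = singleton, 1 = geminate

clf = LogisticRegression().fit(X, y)
test = duration_features(0.15, 0.09, 0.08, 0.50)
print("geminate" if clf.predict([test])[0] else "singleton")  # likely "geminate"
```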