This paper develops a new statistical method for building language models (LMs) of Japanese dialects for automatic speech recognition (ASR). One possible application is recognizing the variety of utterances in our daily lives. The most crucial problem in training language models for dialects is the shortage of dialect linguistic corpora. Our solution is to transform linguistic corpora into dialects at the level of word pronunciations. We develop phoneme-sequence transducers based on weighted finite-state transducers (WFSTs). Each word in common-language (CL) corpora is automatically labelled with its dialect pronunciation. For example, anata (the most common representation of 'you' in Japanese) is labelled with anta (Kansai dialect). The phoneme-sequence transducers are trained from parallel corpora of a dialect and CL. We evaluate the word recognition accuracy of our ASR system. Our method outperforms an ASR system whose LMs are trained from untransformed written-language corpora by 9.9 points.
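The transduction step described above can be illustrated with a minimal sketch. This is not the paper's trained WFST; it is a toy transducer with hypothetical, hand-picked rewrite weights (the `RULES` table and its values are assumptions), showing only the idea of mapping a CL pronunciation to weighted dialect candidates and keeping the best-scoring one.

```python
# Toy weighted phoneme-sequence transducer (sketch, not the paper's model).
# Each rule maps a common-language (CL) pronunciation to dialect candidates
# with a weight; lower weight = more likely. In the paper these mappings are
# WFSTs trained from parallel dialect/CL corpora; here they are hand-coded
# hypothetical examples.
RULES = {
    "anata": [("anta", 0.2), ("anata", 0.9)],  # hypothetical weights
}

def transduce(pron):
    """Return candidate dialect pronunciations, best (lowest weight) first."""
    candidates = RULES.get(pron, [(pron, 0.0)])  # unknown words pass through
    return sorted(candidates, key=lambda c: c[1])

if __name__ == "__main__":
    best, weight = transduce("anata")[0]
    print(best)  # anta
```

A real implementation would compose a full WFST over phoneme sequences with the input lattice rather than look up whole words, but the scoring-and-selection pattern is the same.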
|Host publication title||24th International Conference on Computational Linguistics - Proceedings of COLING 2012: Technical Papers|
|Publication status||Published - 2012|
|Event||24th International Conference on Computational Linguistics, COLING 2012 - Mumbai|
Duration: 8 Dec 2012 → 15 Dec 2012