Crowdsourcing for evaluating machine translation quality

Shinsuke Goto, Donghui Lin, Toru Ishida

Research output

5 Citations (Scopus)

Abstract

The recent popularity of machine translation has increased the demand for the evaluation of translations. However, the traditional evaluation approach, manual checking by a bilingual professional, is too expensive and too slow. In this study, we confirm the feasibility of crowdsourcing by analyzing the accuracy of crowdsourced translation evaluations. We compare crowdsourcing scores to professional scores with regard to three metrics: translation-score, sentence-score, and system-score. A Chinese-to-English translation evaluation task was designed around the NTCIR-9 PATENT parallel corpus, with the goal of obtaining five-point evaluations of adequacy and fluency. The experiment shows that the average score of crowdsourcing workers matches professional evaluation results well. The system-score comparison strongly indicates that crowdsourcing can be used to find the best translation system given an input of 10 source sentences.
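As a rough, hypothetical illustration of the comparison described above (not the authors' code or data), the Python sketch below averages several crowd workers' five-point scores for each sentence and correlates the averages with a professional evaluator's scores; the identifiers, example scores, and the choice of Pearson correlation are assumptions made for illustration only.

from statistics import mean
from typing import Dict, List


def average_crowd_scores(crowd: Dict[str, List[int]]) -> Dict[str, float]:
    """Average each sentence's five-point scores across crowd workers."""
    return {sent_id: mean(scores) for sent_id, scores in crowd.items()}


def pearson(xs: List[float], ys: List[float]) -> float:
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical adequacy scores (1-5) for three source sentences.
crowd_scores = {"s1": [4, 5, 4], "s2": [2, 3, 2], "s3": [5, 4, 5]}
professional_scores = {"s1": 4.0, "s2": 2.0, "s3": 5.0}

averaged = average_crowd_scores(crowd_scores)
sentence_ids = sorted(averaged)
r = pearson([averaged[s] for s in sentence_ids],
            [professional_scores[s] for s in sentence_ids])
print(f"Sentence-level correlation with professional scores: {r:.3f}")

In this toy setup, a correlation close to 1 would correspond to the abstract's finding that averaged crowd scores track professional judgments; the actual metrics and aggregation used in the paper may differ.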

Original language: English
Title of host publication: Proceedings of the 9th International Conference on Language Resources and Evaluation, LREC 2014
Editors: Nicoletta Calzolari, Khalid Choukri, Sara Goggi, Thierry Declerck, Joseph Mariani, Bente Maegaard, Asuncion Moreno, Jan Odijk, Helene Mazo, Stelios Piperidis, Hrafn Loftsson
Publisher: European Language Resources Association (ELRA)
Pages: 3456-3463
Number of pages: 8
ISBN (Electronic): 9782951740884
Publication status: Published - 2014
Externally published: Yes
Event: 9th International Conference on Language Resources and Evaluation, LREC 2014 - Reykjavik, Iceland
Duration: 26 May 2014 - 31 May 2014

Publication series

Name: Proceedings of the 9th International Conference on Language Resources and Evaluation, LREC 2014

Other

Other: 9th International Conference on Language Resources and Evaluation, LREC 2014
Country/Territory: Iceland
City: Reykjavik
Period: 14/5/26 - 14/5/31

ASJC Scopus subject areas

  • Linguistics and Language
  • Library and Information Sciences
  • Education
  • Language and Linguistics
