Can Humans Correct Errors From System? Investigating Error Tendencies in Speaker Identification Using Crowdsourcing

Yuta Ide, Susumu Saito, Teppei Nakano, Tetsuji Ogawa

Research output: Conference article, peer-reviewed

Abstract

An attempt was made to clarify the effectiveness of crowdsourcing in reducing errors in automatic speaker identification (ASID). Errors can be reduced efficiently by manually revalidating the unreliable results given by ASID systems. Ideally, errors should be corrected appropriately, and correct answers should not be miscorrected. In addition, a low false acceptance rate is desirable for authentication, while a high false rejection rate should be avoided from a usability viewpoint. However, it is not certain that humans can achieve such ideal speaker identification, and in crowdsourcing, the existence of malicious workers cannot be ignored. This study therefore investigates whether manual verification of error-prone inputs by crowd workers can reduce ASID errors and whether the resulting corrections are ideal. Experimental investigations on Amazon Mechanical Turk, in which 426 qualified workers identified 256 speech pairs from VoxCeleb data, demonstrated that crowdsourced verification can significantly reduce the number of false acceptances without increasing the number of false rejections, compared to the results from the ASID system alone.
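The abstract's evaluation hinges on the false acceptance rate (FAR) and false rejection rate (FRR) of accept/reject decisions over speech pairs. As a minimal sketch of how these two rates are conventionally computed (the function and variable names are illustrative, not taken from the paper):

```python
def far_frr(decisions, labels):
    """Compute false acceptance and false rejection rates.

    decisions: list of bools, True = the verifier accepts the pair
               as the same speaker (system or crowd decision)
    labels:    list of bools, True = the pair truly is the same speaker
    """
    # False acceptance: accepted although the speakers differ.
    fa = sum(d and not l for d, l in zip(decisions, labels))
    # False rejection: rejected although the speakers match.
    fr = sum((not d) and l for d, l in zip(decisions, labels))

    n_impostor = sum(not l for l in labels)  # different-speaker pairs
    n_genuine = sum(labels)                  # same-speaker pairs

    far = fa / n_impostor if n_impostor else 0.0
    frr = fr / n_genuine if n_genuine else 0.0
    return far, frr
```

Under this formulation, the paper's reported outcome corresponds to the crowd-verified decisions yielding a lower FAR than the ASID system's decisions while the FRR does not increase.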

Original language: English
Pages (from-to): 5100-5104
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2022-September
DOI
Publication status: Published - 2022
Event: 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022 - Incheon, Korea, Republic of
Duration: 18 Sep 2022 to 22 Sep 2022

ASJC Scopus subject areas

  • Language and Linguistics
  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Modelling and Simulation

