TY - GEN
T1 - Generalising Kendall's Tau for noisy and incomplete preference judgements
AU - Togashi, Riku
AU - Sakai, Tetsuya
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/9/23
Y1 - 2019/9/23
N2 - We propose a new ranking evaluation measure for situations where multiple preference judgements are given for each item pair but they may be noisy (i.e., some judgements are unreliable) and/or incomplete (i.e., some judgements are missing). While it is generally easier for assessors to conduct preference judgements than absolute judgements, it is often not practical to obtain preference judgements for all combinations of documents. However, this problem can be overcome if we can effectively utilise noisy and incomplete preference judgements such as those that can be obtained from crowdsourcing. Our measure, η, is based on a simple probabilistic user model of the labellers which assumes that each document is associated with a graded relevance score for a given query. We also consider situations where multiple preference probabilities, rather than preference labels, are given for each document pair. For example, in the absence of manual preference judgements, one might want to employ an ensemble of machine learning techniques to obtain such estimated probabilities. For this scenario, we propose another ranking evaluation measure called ηp. Through simulated experiments, we demonstrate that our proposed measures η and ηp can evaluate rankings more reliably than τ-b, a popular rank correlation measure.
AB - We propose a new ranking evaluation measure for situations where multiple preference judgements are given for each item pair but they may be noisy (i.e., some judgements are unreliable) and/or incomplete (i.e., some judgements are missing). While it is generally easier for assessors to conduct preference judgements than absolute judgements, it is often not practical to obtain preference judgements for all combinations of documents. However, this problem can be overcome if we can effectively utilise noisy and incomplete preference judgements such as those that can be obtained from crowdsourcing. Our measure, η, is based on a simple probabilistic user model of the labellers which assumes that each document is associated with a graded relevance score for a given query. We also consider situations where multiple preference probabilities, rather than preference labels, are given for each document pair. For example, in the absence of manual preference judgements, one might want to employ an ensemble of machine learning techniques to obtain such estimated probabilities. For this scenario, we propose another ranking evaluation measure called ηp. Through simulated experiments, we demonstrate that our proposed measures η and ηp can evaluate rankings more reliably than τ-b, a popular rank correlation measure.
KW - Crowdsourcing
KW - Evaluation measures
KW - Graded relevance
KW - Preference judgements
UR - http://www.scopus.com/inward/record.url?scp=85074225842&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85074225842&partnerID=8YFLogxK
U2 - 10.1145/3341981.3344246
DO - 10.1145/3341981.3344246
M3 - Conference contribution
AN - SCOPUS:85074225842
T3 - ICTIR 2019 - Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval
SP - 193
EP - 196
BT - ICTIR 2019 - Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval
PB - Association for Computing Machinery, Inc
T2 - 9th ACM SIGIR International Conference on the Theory of Information Retrieval, ICTIR 2019
Y2 - 2 October 2019 through 5 October 2019
ER -