TY - GEN
T1 - Precision-at-ten considered redundant
AU - Webber, William
AU - Moffat, Alistair
AU - Zobel, Justin
AU - Sakai, Tetsuya
PY - 2008
Y1 - 2008
N2 - Information retrieval systems are compared using evaluation metrics, with researchers commonly reporting results for simple metrics such as precision-at-10 or reciprocal rank together with more complex ones such as average precision or discounted cumulative gain. In this paper, we demonstrate that complex metrics are as good as or better than simple metrics at predicting the performance of the simple metrics on other topics. Therefore, reporting of results from simple metrics alongside complex ones is redundant.
KW - Experimentation
KW - Measurement
KW - Performance
UR - http://www.scopus.com/inward/record.url?scp=57349093460&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=57349093460&partnerID=8YFLogxK
U2 - 10.1145/1390334.1390456
DO - 10.1145/1390334.1390456
M3 - Conference contribution
AN - SCOPUS:57349093460
SN - 9781605581644
T3 - ACM SIGIR 2008 - 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Proceedings
SP - 695
EP - 696
BT - ACM SIGIR 2008 - 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Proceedings
T2 - 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM SIGIR 2008
Y2 - 20 July 2008 through 24 July 2008
ER -