Precision-at-ten considered redundant

William Webber*, Alistair Moffat, Justin Zobel, Tetsuya Sakai

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

29 Citations (Scopus)

Abstract

Information retrieval systems are compared using evaluation metrics. Researchers commonly report results for simple metrics such as precision-at-10 or reciprocal rank alongside more complex ones such as average precision or discounted cumulative gain. In this paper, we demonstrate that the complex metrics are as good as or better than the simple metrics at predicting the performance of the simple metrics on other topics. Reporting results for simple metrics alongside complex ones is therefore redundant.
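
For concreteness, the four metrics named in the abstract can be computed as follows. This is a minimal sketch, not code from the paper: it assumes binary relevance judgements and uses the standard textbook definitions of each metric, and the example ranking is hypothetical.

```python
import math

def precision_at_k(rel, k=10):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(rel[:k]) / k

def reciprocal_rank(rel):
    """1/rank of the first relevant document, or 0 if none is retrieved."""
    for i, r in enumerate(rel, start=1):
        if r:
            return 1.0 / i
    return 0.0

def average_precision(rel, num_relevant):
    """Mean of the precision values at the ranks of relevant documents."""
    hits, total = 0, 0.0
    for i, r in enumerate(rel, start=1):
        if r:
            hits += 1
            total += hits / i
    return total / num_relevant if num_relevant else 0.0

def dcg(rel, k=10):
    """Discounted cumulative gain with a log2 rank discount."""
    return sum(r / math.log2(i + 1) for i, r in enumerate(rel[:k], start=1))

# Hypothetical ranking for one topic: 1 = relevant, 0 = not relevant.
ranking = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
print(precision_at_k(ranking))        # 0.4
print(reciprocal_rank(ranking))       # 1.0
print(average_precision(ranking, 4))  # ~0.747
print(dcg(ranking))                   # ~2.264
```

In the paper's terms, precision_at_k and reciprocal_rank are the "simple" metrics, and average_precision and dcg the "complex" ones; the claim is that a system's scores on the latter predict its simple-metric performance on unseen topics at least as well as the simple metrics themselves do.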

Original language: English
Title of host publication: ACM SIGIR 2008 - 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Proceedings
Pages: 695-696
Number of pages: 2
DOIs
Publication status: Published - 2008
Externally published: Yes
Event: 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM SIGIR 2008 - Singapore, Singapore
Duration: 2008 Jul 20 - 2008 Jul 24

Publication series

Name: ACM SIGIR 2008 - 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Proceedings

Conference

Conference: 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM SIGIR 2008
Country/Territory: Singapore
City: Singapore
Period: 08/7/20 - 08/7/24

Keywords

  • Experimentation
  • Measurement
  • Performance

ASJC Scopus subject areas

  • Information Systems
  • Software
