The probability that your hypothesis is correct, credible intervals, and effect sizes for IR evaluation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Citations (Scopus)

Abstract

Using classical statistical significance tests, researchers can only discuss P(D+|H), the probability of observing the data D at hand or something more extreme, under the assumption that the hypothesis H is true (i.e., the p-value). But what we usually want is P(H|D), the probability that a hypothesis is true, given the data. If we use Bayesian statistics with state-of-the-art Markov Chain Monte Carlo (MCMC) methods for obtaining posterior distributions, this is no longer a problem. That is, instead of the classical p-values and 95% confidence intervals, which are often misinterpreted respectively as "probability that the hypothesis is (in)correct" and "probability that the true parameter value falls within the interval is 95%," we can easily obtain P(H|D) and credible intervals, which represent exactly the above. Moreover, with Bayesian tests, we can easily handle virtually any hypothesis, not just "equality of means," and obtain an Expected A Posteriori (EAP) value of any statistic that we are interested in. We provide simple tools to encourage the IR community to take up paired and unpaired Bayesian tests for comparing two systems. Using a variety of TREC and NTCIR data, we compare P(H|D) with p-values, credible intervals with confidence intervals, and Bayesian EAP effect sizes with classical ones. Our results show that (a) p-values and confidence intervals can respectively be regarded as approximations of what we really want, namely, P(H|D) and credible intervals; and (b) sample effect sizes from classical significance tests can differ considerably from the Bayesian EAP effect sizes, which suggests that the former can be poor estimates of population effect sizes. For both paired and unpaired tests, we propose that the IR community report the EAP, the credible interval, and the probability of the hypothesis being true, not only for the raw difference in means but also for the effect size in terms of Glass's Δ.
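To illustrate the kind of quantity the abstract describes, the following is a minimal, self-contained sketch of a Bayesian paired comparison of two systems: a random-walk Metropolis sampler (not the Hamiltonian Monte Carlo tooling the paper itself provides) draws from the posterior of the mean per-topic score difference mu under an assumed normal likelihood with vague flat priors, then reports the EAP of mu, a 95% credible interval, and P(H: mu > 0 | D). The function name and prior choices are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random

def bayes_paired_test(diffs, n_samples=20000, burn_in=2000, seed=0):
    """Metropolis sampler for the posterior of the mean score difference mu,
    assuming a normal likelihood and vague flat priors on mu and log(sigma).
    Returns the EAP of mu, a 95% credible interval, and P(mu > 0 | D)."""
    rng = random.Random(seed)
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / n) + 1e-9

    def log_lik(mu, log_sigma):
        # Normal log-likelihood of the per-topic differences (constants dropped).
        sigma = math.exp(log_sigma)
        return sum(-log_sigma - 0.5 * ((d - mu) / sigma) ** 2 for d in diffs)

    mu, log_sigma = mean, math.log(sd)          # start at the sample statistics
    cur = log_lik(mu, log_sigma)
    mus = []
    for step in range(n_samples):
        mu_p = mu + rng.gauss(0, sd / math.sqrt(n))   # random-walk proposal
        ls_p = log_sigma + rng.gauss(0, 0.1)
        prop = log_lik(mu_p, ls_p)
        if math.log(rng.random()) < prop - cur:       # Metropolis accept/reject
            mu, log_sigma, cur = mu_p, ls_p, prop
        if step >= burn_in:
            mus.append(mu)

    mus.sort()
    eap = sum(mus) / len(mus)                   # Expected A Posteriori estimate
    ci = (mus[int(0.025 * len(mus))], mus[int(0.975 * len(mus))])
    p_h = sum(m > 0 for m in mus) / len(mus)    # P(H: mu > 0 | D)
    return eap, ci, p_h
```

With per-topic differences that are clearly positive (say, system A beats system B on almost every topic), p_h approaches 1 and the credible interval excludes 0; with noisy, near-zero differences, p_h hovers near 0.5, which is exactly the direct "probability that the hypothesis is true" reading that p-values do not license.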

Original language: English
Title of host publication: SIGIR 2017 - Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval
Publisher: Association for Computing Machinery, Inc
Pages: 25-34
Number of pages: 10
ISBN (Electronic): 9781450350228
DOIs: https://doi.org/10.1145/3077136.3080766
Publication status: Published - 2017 Aug 7
Event: 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2017 - Tokyo, Shinjuku, Japan
Duration: 2017 Aug 7 - 2017 Aug 11

Publication series

Name: SIGIR 2017 - Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval

Other

Other: 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2017
Country: Japan
City: Tokyo, Shinjuku
Period: 17/8/7 - 17/8/11

Keywords

  • Bayesian Hypothesis Tests
  • Confidence Intervals
  • Credible Intervals
  • Effect Sizes
  • Hamiltonian Monte Carlo
  • Markov Chain Monte Carlo
  • P-Values
  • Statistical Significance

ASJC Scopus subject areas

  • Information Systems
  • Software
  • Computer Graphics and Computer-Aided Design


  • Cite this

    Sakai, T. (2017). The probability that your hypothesis is correct, credible intervals, and effect sizes for IR evaluation. In SIGIR 2017 - Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 25-34). (SIGIR 2017 - Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval). Association for Computing Machinery, Inc. https://doi.org/10.1145/3077136.3080766