### Abstract

Using classical statistical significance tests, researchers can only discuss P(D⁺|H), the probability of observing the data D at hand or something more extreme under the assumption that the hypothesis H is true (i.e., the p-value). But what we usually want is P(H|D), the probability that a hypothesis is true given the data. If we use Bayesian statistics with state-of-the-art Markov Chain Monte Carlo (MCMC) methods for obtaining posterior distributions, this is no longer a problem. That is, instead of the classical p-values and 95% confidence intervals, which are often misinterpreted as "the probability that the hypothesis is (in)correct" and "the probability that the true parameter value falls within the interval is 95%," respectively, we can easily obtain P(H|D) and credible intervals, which represent exactly the above. Moreover, with Bayesian tests, we can easily handle virtually any hypothesis, not just "equality of means," and obtain an Expected A Posteriori (EAP) value of any statistic we are interested in. We provide simple tools to encourage the IR community to take up paired and unpaired Bayesian tests for comparing two systems. Using a variety of TREC and NTCIR data, we compare P(H|D) with p-values, credible intervals with confidence intervals, and Bayesian EAP effect sizes with classical ones. Our results show that (a) p-values and confidence intervals can respectively be regarded as approximations of what we really want, namely, P(H|D) and credible intervals; and (b) sample effect sizes from classical significance tests can differ considerably from the Bayesian EAP effect sizes, which suggests that the former can be poor estimates of population effect sizes. For both paired and unpaired tests, we propose that the IR community report the EAP, the credible interval, and the probability of the hypothesis being true, not only for the raw difference in means but also for the effect size in terms of Glass's δ.
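To make the abstract's proposal concrete, here is a minimal sketch (not the paper's actual tool) of a paired Bayesian comparison of two systems: a random-walk Metropolis sampler over a normal model of per-topic score differences, from which the EAP of the mean difference, a 95% credible interval, P(H|D) for H: "System 1 outperforms System 2," and an EAP standardized effect size are read off the posterior draws. The synthetic data, flat priors, and proposal scales are all illustrative assumptions, and the effect size here divides by the SD of the differences rather than a control group's SD as in Glass's δ proper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-topic score differences between two IR systems
# (synthetic data for illustration; not from the paper).
d = rng.normal(loc=0.02, scale=0.05, size=50)

def log_posterior(mu, log_sigma, data):
    """Log posterior of a normal model with flat priors on mu and log(sigma)."""
    sigma = np.exp(log_sigma)
    return -len(data) * log_sigma - np.sum((data - mu) ** 2) / (2 * sigma ** 2)

# Random-walk Metropolis sampling of (mu, log_sigma)
n_iter, burn_in = 20000, 5000
mu, log_sigma = 0.0, np.log(np.std(d))
lp = log_posterior(mu, log_sigma, d)
samples = []
for _ in range(n_iter):
    mu_prop = mu + rng.normal(scale=0.01)
    ls_prop = log_sigma + rng.normal(scale=0.05)
    lp_prop = log_posterior(mu_prop, ls_prop, d)
    if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis accept step
        mu, log_sigma, lp = mu_prop, ls_prop, lp_prop
    samples.append((mu, np.exp(log_sigma)))

mus, sigmas = np.array(samples[burn_in:]).T

eap_mu = mus.mean()                    # EAP of the mean difference
ci = np.percentile(mus, [2.5, 97.5])   # 95% credible interval
p_h_given_d = (mus > 0).mean()         # P(H|D) for H: "System 1 beats System 2"
eap_delta = (mus / sigmas).mean()      # EAP of a standardized effect size

print(eap_mu, ci, p_h_given_d, eap_delta)
```

Unlike a p-value, each reported quantity is a direct posterior summary: P(H|D) is simply the fraction of posterior draws in which the hypothesis holds.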

Original language | English
---|---
Title of host publication | SIGIR 2017 - Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval
Publisher | Association for Computing Machinery, Inc
Pages | 25-34
Number of pages | 10
ISBN (Electronic) | 9781450350228
DOIs | https://doi.org/10.1145/3077136.3080766
Publication status | Published - 2017 Aug 7
Event | 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2017 - Tokyo, Shinjuku, Japan. Duration: 2017 Aug 7 → 2017 Aug 11

### Publication series

Name | SIGIR 2017 - Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval
---|---

### Other

Other | 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2017
---|---
Country | Japan
City | Tokyo, Shinjuku
Period | 2017 Aug 7 → 2017 Aug 11

### Keywords

- Bayesian Hypothesis Tests
- Confidence Intervals
- Credible Intervals
- Effect Sizes
- Hamiltonian Monte Carlo
- Markov Chain Monte Carlo
- P-Values
- Statistical Significance

### ASJC Scopus subject areas

- Information Systems
- Software
- Computer Graphics and Computer-Aided Design

## Fingerprint

Research topics of 'The probability that your hypothesis is correct, credible intervals, and effect sizes for IR evaluation'.

## Cite this

*The probability that your hypothesis is correct, credible intervals, and effect sizes for IR evaluation*. In *SIGIR 2017 - Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval* (pp. 25-34). Association for Computing Machinery, Inc. https://doi.org/10.1145/3077136.3080766