Abstract
IR tasks have diversified: human assessments of items such as social media posts can be highly subjective, in which case it becomes necessary to hire many assessors per item to reflect their diverse views. For example, the value of a tweet for a given purpose may be judged by (say) ten assessors, and their ratings could be summed to define its gain value for computing a graded-relevance evaluation measure. In the present study, we propose a simple variant of this approach, which takes into account the fact that some items receive unanimous ratings while others are more controversial. We generate simulated ratings based on data from a real social-media-based IR task to examine the effect of our unanimity-aware approach on system rankings and on statistical significance. Our results show that incorporating unanimity can affect statistical significance test results even when its impact on the gain value is kept to a minimum. Moreover, since our simulated ratings do not consider the correlation present in the assessors' actual ratings, our experiments probably underestimate the effect of introducing unanimity into evaluation. Hence, if researchers accept that unanimous votes should be valued more highly than controversial ones, then our proposed approach may be worth incorporating.
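The contrast between the summed-gain baseline and a unanimity-aware variant can be sketched in a few lines. The abstract does not specify the paper's exact formula, so the `unanimity_aware_gain` function below is a hypothetical illustration: it scales the summed gain by the fraction of assessors who agree on the majority rating, so unanimous items keep their full gain while controversial ones are discounted.

```python
from collections import Counter

def summed_gain(ratings):
    """Baseline gain value: the sum of per-assessor ratings."""
    return sum(ratings)

def unanimity_aware_gain(ratings):
    """Hypothetical unanimity-aware gain (illustrative only, not the
    paper's formula): scale the summed gain by the fraction of
    assessors who gave the most common rating. The factor is 1.0 for
    unanimous items and shrinks as ratings disagree."""
    if not ratings:
        return 0.0
    majority_count = Counter(ratings).most_common(1)[0][1]
    agreement = majority_count / len(ratings)
    return sum(ratings) * agreement

# A unanimous item keeps its full summed gain, while a controversial
# item with the same sum is discounted.
print(summed_gain([2, 2, 2]), unanimity_aware_gain([2, 2, 2]))
print(summed_gain([2, 2, 0]), unanimity_aware_gain([2, 2, 0]))
```

The discounted gains would then feed into a graded-relevance measure in place of the plain sums, which is how such a variant could shift system rankings and significance-test outcomes without greatly changing individual gain values.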
| Original language | English |
| --- | --- |
| Pages (from-to) | 39-42 |
| Number of pages | 4 |
| Journal | CEUR Workshop Proceedings |
| Volume | 2008 |
| Publication status | Published - 2017 Jan 1 |
| Event | 8th International Workshop on Evaluating Information Access, EVIA 2017 - Tokyo, Japan. Duration: 2017 Dec 5 → … |
Keywords
- Effect sizes
- Evaluation measures
- Inter-assessor agreement
- P-values
- Social media
- Statistical significance
ASJC Scopus subject areas
- Computer Science (all)