We propose a new ranking evaluation measure for situations where multiple preference judgements are given for each item pair but may be noisy (i.e., some judgements are unreliable) and/or incomplete (i.e., some judgements are missing). While it is generally easier for assessors to conduct preference judgements than absolute judgements, it is often impractical to obtain preference judgements for all pairs of documents. However, this problem can be overcome if we can effectively utilise noisy and incomplete preference judgements, such as those obtainable through crowdsourcing. Our measure, η, is based on a simple probabilistic user model of the labellers, which assumes that each document is associated with a graded relevance score for a given query. We also consider situations where multiple preference probabilities, rather than preference labels, are given for each document pair. For example, in the absence of manual preference judgements, one might employ an ensemble of machine learning techniques to obtain such estimated probabilities. For this scenario, we propose another ranking evaluation measure called ηp. Through simulated experiments, we demonstrate that our proposed measures η and ηp can evaluate rankings more reliably than τb, a popular rank correlation measure.
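The abstract does not spell out the labeller model, but the setting it describes can be illustrated concretely. The following Python sketch simulates noisy and incomplete pairwise preference judgements from hypothetical graded relevance scores; the function name, noise rate, and missing-judgement probability are all assumptions chosen for illustration, not details taken from the paper.

```python
import random

def simulate_preference_labels(grades, n_labellers=5, noise=0.1,
                               p_missing=0.2, seed=0):
    """Simulate noisy, incomplete pairwise preference judgements.

    grades: dict mapping document id -> graded relevance score.
    Each simulated labeller prefers the higher-graded document, but
    flips the judgement with probability `noise` (unreliable labels)
    and skips the pair with probability `p_missing` (missing labels).
    Ties in the grades are broken at random.
    """
    rng = random.Random(seed)
    docs = sorted(grades)
    labels = {}  # (doc_i, doc_j) -> list of 1 (i preferred), -1, or None
    for a in range(len(docs)):
        for b in range(a + 1, len(docs)):
            i, j = docs[a], docs[b]
            votes = []
            for _ in range(n_labellers):
                if rng.random() < p_missing:      # incomplete: judgement missing
                    votes.append(None)
                    continue
                if grades[i] == grades[j]:        # tie: random true preference
                    true_pref = rng.choice([1, -1])
                else:
                    true_pref = 1 if grades[i] > grades[j] else -1
                if rng.random() < noise:          # noisy: labeller flips the label
                    true_pref = -true_pref
                votes.append(true_pref)
            labels[(i, j)] = votes
    return labels

# Example: four documents with graded relevance for one query.
prefs = simulate_preference_labels({"d1": 3, "d2": 2, "d3": 2, "d4": 0})
print(prefs[("d1", "d4")])  # mostly 1 (d1 preferred), with some noise and gaps
```

Averaging the non-missing votes for a pair gives an empirical preference probability, which is the kind of input the ηp variant is designed to handle.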