[Colloq] [Thesis Proposal] Pavel Metrikov: Relevance Assessment (Un-)Reliability in Information Retrieval

Javed Aslam jaa at ccs.neu.edu
Thu Jun 20 17:18:41 EDT 2013


Thesis Proposal - Pavel Metrikov

Friday, June 21, 4pm
366WVH

Relevance Assessment (Un-)Reliability in Information Retrieval: Minimizing Negative Impact

Collecting relevance assessments is a central procedure in Information Retrieval. It is conducted either to (1) evaluate the performance of an existing search engine or (2) build and train a new one. While most popular evaluation measures and search engine training algorithms assume that relevance assessments are accurate and reliable, in practice this assumption is often violated. Whether intentionally or not, assessors may provide noisy and inconsistent relevance judgments, potentially leading to (1) incorrect conclusions about the performance of a search engine or (2) inefficient or suboptimal training of a search engine.

To address the problem above, we first (a) demonstrate how one can quantify the negative effect of assessor disagreement (including intra-assessor disagreement as a special case) on the ranking performance of a search engine. Beyond this theoretical result, we also propose practical strategies for (b) tuning existing evaluation measures to make them more robust to label noise and (c) incorporating a noise-reduction component into existing Learning-to-Rank algorithms.
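As a toy illustration of the evaluation side of this problem (not the proposal's actual method), the sketch below scores a fixed ranking with a standard measure, average precision, first under reliable labels and then under a simple hypothetical noise model in which an assessor flips each binary judgment with some probability. The ranking, the ground-truth labels, and the 20% flip rate are all assumptions chosen for the example.

```python
import random

def average_precision(relevance):
    """Average precision of a ranked list of binary relevance labels."""
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0

def flip_labels(relevance, noise_rate, rng):
    """Simulate an unreliable assessor who flips each judgment
    independently with probability noise_rate (a toy noise model)."""
    return [1 - r if rng.random() < noise_rate else r for r in relevance]

rng = random.Random(0)
# Hypothetical ground-truth judgments for one ranked result list.
true_labels = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
true_ap = average_precision(true_labels)

# Average the measured score over many simulated noisy assessors.
trials = 2000
noisy_ap = sum(
    average_precision(flip_labels(true_labels, 0.2, rng)) for _ in range(trials)
) / trials

print(f"AP with reliable labels:      {true_ap:.3f}")
print(f"Mean AP under 20% label noise: {noisy_ap:.3f}")
```

For a ranking that is near-optimal under the true labels, the noisy measurements systematically understate its quality, which is one concrete way "incorrect conclusions about the performance of a search engine" can arise.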

Jay Aslam (advisor)
Mirek Riedewald
David Smith
Igor Kuralenok (external member)
