October 27 to November 1, 2013, Dagstuhl Seminar
Evaluation of information retrieval (IR) systems has a long tradition. However, the test-collection-based evaluation paradigm is of limited value for assessing today's IR applications, since it fails to address major aspects of the IR process. New evaluation methodologies are therefore needed.
This seminar aims to:
- Increase understanding of the central problems in evaluating information retrieval
- Foster cross-fertilization of ideas on evaluation approaches across the different IR evaluation communities
- Create new methodologies and approaches for solving existing problems
- Enhance the validity and reliability of future evaluation experiments
- In the long run, examine how to extract pertinent IR system design elements from the results of evaluation experiments
To attain these goals, each participant will be expected to identify one to five crucial issues in IR evaluation methodology. These perspectives will result in primarily theoretical presentations, supported by empirical examples from current studies. Based on these contributions, we will select a set of methodological issues for further development in smaller working groups. The outcomes of the seminar are expected to form the basis for one or more new evaluation frameworks and improved methodological solutions.