SIGIR 2009 Workshop on The Future of IR Evaluation
23 July 2009
Program
The program consists of invited keynotes, boasters/posters, and a final discussion on new tasks and tracks.
Invited Speakers
- Towards Good Evaluation of Individual Topics, Chris Buckley [Abstract, Slides, Handout]
- Evaluating IR in Situ, Susan Dumais [Abstract, Slides, Handout]
- User Models to Compare and Evaluate Web IR Metrics, Georges Dupret [Abstract, Slides, Handout]
- Richer Theories, Richer Experiments, Stephen Robertson [Abstract, Slides, Handout]
Accepted Papers
- Enhanced Web Retrieval Task, Sadek Ali, Mariano Consens
- Can we get rid of TREC assessors? Using Mechanical Turk for relevance assessment, Omar Alonso, Stefano Mizzaro
- Relative Significance is Insufficient: Baselines Matter Too, Timothy Armstrong, Justin Zobel, William Webber, Alistair Moffat
- A Model for Evaluation of Interactive Information Retrieval, Nicholas Belkin, Michael Cole, Jingjing Liu
- Accounting for stability of retrieval algorithms using risk-reward curves, Kevyn Collins-Thompson
- Towards Information Retrieval Evaluation over Web Archives, Miguel Costa, Mário Silva
- Evaluating Network-Aware Retrieval in Social Networks, Tom Crecelius, Ralf Schenkel
- Toward Automated Component-Level Evaluation, Allan Hanbury, Henning Müller
- New methods for creating testfiles: Tuning enterprise search with C-TEST, David Hawking, Paul Thomas, Tom Gedeon, Tom Rowlands, Tim Jones
- A Virtual Evaluation Forum for Cross Language Link Discovery, Wei Che (Darren) Huang, Andrew Trotman, Shlomo Geva
- On the Evaluation of the Quality of Relevance Assessments Collected through Crowdsourcing, Gabriella Kazai, Natasa Milic-Frayling
- Building Pseudo-Desktop Collections, Jinyoung Kim, Bruce Croft
- Evaluating Collaborative Filtering Over Time, Neal Lathia, Stephen Hailes, Licia Capra
- How long can you wait for your QA system?, Fernando Llopis, Alberto Escapa, Antonio Ferrández, Sergio Navarro, Elisa Noguera
- Building a Test Collection for Evaluating Search Result Diversity: A Preliminary Study, Hua Liu, Ruihua Song, Jian-Yun Nie, Ji-Rong Wen
- Stakeholders and their respective costs-benefits in IR evaluation, Cecile Paris, Nathalie Colineau, Paul Thomas, Ross Wilkinson
- A Plan for Making Information Retrieval Evaluation Synonymous with Human Performance Prediction, Mark Smucker
- Are Evaluation Metrics Identical With Binary Judgements?, Milad Shokouhi, Emine Yilmaz, Nick Craswell, Stephen Robertson
- Queries without Clicks: Successful or Failed Searches?, Sofia Stamou, Efthimis Efthimiadis
- CiteEval for Evaluating Personalized Social Web Search, Zhen Yue, Abhay Harpale, Daqing He, Jonathan Grady, Yiling Lin, Jon Walker, Siddharth Gopal, Yiming Yang
Final Panel
- Charles Clarke
- David Evans
- Donna Harman
- Diane Kelly
Proceedings
The final proceedings are here.
SCHEDULE
- June 15, 2009: Deadline for Paper Submissions. Prepare your 2-page PDF using the ACM format and submit online using EasyChair.
- July 2, 2009: Notification of Acceptance. Details of accepted papers published online.
- July 8, 2009: Deadline for Camera-Ready Copies.
- July 23, 2009: SIGIR 2009 Workshop on the Future of IR Evaluation.
CREDITS
This workshop will be held as part of the 32nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Boston, 2009.
Information on Boston can be found on Wikipedia.