
FIRE 2009 : SIGIR 2009 Workshop on the Future of IR Evaluation


Link: http://staff.science.uva.nl/~kamps/ireval/
 
When Jul 23, 2009 - Jul 23, 2009
Where Boston, MA, USA
Submission Deadline May 18, 2009
Notification Due Jun 8, 2009
Final Version Due Jun 15, 2009
Categories    information retrieval
 

Call For Papers

----
SIGIR 2009 Workshop on the Future of IR Evaluation July 23, Boston
http://staff.science.uva.nl/~kamps/ireval/

Call for Papers

Evaluation is at the core of information retrieval: virtually all progress
owes directly or indirectly to test collections built within the so-called
Cranfield paradigm. In recent years, however, IR researchers have routinely
been pursuing tasks outside the traditional paradigm, taking a broader view
of tasks, users, and context. Content is evolving rapidly from traditional
static text to diverse forms of dynamic, collaborative, and multilingual
information sources. Industry, too, is embracing "operational" evaluation
based on the analysis of endless streams of queries and clicks.

We invite the submission of papers that think outside the box:

- Are you working on an interesting new retrieval task or aspect? Or
on its broader task or user context? Or on a complete system with a
novel interface? Or on interactive/adaptive search? Or ...?
Please explain why this is of interest, and what would be an
appropriate way of evaluating.

- Do you feel that the current evaluation tools fail to do justice to
your research? Is there a crucial aspect missing? Or are you
interested in specific, rare phenomena that have little impact on
the average scores? Or ...? Please explain why this is of interest,
and what would be an appropriate way of evaluating.

- Do you have concrete ideas on how to evaluate such a novel IR task? Or
ideas for new types of experimental or operational evaluation? Or
new measures or ways of re-using existing data? Or ...? Please
explain why this is of interest, and what would be an appropriate
way of evaluating.

The workshop brings together all stakeholders, ranging from those with novel
evaluation needs, such as a PhD candidate pursuing a new IR-related problem,
to senior IR evaluation experts. Desired outcomes are insight into how to
make IR evaluation more "realistic," and at least one concrete idea for a
retrieval track or task (at CLEF, INEX, NTCIR, TREC) that would not have
happened otherwise.

Help us shape the future of IR evaluation!

- Submit a short 2-page poster or position paper explaining your key
wishes or key points,

- and take an active part in the discussion at the Workshop.

The deadline is Monday, May 18, 2009; further submission details are available at
http://staff.science.uva.nl/~kamps/ireval/


Shlomo Geva, INEX & QUT, Australia
Jaap Kamps, INEX & University of Amsterdam, The Netherlands
Carol Peters, CLEF & ISTI-CNR, Italy
Tetsuya Sakai, NTCIR & Microsoft Research Asia, China
Andrew Trotman, INEX & University of Otago, New Zealand
Ellen Voorhees, TREC/TAC & NIST, USA


