CLEF: Cross-Language Evaluation Forum


CLEF 2024 - Conference and Labs of the Evaluation Forum: Information Access Evaluation meets Multilinguality, Multimodality, and Visualization
  When: Sep 9, 2024 - Sep 12, 2024 | Where: Grenoble, France | Deadline: May 17, 2024
CLEF 2022 - 13th Conference and Labs of the Evaluation Forum
  When: Sep 5, 2022 - Sep 8, 2022 | Where: Bologna, Italy | Deadline: May 2, 2022
CLEF 2021 - Conference and Labs of the Evaluation Forum
  When: Sep 21, 2021 - Sep 24, 2021 | Where: Bucharest, Romania | Deadline: Apr 3, 2021
CLEF 2020 - 11th Conference and Labs of the Evaluation Forum
  When: Sep 22, 2020 - Sep 25, 2020 | Where: Thessaloniki, Greece | Deadline: Apr 28, 2020
CLEF 2019 - Conference and Labs of the Evaluation Forum
  When: Sep 9, 2019 - Sep 12, 2019 | Where: Lugano, Switzerland | Deadline: May 10, 2019 (May 3, 2019)
CLEF 2017 - Conference and Labs of the Evaluation Forum: Information Access Evaluation meets Multilinguality, Multimodality and Interaction
  When: Sep 11, 2017 - Sep 14, 2017 | Where: Dublin, Ireland | Deadline: Apr 28, 2017
CLEF 2016 - Conference and Labs of the Evaluation Forum: Information Access Evaluation meets Multilinguality, Multimodality and Interaction
  When: Sep 5, 2016 - Sep 8, 2016 | Where: Évora, Portugal | Deadline: Apr 8, 2016
CLEF 2015 - Conference and Labs of the Evaluation Forum
  When: Sep 8, 2015 - Sep 11, 2015 | Where: Toulouse, France | Deadline: Apr 12, 2015
CLEF 2014 - Conference and Labs of the Evaluation Forum
  When: Sep 15, 2014 - Sep 18, 2014 | Where: Sheffield, UK | Deadline: Apr 28, 2014
CLEF 2013 - Conference and Labs of the Evaluation Forum
  When: Sep 23, 2013 - Sep 26, 2013 | Where: Valencia, Spain | Deadline: Apr 28, 2013
CLEF 2012 - Information Access Evaluation meets Multilinguality, Multimodality, and Visual Analytics
  When: Sep 17, 2012 - Sep 20, 2012 | Where: Rome, Italy | Deadline: Apr 30, 2012
CLEF 2011 - Conference on Multilingual and Multimodal Information Access Evaluation
  When: Sep 19, 2011 - Sep 22, 2011 | Where: Amsterdam, The Netherlands | Deadline: May 1, 2011
 
 

Present CFP: 2024

Important Dates (Time zone: Anywhere on Earth)

· Submission of Long, Short, Best of 2023 Labs Papers: 17 May, 2024 (Extended)
· Notification of Acceptance: 14 June, 2024
· Camera Ready Copy due: 21 June, 2024
· Conference: 9-12 September, 2024


Aim and Scope

The CLEF Conference addresses all aspects of Information Access in any modality and language. The CLEF conference combines the presentation of research papers with a series of workshops presenting the results of lab-based comparative evaluation benchmarks.

CLEF 2024 is the 15th CLEF conference, continuing the popular CLEF campaigns that have run since 2000 and contributed to the systematic evaluation of information access systems, primarily through experimentation on shared tasks. The CLEF conference has a clear focus on experimental IR as carried out within evaluation forums (e.g., CLEF Labs, TREC, NTCIR, FIRE, MediaEval, ROMIP, SemEval, and TAC), with special attention to the challenges of multimodality, multilinguality, and interactive search, also considering specific classes of users such as children, students, and impaired users in different tasks (e.g., academic, professional, or everyday-life). We invite paper submissions on significant new insights demonstrated on IR test collections, on analyses of IR test collections and evaluation measures, as well as on concrete proposals to push the boundaries of the Cranfield-style evaluation paradigm.

All submissions to the CLEF main conference will be reviewed on the basis of relevance, originality, importance, and clarity. CLEF welcomes papers that describe rigorous hypothesis testing regardless of whether the results are positive or negative. CLEF also welcomes analyses of past runs/results/data and new data collections. Methods are expected to be written up so that they are reproducible by others, and the logic of the research design should be clearly described in the paper. The conference proceedings will be published in the Springer Lecture Notes in Computer Science (LNCS) series.
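For readers new to the Cranfield-style evaluation the conference centers on, the sketch below shows the core step shared tasks rely on: scoring a ranked run against relevance judgments (qrels). It is a minimal illustrative example, not any lab's official scorer; the function names, metric choice (mean average precision), and toy data are ours.

```python
def average_precision(ranking, relevant):
    """Average precision of one ranked doc-id list against a set of relevant doc ids."""
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranking, start=1):
        if doc_id in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(run, qrels):
    """Mean of per-topic average precision.
    `run` maps topic id -> ranked doc ids; `qrels` maps topic id -> set of relevant doc ids."""
    scores = [average_precision(run[t], qrels[t]) for t in qrels if t in run]
    return sum(scores) / len(scores) if scores else 0.0

# Toy example: two topics, one ranked list each.
qrels = {"t1": {"d1", "d3"}, "t2": {"d2"}}
run = {"t1": ["d1", "d2", "d3"], "t2": ["d9", "d2"]}
print(mean_average_precision(run, qrels))  # (0.8333 + 0.5) / 2 = 0.6667
```

Real campaigns use standardized tooling over TREC-format run and qrels files, but the per-topic scoring loop is essentially this.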


Topics

Relevant topics for the CLEF 2024 Conference include but are not limited to:
· Information Access in any language or modality: information retrieval, image retrieval, question answering, search interfaces and design, infrastructures, etc.
· Analytics for Information Retrieval: theoretical and practical results in the analytics field that are specifically targeted for information access data analysis, data enrichment, etc.
· Reproducibility and replicability issues: in-depth analyses of past results/runs, both statistical and fine-grained.
· Language diversity: work on less-resourced languages.
· Models leveraging collaborative and social data and their evaluation.
· User studies either based on lab studies or crowdsourcing.
· Evaluation initiatives: conclusions, lessons learned, impact, and projection of any evaluation initiative after completing its cycle.
· Evaluation: methodologies, metrics, statistical and analytical tools, component-based evaluation, user groups and use cases, ground-truth creation, impact of multilingual/multicultural/multimodal differences, etc. (a minimal significance-testing sketch follows this list).
· Technology transfer: economic impact/sustainability of information access approaches, deployment and exploitation of systems, use cases, etc.
· Interactive and Conversational Information Retrieval evaluation: the interactive/conversational evaluation of information retrieval systems using user-centered methods, evaluation of novel search interfaces, novel interactive/conversational evaluation methods, simulation of interaction/conversation, etc.
· Specific application domains: information access and its evaluation in application domains such as cultural heritage, digital libraries, social media, health information, legal documents, patents, news, books, and in the form of text, audio and/or image data.
· New data collection: presentation of new data collection with potential high impact on future research, specific collections from companies or labs, multilingual collections.
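As flagged in the evaluation bullet above, statistical tooling is a recurring CLEF theme, and the aim-and-scope text explicitly welcomes rigorous hypothesis testing whatever the outcome. The sketch below is one common way to compare two systems on per-topic scores: a two-sided paired bootstrap test. It is an illustrative example under our own assumptions (toy scores, stdlib-only implementation), not a prescribed CLEF procedure.

```python
import random

def paired_bootstrap_p(scores_a, scores_b, trials=10_000, seed=0):
    """Two-sided paired bootstrap test on per-topic score differences.
    Returns an approximate p-value for the null hypothesis that the
    mean difference between systems A and B is zero."""
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    observed = sum(diffs) / len(diffs)
    # Shift the differences so their mean is zero: this simulates the null.
    centered = [d - observed for d in diffs]
    extreme = 0
    for _ in range(trials):
        sample = [rng.choice(centered) for _ in diffs]
        if abs(sum(sample) / len(sample)) >= abs(observed):
            extreme += 1
    return extreme / trials

# Toy per-topic scores (e.g., nDCG) for two systems over five topics.
a = [0.62, 0.55, 0.71, 0.40, 0.66]
b = [0.58, 0.50, 0.69, 0.43, 0.60]
print(paired_bootstrap_p(a, b))
```

A paired test is preferred here because the two systems are evaluated on the same topics, so per-topic differences remove topic difficulty as a confound.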


Format

Authors are invited to electronically submit original papers, which have not been published and are not under consideration elsewhere, using the LNCS proceedings format:
http://www.springer.com/it/computer-science/lncs/conference-proceedings-guidelines
Two types of papers are solicited:
· Long papers: 12 pages max (including references). Intended to report complete research work.
· Short papers: 6 pages max (including references). Position papers, new evaluation proposals, developments and applications, etc.


Review Process

Authors of long and short papers are asked to submit the following TWO versions of their manuscript:
Methodology version: This version does NOT report anything related to the results of the study. At this stage, manuscripts will be evaluated based on the importance of the problem addressed and the soundness of the methodology. Manuscripts can include an introduction and a description of the proposed methodology and datasets used. However, there should be no results or discussion sections, and the authors should also remove mentions of results from the remaining sections (e.g., abstract, introduction).
Experimental version: This is the full version of the manuscript that contains all the sections of the paper including the experiments and results.
Papers will be peer-reviewed by 3 members of the program committee in two stages. At the first stage, the members will review the methodology version of the manuscripts based on originality and methodology. At the second stage, the full version of the manuscripts that passed the first stage will be reviewed. Selection will be based on originality, clarity, and technical quality.
The deadline for the submission of both versions is 17 May 2024 (extended from the original 10 May; see Important Dates above).


Paper Submission

Papers should be submitted in PDF format to the following address:
https://easychair.org/my/conference?conf=clef2024
· Submit the methodology version to the "Conference - Methodology Part" track.
· Submit the experimental version to the "Conference - Experimental Part" track.


Organization

General Chairs
Lorraine Goeuriot, Université Grenoble Alpes (France)
Philippe Mulhem, Université Grenoble Alpes (France)
Georges Quénot, Université Grenoble Alpes (France)
Didier Schwab, Université Grenoble Alpes (France)

Program Chairs
Laure Soulier - Sorbonne Université (France)
Giorgio Maria Di Nunzio - University of Padua (Italy)

Evaluation Lab Chairs
Petra Galuscakova, University of Stavanger (Norway)
Alba García Seco de Herrera, University of Essex (UK)

Lab Mentorship Chairs
Liana Ermakova, Université de Bretagne Occidentale (France)
Florina Piroi, TU Wien (Austria)
 

Related Resources

LREC-COLING 2024   The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation
NB-REAL 2025   Nordic-Baltic Responsible Evaluation and Alignment of Language Models Workshop
HEAd 2025   11th International Conference on Higher Education Advances
ENASE 2025   20th International Conference on Evaluation of Novel Approaches to Software Engineering
CONEDU 2025   5th International Conference of Education
AMS 2024   Advanced Medical Sciences: An International Journal
EvalMG 2025   The First Workshop of Evaluation of Multi-Modal Generation @ COLING 2025
Learning 2024   Thirty-First International Conference on Learning
WMT-metrics 2024   WMT24 Metrics Task: Call for Participation
PEMWN 2024   Performance Evaluation & Modeling in Wired and Wireless Networks