
EvalNLGEval 2020 : 1st Workshop on Evaluating NLG Evaluation, co-located with INLG


Link: https://evalnlg-workshop.github.io/
 
When: Dec 18, 2020
Where: Dublin, Ireland (online conference)
Submission Deadline: Oct 15, 2020
Notification Due: Nov 15, 2020
Final Version Due: Nov 29, 2020
Categories: natural language generation, natural language processing
 

Call For Papers

Workshop overview:
----------------------------------------------------------------------
This workshop is intended as a discussion platform on the status and the future of the evaluation of Natural Language Generation systems. Among other topics, we will discuss current evaluation quality, human versus automated metrics, and the development of shared tasks for NLG evaluation. The workshop also involves an ‘unshared task’, where participants are invited to experiment with evaluation data from earlier shared tasks.

Important dates:
----------------------------------------------------------------------
Submissions due - October 15, 2020
Notification of acceptance - November 15, 2020
Camera ready papers due - November 29, 2020
Workshop: December 18, 2020

Papers:
----------------------------------------------------------------------
We welcome papers ranging from commentary and meta-evaluation of existing evaluation strategies to proposals for new metrics. We place particular emphasis on the methodological and linguistic aspects of evaluation. We invite papers on any topic related to the evaluation of NLG systems, including (but not limited to):
Qualitative studies, definitions of evaluation metrics (e.g., readability, fluency, semantic correctness)
Crowdsourcing strategies, qualitative tests for crowdsourcing (How to elucidate evaluation metrics?)
Looking at individual differences and cognitive biases in human evaluation (expert vs. non-expert, L1 vs L2 speakers)
Best practices for system evaluations (How does your lab choose models?)
Qualitative study/error analysis approaches
Demo: Systems that make the evaluation easier
Comparison of metrics across different NLG tasks (captioning, data2text, story generation, summarization…) or different languages (with a focus on low-resource languages)
Evaluation surveys
Position papers and commentary on trends in evaluation

We encourage the submission of “task proposals”, where authors can propose shared tasks for next year’s edition of the workshop.

Unshared Task:
----------------------------------------------------------------------
This year’s edition also features an unshared task: rather than working towards a fixed goal, we encourage participants to use a shared collection of datasets for any evaluation-related purpose, for example comparing a new evaluation method with the existing human ratings, or carrying out a subset analysis. This allows us to put the results from previous shared tasks in perspective and helps us develop better evaluation metrics for future shared tasks. Working on the same datasets also allows for more focused discussions at the workshop.

Datasets for this year’s edition are existing datasets with system outputs and human ratings. Participants may use any of these for their unshared task submission:
E2E NLG Challenge (http://www.macs.hw.ac.uk/InteractionLab/E2E/)
WebNLG Challenge 2017 (https://webnlg-challenge.loria.fr/challenge_2017/)
Surface Realization Shared Task (SRST) 2019 (http://taln.upf.edu/pages/msr2019-ws/SRST.html)
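To illustrate the kind of analysis the unshared task invites, the following minimal Python sketch correlates a toy automatic metric with released human ratings. The file name `e2e_human_ratings.csv` and the columns `output`, `reference`, and `human_score` are hypothetical placeholders; the actual datasets come in their own formats and rating scales, and a real submission would substitute a proper metric and the real data layout.

```python
"""Minimal sketch of an unshared-task analysis: correlate a simple automatic
metric with human ratings from a previous shared task.

Assumptions (hypothetical, for illustration only): the data has been exported
to `e2e_human_ratings.csv` with columns `output`, `reference`, `human_score`.
"""

import csv
from scipy.stats import spearmanr  # pip install scipy


def token_overlap(candidate: str, reference: str) -> float:
    """Toy metric: fraction of reference tokens that also appear in the candidate."""
    cand_tokens = set(candidate.lower().split())
    ref_tokens = reference.lower().split()
    if not ref_tokens:
        return 0.0
    return sum(tok in cand_tokens for tok in ref_tokens) / len(ref_tokens)


metric_scores, human_scores = [], []
with open("e2e_human_ratings.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        metric_scores.append(token_overlap(row["output"], row["reference"]))
        human_scores.append(float(row["human_score"]))

# Segment-level Spearman correlation between the toy metric and human ratings.
rho, p_value = spearmanr(metric_scores, human_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g}, n = {len(human_scores)})")
```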

Submission Formats:
----------------------------------------------------------------------
Archival papers (up to 8 pages excluding references; shorter submissions are also welcome)
Non-archival abstracts (1-2 pages) of on-topic papers accepted elsewhere or under submission at the main INLG 2020 conference
Demo papers (1-2 pages)

Organizers:
----------------------------------------------------------------------
Shubham Agarwal
Ondrej Dusek
Sebastian Gehrmann
Dimitra Gkatzia
Ioannis Konstas
Emiel van Miltenburg
Sashank Santhanam
Samira Shaikh

Contact: evalnlg.inlg@gmail.com
