
HumEval 2023 : Third Workshop on Human Evaluation of NLP Systems


Link: https://humeval.github.io/
 
When Sep 7, 2023 - Sep 8, 2023
Where Varna, Bulgaria
Submission Deadline Jul 10, 2023
Notification Due Aug 5, 2023
Final Version Due Aug 25, 2023
Categories    NLP   computational linguistics   artificial intelligence
 

Call For Papers


Third Workshop on Human Evaluation of NLP Systems (HumEval’23)
###############################################################

https://humeval.github.io/

RANLP’23, Varna, Bulgaria, 7 or 8 September 2023


First Call for Papers
++++++++++++++++++++++

The Third Workshop on Human Evaluation of NLP Systems (HumEval’23) invites the submission of long and short papers on substantial, original, and unpublished research on all aspects of human evaluation of NLP systems, with a focus on systems that produce language as output. We welcome work on any quality criterion relevant to NLP, on both intrinsic evaluation (which assesses systems and outputs directly) and extrinsic evaluation (which assesses systems and outputs indirectly, in terms of their impact on an external task or system), and on quantitative as well as qualitative methods, whether score-based (discrete or continuous scores) or annotation-based (marking, highlighting).


Important dates
----------------

Workshop paper submission deadline: 10 July 2023
Workshop paper acceptance notification: 5 August 2023
Workshop paper camera-ready versions: 25 August 2023
Workshop camera-ready proceedings ready: 31 August 2023

All deadlines are 23:59 UTC-12.



Topics
-------

We invite papers on topics including, but not limited to, the following:

Experimental design and methods for human evaluations
Reproducibility of human evaluations
Work on inter-evaluator and intra-evaluator agreement
Ethical considerations in human evaluation of computational systems
Quality assurance for human evaluation
Crowdsourcing for human evaluation
Issues in meta-evaluation of automatic metrics by correlation with human evaluations
Alternative forms of meta-evaluation and validation of human evaluations
Comparability of different human evaluations
Methods for assessing the quality and the reliability of human evaluations
Role of human evaluation in the context of Responsible and Accountable AI

We welcome work from any subfield of NLP (and ML/AI more generally), with a particular focus on evaluation of systems that produce language as output.


ReproNLP shared task
---------------------

The workshop will also host a shared task on Reproducibility of Evaluations in NLP (ReproNLP) -- more details coming soon.


Papers
------

Long papers
- - - - - -

Long papers must describe substantial, original, completed and unpublished work. Wherever appropriate, concrete evaluation and analysis should be included. Long papers may consist of up to eight (8) pages of content, plus unlimited pages of references. Final versions of long papers will be given one additional page of content (up to 9 pages) so that reviewers’ comments can be taken into account. Long papers will be presented orally or as posters as determined by the programme committee. Decisions as to which papers will be presented orally and which as posters will be based on the nature rather than the quality of the work. There will be no distinction in the proceedings between long papers presented orally and as posters.

Short papers
- - - - - - -

Short paper submissions must describe original and unpublished work. Short papers should have a point that can be made in a few pages; examples include a focused contribution, a negative result, an opinion piece, an interesting application nugget, or a small set of interesting results. Short papers may consist of up to four (4) pages of content, plus unlimited pages of references. Final versions of short papers will be given one additional page of content (up to 5 pages) so that reviewers’ comments can be taken into account. Short papers will be presented orally or as posters as determined by the programme committee. While short papers will be distinguished from long papers in the proceedings, there will be no distinction in the proceedings between short papers presented orally and as posters.

Multiple submission policy
---------------------------

HumEval’23 allows multiple submissions. However, if a submission has already been, or is planned to be, submitted to another event, this must be clearly stated in the submission form.


Submission procedure and templates
-----------------------------------

To submit, go directly to the workshop page on the Softconf START system: https://softconf.com/ranlp23/HumEval/

Papers should follow the format of the main conference, as described on the Submissions page of the RANLP website: http://ranlp.org/ranlp2023/index.php/submissions/


Organisers
-----------

Anya Belz, ADAPT Centre, Dublin City University, Ireland
Maja Popović, ADAPT Centre, Dublin City University, Ireland
Ehud Reiter, University of Aberdeen, UK
João Sedoc, New York University, USA
Craig Thomson, University of Aberdeen, UK

For questions and comments regarding the workshop please contact the organisers at humeval.ws@gmail.com.
