
Insights 2024 : Fifth Workshop on Insights from Negative Results in NLP


Link: https://insights-workshop.github.io/
 
When Jun 16, 2024 - Jun 21, 2024
Where Mexico City, Mexico
Submission Deadline Mar 10, 2024
Notification Due Apr 14, 2024
Final Version Due Apr 24, 2024
Categories    NLP   computational linguistics   artificial intelligence
 

Call For Papers


Dear colleagues,

The Fifth Workshop on Insights from Negative Results in NLP, co-located with NAACL, June 16-21, 2024

First Call for Participation

Insights Website: (https://insights-workshop.github.io/)

Contact email: insights-workshop-organizers@googlegroups.com


* Overview

Publication of negative results is difficult in most fields, but in NLP the problem is exacerbated by the near-universal focus on improvements in benchmarks. This situation implicitly discourages hypothesis-driven research and turns the creation and fine-tuning of NLP models into an art rather than a science. Furthermore, it increases the time, effort, and carbon emissions spent on developing and tuning models, as researchers have no opportunity to learn what has already been tried and failed.

This workshop invites both practical and theoretical contributions reporting unexpected or negative results that have important implications for future research, highlight methodological issues with existing approaches, and/or point out pervasive misunderstandings or bad practices. In particular, the most successful NLP models currently rely on Transformer-based large language models (LLMs). To complement the many success stories, it would be insightful to see where and possibly why they fail. Papers on any NLP task are welcome: sequence labeling, question answering, inference, dialogue, machine translation - you name it.

A successful negative results paper would contribute one of the following:

** broadly applicable recommendations for training/fine-tuning/prompting, especially if X that didn’t work is something that many practitioners would think reasonable to try, and if the demonstration of X’s failure is accompanied by some explanation/hypothesis;
** ablation studies of components in previously proposed models, showing that their contributions are different from what was initially reported;
** datasets or probing tasks showing that previous approaches do not generalize to other domains or language phenomena;
** trivial baselines that work suspiciously well for a given task/dataset;
** cross-lingual studies showing that a technique X is only successful for a certain language or language family;
** experiments on (in)stability of the previously published results due to hardware, random initializations, preprocessing pipeline components, etc;
** theoretical arguments and/or proofs for why X should not be expected to work;
** demonstration of issues with data processing/collection/annotation pipelines, especially if they are widely used;
** demonstration of issues with evaluation metrics (e.g. accuracy, F1 or BLEU), which prevent their usage for fair comparison of methods;
** demonstration of issues with under-reporting of training details of pre-trained models, including test data contamination and invalid comparisons.

In 2024, we will invite the authors of accepted negative results papers to nominate the specific work that reported the original positive results. The goal is to organize joint discussion sessions, so that the community can learn as much as possible from each insightful failure.

* Important Dates

** Submission due: March 10, 2024
** Submission due for papers reviewed through ACL Rolling Review: April 7, 2024
** Notification of acceptance: April 14, 2024
** Camera-ready papers due: April 24, 2024
** Workshop: TBA, between June 21-22, 2024

* Submission

Submission is electronic, using the Softconf START conference management system.
Submission link: (https://softconf.com/naacl2024/Insights2024)

The workshop will accept short papers (up to 4 pages, excluding references), as well as 1-2 page non-archival abstract submissions for papers published elsewhere (e.g. in one of the main conferences or in non-NLP venues). The goal of this event is to stimulate a meaningful community-wide discussion of the deep issues in NLP methodology, and the authors of both types of submissions will be welcome to take part in our get-togethers.
The workshop will run its own review process, and papers can be submitted directly to the workshop by March 10, 2024. It is also possible to submit a paper accompanied by reviews from the ACL Rolling Review (ARR) system by April 7, 2024; the submission deadline for ARR papers follows the ARR calendar. Both research papers and abstracts must follow the ACL two-column format. Official style sheets:
https://github.com/acl-org/acl-style-files

Please do not modify these style files, nor should you use templates designed for other conferences. Submissions that do not conform to the required styles, including paper size, margin width, and font size restrictions, will be rejected without review. Please follow the formatting guidelines outlined here: https://acl-org.github.io/ACLPUB/formatting.html


* Multiple Submission Policy

The workshop cannot accept for publication or presentation work that will be (or has been) published elsewhere, or that has been or will be submitted to other meetings or publications whose review periods overlap with that of Insights. Any questions regarding submissions can be sent to insights-workshop-organizers@googlegroups.com.

If the paper has been rejected from another venue, the authors will have the option to provide the original reviews and the author response. The new reviewers will not have access to this information, but the organizers will be able to take into account the fact that the paper has already been revised and improved.

* Anonymity Period

The workshop will follow the ACL anonymity policy: https://www.aclweb.org/adminwiki/index.php/ACL_Anonymity_Policy

* Presentation

All accepted papers must be presented at the workshop to appear in the proceedings. Authors of accepted papers must notify the program chairs by the camera-ready deadline if they wish to withdraw the paper. At least one author of each accepted paper must register for the workshop.
Previous presentations of the work (e.g. preprints on arXiv.org) should be noted in a footnote in the camera-ready version (but not in the anonymized version of the paper).
The workshop will take place during NAACL 2024 (June 16-21, 2024). It will be hybrid, allowing for both in-person and virtual presentations.

* Organization Committee

** Shabnam Tafreshi, inQbator AI at eviCore Healthcare
** Arjun Reddy Akula, Google Research
** João Sedoc, New York University
** Anna Rogers, IT University of Copenhagen
** Aleksandr Drozd, RIKEN
** Anna Rumshisky, University of Massachusetts Lowell / Amazon Alexa

* Contact info
Any questions regarding the workshop can be sent to insights-workshop-organizers@googlegroups.com.


Please continue reading about Authorship, Citation and Comparison, Ethics Policy, Reproducibility, and Presentation on the call for papers page of our website: https://insights-workshop.github.io/2024/cfp/

