TrustNLP 2023: Third Workshop on Trustworthy Natural Language Processing
Link: https://trustnlpworkshop.github.io/

Call For Papers

Overview

Recent advances in Natural Language Processing, and the emergence of pretrained Large Language Models (LLMs) in particular, have made NLP systems omnipresent in many aspects of our everyday lives. While these emerging technologies have unquestionable potential to power a variety of innovative NLP and AI applications, they also pose a number of challenges regarding their safe and ethical use. To address such challenges, NLP researchers have formulated various objectives, e.g., making models more fair, safe, and privacy-preserving. However, these objectives are often considered separately, which is a major limitation since it is often important to understand the interplay and/or tension between them. The goal of this workshop is to move toward a more comprehensive notion of Trustworthy NLP by bringing together researchers working on these distinct yet related topics, as well as on their intersection. We invite papers that focus on developing models that are "explainable, fair, privacy-preserving, causal, and robust" (Trustworthy ML Initiative).

Topics of interest include (but are not limited to):
- Differential Privacy
- Fairness and Bias: Evaluation and Treatments
- Model Explainability and Interpretability
- Accountability, Safety, and Robustness
- Ethics
- Industry applications of Trustworthy NLP
- Causal Inference and Fair ML
- Secure, Faithful, Trustworthy Data/Language Generation
- Toxic Language Detection and Mitigation

We also welcome contributions that draw upon interdisciplinary knowledge to advance Trustworthy NLP. This may include working with, synthesizing, or incorporating knowledge across areas of expertise, sociopolitical systems, cultures, or norms.

Important Dates

Apr 24, 2023: Workshop Paper Due Date (Direct Submission)
May 10, 2023: Workshop Paper Due Date (Fast-Track)
May 22, 2023: Notification of Acceptance
June 2, 2023: Camera-ready papers due
July 14, 2023: Workshop

Submission Policy

All submissions will be double-blind peer-reviewed (with author names and affiliations removed) by the program committee and judged on their relevance to the workshop themes. Papers that have been accepted or are under review at other venues may be submitted to the workshop but will not be included in the proceedings. Submitted manuscripts must be 8 pages long for full papers and 4 pages long for short papers. Please follow ACL submission policies. Both full and short papers may have unlimited pages for references and appendices. Please note that at least one author of each accepted paper must register for the workshop and present the paper. Template files can be found here: https://aclrollingreview.org/cfp#long-papers. We also ask authors to include a limitations section and a broader impact statement, following guidelines from the main conference. Please submit to Softconf via this link: https://softconf.com/acl2023/trustnlp2023/

Fast-Track Submission

If your paper has been reviewed by ACL, EMNLP, EACL, or ARR and the average rating is higher than 2.5, it qualifies for fast-track submission. In the appendix, please include the reviews and a short statement discussing which parts of the paper have been revised.

Non-Archival Option

ACL workshops are traditionally archival. To allow dual submission of work, we also include a non-archival track. If accepted, these submissions will still be presented at the workshop. A reference to the paper will be hosted on the workshop website (if desired) but will not be included in the official proceedings.

Please submit through Softconf and indicate that this is a cross submission at the bottom of the submission form. You can also skip this step and inform us of your non-archival preference after the reviews.

Anonymity Period

We will follow NAACL's anonymity policy and require full anonymity until the time of acceptance.

Organizers

Kai-Wei Chang - UCLA, Amazon Visiting Academic
Yada Pruksachatkun - Infinitus Systems
Ninareh Mehrabi - Amazon Alexa AI
Aram Galstyan - USC, Amazon Visiting Academic
Jwala Dhamala - Amazon Alexa AI
Anaelia Ovalle - UCLA
Apurv Verma - Amazon Alexa AI
Yang Trista Cao - University of Maryland
Anoop Kumar - Amazon Alexa AI
Rahul Gupta - Amazon Alexa AI