LT-EDI 2026: Sixth Workshop on Language Technology for Equality, Diversity and Inclusion
Link: https://sites.google.com/view/lt-edi-2026/home

Call For Papers
Sixth Workshop on Language Technology for Equality, Diversity and Inclusion (LT-EDI-2026) at the 64th Annual Meeting of the Association for Computational Linguistics (ACL) 2026
Place: San Diego, California, United States
Link: https://sites.google.com/view/lt-edi-2026/home
Tagline: Towards Fair and Inclusive Language Technologies for All.

Call for Papers:
Following the success of the first five editions of the LT-EDI workshop (LDK 2025, EACL 2024, RANLP 2023, ACL 2022, EACL 2021), LT-EDI-2026 aims to bring together researchers and practitioners working on NLP, LLMs, and other AI fields with social scientists and interdisciplinary researchers. The workshop invites theoretical, empirical, and applied papers from the Natural Language Processing (NLP), Artificial Intelligence (AI), and interdisciplinary communities, particularly those focusing on bias in language technologies.

Topics of interest include, but are not limited to:

Datasets and Benchmarks for Equality, Diversity, and Inclusion
- Construction and annotation of datasets for EDI, including benchmarks for bias detection and mitigation.
- Compilation of resources curated for fairness, inclusivity, and accessibility.
- Methodologies for annotating intersectional identities (gender, race, disability, religion, sexual orientation, etc.).

Bias Detection and Mitigation in LLMs
- Techniques for identifying, measuring, and mitigating gender, racial, disability, and other societal biases in NLP and LLMs.
- The impact of bias in deployed NLP/LLM systems.
- Gender-neutral modeling and representational fairness in LLMs.
- Detection and mitigation of intersectional biases, including gender, racial, gender identity, disability, and other societal biases.
- Advances in bias mitigation in large language models: in-context learning, prompt engineering, conditional text generation, and adversarial training.

Inclusive Language and Counter-Narratives for LLMs
- Algorithms and resources for inclusive language generation with LLMs.
- Counter-narrative modeling for combating toxicity, hate speech, and misinformation targeting marginalized communities.
- Dialogue systems and multi-agent approaches that align with inclusivity goals.
- Human-in-the-loop and participatory strategies for enhancing inclusiveness.

Multilingual and Multicultural Approaches for LLMs
- Multicultural and multilingual LLMs and approaches.
- Speech and language recognition for minority and under-resourced groups.
- Code-mixed and cross-lingual approaches for inclusive technologies.

Responsible, Explainable, and Trustworthy LLMs for EDI
- Detecting and mitigating hallucinations, misinformation, and toxicity in LLM systems.
- Explainable and trustworthy LLMs.
- Evaluation frameworks incorporating ethics, accountability, and transparency.

Important Dates:
- Direct paper submission deadline: March 5, 2026
- Pre-reviewed ARR commitment deadline: March 24, 2026
- Notification of acceptance: April 28, 2026
- Camera-ready paper due: May 12, 2026
- Pre-recorded video due (hard deadline): June 4, 2026
- Workshop dates: July 2-3, 2026