
TSAR 2023 : Second Workshop on Text Simplification, Accessibility and Readability


Link: https://tsar-workshop.github.io/
 
When Sep 7, 2023 - Sep 8, 2023
Where Varna, Bulgaria
Submission Deadline Jul 18, 2023
Notification Due Aug 5, 2023
Final Version Due Aug 25, 2023
Categories    NLP   computational linguistics   artificial intelligence
 

Call For Papers


Second Workshop on Text Simplification, Accessibility and Readability - TSAR 2023 @ RANLP

Jointly with the Recent Advances in Natural Language Processing Conference RANLP 2023

https://tsar-workshop.github.io/
http://ranlp.org/ranlp2023/
First Call for Papers
Important Dates

Submission deadline: 10 July 2023

Notification of acceptance: 5 August 2023

Camera-ready papers due: 25 August 2023

Workshop: 7 or 8 September 2023


The Web provides an abundance of knowledge and information that can reach large populations. However, the way in which a text is written (vocabulary, syntax, or text organization/structure) or presented can make it inaccessible to many people, especially non-native speakers, people with low literacy, and people with cognitive or linguistic impairments. The results of the Adult Literacy Survey (OECD, 2023) indicate that approximately 16.7% of the adult population (averaged over 24 highly-developed countries) requires lexical simplification of everyday texts, 50% syntactic simplification, and 89.4% conceptual simplification (Štajner, 2021).

Research on automatic text simplification (TS), textual accessibility, and readability thus has the potential to improve the social inclusion of marginalised populations. These related research areas have attracted increasing attention over the past ten years, as evidenced by the growing number of publications at NLP conferences: while only about 300 articles in Google Scholar mentioned TS in 2010, this number increased to about 600 in 2015 and exceeded 1,000 in 2020 (Štajner, 2021).

Recent research in automatic text simplification has mostly focused on methods derived from the deep learning paradigm (Glavaš and Štajner, 2015; Paetzold and Specia, 2016; Nisioi et al., 2017; Zhang and Lapata, 2017; Martin et al., 2020; Maddela et al., 2021; Sheang and Saggion, 2021). However, many important aspects of automatic text simplification still need the attention of our community: the design of appropriate evaluation metrics, the development of context-aware simplification solutions, the creation of appropriate language resources to support research and evaluation, the deployment of simplification in real environments for real users, the study of discourse factors in text simplification, the identification of factors affecting the readability of a text, etc. Overcoming these issues requires collaboration among CL/NLP researchers, machine learning and deep learning researchers, UI/UX and accessibility professionals, and representatives of public organisations (Štajner, 2021).

The TSAR workshop builds upon the recent success of several workshops that covered a subset of our topics of interest, including the Current Trends in Text Simplification (CTTS) workshop at SEPLN 2021, the SimpleText workshop at CLEF 2021, the TSAR-2022 workshop at EMNLP 2022, the recent Special Issue on Text Simplification, Accessibility, and Readability in Frontiers in AI, as well as the birds-of-a-feather event on Text Simplification at NAACL 2021 (over 50 participants).

The TSAR workshop aims to foster collaboration among all parties interested in making information more accessible to all people. We will discuss recent trends and developments in the area of automatic text simplification, text accessibility, automatic readability assessment, language resources and evaluation for text simplification, etc.

Topics

We invite contributions on the following topics (among others):

Lexical simplification;

Syntactic simplification;

Modular and end-to-end TS;

Sequence-to-sequence and zero-shot TS;

Controllable TS;

Text complexity assessment;

Complex word identification and lexical complexity prediction;

Corpora, lexical resources, and benchmarks for TS;

Evaluation of TS systems;

Domain specific TS (e.g. health, legal);

Other related topics (e.g. empirical and eye-tracking studies);

Assistive technologies for improving readability and comprehension including those going beyond text.



Submissions

We welcome two types of papers: long papers and short papers. Submissions should be made to: https://softconf.com/ranlp23/TSAR/

The papers should present novel research. Reviewing will be double-blind; all submissions must therefore be anonymised.

Format: Paper submissions must use the official RANLP 2023 Templates, which are available as an Overleaf template and also downloadable directly (Latex and Word). Authors may not modify these style files or use templates designed for other conferences.



Submissions that do not conform to the required styles, including paper size, margin width, and font size restrictions, will be rejected without review.


Long Papers: Long papers must describe substantial, original, completed, and unpublished work. Wherever appropriate, concrete evaluation and analysis should be included. Long papers may consist of up to eight (8) pages of content, plus unlimited pages of references. Final versions of long papers will be given one additional page of content (up to 9 pages), so that reviewers’ comments can be taken into account. Long papers will be presented orally or as posters as determined by the program committee. The decisions as to which papers will be presented orally and which as poster presentations will be based on the nature rather than the quality of the work. There will be no distinction in the proceedings between long papers presented orally and long papers presented as posters.



Short Papers: Short paper submissions must describe original and unpublished work. Please note that a short paper is not a shortened long paper. Instead, short papers should have a point that can be made in a few pages. Some kinds of short papers include: a small, focused contribution; a negative result; an opinion piece; an interesting application nugget. Short papers may consist of up to four (4) pages of content, plus unlimited pages of references. Final versions of short papers will be given one additional page of content (up to 5 pages), so that reviewers' comments can be taken into account. Short papers will be presented orally or as posters as determined by the program committee. While short papers will be distinguished from long papers in the proceedings, there will be no distinction in the proceedings between short papers presented orally and short papers presented as posters.

Demo papers: should be no more than two (2) pages, including references, and should describe implemented systems related to the topics of interest of the workshop. The paper should also include a link to a short screencast of the working software. In addition, authors of demo papers must be willing to present a demo of their system during TSAR 2023.
