
TLLM 2023 : 1st Workshop on Taming Large Language Models: Controllability in the era of Interactive Assistants


Link: https://ctrlnlg.github.io/
 
When Sep 12, 2023 - Sep 12, 2023
Where Prague, Czechia
Submission Deadline May 15, 2023
Notification Due Jul 21, 2023
Final Version Due Aug 14, 2023
Categories    NLP   computational linguistics   artificial intelligence
 

Call For Papers

First Call For Submissions

Welcome to the 1st Workshop on Taming Large Language Models: Controllability in the era of Interactive Assistants! This workshop aims to bring together scholars, researchers, and practitioners specializing in Natural Language Generation (NLG). The event will foster in-depth discussion of the challenges and prospects of content control in LLMs. Emphasizing the intersection of NLG research and the instruction-learning paradigm, the workshop will serve as a platform for collaboration and knowledge exchange. This hybrid workshop will be co-located with INLG 2023 (https://inlg2023.github.io/workshops.html) in Prague.

Important Dates

Submission deadline: June 15, 2023

Author notification: July 21, 2023

Camera-ready deadline: August 14, 2023

Workshop date: September 12, 2023

Submission Portal: https://softconf.com/n/tllm2023
Website: https://ctrlnlg.github.io/

**All deadlines are 11:59 pm, Anywhere on Earth (AoE).

Topics

We welcome submissions on one or more of the following topics:

Alignment: Investigating techniques to better align LLMs with human values and intentions, including reward modeling, human-in-the-loop systems, and quantifying alignment metrics. Understanding the objectives pursued by a model and aligning them with human preferences are key challenges. We encourage research on methods to increase alignment, such as prompt design and fine-tuning.

In-context Learning: Exploring the role of context in LLMs, including how to improve context understanding, manage context drift, and enhance context-aware responses. Also, investigating the use of in-context learning as a control mechanism.

Instruction-based Control: Comparing popular controlling mechanisms, including approaches such as logit manipulation, decoder mixing, and classifier guidance, amongst others, against the simpler instruction-based control.

Generality: Investigating controllable techniques that work across tasks and datasets.

Safety and Robustness: Assessing potential risks and vulnerabilities in LLMs, along with solutions such as adversarial training, safe exploration, and monitoring model behavior during deployment.

Controllability vs. Robustness: Developing methods to better understand LLMs' decision-making processes and how they act in grounded scenarios, including their reliance on implicit vs. explicit memory.

Scalability and efficiency: Investigating novel approaches for reducing computational requirements for achieving control in LLMs.

Real-world applications and case studies: Showcasing successful LLM deployments in various fields, such as healthcare, finance, education, and creative industries, along with lessons learned and future opportunities.


Submissions

We welcome reports of original research in two formats:

Long papers (8 pages + references)

Short papers (4 pages + references)


We encourage all authors to include relevant discussions of ethical considerations and impact in the body of the paper.

Submissions will be made via SoftConf/START: https://softconf.com/n/tllm2023

Submission Format

The proceedings will be published in the ACL Anthology.

All long, short, and abstract submissions must follow the two-column ACL format, which is available as an Overleaf template and is also downloadable directly (LaTeX and Word). Please refer to the SIGDIAL 2023 website for the most recent version of the templates.

Submissions must conform to the official ACL style guidelines, which are contained in these templates. Submissions must be electronic, in PDF format.

All submissions should be anonymized to facilitate double-blind reviewing.

Submissions that do not adhere to the author guidelines or ACL policies will be rejected without review.

Appendices should be placed in the main document after the references; they do not count towards the page limit.


Any questions regarding submissions can be sent to tamingllm-workshop@googlegroups.com.

Related Resources

LLM6G 2026   3rd Workshop on “The Impact of Large Language Models on 6G and Beyond”
Ei/Scopus-ITCC 2026   2026 6th International Conference on Information Technology and Cloud Computing (ITCC 2026)
SNLP 2026   7th International Conference on Semantic & Natural Language Processing
DEPLING 2023   International Conference on Dependency Linguistics
IJWMN 2026   International Journal of Wireless & Mobile Networks -- ERA Indexed, H index 38
DRIJ 2026   Dental Research: An International Journal
IJMA 2026   International Journal of Multimedia & Its Applications -- ERA Indexed, H index - 24
MECH 2026   8th International Conference on Mechanical Engineering
IJNLC 2026   International Journal on Natural Language Computing
SEAS 2026   15th International Conference on Software Engineering and Applications