
TLLM 2023 : 1st Workshop on Taming Large Language Models: Controllability in the era of Interactive Assistants


Link: https://ctrlnlg.github.io/
 
When Sep 12, 2023 - Sep 12, 2023
Where Prague, Czechia
Submission Deadline May 15, 2023
Notification Due Jul 21, 2023
Final Version Due Aug 14, 2023
Categories: NLP, computational linguistics, artificial intelligence
 

Call For Papers

First Call For Submissions

Welcome to the 1st Workshop on Taming Large Language Models: Controllability in the era of Interactive Assistants! This workshop aims to bring together scholars, researchers, and practitioners specializing in Natural Language Generation (NLG). The event will foster in-depth discussion and exploration of the challenges and prospects of content control in LLMs. Emphasizing the intersection of NLG research and the instruction-learning paradigm, the workshop will serve as a platform for collaboration and knowledge exchange. This hybrid workshop will be co-located with INLG 2023 (https://inlg2023.github.io/workshops.html) in Prague.

Important Dates

Submission deadline: June 15, 2023

Author notification: July 21, 2023

Camera-ready deadline: August 14, 2023

Workshop date: September 12, 2023

Submission Portal: https://softconf.com/n/tllm2023
Website: https://ctrlnlg.github.io/

All deadlines are 11:59 pm Anywhere on Earth (AoE).

Topics

We welcome submissions on one or more of the following topics:

Alignment: Investigating techniques to better align LLMs with human values and intentions, including reward modeling, human-in-the-loop systems, and quantifying alignment metrics. Understanding the objectives a model pursues and aligning them with human preferences are key challenges. We encourage research on methods to increase alignment, such as prompt design and fine-tuning.

In-context Learning: Exploring the role of context in LLMs, including how to improve context understanding, manage context drift, and enhance context-aware responses. Also, investigating the use of in-context learning as a control mechanism.

Instruction-based Control: Comparing popular controlling mechanisms, including approaches such as logit manipulation, decoder mixing, and classifier guidance, amongst others, against the simpler instruction-based control.

Generality: Investigating controllable techniques that work across tasks and datasets.

Safety and Robustness: Assessing potential risks and vulnerabilities in LLMs, along with solutions such as adversarial training, safe exploration, and monitoring model behavior during deployment.

Controllability vs. Robustness: Developing methods to better understand LLMs' decision-making processes and how they act in grounded scenarios. Understanding their reliance on implicit vs. explicit memory.

Scalability and Efficiency: Investigating novel approaches for reducing the computational requirements of achieving control in LLMs.

Real-world applications and case studies: Showcasing successful LLM deployments in various fields, such as healthcare, finance, education, and creative industries, along with lessons learned and future opportunities.
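To make the comparison in the Instruction-based Control topic concrete, here is a minimal, illustrative sketch of logit manipulation, one of the decoding-time control mechanisms named above. The vocabulary, logits, and function names are hypothetical, not from any particular model or library:

```python
# Illustrative sketch: controlling generation by manipulating logits.
# Banned tokens get a score of -inf, so decoding can never select them.

def apply_logit_ban(logits, banned_ids):
    """Return a copy of `logits` with banned token ids set to -inf."""
    controlled = list(logits)
    for token_id in banned_ids:
        controlled[token_id] = float("-inf")
    return controlled

def greedy_pick(logits):
    """Pick the index of the highest-scoring token (greedy decoding)."""
    return max(range(len(logits)), key=lambda i: logits[i])

vocab = ["hello", "badword", "world"]   # toy vocabulary
logits = [1.0, 2.5, 2.0]                # "badword" would win unconstrained
banned = {1}                            # ban the undesired token

print(vocab[greedy_pick(logits)])                           # badword
print(vocab[greedy_pick(apply_logit_ban(logits, banned))])  # world
```

Instruction-based control, by contrast, would express the same constraint in the prompt (e.g. "do not use the word ...") and rely on the model to comply, which is exactly the trade-off the workshop invites papers to study.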


Submissions

We welcome reports of original research in two formats:

Long papers (8 pages + references)

Short papers (4 pages + references)


We encourage all authors to include relevant discussions of ethical considerations and impact in the body of the paper.

Submissions will be made via SoftConf/START: https://softconf.com/n/tllm2023

Submission Format

The proceedings will be published in the ACL Anthology.

All long, short, and abstract submissions must follow the two-column ACL format, which is available as an Overleaf template and can also be downloaded directly (LaTeX and Word). Please refer to the SIGDIAL 2023 website for the most recent version of the templates.

Submissions must conform to the official ACL style guidelines, which are contained in these templates. Submissions must be electronic, in PDF format.

All submissions should be anonymized to facilitate double-blind reviewing.

Submissions that do not adhere to the author guidelines or ACL policies will be rejected without review.

Appendices should be included in the main document after the references; they do not count toward the page limit.


Any questions regarding submissions can be sent to tamingllm-workshop@googlegroups.com
