XLLMRP 2025: 1st Workshop on the Application of LLM Explainability to Reasoning and Planning at COLM 2025
Link: https://xllm-reasoning-planning-workshop.github.io/
Call For Papers
1st Workshop on the Application of LLM Explainability to Reasoning and Planning at COLM 2025
Website: https://xllm-reasoning-planning-workshop.github.io/
Submission Deadline: June 23, 2025

We are thrilled to announce the First Workshop on the Application of LLM Explainability to Reasoning and Planning at COLM 2025.

Enabling large language models (LLMs) to reason (e.g., arithmetic, symbolic, and commonsense reasoning) and plan (e.g., path-finding, tool use, web navigation, and computer use) has been a popular research topic in recent years. Despite these exciting achievements, there are growing concerns about the safety and trustworthiness of such LLM applications, because much remains unknown about how LLMs achieve these capabilities and where they may fail. Meanwhile, LLM explainability (broadly, any research that explains or interprets LLMs) has also attracted increasing attention, but existing work has mostly focused on simplified tasks and rarely yields insights that can be directly applied to realistic reasoning and planning tasks. This discrepancy has consequently raised doubts about the practical value of LLM explainability research.

In this workshop, we aim to bring together researchers from various perspectives to discuss the potential and practical applications of model explainability to advancing LLM reasoning and planning. Specifically, the workshop welcomes submissions on the following topics (non-exclusively):

- Local explanations (e.g., feature attribution, textual explanations including chain-of-thought) of LLMs in reasoning and/or planning tasks;
- Global explanations (e.g., mechanistic interpretability) of LLMs in reasoning and/or planning tasks;
- Applications of explainability to enhance LLMs' effectiveness in reasoning and/or planning tasks;
- Applications of explainability to enhance LLMs' safety and trustworthiness in reasoning and/or planning tasks;
- User interface development driven by LLM explanations;
- Human-LLM collaboration and teaming driven by explanations; and
- Explainability-driven, automatic, or human-in-the-loop LLM evaluation.

We warmly invite researchers from both academia and industry with interests in LLM explainability, reasoning, and planning to participate.

IMPORTANT DATES
Submission deadline: June 23, 2025, 23:59 AoE
Acceptance notification: July 24, 2025

WEBSITE: https://xllm-reasoning-planning-workshop.github.io/

Google Group (join for the latest updates via email): https://groups.google.com/g/xllm-reasoning-planning-workshop

ORGANIZERS:
Daking Rai, George Mason University
Ziyu Yao, George Mason University
Hanjie Chen, Rice University
Mengnan Du, New Jersey Institute of Technology
Shi Feng, George Washington University
Q. Vera Liao, Microsoft Research / University of Michigan
Andreas Madsen, Guide Labs
Abulhair Saparov, Purdue University
Yilun Zhou, Salesforce Research

CONTACT: xllmreasoningplanningworkshop@gmail.com