posted by user: imDavide

EXTRAAMAS 2026 : 8th International Workshop on EXplainable, Trustworthy, and Responsible AI and Multi-Agent Systems


Link: https://extraamas.ehealth.hevs.ch/index.html
 
When: May 25-26, 2026
Where: Cyprus
Submission Deadline: Mar 1, 2026
Notification Due: Mar 25, 2026
Final Version Due: Jun 10, 2026
Categories: XAI, agentic AI, explainability, trustworthy AI
 

Call For Papers

8th International Workshop on EXplainable, Trustworthy, and Responsible AI and Multi-Agent Systems
(EXTRAAMAS2026)

in conjunction with AAMAS 2026,
Cyprus, 25-26 May 2026

#Important Dates
Paper submission: 01/03/2026
Notification of acceptance: 25/03/2026
Early registration deadline: TBA
Workshop: 25-26/05/2026
Camera-ready (Springer post-proceedings): 10/06/2026
Submission link: https://easychair.org/conferences/?conf=extraamas2026



The International Workshop on EXplainable, Trustworthy, and Responsible AI and Multi-Agent Systems (EXTRAAMAS) has run since 2019 and has become a well-established forum at the intersection of Explainable AI (XAI), Agentic AI and Multi-Agent Systems (MAS). EXTRAAMAS focuses on explainability for agentic systems operating in dynamic, multi-stakeholder environments—where agents plan, negotiate, coordinate, and reason about norms. The workshop emphasizes the shift beyond static, post-hoc explanations toward interactive, context-aware, and evaluation-driven approaches, supporting systems that can explain-to-decide rather than merely explain-after. In its 8th edition, EXTRAAMAS 2026 continues to strengthen both foundational and applied research through four tracks spanning neuro-symbolic and hybrid approaches, explainable negotiation and conflict resolution, interactive explainability and LLM-based agentic systems, and legal/ethical perspectives.
The workshop is structured around four thematic tracks covering foundational, applied, interactive, and cross-disciplinary perspectives on explainable and trustworthy agentic AI.

The four tracks for this year are:

#Track 1: Foundations of Explainable and Agentic AI (Symbolic, Sub-symbolic, and Hybrid Approaches)
This track focuses on foundational approaches to explainability for agentic AI systems, spanning symbolic, sub-symbolic, and hybrid (neuro-symbolic) models. It addresses how explanations can be embedded within the reasoning cycle of autonomous agents, supporting planning, learning, coordination, and decision-making in complex environments.
Topics of interest include (but are not limited to):

- Explainable machine learning and neural networks
- Symbolic knowledge representation, injection, and extraction
- Neuro-symbolic and hybrid reasoning architectures
- Causal and counterfactual explanation models
- Surrogate models and abstraction techniques
- Explainable planning and decision-making
- Multi-agent architectures supporting explainability
- Evaluation and benchmarking of foundational XAI methods

#Track 2: Explainable Interaction, Negotiation, and Collective Decision-Making in Multi-Agent Systems
This track addresses explainability in interactive multi-agent settings, including negotiation, coordination, argumentation, and collective decision-making. As agents increasingly operate in open, human-facing and multi-stakeholder environments, transparent and trustworthy interaction mechanisms become essential for understanding, trust, and effective collaboration.
Topics of interest include (but are not limited to):
- Explainable negotiation protocols and strategies
- Explainable conflict resolution and coordination mechanisms
- Argumentation-based explanations of decisions and outcomes
- Explainable recommendation systems and preference learning
- Trustworthy voting and collective choice mechanisms
- User and agent profiling for transparency and accountability
- Human- and agent-centered evaluation studies
- Applications in robotics, IoT, virtual assistants, and socio-technical systems

#Track 3: Interactive, Conversational, and LLM-Based Explainable Agentic Systems
This track focuses on interactive and user-in-the-loop explainability, emphasizing dialogue, conversational interfaces, and adaptive interaction. It explores explainability challenges arising from LLM-based, tool-using, and hybrid agentic systems, including issues of reliability, faithfulness, and evaluation.
Topics of interest include (but are not limited to):
- Interactive and conversational explanation systems
- Explanatory dialogue and mixed-initiative interaction
- Context modeling and user modeling for explainability
- Prompt engineering and explanation-aware prompting
- Explainability challenges in LLM-based and tool-using agents
- Reliability, faithfulness, and hallucination mitigation
- Methodologies for evaluating interactive explanations
- Responsible and trustworthy deployment of LLM-driven agents

#Track 4: Explainable, Trustworthy, and Governed AI: Legal, Ethical, and Societal Perspectives
This track focuses on the legal, ethical, and societal dimensions of explainable and trustworthy AI,
addressing governance, accountability, and compliance in autonomous and agent-based systems deployed in sensitive and regulated domains.
Topics of interest include (but are not limited to):
- Explainability in AI & Law and legal reasoning systems
- Compliance-by-design and regulatory frameworks (e.g., EU AI Act)
- Fairness, bias mitigation, and transparency
- Accountability, liability, and auditability of AI systems
- Nudging, deception, and ethical design choices
- Normative reasoning and machine ethics
- Culture-aware and value-sensitive AI systems


#Keynotes
Keynote 1 Title: Evaluating Explanations in Multi-Agent and Human-AI Systems
Speaker: Prof. Sandip Sen, University of Tulsa
Abstract: This keynote discusses current challenges and methodologies for evaluating explanations in multi-agent systems and human–AI collaboration, highlighting limitations of existing metrics and future research directions.

Keynote 2 Title: Explainability Challenges for Agentic AI and User Rights
Speaker: Dr. Rachele Carli, Umeå University
Abstract: This keynote addresses the challenges posed by explainable AI models for agentic systems operating in complex socio-technical settings, with a focus on user rights, integrity, and accountability.


#Workshop Chairs
Prof. Dr. Davide Calvaresi, HES-SO, Switzerland
research areas: Real-Time Multi-Agent Systems, Explainable AI, BCT, eHealth
mail: davide.calvaresi@hevs.ch

Dr. Amro Najjar, UNILU, Luxembourg
research areas: Multi-Agent Systems, Explainable AI, AI
mail: amro.najjar@uni.lu

Prof. Dr. Kary Främling, Umeå University & Aalto University, Sweden/Finland
research areas: Explainable AI, Artificial Intelligence, Machine Learning, IoT
mail: Kary.Framling@cs.umu.se

Prof. Dr. Andrea Omicini, University of Bologna, Italy
research areas: Artificial Intelligence, Multi-agent Systems, Software Engineering
mail: andrea.omicini@unibo.it


#Track Chairs
Dr. Giovanni Ciatto, University of Bologna, Italy – giovanni.ciatto@unibo.it
Prof. Reyhan Aydogan, Ozyegin University, Turkey – reyhan.aydogan@ozyegin.edu.tr
Rachele Carli, University of Bologna – rachele.carli2@unibo.it
Joris Hulstijn, University of Luxembourg – joris.hulstijn@uni.lu

#Publicity Chair
Elia Pacioni - HES-SO, Switzerland

#Advisory Board
Dr. Tim Miller, University of Melbourne
Prof. Dr. Leon van der Torre, UNILU
Prof. Dr. Virginia Dignum, Umeå University
Prof. Dr. Michael Ignaz Schumacher

#Primary Contacts
Prof. Dr. Davide Calvaresi - davide.calvaresi@hevs.ch
Dr. Amro Najjar - amro.najjar@list.lu
