
MAI-XAI 2025 : 2nd Workshop on Multimodal, Affective and Interactive Explainable AI, co-located with ECAI


Link: https://sites.google.com/view/mai-xai25
 
When Oct 25, 2025 - Oct 26, 2025
Where Bologna, Italy
Submission Deadline May 15, 2025
Notification Due Jul 24, 2025
Final Version Due Sep 11, 2025
Categories    NLP   computational linguistics   artificial intelligence
 

Call For Papers





* Apologies for cross-postings *


2nd Workshop on Multimodal, Affective and Interactive Explainable AI (MAI-XAI-25), co-located with the European Conference on Artificial Intelligence (ECAI), October 25-30, Bologna, Italy

https://sites.google.com/view/mai-xai25

The field of eXplainable Artificial Intelligence (XAI) is concerned with developing methods that make the decisions and predictions of machine-learned models accessible and understandable to different stakeholders, ranging from machine learning experts to lay users. An important goal is to design systems in a human-centered manner, ensuring that explanations are effective in enhancing users' understanding of the model and empower them to take appropriate action.

Yet, the current state of the art in XAI is limited in this respect. Many studies in the field evaluate technology intrinsically, using measures such as validity and proximity that tell us little about the actual effectiveness of explanations from an end-user perspective. Further, there is a lack of methods that allow explanations to be interactively tailored to the (evolving) needs of lay users, and of methods that measure how effectively the provided explanations enhance user understanding.

The MAI-XAI workshop focuses on improving the effectiveness of explanations by moving towards "natural" explanations that are more accessible to a non-technical audience. Natural explanations leverage multiple modalities (text, speech, visual, tabular, ...) to select the form of presentation that best suits the context and the explanatory needs of an explainee. XAI systems providing natural explanations might react to affective aspects and emotions, e.g. identifying dissatisfaction with an explanation and responding accordingly. Finally, they should be able to interact effectively with the user, moving from one-shot static explanations to dynamically adapted explanations informed by the user's reactions or feedback during the interaction.

We aim to offer researchers and practitioners the opportunity to identify promising new research directions in XAI along the above-mentioned lines, focusing on how to provide "natural explanations". Attendees are encouraged to present case studies of real-world applications where XAI has been successfully applied, emphasizing the practical benefits and the challenges encountered.

The topics of interest include (but are not limited to):

Multimodal XAI

■ XAI for multi-modal data retrieval, collection, augmentation, generation, and validation: From data explainability to understanding and mitigating data bias

■ XAI for Human-Computer Interaction (HCI): From explanatory user interfaces to interactive and interpretable machine learning approaches with human-in-the-loop and machine-in-the-loop approaches

■ Augmented reality for multi-modal XAI

■ XAI approaches leveraging application-specific domain knowledge: From concepts to large knowledge repositories (ontologies) and corpora

■ Design and validation of multi-modal explainers: From endowing explainable models with multi-modal explanation interfaces to measuring model explainability and evaluating the quality of XAI systems

■ Quantifying XAI: From defining metrics and methodologies to assessing the effectiveness of explanations in enhancing user understanding, reliance, and trust

■ Large knowledge bases and graphs that can be used for multi-modal explanation generation

■ Large language models and their generative power for multi-modal XAI

■ Proof-of-concepts and demonstrators of how to integrate effective and efficient XAI into real-world human decision-making processes

■ Ethical, Legal, Socio-Economic and Cultural (ELSEC) considerations in XAI: Examining ethical implications surrounding the use of high-risk AI applications, including potential biases and the responsible deployment of sustainable “green” AI in sensitive domains

Affective XAI

■ Explainable affective computing in healthcare, psychology, physiology, education, entertainment, and gaming

■ Privacy, fairness, and ethical considerations in affective computing

■ Multimodal (textual, visual, vocal, physiological) emotion recognition systems

■ User environments for the design of systems to better detect and classify affect

■ Sentiment analysis and explainability

■ Social robots and explainability

■ Emotion-aware recommender systems

■ Accuracy and explainability in emotion recognition

■ Machine learning using biometric data to classify biosignals

■ Virtual reality in affective computing

■ Human–Computer Interaction (HCI) and Human in the Loop (HITL) approaches in affective computing

Interactive XAI

■ Dialogue-based approaches to XAI

■ Use of multiple modalities in XAI systems

■ Approaches to dynamically adapt explainability in interaction with a user

■ XAI approaches that use a model of the partner to adapt explanations

■ XAI approaches for collaborative decision-making between humans and AI models

■ Methods to measure and evaluate the understanding of the users of a model

■ Methods to measure and evaluate the ability to use models effectively in downstream tasks

■ Interactive methods by which a system and a user can negotiate what is to be explained

■ Modelling the social functions and aspects of an explanation

■ Methods to identify users’ information and explainability needs

Papers submitted to ECAI that are under review for the conference cannot be submitted to the workshop. If a paper is rejected from ECAI, the authors can request that it be considered for the workshop by July 18th, via one of the emails on the Contact page.

Accepted manuscripts will be published in CEUR Workshop Proceedings (CEUR-WS.org). Papers must be written in English and prepared for double-blind review using the CEUR-WS template.

The following types of submissions are allowed:

● Regular/Long Papers (10 - 15 pages): describing novel and original technical contributions that enhance our understanding of multimodality, affect and interaction in XAI.

● Short Papers (5 - 9 pages): describing work in progress, a case study, …

Submissions should be made through Easychair: (https://easychair.org/conferences/?conf=maixai25).

Registering an abstract of your paper (around 100-300 words, in plain text) is mandatory in advance of the paper submission deadline, and you will be asked to provide additional information (such as keywords) at that time. Please do not leave things to the very last moment; you can resubmit any number of times until the submission deadline.

The workshop is planned as an in-person event. Each accepted paper will be assigned either an oral presentation slot or a combined poster/spotlight presentation slot.

Important Dates:

● Abstract registration: May 15th, 2025

● Paper submission: May 21st, 2025

● Acceptance/rejection notification: July 24th, 2025

● Camera-ready paper submission: September 11th, 2025

● Conference dates: October 25-30, 2025 (the MAI-XAI 25 Workshop will take place on October 25-26, 2025)

Organizers:

● Philipp Cimiano, Bielefeld University, Germany

● Fosca Giannotti, Scuola Normale Superiore, Pisa, Italy

● Tim Miller, The University of Queensland, Australia

● Barbara Hammer, Bielefeld University, Germany

● Alejandro Catalá Bolos, Universidade de Santiago de Compostela (USC), Spain

● Peter Flach, University of Bristol, UK

● Jose M. Alonso-Moral, Universidade de Santiago de Compostela (USC), Spain

Contact details:

● Jose M. Alonso-Moral, https://citius.gal/es/team/jose-maria-alonso-moral

● Philipp Cimiano, https://ekvv.uni-bielefeld.de/pers_publ/publ/PersonDetail.jsp?personId=15020699&lang=EN
