
xAI ST Actionable XAI 2025 : xAI World Conference Special Track on Actionable Explainable AI


Link: https://xaiworldconference.com/2025/actionable-explainable-ai/
 
When Jul 9, 2025 - Jul 11, 2025
Where Istanbul, Turkey
Abstract Registration Due Feb 10, 2025
Submission Deadline Feb 15, 2025
Notification Due Mar 7, 2025
Final Version Due Mar 14, 2025
Categories artificial intelligence, explainable AI, machine learning, interdisciplinarity
 

Call For Papers

Call for Papers: xAI 2025, Special Track: Actionable Explainable AI

****

Abstract Due Date: February 10, 2025
Submission Due Date: February 15, 2025
Conference Dates: July 09-11, 2025
Conference Location: Istanbul, Turkey

Website: https://xaiworldconference.com/2025/actionable-explainable-ai/

****

Following the success of Explainable AI in generating faithful and understandable explanations of complex ML models, attention has increasingly turned to how the outcomes of Explainable AI can be used systematically to enable meaningful actions. These questions are studied within the subfield of Actionable XAI. Research questions relevant to this subfield include:
- what types of explanations are most helpful in enabling human experts to make decisions more efficiently and accurately,
- how to systematically improve the robustness and generalization ability of ML models, or align them with human decision making and norms, based on human feedback on explanations,
- how to enable meaningful actions on real-world systems via interpretable ML-based digital twins, and
- how to evaluate and improve the quality of actions derived from XAI in an objective and reproducible manner.

This special track will address both the technical and practical aspects of Actionable XAI. This includes the question of how to build highly informative explanations that form the basis for actionability, aiming for solutions that interoperate with existing explanation techniques, such as Shapley values, LRP, or counterfactuals, and with existing ML models. The special track will also cover real-world use cases where these actions lead to improved outcomes.
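To make the first of these techniques concrete: a minimal sketch of exact Shapley value attribution for a toy model, computed by enumerating all feature coalitions (the model, input, and baseline below are invented purely for illustration; practical tools approximate this exponential computation):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.
    Features outside a coalition are imputed from the baseline."""
    n = len(x)
    phi = [0.0] * n
    players = list(range(n))
    for i in players:
        others = [j for j in players if j != i]
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                # model output with coalition S, with and without feature i
                z_with = [x[j] if (j in S or j == i) else baseline[j] for j in players]
                z_without = [x[j] if j in S else baseline[j] for j in players]
                phi[i] += weight * (f(z_with) - f(z_without))
    return phi

# Toy linear model: here Shapley values reduce to w_j * (x_j - baseline_j)
w = [2.0, -1.0, 0.5]
f = lambda z: sum(wj * zj for wj, zj in zip(w, z))
phi = shapley_values(f, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
```

By construction the attributions sum to f(x) - f(baseline), which is the property that makes them a candidate basis for downstream actions.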


Topics include, but are not limited to:
- Structured explanation techniques (e.g. higher-order, hierarchical) designed for actionability
- Multifaceted explanation techniques (e.g. disentangled or concept-based) designed for actionability
- Explanation techniques based on optimization in input space (e.g. counterfactuals or prototypes) designed for actionability
- Hybrid methods combining multiple explanation paradigms to improve actionability further
- Attribution or attention-based techniques that help users take meaningful actions in data-rich environments
- Shapley-, LRP-, or attention-based XAI techniques for retrieving relevant features from gigapixel images
- Explanation-guided dimensionality reduction to facilitate taking action under high-throughput data or real-time constraints
- XAI-based techniques for aligning model decision making with ground truth provided by human annotators
- XAI methods, such as CAM and LRP, to support semantic segmentation from limited annotations
- Techniques that leverage user explanatory feedback to produce an improved ML model
- Explanation-driven pruning or retraining to robustify an ML model against spurious correlations and dataset shifts
- Counterfactual and attribution methods combined with digital twins to identify effective actions in real-world systems
- Counterfactual and attribution methods combined with reinforcement learning to produce effective real-world control policies
- Design of environments (e.g. simulated environments) for end-to-end evaluation of XAI actionability
- Utility-based metrics (e.g. added-value in a deployed setting) for end-to-end evaluation of XAI actionability
- Indirect metrics (e.g. explanation informativeness, action-response prediction accuracy) for component-wise evaluation of XAI actionability
- Datasets (with simulated environments) for evaluating actions derived from XAI explanations in a reproducible manner
- Application of actionable XAI in biomedicine, e.g. for acting on molecular pathways
- XAI in clinical practice, e.g. for proposing targeted therapies
- Application of actionable XAI in industry, e.g. for calibration in manufacturing processes
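To illustrate the counterfactual family of topics above: a minimal sketch of a counterfactual search that looks for a nearby input flipping a binary classifier's decision, via greedy coordinate moves (the logistic model, its weights, and the step size are invented for illustration; real counterfactual methods additionally constrain plausibility and sparsity):

```python
import math

def counterfactual(f, x, step=0.05, max_iter=1000):
    """Greedy coordinate search for a nearby input that flips the
    decision of a binary classifier f (f returns P(class 1))."""
    z = list(x)
    for _ in range(max_iter):
        if f(z) >= 0.5:          # decision flipped to class 1: done
            return z
        # take the single-coordinate move that most increases the score
        best, best_score = None, f(z)
        for j in range(len(z)):
            for d in (step, -step):
                cand = list(z)
                cand[j] += d
                if f(cand) > best_score:
                    best, best_score = cand, f(cand)
        if best is None:         # no move improves the score
            break
        z = best
    return z

# Toy logistic model (weights chosen for illustration)
w, b = [3.0, -2.0], -1.0
f = lambda z: 1 / (1 + math.exp(-(sum(wi * zi for wi, zi in zip(w, z)) + b)))

x = [0.0, 0.5]            # classified as class 0
cf = counterfactual(f, x) # nearby input classified as class 1
```

The difference cf - x is the explanation: the minimal-effort change that would alter the outcome, which is what makes counterfactuals directly actionable.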

Questions regarding the special track should be submitted to Grégoire Montavon (gregoire.montavon[at]fu-berlin.de).
