posted by organizer: Aholzinger

CfP Journal (SCI IF = 2.5) 2019: Springer/Nature BMC MIDM Explainable AI in Medical Informatics and Decision Support


Link: https://hci-kdd.org/special-issue-explainable-ai-medical-informatics-decision-making/
 
When N/A
Where N/A
Submission Deadline Mar 30, 2019
Categories    explainability   explainable ai   causality   transparent machine learning
 

Call For Papers

Special Collection in Springer/Nature BMC Medical Informatics and Decision Making
Full open access, SCI IF = 2.5

Explainable AI in Medical Informatics and Decision Support
Call for papers

Building on a successful workshop on explainable AI at the Cross Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE) 2018, we are launching this call for a special issue of BMC Medical Informatics and Decision Making, with the opportunity to present accepted papers at the next session on explainable AI during the CD-MAKE 2019 conference in Canterbury (Kent, UK) at the end of August 2019.

We want to inspire cross-domain experts interested in artificial intelligence and machine learning to stimulate research, engineering and evaluation in and around explainable AI - towards making machine decisions transparent, comprehensible, interpretable and thus explainable, as well as re-traceable and reproducible; the latter is a cornerstone of scientific research itself.

We foster cross-disciplinary and interdisciplinary work including but not limited to:

Novel methods, algorithms, tools for supporting explainable AI
Proof-of-concepts and demonstrators of how to integrate explainable AI into workflows
Frameworks, architectures, algorithms and tools to support post-hoc and ante-hoc explainability and causality machine learning
Theoretical approaches of explainability ("What is a good explanation?")
Towards argumentation theories of explanation and issues of cognition
Comparison of human intelligence and artificial intelligence (HCI & KDD)
Interactive machine learning with human(s)-in-the-loop (crowd intelligence)
Explanation User Interfaces and Human-Computer Interaction (HCI) for explainable AI
Novel Intelligent User Interfaces and affective computing approaches
Fairness, accountability and trust
Ethical aspects, law and social responsibility
Business aspects of explainable AI
Self-explanatory agents and decision support systems
Explanation agents and recommender systems
Combination of statistical learning approaches with large knowledge repositories (ontologies)

The grand goal of future explainable AI is to make results understandable and transparent, answering the central question: “Can we explain how and why a specific result was achieved by an algorithm?”

Submissions for this special issue are open until 30 March 2019. The special issue is overseen by Section Editor Andreas Holzinger.

