MAKE-exAI 2019: Machine Learning & Knowledge Extraction Workshop on explainable Artificial Intelligence
Link: https://hci-kdd.org/make-explainable-artificial-intelligence-2019/
Call For Papers
In line with the general theme of the CD-MAKE conference, augmenting human intelligence with artificial intelligence, and the motto "Science is to test crazy ideas; engineering is to bring these ideas into business", we foster cross-disciplinary and interdisciplinary work, including but not limited to:
- Novel methods, algorithms, tools and procedures for supporting explainability in AI/ML
- Proof-of-concepts and demonstrators of how to integrate explainable AI into workflows and industrial processes
- Frameworks, architectures, algorithms and tools to support post-hoc and ante-hoc explainability
- Work on causality in machine learning
- Theoretical approaches to explainability ("What makes a good explanation?")
- Philosophical approaches to explainability ("When is an explanation enough; is there a degree of saturation?")
- Towards argumentation theories of explanation and issues of cognition
- Comparison of human intelligence vs. artificial intelligence (HCI-KDD)
- Interactive machine learning with human(s)-in-the-loop (crowd intelligence)
- Explanatory user interfaces and human-computer interaction (HCI) for explainable AI
- Novel intelligent user interfaces and affective computing approaches
- Fairness, accountability and trust
- Ethical aspects, law, legal issues and social responsibility
- Business aspects of explainable AI
- Self-explanatory agents and decision support systems
- Explanation agents and recommender systems
- Combination of statistical learning approaches with large knowledge repositories (ontologies)