exAI @ CD-MAKE 2020 : Explainable AI
Link: https://human-centered.ai/explainable-ai-2020/

Call For Papers
Full-digital (no travel). Deadline: May 6, 2020
In line with the general theme of the CD-MAKE conference of augmenting human intelligence with artificial intelligence, and the motto “Science is to test crazy ideas – Engineering is to bring these ideas into Business”, we foster cross-disciplinary and interdisciplinary work in order to bring together experts from different fields, e.g. computer science, psychology, sociology, philosophy, law, business, … experts who might otherwise never meet. This cross-domain integration and appraisal of different fields of science and industry shall provide an atmosphere that fosters different perspectives and opinions; it will offer a platform for novel crazy ideas and a fresh look at the methodologies for putting these ideas into business.

Topics include, but are not limited to (listed alphabetically, not prioritized):

- Acceptance (“How to ensure acceptance of AI/ML among end users?”)
- Accountability and responsibility (“Who is to blame if something goes wrong?”)
- Affective computing for successful human-AI interaction (human-robot interaction)
- Argumentation theories of explanations
- Artificial advice givers
- Bayesian rule lists
- Bias and fairness
- Causal learning, causal discovery and causal inference
- Causality and causability research
- Cognitive issues of explanation and understanding (“understanding understanding”)
- Combination of statistical learning approaches with large knowledge repositories (ontologies, terminologies)
- Comparison of human intelligence vs. artificial intelligence (HCI-KDD)
- Cyber security, cyber defense and the malicious use of adversarial examples
- Decision making and decision support systems (“Is a human-like decision good enough?”)
- Emotional intelligence (“Emotion AI”)
- Ethical aspects of AI in general and of human-AI interaction in particular
- Explanation agents and recommender systems
- Explanatory user interfaces and human-computer interaction (HCI) for explainable AI
- Fairness, accountability and trust (“How to ensure trust in AI?”)
- Frameworks, architectures, algorithms and tools to support post-hoc and ante-hoc explainability (see the sketch after this list)
- Graphical causal inference and graphical models for explanation and causality
- Ground truth
- Group recommender systems
- Human rights vs. robot rights
- Interactive machine learning with a human-in-the-loop
- Interactive machine learning with (many) humans-in-the-loop (crowd intelligence)
- Kandinsky Patterns experiments and extensions
- Legal aspects of AI/ML (“Who is to blame if an error occurs?”)
- Moral principles and moral dilemmas of current and future AI
- Novel intelligent user interfaces (e.g. affective mobile interfaces)
- Novel methods, algorithms, tools and procedures for supporting explainability in AI/ML
- Philosophical approaches to explainability (“When has enough been explained? Is there a degree of saturation?”)
- Proofs-of-concept and demonstrators of how to integrate explainable AI into real-world workflows and industrial processes
- Privacy, surveillance, control and agency
- Python for nerds (Python tricks of the trade relevant for explainable AI)
- Self-explanatory agents and decision support systems
- Social implications of AI (“what AI impacts”), e.g. labour trends, human-human interaction, machine-machine interaction
- Spartan approaches to explanation (“What is the simplest explanation?”)
- Theoretical approaches to explainability (“What makes a good explanation?”)
- Web- and mobile-based cooperative intelligent information systems and tools
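As a concrete illustration of the post-hoc explainability topic above, here is a minimal Python sketch computing permutation feature importances with scikit-learn: each feature is shuffled in turn, and the drop in held-out accuracy indicates how much the model relies on it. The dataset and model are illustrative placeholders only, not a prescription of the workshop.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Illustrative data and model; any fitted estimator would do here.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Post-hoc, model-agnostic explanation: permute one feature at a time
    # and measure how much the held-out score drops.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"feature {i}: {result.importances_mean[i]:.3f}"
              f" +/- {result.importances_std[i]:.3f}")

Such model-agnostic scores are only one of many explanation styles in scope; ante-hoc (interpretable-by-design) models, causal approaches and interactive explanation interfaces are equally welcome.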