
SemEx 2019 : 1st Workshop on Semantic Explainability


When Jan 30, 2019 - Jan 30, 2019
Where Newport Beach, California
Submission Deadline Nov 26, 2018
Notification Due Dec 10, 2018
Final Version Due Dec 17, 2018
Categories    semantic web   explainability   artificial intelligence   machine learning

Call For Papers

1st Workshop on Semantic Explainability (SemEx 2019)

in conjunction with
The 13th IEEE International Conference on Semantic Computing
Jan 30 - Feb 1, 2019 - Newport Beach, California, USA

*** Website: ***


In recent years, the explainability of complex systems such as decision support systems, automatic decision systems, machine learning-based/trained systems, and artificial intelligence in general has been expressed not only as a desired property, but also as a property that is required by law. For example, the General Data Protection Regulation's (GDPR) "right to explanation" demands that the results of ML/AI-based decisions be explained. The explainability of complex systems, especially of ML-based and AI-based systems, becomes increasingly relevant as more and more aspects of our lives are influenced by these systems' actions and decisions.

Several workshops address the problem of explainable AI. However, none of these workshops focuses on semantic technologies such as ontologies and reasoning. We believe that semantic technologies and explainability intersect in two ways. First, systems that are based on semantic technologies must be explainable, like all other AI systems. Second, semantic technologies seem well suited to helping make systems that are not based on semantic technologies explainable.

Turning a system that already makes use of ontologies into an explainable system could be supported by those ontologies, as ideally they capture some aspects of the users' conceptualizations of a problem domain. However, how can such systems use these ontologies to generate explanations of the actions they performed and the decisions they took? Which criteria must an ontology fulfill so that it supports the generation of explanations? Do we have adequate ontologies that make it possible to express explanations and to model and reason about what is understandable or comprehensible for a certain user? What kind of lexicographic information is necessary to generate linguistic utterances? How can a system's understandability be evaluated? How should ontologies be designed for system understandability? What are models of human-machine interaction in which the user can interact with the system until they have understood a certain action or decision? How can explanatory components be reused with other systems that they were not designed for?

Turning systems that are not yet based on ontologies but on sub-symbolic representations/distributed semantics, such as deep learning-based approaches, into explainable systems might likewise be supported by the use of ontologies. Some efforts in this field have been referred to as neural-symbolic integration.

This workshop aims to bring together international experts interested in applying semantic technologies to the explainability of artificial intelligence/machine learning, in order to stimulate research, engineering, and evaluation – towards making machine decisions transparent, re-traceable, comprehensible, interpretable, explainable, and reproducible. Semantic technologies have the potential to play an important role in the field of explainability since they lend themselves well to the task: they make it possible to model users' conceptualizations of the problem domain. However, this field has so far been only rarely explored.

Topics of interest include, but are not limited to:

- Explainability of machine learning models based on semantics/ontologies
- Exploiting semantics/ontologies for explainable/traceable recommendations
- Explanations based on semantics/ontologies in the context of decision making/decision support systems
- Semantic user modelling for personalized explanations
- Design criteria for explainability-supporting ontologies
- Dialogue management and natural language generation based on semantics/ontologies
- Visual explanations based on semantics/ontologies
- Multi-modal explanations using semantics/ontologies
- Interactive/incremental explanations based on semantics/ontologies
- Ontological modeling of explanations and user profiles

Paper Format
Manuscripts must be written in English, must not exceed eight (8) pages, and must follow the instructions found here:


Important Dates
Submission deadline: Nov 26, 2018 – 23:59 Hawaii Time
Notification of acceptance: Dec 10, 2018 – 23:59 Hawaii Time
Camera-ready version due: Dec 17, 2018 – 23:59 Hawaii Time

Workshop Organizers
Philipp Cimiano – Bielefeld University
Basil Ell – Bielefeld University, Oslo University
Axel-Cyrille Ngonga Ngomo – Paderborn University
