EXaCt 2011: Workshop on Explanation-aware Computing
Conference Series: Explanation-aware Computing
Link: http://exact2011.workshop.hm/
Call For Papers | |||||||||||||
Both within AI systems and in interactive systems, the ability to explain reasoning processes and results can have a substantial impact. Within the field of knowledge-based systems, explanations have been considered an important link between humans and machines. Their main purpose there has been to increase the user's confidence in the system's result (persuasion) or in the system as a whole (satisfaction), by providing evidence of how the result was derived (transparency). Additional AI research has focused on how computer systems can themselves use explanations, for example to guide learning (education).
Current interest in mixed-initiative systems provides a new context in which explanation issues may play a crucial role. When knowledge-based systems are partners in an interactive socio-technical process, with incomplete and changing problem descriptions, communication between humans and software systems becomes central. For example, in recommender systems good explanations can help to inspire user trust and loyalty, and make it quicker and easier (efficiency) for users to find what they want (effectiveness). Explanations exchanged between human agents and software agents may thus play an important role in mixed-initiative problem solving.

Other disciplines such as cognitive science, linguistics, philosophy of science, psychology, and education have investigated explanation as well. They consider varying aspects, making it clear that there are many different views of the nature of explanation and many facets of explanation to explore. Two relevant examples include, but are not limited to, open learner models in education, and dialogue management and planning in natural language generation.

This workshop series aims to draw on multiple perspectives on explanation, to examine how explanation can be applied to further the development of robust and dependable systems, and to increase transparency, the user's sense of control (scrutability), trust, acceptance, and decision support.

Suggested topics for contributions (not restricted to IT views):

* Models and knowledge representations for explanations
* Quality of explanations; understandability
* Integrating application and explanation knowledge
* Explanation-awareness in (designing) applications
* Methodologies for developing explanation-aware systems
* Explanations and learning
* Context-aware explanation vs. explanation-aware context
* Confidence and explanations
* Privacy, trust, and explanation
* Empirical studies of explanations
* Requirements and needs for explanations to support human understanding
* Explanation of complex, autonomous systems
* Co-operative explanation
* Visualising explanations
* Dialogue management and natural language generation
* Human-Computer Interaction (HCI) and explanation

Workshop submission will be electronic, in PDF format only. Submitted papers must not exceed 10 pages and should conform to the Springer LNCS style (see below). At least one author of each accepted paper must register for the workshop and the IJCAI conference and present the contribution in order for it to be published in the workshop proceedings.