EXTRAAMAS 2019: International Workshop on Explainable Transparent Autonomous Agent and Multi-Agent Systems
Link: https://extraamas.ehealth.hevs.ch/index.html

Call For Papers
1st International Workshop on
EXplainable TRAnsparent Autonomous Agent and Multi-Agent Systems (EXTRAAMAS)

Human decisions increasingly rely on Artificial Intelligence (AI) techniques that implement autonomous decision-making and distributed problem-solving. However, the reasoning and dynamics powering such systems are becoming increasingly opaque, and societal awareness of this lack of transparency, and of the need for explainability, is rising. As a consequence, new legal constraints and grant solicitations have been defined to enforce transparency and explainability in IT systems. One example is the General Data Protection Regulation (GDPR), which became effective in Europe in May 2018. Emphasizing the need for transparency in AI systems, recent studies have shown that equipping intelligent systems with explanatory abilities has a positive impact on users, e.g., by helping to overcome the discomfort, confusion, and self-deception caused by a lack of understanding. For all these reasons, Explainable Artificial Intelligence (XAI) has recently re-emerged as a hot topic in AI, attracting research from domains such as machine learning, robot planning, and multi-agent systems.

Agents and Multi-Agent Systems (MAS) can make two core contributions to XAI. The first is in the context of personal intelligent systems providing tailored and personalized feedback (e.g., recommender and coaching systems). Autonomous agent and multi-agent approaches have recently achieved notable results and scientific relevance in several research domains (e.g., e-health, UAVs, and smart environments). However, even when the outcomes of such agent-based systems are correct, their impact and effect on users can be undermined by the lack of clarity and explainability of their dynamics and rationality. If made explainable, by contrast, their understandability, reliability, and acceptance can be enhanced. In particular, personal user features (e.g., context, expertise, age, and cognitive abilities), which are already used to compute the outcome, can also be employed in the explanation process to provide a user-tailored solution.

The second axis is agent/robot teams or mixed human-agent teams. In this context, successful collaboration requires a mutual understanding of the status, capacities, and limitations of the other agents and users. This ensures efficient teamwork and avoids potential dangers caused by misunderstandings. In such scenarios, explainability goes beyond single human-agent settings into agent-agent or even mixed human-agent team explainability.

The aim of this first "International Workshop on Explainable Transparent Autonomous Agent and Multi-Agent Systems" (EXTRAAMAS) is fourfold: (i) to establish a common ground for the study and development of explainable and understandable autonomous agents, robots, and MAS; (ii) to investigate the potential of agent-based systems in the development of personalized, user-aware explainable AI; (iii) to assess the impact of transparent and explained solutions on user/agent behavior; and (iv) to discuss motivating examples and concrete applications in which a lack of explainability leads to problems that explainability would resolve.

Contributions are encouraged in both theory and practical applications of transparent and explainable intelligence in agents and MAS.
Papers presenting theoretical contributions, designs, prototypes, tools, subjective user tests, assessments, new or improved techniques, and general survey papers tracking current evolutions and future directions are welcome. Participants are invited to submit papers on all research and application aspects of explainable and transparent intelligence in agents and multi-agent systems, including, but not limited to:

- Explainable agent architectures
- Adaptive and personalized explainable agents
- Explainable human-robot interaction
- Expressive robots
- Explainable planning
- Explanation visualization
- Applications of explainable agents (e-health, smart environments, driving companions, recommender systems, coaching agents, etc.)
- Reinforcement learning agents
- Cognitive and social science perspectives on explanations
- Legal aspects of explainable agents

The top papers will be published in a Special Issue of a journal with a relevant impact factor (to be announced).

Important Dates
Deadline for submissions: 12 February 2019
Notification of acceptance: 10 March 2019
Camera-ready: 1 April 2019
Workshop days: 13-14 May 2019

Organizers
Prof. Kary Främling, Umeå University and Aalto University (Computer Science)
Dr. Davide Calvaresi, University of Applied Sciences Western Switzerland (HES-SO)
Dr. Amro Najjar, Umeå University
Prof. Michael Schumacher, University of Applied Sciences Western Switzerland (HES-SO)

Advisory Board
Prof. Virginia Dignum, Umeå University
Prof. Tim Miller, School of Computing and Information Systems, The University of Melbourne
Prof. Catholijn M. Jonker, TU Delft and LIACS, Leiden University

Publicity Chairs
Yazan Mualla, University Bourgogne Franche-Comté
Timotheus Kampik, Umeå University

Program Committee
To be announced.