SEA 2013: Special Session on Self-Explaining Agents
Link: http://www.dai-labor.de/~faehndrich/sea/

Call For Papers | |||||||||||||||
The development of multi-agent systems in heterogeneous environments is a challenging task for humans. The management of such systems, in which different parties use different technologies at different times to reach their goals, becomes ever more difficult: systems can change dynamically due to the presence or absence of agents, services and devices, and emergent behaviors can occur, i.e. behaviors that are not pre-programmed into the systems. Hence, developers attempt to shift ever more design decisions to the application runtime, allowing systems to manage themselves. Such systems are called selfware and exhibit inherent self-* properties. The goal of this special session is to foster one particular self-* property, namely self-explanation.
Self-explanation is inspired not only by biological systems but also by the field of social science. In this context, self-explanation is defined as the ability to explain oneself in an attempt to make sense of new information, whether presented in a text or in some other medium. Explaining events, intentions and ideas is a well-known way of communicating information in everyday life. On the one hand, the explaining entity is able to impart knowledge to an audience. On the other hand, the audience is able to understand the explainer's intentions and may even comprehend the explainer's course of action.

Taking into account agents, developers and (end) users, who are the addressees of a self-explanation, we can distinguish two different types. First, system-sided self-explanation aims at creating self-explanatory descriptions and reasoning algorithms capable of extracting information from them. Note that, in contrast to classical service descriptions, a self-explanation should contain all information needed to reason upon it. Following the idea of self-explanation, this means that new agents as well as existing ones are able to learn each other's capabilities and to comprehend in which way they can interact (e.g. which data formats and expressions match and what the connection between them is). Second, we can identify human-sided self-explanation, which aims at integrating users into existing systems as regular components. Using multi-modal interaction, agents may access human capabilities in order to reach their goals. As such systems are typically goal-driven, humans must be able to define goals and also to restrict the system by means of constraints. This mechanism raises significant engineering challenges for the interface between the system and the human user.
Having self-explanatory descriptions might benefit many aspects of multi-agent systems, such as agent-human interaction, agent planning and agent communication, to name but a few. Hence, the goal of this special session is to foster research in self-explanation for agent systems. In addition, there is currently no common understanding or agreed definition of the term self-explanation and its inherent meaning. In order to fill this gap, the special session seeks a discussion with a broad audience of researchers who have experience with the design and development of self-adaptive agent applications.

GOALS OF THE SPECIAL SESSION
This special session shall deepen (but is not limited to) the following research areas:
- Self-Explaining Systems
- Definition of Self-Explanation
- Categorization of Self-Explanations
- Requirements of Self-Explaining Systems
- Self-Configuration through Self-Explanation
- Demonstrations of Self-Configuration through Self-Explaining Systems
- Agents with Self-Explaining User Interaction
- Directability of Agents
- Adjustable Agent Autonomy
- AI Methods for Adaptation and Semantic Reasoning
- IOPE and Other Description Paradigms
- Reasoning on Semantic Descriptions
- Learning of Communication Protocols
- BDI-Agents and Reasoning on D
- Collaborative Reinforcement Learning on Self-Explanations

SUBMITTING PAPERS
All papers must be formatted according to the Springer Advances in Intelligent and Soft Computing series template, with a maximum length of 8 pages, including figures and references, for the PAAMS 2013 Special Sessions. The template can be downloaded from the PAAMS 2013 website (special sessions section). All proposed papers must be submitted in electronic form (PDF format) using the PAAMS 2013 conference management system.

PUBLICATION
Accepted papers will be included in a volume of the PAAMS 2013 Proceedings, published in the Advances in Intelligent and Soft Computing series of Springer.
One of the special session organizers will be included as an editor of the volume. At least one author per paper must register for and attend the symposium to present the paper in order for the paper to be included in the conference proceedings. All accepted papers will be published by Springer-Verlag.

PROGRAM COMMITTEE
Christian Müller-Schloer: Leibniz Universität Hannover
Sebastian Ahrndt: DAI-Labor, Technische Universität Berlin
Dawud Gordon: TECO, Karlsruhe Institute of Technology
Johannes Fähndrich: DAI-Labor, Technische Universität Berlin
Benjamin Hirsch: EBTIC, Khalifa University
Marco Lützenberger: DAI-Labor, Technische Universität Berlin

ORGANIZATION
Sebastian Ahrndt: DAI-Labor, Technische Universität Berlin
Johannes Fähndrich: DAI-Labor, Technische Universität Berlin
Benjamin Hirsch: EBTIC, Khalifa University

CONTACT
Johannes Fähndrich (Johannes.Faehndrich@dai-labor.de)
Phone: +49 (0)30/314-74034
Fax: +49 (0)30/314-74003
Mobile: +49 (0)176/70869963
DAI-Labor
Technische Universität Berlin
Fakultät IV – Elektrotechnik & Informatik
Sekretariat TEL 14
Ernst-Reuter-Platz 7
10587 Berlin, Germany

FOR FURTHER INFORMATION VISIT: http://paams.net/special-sessions/SS8-SEA