MLIS 2012: ECAI Workshop on Machine Learning for Interactive Systems: Bridging the Gap Between Language, Motor Control and Vision
Link: http://www.sfbtr8.spatial-cognition.de/mlis-2012/Submission.html

Call For Papers
ECAI Workshop on Machine Learning for Interactive Systems (MLIS)
Montpellier, France, 27-28 August 2012

Submissions can take two forms: long papers should not exceed 6 pages, and short papers should not exceed 2 pages. Both should follow the general ECAI submission guidelines:
http://www.sfbtr8.spatial-cognition.de/mlis-2012/Submission.html

A special issue related to the topic of this workshop, with a submission deadline in autumn 2012, is planned for the ACM Transactions on Interactive Intelligent Systems (TiiS) journal (http://tiis.acm.org/).

Important Dates:
June 16, 2012: Extended paper submission deadline (23:59 CET)
July 16, 2012: Notification of acceptance
July 22, 2012: Camera-ready papers due
Aug. 27/28, 2012: MLIS Workshop

Interactive systems such as multimodal interfaces or robots must perceive, act, and interact in the environment in which they are embedded. Naturally, perception, action, and interaction are mutually related and affect each other. This is particularly the case in many hands-free and eyes-free mobile applications of interactive systems. Machine learning offers the attractive capability of making interactive systems more adaptive to the user and the environment. In each of perception, action, and interaction we find a large number of applications using machine learning techniques. However, holistic approaches that tackle these fields in a unified way are still rare. The question of how to integrate language, motor control, and vision in machine learning interfaces in an efficient and effective way is a long-standing problem and is the main topic of the workshop.

This workshop aims to bring together people interested in natural language processing, motor control, and computer vision under a unified perspective. The invitation is particularly directed at people designing, building, and evaluating Machine Learning Interactive Systems (MLIS) that interact with their environment and, in particular, with the people within it. Example research questions to address are the following: (a) How do MLIS integrate multimodal perceptions for action and interaction? (b) How do MLIS exhibit adaptive interactive behaviour given their perceptions? (c) How do MLIS integrate verbal and non-verbal behaviour for effective interactions?

Topics include, but are not limited to, the following:
- Reinforcement learning for interactive systems
- Supervised learning for interactive systems
- Unsupervised learning for interactive systems
- Hybrid machine learning for interactive systems
- Hierarchical machine learning for interactive systems
- Machine learning for multimodal interactive systems
- Machine learning for multi-party interactive systems
- Machine learning for emotional interactive systems
- Machine learning for reasoning in interactive systems
- Machine learning for user modelling in interactive systems
- Machine learning for gesture-based interactive systems
- Machine learning for vision-based interactive systems
- Evaluations of machine learning interactive systems
- All topics related to machine learning for avatars and interactive robots

Invited Speakers:
Jeremy Wyatt, University of Birmingham
Oliver Lemon, Heriot-Watt University, Edinburgh

Organizing Committee:
Heriberto Cuayáhuitl, DFKI Saarbrücken, Germany
Lutz Frommberger, University of Bremen, Germany
Nina Dethlefs, Heriot-Watt University, Edinburgh, UK
Hichem Sahli, Free University of Brussels, Belgium

Contact: hecu01@dfki.de