
EGIHMI 2011 : 2nd Workshop on Eye Gaze in Intelligent Human Machine Interaction


When Feb 13, 2011 - Feb 13, 2011
Where Palo Alto, California, USA
Submission Deadline Nov 8, 2010
Notification Due Dec 6, 2010
Final Version Due Dec 20, 2010
Categories: eye gaze, intelligent interfaces

Call For Papers

In interactive systems, eye gaze and attentional information have
great potential to improve communication between the user and the
system. For instance, combined with situational and linguistic
information, the user's focus of attention helps in interpreting the
user's intentions. Eye gaze also serves as a nonverbal signal in
mediated communication using avatars, as well as during interaction
with autonomous humanoid agents. Moreover, recent studies have shown
that eye gaze can be estimated from brain activity, and such
eye-tracking technologies provide new opportunities to design novel
attention-based intelligent user interfaces.

The first eye-gaze workshop held at IUI 2010 covered various
research issues concerning eye-gaze: eye-tracking technologies,
analyses of human eye-gaze behaviors, multimodal interpretation, user
interfaces using an eye-tracker, and presenting gaze behaviors in
humanoid interfaces. This year's workshop aims to continue exploring
this important topic by bringing together researchers from fields
including human sensing, intelligent user interfaces, multimodal
processing, and communication science, with the long-term goal of
establishing a strong interdisciplinary research community in
"attention-aware interactive systems".


This workshop solicits papers addressing topics including, but not
limited to, the following:

* Technologies for sensing human attentional behaviors in IUI
- Sensing attentional behaviors from bodily motions such as eye and pupil
movements, head movements, and torso orientation
- Sensing attentional behaviors using brain activities
- Issues in tracking attentional behaviors in IUI

* Interpreting attentional behaviors as communicative signals in IUI
- Incorporating attentional information in multimodal understanding
- Using attentional information to interpret the user's intentions, attitude
towards the system, and grounding and engagement in conversational interactions

* Gaze models for generating eye-gaze behaviors in conversational humanoids
- Selecting appropriate eye-gaze behaviors for virtual agents and
communication robots
- User's perception of the attentional signals presented by the humanoids
- Differences in gaze expressiveness between virtual agents and robots

* Analysis of human attentional behaviors
- Attentional behaviors in interaction with computer systems
- Attentional behaviors in dyads and multiparty face-to-face conversations
- Implications of analysis of human attentional behaviors towards IUI design

* Evaluation of gaze-based IUI
- Evaluation method for attentional IUI
- Designs of user studies to identify the real impact of gaze-based
information in IUI


There are three categories of paper submissions:
Long papers: maximum length 8 pages.
Short papers: maximum length 4 pages.
Poster presentations and demos: maximum length 2 pages.

All submissions should be prepared according to the standard SIGCHI
publications format.
- Microsoft Word document template
- LaTeX class file

Each submission will be reviewed by three members of the program committee.
The accepted papers will be published in the workshop proceedings.
We plan to publish revised versions of selected papers in a special
issue of a journal.


Important Dates

Paper Submission: November 8, 2010
Notification of Acceptance: December 6, 2010
Camera-ready due: December 20, 2010
Workshop: February 13, 2011



Workshop Organizers

Yukiko Nakano (Seikei University, Japan)
Cristina Conati (University of British Columbia, Canada)
Thomas Bader (Karlsruhe Institute of Technology, Germany)
Neil Cooke (University of Birmingham, UK)


Program Committee

Elisabeth André (University of Augsburg, Germany)
Nikolaus Bee (Augsburg University, Germany)
Justine Cassell (Carnegie Mellon University, USA)
Joyce Chai (Michigan State University, USA)
Andrew Duchowski (Clemson University, USA)
Jürgen Geisler (Fraunhofer IOSB, Germany)
Patrick Jermann (École Polytechnique Fédérale de Lausanne (EPFL), Switzerland)
Yoshinori Kuno (Saitama University, Japan)
Kasia Muldner (Arizona State University, USA)
Toyoaki Nishida (Kyoto University, Japan)
Catherine Pelachaud (Telecom ParisTech, France)
Christopher Peters (Coventry University, UK)
Shaolin Qu (Michigan State University, USA)
Matthias Rötting (Technische Universität Berlin, Germany)
Candy Sidner (Worcester Polytechnic Institute, USA)
