
EGIHMI 2011 : 2nd Workshop on Eye Gaze in Intelligent Human Machine Interaction


When: Feb 13, 2011
Where: Palo Alto, California, USA
Submission Deadline: Nov 8, 2010
Notification Due: Dec 6, 2010
Final Version Due: Dec 20, 2010
Categories: eye gaze, intelligent interfaces

Call For Papers

In interactive systems, eye gaze and attentional information have
great potential for improving communication between the user and
the system. For instance, when combined with situational and
linguistic information, the user's focus of attention is useful in
interpreting the user's intentions. Eye gaze also serves as a
nonverbal signal in mediated communication using avatars, as well as
during interaction with autonomous humanoid agents. Moreover, recent
studies have shown that eye gaze can be estimated from brain
activity, and such eye-tracking technologies provide new
opportunities to design novel attention-based intelligent user
interfaces.
The first eye-gaze workshop, held at IUI 2010, covered various
research issues concerning eye gaze: eye-tracking technologies,
analyses of human eye-gaze behaviors, multimodal interpretation, user
interfaces driven by an eye tracker, and presenting gaze behaviors in
humanoid interfaces. This year's workshop aims to continue exploring
this important topic by bringing together researchers from fields
including human sensing, intelligent user interfaces, multimodal
processing, and communication science, with the long-term goal of
establishing a strong interdisciplinary research community in
"attention-aware interactive systems".


This workshop solicits papers that address topics including, but not
limited to, the following:

* Technologies for sensing human attentional behaviors in IUI
- Sensing attentional behaviors using bodily motions such as pupil movements,
head movements and torso directions
- Sensing attentional behaviors using brain activities
- Issues in tracking attentional behaviors in IUI

* Interpreting attentional behaviors as communicative signals in IUI
- Incorporating attentional information in multimodal understanding
- Using attentional information in interpreting the user's intentions,
attitude towards the system, grounding, and engagement in
conversational interactions

* Gaze models for generating eye-gaze behaviors in conversational humanoids
- Selecting appropriate eye-gaze behaviors for virtual agents and
communication robots
- Users' perception of the attentional signals presented by humanoids
- Differences in gaze expressiveness between virtual agents and robots

* Analysis of human attentional behaviors
- Attentional behaviors in interaction with computer systems
- Attentional behaviors in dyads and multiparty face-to-face conversations
- Implications of analysis of human attentional behaviors towards IUI design

* Evaluation of gaze-based IUI
- Evaluation methods for attentional IUI
- Designs of user studies that identify the real impact of gaze-based
information in IUI


There are three categories of paper submissions:
- Long papers: maximum length 8 pages.
- Short papers: maximum length 4 pages.
- Poster presentations and demos: maximum length 2 pages.

All submissions should be prepared according to the standard SIGCHI
publications format, using either the Microsoft Word document template
or the LaTeX class file.

Each submission will be reviewed by three members of the program committee.
The accepted papers will be published in the workshop proceedings.
We plan to publish revised versions of selected papers in a special
issue of a journal.


Important Dates

Paper Submission: November 8, 2010
Notification of Acceptance: December 6, 2010
Camera-ready due: December 20, 2010
Workshop: February 13, 2011



Organizers

Yukiko Nakano (Seikei University, Japan)
Cristina Conati (University of British Columbia, Canada)
Thomas Bader (Karlsruhe Institute of Technology, Germany)
Neil Cooke (University of Birmingham, UK)


Program Committee

Elisabeth André (University of Augsburg, Germany)
Nikolaus Bee (Augsburg University, Germany)
Justine Cassell (Carnegie Mellon University, USA)
Joyce Chai (Michigan State University, USA)
Andrew Duchowski (Clemson University, USA)
Jürgen Geisler (Fraunhofer IOSB, Germany)
Patrick Jermann (École Polytechnique Fédérale de Lausanne (EPFL), Switzerland)
Yoshinori Kuno (Saitama University, Japan)
Kasia Muldner (Arizona State University, USA)
Toyoaki Nishida (Kyoto University, Japan)
Catherine Pelachaud (TELECOM Paris Tech, France)
Christopher Peters (Coventry University, UK)
Shaolin Qu (Michigan State University, USA)
Matthias Rötting (Technische Universität Berlin, Germany)
Candy Sidner (Worcester Polytechnic Institute, USA)
