FFER 2014 : ICPR International Workshop on Face and Facial Expression Recognition from Real World Videos
Link: http://www.vap.aau.dk/ffer14/

Call For Papers
The face plays a key role in many real-world applications such as security systems, human-computer interaction, remote monitoring of patients, video annotation, and gaming. Once the face has been detected, pattern recognition techniques and machine learning algorithms are applied to facial images, for example, to determine the identity of a subject or analyze her/his emotional state. Although face and facial expression recognition in still images and under ideal imaging conditions have been studied for many years, they have been far less explored in video sequences captured under uncontrolled imaging conditions. Developing face and facial expression recognition algorithms for real-world scenarios, for instance, for remote patient monitoring or for identification in surveillance videos, remains a challenging task. The purpose of this workshop is to bring together researchers who are working on face and facial expression recognition systems for non-ideal conditions, like those that might be present in a video. We welcome research papers focusing on the following (and similar) topics:
- Video face recognition
- Video facial expression recognition
- Face and facial expression recognition from facial dynamics
- Multi-face clustering from video
- 3D face modeling from video
- Multimodal face and facial expression recognition
- Applications of video face recognition
- Applications of video facial expression recognition

Invited speakers:
- Massimo Tistarelli, University of Sassari, Italy
- Maja Pantic, Imperial College London, UK (pending)

Papers:
Papers of at least 10 pages in length, in Springer's single-column format, will be blind peer-reviewed by at least two referees. Accepted papers will be published in a post-proceedings volume of Springer's Lecture Notes in Computer Science (LNCS) series. Papers can be submitted via the following CMT website: https://cmt.research.microsoft.com/FFER2014/Default.aspx

Organizers:
- Qiang Ji, Rensselaer Polytechnic Institute, USA
- Thomas Moeslund, Aalborg University, Denmark
- Gang Hua, Stevens Institute of Technology, USA
- Kamal Nasrollahi, Aalborg University, Denmark