
AV+EC 2015: 5th International Audio/Visual+ Emotion Challenge and Workshop


Link: http://sspnet.eu/avec2015
 
When Oct 26, 2015 - Oct 30, 2015
Where Brisbane, Australia
Submission Deadline Jul 1, 2015
Notification Due Jul 16, 2015
Final Version Due Jul 31, 2015
Categories: affective computing, multimodality, feature extraction, machine learning
 

Call For Papers

Dear colleagues,



It is our great pleasure to announce the Call for Papers for the 5th International Audio/Visual+ Emotion Challenge and Workshop (AV+EC 2015), organised in conjunction with ACM Multimedia 2015. See below for the CFP, and apologies for potential cross-posting.



_____________________________________________________________


Call for Participation / Papers



5th International Audio/Visual+ Emotion Challenge and Workshop (AV+EC 2015)



in conjunction with ACM Multimedia 2015, October 26-30, Brisbane, Australia



http://sspnet.eu/avec2015/

http://www.acmmm.org/2015/



Register and download data and features:

http://sspnet.eu/avec2015/challenge-guidelines/



_____________________________________________________________


Scope

The Audio/Visual Emotion Challenge and Workshop (AV+EC 2015) "Bridging Across Audio, Video and Physio" will be the fifth competition event aimed at comparing multimedia processing and machine learning methods for automatic audio, visual and, for the first time, physiological emotion analysis, with all participants competing under strictly the same conditions.
 
The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the audio, video and physiological emotion recognition communities, in order to compare the relative merits of the three approaches to emotion recognition under well-defined and strictly comparable conditions, and to establish to what extent fusion of the approaches is possible and beneficial. A second motivation is the need to advance emotion recognition systems so that they can deal with fully naturalistic behaviour in large volumes of unsegmented, non-prototypical and non-preselected data, as this is exactly the type of data that both multimedia and human-machine/human-robot communication interfaces face in the real world.
 
We are calling for teams to participate in a Challenge of fully continuous emotion detection from audio, video, or physiological data, or any combination of these three modalities. The RECOLA multimodal corpus of remote and collaborative affective interactions will serve as the benchmarking database. Emotion will have to be recognised in continuous time as continuous-valued dimensional affect along two dimensions: arousal and valence.
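The task described above, time-continuous prediction of arousal and valence from per-frame features, can be sketched as multi-output regression. The sketch below is purely illustrative: the synthetic features stand in for the real RECOLA feature sets, and the ridge regressor and concordance correlation coefficient (CCC) scoring are common choices for this kind of task, not the official baseline or metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for per-frame features (audio, video or physio):
# 500 frames x 20 features. RECOLA provides real feature sets instead.
X = rng.standard_normal((500, 20))

# Hypothetical continuous gold-standard labels: arousal and valence per frame.
true_w = rng.standard_normal((20, 2))
Y = X @ true_w + 0.1 * rng.standard_normal((500, 2))

# Ridge regression: one linear model jointly predicting both dimensions.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(20), X.T @ Y)
pred = X @ W

def ccc(y_true, y_pred):
    """Concordance correlation coefficient, a usual score for
    time-continuous emotion prediction (penalises scale/offset errors)."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

print("arousal CCC:", ccc(Y[:, 0], pred[:, 0]))
print("valence CCC:", ccc(Y[:, 1], pred[:, 1]))
```

A real entry would replace the synthetic arrays with the released features and labels, and could fuse modalities simply by concatenating their feature columns before fitting.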
 
Besides participation in the Challenge, we are calling for papers addressing the overall topics of this workshop, in particular work that addresses the differences between audio, video and physiological processing of emotive data, and the issues concerning combined audio-visual-physiological emotion recognition.


Topics include, but are not limited to:
Participation in the Challenge
 
Audio/Visual/Physiological Emotion Recognition:
Audio-based Emotion Recognition
Video-based Emotion Recognition
Physiology-based Emotion Recognition
Synchrony of Non-Stationary Time Series
Multi-Task Learning of Multiple Dimensions
Weakly Supervised Learning
Agglomeration of Learning Data
Context in Emotion Recognition
Multiple Rater Ambiguity and Asynchrony
 
Application:
Multimedia Coding and Retrieval
 


___________________________________________


Important Dates

Paper submission: July 1, 2015
Notification of acceptance: July 16, 2015
Final challenge result submission: July 24, 2015
Camera-ready paper: July 31, 2015
Workshop: October 26 or 30, 2015


___________________________________________





Organisers

Fabien Ringeval (Tech. Univ. Munich, Germany)
Björn Schuller (Imperial College London / Univ. Passau, UK / Germany)

Michel Valstar (University of Nottingham, UK)

Roddy Cowie (Queen's University Belfast, UK)

Maja Pantic (Imperial College London, UK)





___________________________________________
Program Committee

Felix Burkhardt, Deutsche Telekom, Germany
Rama Chellappa, University of Maryland, USA
Fang Chen, NICTA, Australia
Mohamed Chetouani, Université Pierre et Marie Curie, France
Jeffrey Cohn, University of Pittsburgh, USA
Laurence Devillers, Université Paris-Sud, France
Julien Epps, University of New South Wales, Australia
Anna Esposito, University of Naples, Italy
Roland Goecke, University of Canberra, Australia
Jarek Krajewski, Universität Wuppertal, Germany
Marc Mehu, Webster Vienna Private University, Austria
Louis-Philippe Morency, Carnegie Mellon University, USA
Stefan Scherer, University of Southern California, USA
Stefan Steidl, Universität Erlangen-Nürnberg, Germany
Jianhua Tao, Chinese Academy of Sciences, China
Matthew Turk, University of California, USA
Stefanos Zafeiriou, Imperial College London, UK



Please visit our website http://sspnet.eu/avec2015 regularly for more information, and please excuse cross-postings.




Thank you very much and all the best,



Fabien Ringeval, Björn Schuller, Michel Valstar, Roddy Cowie and Maja Pantic
