ICMI EAT 2018: ICMI Audio/Visual Eating Analysis & Tracking Challenge
Link: https://icmi-eat.ihearu-play.eu/

Call For Papers
We are calling for participation in ICMI EAT, the 1st Audio/Visual Eating Analysis & Tracking Challenge, held as a session of the ACM International Conference on Multimodal Interaction (ICMI) 2018 in Boulder, CO, USA on 20 October 2018.
The Audio/Visual Eating Analysis & Tracking Challenge (ICMI EAT 2018) will be the first audio-visual open research competition, run under strictly comparable conditions, on machine learning for audio/visual tracking of human subjects recorded while eating different types of food as they speak. The Challenge will focus on multimodal recognition of eating conditions:

1) whether a person is eating or not, and if so, which food type;
2) recognising the subject's food likability rating;
3) recognising the level of difficulty of speaking while eating.

For more information, see https://icmi-eat.ihearu-play.eu/.

ICMI EAT aims to help bridge the gap between state-of-the-art research on multimodal machine learning, computational paralinguistics, and user behaviour tracking. All Sub-Challenges allow participants to use their own acoustic/visual features and/or their own machine learning models. Standard acoustic and visual feature sets will be provided, including recent end-to-end deep learning representations and crossmodal bag-of-words features, which participants may use. We encourage both contributions aiming at the highest performance and contributions fostering brave new ideas in this context.

To participate in the Challenge, please register your team by following the challenge guidelines. Besides participation in the Challenge, we are calling for papers addressing the overall topics of this session, in particular work that addresses the differences between audio and video of eating-condition data, and the issues concerning combined audio-visual eating recognition.

Important Dates
Release of training data and evaluation script: 4 April 2018
Release of test data: 16 May 2018
Paper submission: 30 May 2018
Notification of acceptance: 18 July 2018
Camera-ready paper: 31 July 2018
Challenge: 20 October 2018