Big-Affect 2017: Utilising Big Unlabelled and Unmatched Data for Affective Computing
Link: https://sites.google.com/view/acii17ubuudac/accueil

Call For Papers
There has been much research on affect recognition from different modalities such as speech, video, and text. Despite these great efforts, the analyses performed are often limited to small collected datasets, which makes the resulting models barely generalisable to other recording scenarios. This lack of 'big' labelled data for affective computing hampers the creation of deep models, which have so far proved their substantial effectiveness mostly in related fields such as speech and video recognition. Thanks to the popularity of social multimedia, collecting audiovisual and textual data has become comparatively easy. Nonetheless, labelling such data demands a huge amount of (expert) human work, which can be expensive and time-consuming. Additionally, collected data may be of low quality and therefore not sufficiently reliable for training a model. Furthermore, data collected from different sources may be highly dissimilar, which can also degrade performance. Therefore, in this special session, we seek approaches that aim to increase the amount of reliably labelled data with less human effort, as well as to match data distributions between labelled and un- or partially labelled corpora. This will be a crucial step towards bringing Affective Computing to an industrial level and its everyday applications into real life.
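To make the first of these goals concrete, below is a minimal self-training (pseudo-labelling) sketch: a model trained on the small labelled set repeatedly labels the unlabelled pool and keeps only its most confident predictions, growing the training set without further human annotation. The function name, the logistic-regression classifier, and the 0.9 confidence threshold are illustrative assumptions, not part of the call.

```python
# Illustrative self-training sketch (assumes NumPy arrays and scikit-learn);
# names and thresholds are hypothetical, not prescribed by this call.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, confidence=0.9, rounds=5):
    """Grow the labelled set by adding high-confidence pseudo-labels."""
    X, y, pool = X_lab, y_lab, X_unlab
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= confidence  # trust only confident predictions
        if not keep.any():
            break
        # Move the confidently pseudo-labelled samples into the training set
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, clf.classes_[proba[keep].argmax(axis=1)]])
        pool = pool[~keep]
        clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf, X, y
```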
Topics (indicative, not limited to):
* semi-supervised learning and active learning
* zero-resource technologies, such as unsupervised learning
* transfer learning for domain/model adaptation (see the sketch after this list)
* using weak labels and co-training
* crowdsourcing for collecting and annotating large-scale data
* affective data augmentation and synthesis
* reinforcement learning
* cloud/distributed computing algorithms for big affective data
* applications (such as cross-language and cross-cultural adaptation, cross-modality transfer learning, ...)

For further details see https://sites.google.com/view/acii17ubuudac/accueil
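As one example of the distribution-matching theme above, here is a sketch of CORAL (Correlation Alignment, Sun, Feng & Saenko, AAAI 2016), which re-colours source features so their second-order statistics match an unlabelled target corpus. The function name and regularisation constant are illustrative assumptions; this is one simple technique among many the session would welcome.

```python
# Illustrative CORAL sketch (assumes NumPy arrays: rows are samples,
# columns are features); names here are hypothetical.
import numpy as np

def coral(source, target, eps=1e-5):
    """Align source feature covariance with the target's."""
    d = source.shape[1]
    # Regularised covariances so the matrix square roots are well defined
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)

    def mat_pow(m, p):
        # Symmetric matrix power via eigendecomposition
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(np.maximum(vals, eps) ** p) @ vecs.T

    # Whiten the source features, then re-colour with the target covariance
    return source @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5)
```

A model trained on the transformed source features can then be applied to the target corpus with a smaller covariate-shift penalty, at the cost of ignoring higher-order distribution differences.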