MUWS 2024: The 3rd International Workshop on Multimodal Human Understanding for the Web and Social Media
Link: https://muws-workshop.github.io/
Call For Papers
Aim and Scope
Multimodal human understanding and analysis is an emerging research area that cuts across several disciplines, including Computer Vision, Natural Language Processing (NLP), Speech Processing, Human-Computer Interaction, and Multimedia. Several multimodal learning techniques have recently shown the benefit of combining multiple modalities in image-text, audio-visual, and video representation learning, as well as in various downstream multimodal tasks. At their core, these methods focus on modelling the modalities and their complex interactions by using large amounts of data, different loss functions, and deep neural network architectures. However, many Web and Social Media applications also require modelling the human, including an understanding of human behaviour and perception. For this, it becomes important to consider interdisciplinary approaches, including social sciences, semiotics, and psychology. Central challenges include understanding various cross-modal relations, quantifying biases such as social biases, and assessing the applicability of models to real-world problems. Interdisciplinary theories such as semiotics or Gestalt psychology can provide additional insight into perceptual understanding through signs and symbols conveyed via multiple modalities. In general, these theories provide a compelling view of multimodality and perception that can further expand computational research and multimedia applications on the Web and Social Media.

The theme of the MUWS workshop, multimodal human understanding, encompasses various interdisciplinary challenges related to social bias analyses, multimodal representation learning, detection of human impressions or sentiment, hate speech and sarcasm in multimodal data, multimodal rhetoric and semantics, and related topics. The MUWS workshop will be an interactive event and will include keynotes by relevant experts, poster and demo sessions, research presentations, and discussion.

Particular areas of interest include, but are not limited to:
- Modeling human impressions in the context of the Web and Social Media
- Cross-modal and semantic relations
- Incorporating multi-disciplinary theories such as Semiotics or Gestalt theory into multimodal analyses
- Measuring and analyzing biases such as cultural bias, social bias, multilingual bias, and related topics in the context of the Web and Social Media
- Multimodal human perception understanding
- Multimodal sentiment/emotion/sarcasm recognition
- Multimodal hate speech detection
- Multimodal misinformation detection
- Multimodal content understanding and analysis
- Multimodal rhetoric in online media

Submission Instructions
We welcome contributions from 4 pages (short papers) to 8 pages (long papers) that address the topics of interest. All submissions must be written in English and formatted according to the ACM proceedings style. The workshop proceedings will be part of the ICMR Proceedings.
Submission Page: https://easychair.org/conferences/?conf=muws24

Important Dates
Submission deadline: April 14th, 2024
Paper notification: April 21st, 2024
Workshop date: June 10th, 2024

Organizing Committee
Marc A. Kastner, Kyoto University, Kyoto, Japan
Gullal S. Cheema, TIB - Leibniz Information Centre for Science and Technology, Hannover, Germany
Sherzod Hakimov, University of Potsdam, Potsdam, Germany
Noa Garcia, Osaka University, Osaka, Japan

Contact
All questions about the workshop should be emailed to: muws24 (at sign) easychair.org