WCRML 2019 : Workshop on Crossmodal Learning and Application
Link: https://crossmodallearning.github.io/
Call For Papers
To contribute to the understanding of cross-modal technologies, we invite original articles on relevant topics, which include but are not limited to:

- Multimodal representation/feature learning
- Cross-modal retrieval
- Data alignment across modalities, e.g., synchronising motion sensors with video
- Data translation, e.g., visually indicated sound
- Learning using side information, e.g., modality hallucination
- Knowledge transfer across modalities, e.g., zero-shot/few-shot learning
- Applications with cross-modal data:
  - IoT (Internet of Things)
  - operation and maintenance
  - surveillance
  - public transportation
  - logistics
  - health care
  - task-oriented dialog
  - human-robot interaction with vision and audio
  - user/product/job search and recommendation
  - social media retrieval and analysis
  - others

More detailed submission guidelines can be found at https://crossmodallearning.github.io