MMML 2015 : Multimodal Machine Learning Workshop @NIPS 2015

Link: https://sites.google.com/site/multiml2015/
 
When Dec 11, 2015
Where Montreal, QC, Canada
Submission Deadline Oct 9, 2015
Notification Due Oct 24, 2015
 

Call For Papers

=====================================================
NIPS 2015 Workshop: Multimodal Machine Learning
Montreal, Quebec, Canada
https://sites.google.com/site/multiml2015/
=====================================================

IMPORTANT DATES

· Submission Deadline: October 9th, 2015
· Author Notification: October 24th, 2015
· Workshop: December 11, 2015


KEYNOTE SPEAKERS

· Shih-Fu Chang (Columbia University)
· Li Deng (Microsoft Research)
· Raymond Mooney (The University of Texas at Austin)
· Ruslan Salakhutdinov (Carnegie Mellon University)

OVERVIEW

Multimodal machine learning aims to build models that can process and relate information from multiple modalities. From early research on audio-visual speech recognition to the recent explosion of interest in models that map images to natural language, multimodal machine learning is a vibrant multidisciplinary field of increasing importance and extraordinary potential.

Learning from paired multimodal sources offers the possibility of capturing correspondences between modalities and gaining an in-depth understanding of natural phenomena. Multimodal data thus provides a means of reducing our dependence on the standard supervised learning paradigm, which is inherently limited by the availability of labeled examples.
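
To illustrate what capturing correspondences between modalities can look like in practice, here is a minimal Python/NumPy sketch (not part of this call): it embeds paired "image" and "text" features into a shared space with linear projections and scores them with a hinge ranking loss, so that matched pairs outscore mismatched ones. All names, dimensions, and the choice of loss are illustrative assumptions, not a prescribed method.

# Illustrative sketch only: a toy contrastive objective over paired modalities.
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data: n samples, each with an "image" and a "text" feature vector.
n, d_img, d_txt, d_joint = 8, 32, 16, 10
X_img = rng.normal(size=(n, d_img))
X_txt = rng.normal(size=(n, d_txt))

# Linear projections into the shared space (the parameters one would learn).
W_img = rng.normal(size=(d_img, d_joint)) * 0.1
W_txt = rng.normal(size=(d_txt, d_joint)) * 0.1

def embed(X, W):
    # Project into the joint space and unit-normalize, so that dot
    # products between embeddings become cosine similarities.
    Z = X @ W
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

def ranking_loss(Z_a, Z_b, margin=0.2):
    # Matched pairs (the diagonal of S) should outscore the mismatched
    # pairs in the same row by at least `margin`.
    S = Z_a @ Z_b.T
    pos = np.diag(S)
    cost = np.maximum(0.0, margin + S - pos[:, None])
    np.fill_diagonal(cost, 0.0)
    return cost.mean()

Z_img, Z_txt = embed(X_img, W_img), embed(X_txt, W_txt)
print("toy ranking loss:", ranking_loss(Z_img, Z_txt))

In an actual system the projections would be deep networks trained by gradient descent on an objective of this kind; the sketch only evaluates it on random data.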

This research field poses unique challenges for machine learning researchers, given the heterogeneity of the data and the complementarity often found between modalities. This workshop will facilitate progress in multimodal machine learning by bringing together researchers from natural language processing, multimedia, computer vision, speech processing, and machine learning to discuss current challenges and identify the research infrastructure needed to enable stronger multidisciplinary collaboration.

TOPICS

We are looking for contributed papers that apply machine learning to multimodal data. We are interested both in application-oriented papers and in more fundamental algorithmic or theoretical work.

A non-exhaustive list of relevant topics:

· Automatic image and video description
· Multimodal signal processing
· Audio-visual speech recognition
· Multimodal affect recognition
· Cross-modal multimedia retrieval
· Multi-view multi-task learning
· Multimodal representation learning
· Multi-sensory computational modeling
· Multilingual, multimodal language processing
· Multimodal modeling for robotics control
· Multimodal human behavior modeling

SUBMISSIONS

Authors should submit an extended abstract of 4 to 6 pages (including references). To emphasize the multidisciplinary aspect of this research area, we particularly encourage submissions that have been previously published outside the machine learning community (i.e., at venues other than NIPS and ICML). We also encourage submissions of relevant work in progress.

Submitted abstracts may be a shortened version of a longer paper or technical report, in which case the longer paper should be cited in the submission. Reviewers will be asked to judge submissions solely on the basis of the extended abstract.

All submissions must be in PDF format, and we encourage authors to follow the style guidelines of NIPS 2015 at: https://nips.cc/Conferences/2015/PaperInformation/AuthorSubmissionInstructions

Submissions must be made through: https://cmt.research.microsoft.com/MMML2015/

Submissions will be reviewed for relevance, quality, and novelty. Accepted submissions will be presented as posters during the poster session (before the lunch break), and a handful will additionally be given a short talk.

ORGANIZERS

· Louis-Philippe Morency (morency@cs.cmu.edu)
· Tadas Baltrušaitis (tbaltrus@cs.cmu.edu)
· Aaron Courville (aaron.courville@umontreal.ca)
· KyungHyun Cho (kyunghyun.cho@nyu.edu)
