
JMLR 2015 : JMLR Special Topic on Multi-Task Learning, Domain Adaptation and Transfer Learning


 
Submission Deadline May 15, 2015
 

Call For Papers

Call For Papers - EXTENDED DEADLINE
JMLR Special Topic on
Multi-Task Learning, Domain Adaptation and Transfer Learning

Guest Editors: Urun Dogan, Marius Kloft, Francesco Orabona, Tatiana Tommasi

**************************************************************
Due to several requests we extended the submission deadline to
15 May 2015
**************************************************************

In recent years there has been a surge of activity in the areas of domain adaptation, transfer learning and multi-task learning. All three emerged as ways to better exploit the data available at training time, often driven by the need to cope with limited information, and they have since grown rapidly in several directions and found numerous applications. Today, new open research questions pose the challenge. On one side, the literature still lacks a joint theoretical framework covering all of them; instead, many theoretical formulations model regimes that are rarely used in practice (e.g. adaptive methods that store all the source samples). On the other, in the “big data” era, existing methods need to be extended to handle large amounts of data that no longer lack size but may lack quality or may change continuously over time.

This special topic is intended to gather contributions that indicate new directions and innovative views, and to serve as an outlet for recent advances in learning in such environments. We welcome both theoretical advances in this field and detailed reports on applications.

Topics of interest include:
Learning the task similarities/dissimilarities from large amounts of data
Regularization strategies in multi-task learning
Domain adaptation and dataset bias on large data collections
Deep learning for domain adaptation, transfer and multi-task applications
Incremental, online and active transfer for open-ended learning
Innovative adaptive procedures with applications, e.g., in computer vision or computational biology
Domain adaptation theory
Multi-task learning with a large number of tasks and a small number of examples per task
Reinforcement learning and adaptation
Applications of multi-task learning to natural language processing tasks such as machine translation and syntactic or semantic parsing.

Important Dates:
Submission: 15 May 2015
Decision: 15 August 2015
Final Version Due: 1 October 2015

Submission Procedure:
Authors are kindly invited to follow the standard JMLR format and submission procedure. Submissions are limited to 30 pages. Please include a note stating that your submission is for the special topic on Multi-Task Learning, Domain Adaptation and Transfer Learning.

For further details or enquiries, please contact the guest editors: mtldatl@gmail.com
