
TASK-CV 2015 : 2nd Workshop on Transferring and Adapting Source Knowledge in Computer Vision


Link: http://www.cvc.uab.es/adas/task-cv2015
 
When Dec 17, 2015 - Dec 18, 2015
Where Santiago de Chile
Submission Deadline Sep 14, 2015
Notification Due Sep 27, 2015
Final Version Due Oct 12, 2015
Categories    computer vision   machine learning   domain adaptation   transfer learning
 

Call For Papers

TASK-CV 2015 - 2nd Workshop on Transferring and Adapting Source Knowledge in Computer Vision
Santiago de Chile, TBA
In conjunction with ICCV 2015

Web: http://www.cvc.uab.es/adas/task-cv2015
********************************************************

_________________
IMPORTANT DATES
_________________

Submission deadline: September 14th, 2015
Author notification: September 27th, 2015
Camera-ready: October 12th 2015
Workshop: TBA (17th or 18th December 2015)
________________
CALL FOR PAPERS
________________

This workshop aims to bring together computer vision researchers interested in domain adaptation and knowledge transfer techniques, which are receiving increasing attention in computer vision research.

During the first decade of the 21st century, progress in machine learning has had an enormous impact on computer vision. The ability to learn models from data has become a fundamental paradigm in image classification, object detection, semantic segmentation, and tracking.

A key ingredient of this success has been the availability of annotated visual data for both training and testing, together with well-established protocols for evaluating results.

However, annotating visual information is usually a tedious, error-prone human activity. This limits our ability to address new tasks and/or operate in new domains. To scale to such situations, it is worth finding mechanisms to reuse the available annotations or the models learned from them.

This aim challenges traditional machine learning theory, whose standard assumptions are that sufficient labeled data are available for each task and that the training data distribution matches the test distribution.

Therefore, transferring and adapting source knowledge (in the form of annotated data or learned models) to perform new tasks and/or operate in new domains has recently emerged as a key challenge in developing computer vision methods that are reliable across domains and tasks.

Accordingly, TASK-CV aims to bring together research in transfer learning and domain adaptation for computer vision as a workshop hosted by ICCV 2015. We invite the submission of research contributions such as:

- TL/DA learning methods for challenging paradigms such as unsupervised, incremental, or online learning.
- TL/DA focusing on specific visual features, models or learning algorithms.
- TL/DA jointly applied with other learning paradigms such as reinforcement learning.
- TL/DA in the era of convolutional neural networks (CNNs), adaptation effects of fine-tuning, regularization techniques, transfer of architectures and weights, etc.
- TL/DA focusing on specific computer vision tasks (e.g., image classification, object detection, semantic segmentation, recognition, retrieval, tracking, etc.) and applications (biomedical, robotics, multimedia, autonomous driving, etc.).
- Comparative studies of different TL/DA methods.
- Working frameworks with appropriate CV-oriented datasets and evaluation protocols to assess TL/DA methods.
- Transferring part representations between categories.
- Transferring tasks to new domains.
- Solving domain shift due to sensor differences (e.g., low-vs-high resolution, power spectrum sensitivity) and compression schemes.
- Datasets and protocols for evaluating TL/DA methods.

This is not a closed list; we welcome other interesting and relevant research for TASK-CV.


____________
SUBMISSION
____________

Authors should take into account the following:
- The submission site is https://cmt.research.microsoft.com/TASK2015/.
- Contributions will consist of 4-page Extended Abstracts (EAs). They may summarize papers presented in the main conference or describe ongoing work that may or may not later be submitted to another venue. In any case, authors of accepted EAs will be encouraged to submit an extended version to the journal special issue that we will organize.
- The paper format is the same as that of the ICCV main conference.
- Submissions will be rejected without review if they exceed 4 pages (excluding references) or violate the double-blind policy.
- Manuscript templates can be found at the main conference website: http://pamitc.org/iccv15/author_guidelines.php
- Accepted papers will be linked on the TASK-CV webpage.

_____________________
BEST PAPER
_____________________

TASK-CV will award a prize for the best student paper of the workshop, voted on by the program committee. More details will be provided on the workshop webpage.

_________
Contact
_________

David Vazquez (dvazquez@cvc.uab.es)
Antonio M. Lopez (antonio@cvc.uab.es)
