NEUCOM 2014 : Neurocomputing Special Issue: Advances in Learning with Label Noise


Link: http://www.journals.elsevier.com/neurocomputing/call-for-papers/special-issue-on-advances-in-learning-with-label-noise/
 
Submission Deadline Mar 1, 2014
Notification Due Jul 15, 2014
Categories    machine learning
 

Call For Papers

Call for Papers: Neurocomputing Special Issue on "Advances in Learning with Label Noise"

AIMS AND SCOPE

Label noise is an important issue in classification. Obtaining completely reliable labels is both expensive and difficult, yet traditional classifiers expect a perfectly labelled training set. In real-world data sets, however, the available labels often contain mistakes. Mislabelling may occur for several reasons, including lack of information, hasty labelling by non-experts, the subjective nature of class memberships, expert errors, and communication problems. Furthermore, label noise may take several different forms: labelling errors may occur at random, depend on particular values of the data features, or be adversarial, and they may affect all data classes equally or asymmetrically. A large body of literature on the effects of label noise shows that mislabelling may detrimentally affect classification performance, increase the complexity of learned models, and impair pre-processing tasks such as feature selection.
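To make these noise models concrete, here is a minimal sketch (in Python with NumPy; the function names, noise rates, and class counts are purely illustrative assumptions, not part of this call) of how symmetric (random) and asymmetric (class-conditional) label flips can be simulated on a training set:

    import numpy as np

    def flip_labels_symmetric(y, noise_rate, n_classes, rng):
        """Flip each label to a uniformly chosen other class with probability noise_rate."""
        y_noisy = y.copy()
        flip = rng.random(len(y)) < noise_rate
        # draw a replacement label guaranteed to differ from the original one
        shifts = rng.integers(1, n_classes, size=flip.sum())
        y_noisy[flip] = (y[flip] + shifts) % n_classes
        return y_noisy

    def flip_labels_asymmetric(y, rates, rng):
        """Flip labels with a per-class probability, e.g. class 0 mislabelled as class 1 only."""
        y_noisy = y.copy()
        for cls, (rate, target) in rates.items():
            idx = np.where(y == cls)[0]
            flip = idx[rng.random(len(idx)) < rate]
            y_noisy[flip] = target
        return y_noisy

    rng = np.random.default_rng(0)
    y = rng.integers(0, 3, size=1000)               # clean labels for 3 classes (toy data)
    y_sym = flip_labels_symmetric(y, 0.2, 3, rng)   # 20% symmetric (random) noise
    y_asym = flip_labels_asymmetric(y, {0: (0.3, 1)}, rng)  # 30% of class 0 flipped to class 1
    print((y != y_sym).mean(), (y != y_asym).mean())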

Many methods have been proposed to deal with label noise. Filter approaches aim to identify and remove mislabelled instances before training. Label-noise-sensitive algorithms deal with label noise during learning by modelling the label corruption process as part of modelling the data, and some existing methods have been modified to take label noise into account in an embedded fashion. The current literature on learning with label noise is a lively mixture of theoretical and experimental studies that clearly demonstrates both the complexity and the importance of the problem. Methods for handling mislabelled instances need to be flexible enough to accommodate label uncertainty, yet constrained enough to guide the learning process when deciding whether to trust a label or the classifier.
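As a rough sketch of the filter idea mentioned above (assuming scikit-learn as the toolkit; the base classifier, data set, and noise level are arbitrary illustrative choices), the following removes training instances whose labels disagree with out-of-fold predictions of a classifier:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict

    # toy binary data with 20% symmetric label noise injected for demonstration
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    rng = np.random.default_rng(0)
    noisy = rng.random(len(y)) < 0.2
    y_noisy = np.where(noisy, 1 - y, y)

    # classification filter: flag instances whose label disagrees with
    # the out-of-fold prediction of a base classifier
    pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y_noisy, cv=5)
    keep = pred == y_noisy
    X_filtered, y_filtered = X[keep], y_noisy[keep]

    print(f"kept {keep.sum()} of {len(y)} instances; "
          f"{(~keep & noisy).sum()} of {noisy.sum()} injected errors removed")

Such filters trade off removing genuinely mislabelled points against discarding correctly labelled but hard instances, which is one reason the flexibility/constraint balance above matters.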

This special issue aims to stimulate new research in the area of learning with label noise by providing a forum for authors to report on new advances and findings in this problem area. Topics of interest include, but are not limited to:

- new methods to deal with label noise;
- new applications where label noise must be taken into account;
- theoretical results about learning in the presence of label noise;
- experimental results which provide insight about existing methods;
- dealing with different types of label noise (random, non-random, malicious, or adversarial);
- conditions for the consistency of classification in the presence of label noise;
- label noise in high dimensional small sample settings;
- the issue of model meta-parameters/order selection in the presence of label noise;
- feature selection and dimensionality reduction in the presence of label noise;
- label-noise aware classification algorithms in static and dynamic scenarios;
- on-line learning with label noise;
- learning with side information to counter label noise;
- model assessment in the presence of label noise in test data.


SUBMISSION OF MANUSCRIPTS

If you intend to contribute to this special issue, please send a title and abstract of your contribution to the guest editors.

Authors should prepare their manuscript according to the Guide for Authors available at http://www.journals.elsevier.com/neurocomputing. All papers will be peer-reviewed following the Neurocomputing reviewing procedures. Authors must submit their papers electronically via the online manuscript submission system at http://ees.elsevier.com/neucom. To ensure that all manuscripts are correctly included in the special issue, it is important that authors select "SI: Learning with label noise" when they reach the "Article Type" step in the submission process.

For technical questions regarding the submission website, please contact the support office at Elsevier or the guest editors.

IMPORTANT DATES

Deadline of paper submission: 15 February 2014
Notification of acceptance: 15 July 2014

GUEST EDITORS

Benoît Frénay (Managing Guest Editor)
Université catholique de Louvain, Belgium
E-mail: benoit.frenay@uclouvain.be
Website: http://bfrenay.wordpress.com
Phone: +32 10 478133

Ata Kaban (Special Issue Guest Editor)
University of Birmingham, United Kingdom
E-mail: A.Kaban@cs.bham.ac.uk
Website: http://www.cs.bham.ac.uk/~axk
Phone: +44 121 41 42792
