LQ @ ECML-PKDD 2023: 3rd International Workshop on Learning to Quantify: Methods and Applications
Link: https://lq-2023.github.io/

Call For Papers | |||||||||||||||
Learning to Quantify (LQ), also known as "quantification", "class prior estimation", or "unfolding", is the task of training class prevalence estimators via supervised learning. In other words, the task of these trained models is to estimate, given an unlabelled sample of data items and a set of classes, the prevalence (i.e., relative frequency) of each such class in the sample.
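As a concrete illustration of the task, the sketch below contrasts the naive “classify and count” baseline (discussed further down) with one standard correction from the quantification literature, “adjusted classify and count” (Forman, 2005). This is only an illustrative sketch, assuming scikit-learn and NumPy are available; all function and variable names are placeholders, not part of this call.

```python
# Illustrative sketch only: naive "classify and count" (CC) vs. the
# "adjusted classify and count" (ACC) correction (Forman, 2005).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def classify_and_count(clf, X):
    """CC: classify every item and report the fraction predicted positive."""
    return clf.predict(X).mean()

def adjusted_classify_and_count(clf, X, tpr, fpr):
    """ACC: invert P(pred=1) = tpr * p + fpr * (1 - p) to recover the
    true prevalence p from the raw predicted-positive rate."""
    raw = classify_and_count(clf, X)
    if tpr == fpr:  # degenerate classifier; no correction is possible
        return raw
    return float(np.clip((raw - fpr) / (tpr - fpr), 0.0, 1.0))

# Synthetic binary data with ~30% positives (placeholder for real data).
X, y = make_classification(n_samples=2000, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Estimate the classifier's true/false positive rates on labelled held-out
# data (in practice, on data separate from the sample being quantified).
pred = clf.predict(X_te)
tpr = pred[y_te == 1].mean()
fpr = pred[y_te == 0].mean()

# Treat X_te as the unlabelled sample whose class prevalence we must estimate.
print("classify & count:", classify_and_count(clf, X_te))
print("adjusted C&C:    ", adjusted_classify_and_count(clf, X_te, tpr, fpr))
print("true prevalence: ", y_te.mean())
```

When the class distribution of the unlabelled sample differs from that of the training data, the raw count inherits the classifier's bias, while the adjusted estimate corrects for it; this is the failure mode referred to below when noting that “classify and count” may yield poor quantification accuracy.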
LQ is interesting in all applications of classification in which the final goal is not determining which class (or classes) individual unlabelled data items belong to, but estimating the percentages of data items that belong to the classes of interest, i.e., estimating the distribution of the unlabelled data items across the classes. Disciplines whose interest in labelling data items lies at the aggregate level (rather than at the individual level) include the social sciences, political science, market research, ecological modelling, experimental physics, and epidemiology. While LQ may in principle be solved by classifying each data item in the sample and counting how many such items have been labelled with a certain class, it has been shown that this “classify and count” method may yield poor quantification accuracy. As a result, quantification is no longer considered a mere by-product of classification, and has evolved into a task of its own.

The goal of this workshop is to bring together all researchers interested in methods, algorithms, evaluation measures, and methodologies for LQ, as well as practitioners interested in their practical application to managing large quantities of data. We seek papers on any of the following topics, which will form the main themes of the workshop:
- Binary, multiclass, multilabel, and ordinal LQ
- Supervised algorithms for LQ
- Semi-supervised / transductive LQ
- Deep learning for LQ
- Representation learning for LQ
- LQ and dataset shift
- Evaluation measures for LQ
- Experimental protocols for the evaluation of LQ
- Quantification of streaming data
- Cost-sensitive quantification
- Improving classifier performance via LQ
- Novel applications of LQ
and other topics of relevance to LQ.

Two categories of papers are of interest:
- papers reporting original, unpublished research;
- papers published in 2023, currently under submission, or accepted in 2023 at other workshops, conferences, or journals, provided this double submission does not violate the rules of those venues.

Papers should be submitted (specifying which of the above categories they belong to) via the EasyChair system at https://easychair.org/conferences/?conf=lq2023

Papers should be formatted according to the same format as the main ECML-PKDD 2023 conference, and should be at most 16 pages long (including references); this is only an upper bound, and contributions of any length up to it will be considered.

The workshop will be a hybrid event, but authors of accepted papers are strongly encouraged to present their work in person. At least one author of each accepted paper must register and present the work.

The proceedings of the workshop will not be formally published, so as to allow authors to resubmit their work to other conferences. Informal proceedings will be published on the workshop website; for each accepted paper, it will be left to the authors' discretion whether to include it in these proceedings.