LAWS 2012: First International Workshop on Learning with Weak Supervision
Call For Papers
in conjunction with ACML 2012, November 4, 2012 - Singapore
**Authors of selected LAWS'12 papers will be invited to submit extended versions to a follow-on special issue of Elsevier's Neurocomputing journal on the topic of Learning with Weak Supervision**
BACKGROUND AND MOTIVATION
Supervision information encodes the semantics and regularities of the learning problem to be addressed, and thus plays a key role in the success of many learning systems. Traditional supervised learning methods adopt the strong supervision assumption, i.e. training examples are assumed to carry sufficient and explicit supervision information to induce prediction models with good generalization ability. However, due to the various constraints imposed by the physical environment, problem characteristics, and resource limitations, it is difficult or even infeasible to obtain strong supervision information in many real-world applications.
Recently, learning with weak supervision has attracted much attention within the machine learning community, and a number of weakly supervised learning frameworks have emerged. To name a few: semi-supervised learning studies the problem where supervision information is available only for a small number of labeled instances; PU learning studies the problem where supervision information is available only for positive instances; multi-instance learning studies the problem where supervision information is available only at the level of bags (sets of instances) rather than at the level of individual instances; and constrained clustering studies the problem where supervision information is available only in the form of a few "must-link" and "cannot-link" constraints.
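As a toy illustration (not part of the call itself), the self-training flavor of semi-supervised learning sketched above can be written in a few lines of Python: a tiny labeled set is grown by repeatedly labeling the unlabeled point closest to the current labeled pool. The data, seeds, and 1-nearest-neighbor rule are illustrative assumptions, not a specific method from the workshop.

```python
import random

random.seed(1)

# Two well-separated 1-D clusters; only one point per class is labeled
# (weak supervision), the rest are unlabeled.
labeled = [(-2.0, 0), (2.0, 1)]                        # (feature, label)
unlabeled = [random.gauss(m, 0.5) for m in (-2, 2) for _ in range(20)]

# Self-training: repeatedly pick the unlabeled point closest to any
# already-labeled point, label it by its nearest labeled neighbor,
# and add it to the labeled pool.
while unlabeled:
    x = min(unlabeled, key=lambda u: min(abs(u - lx) for lx, _ in labeled))
    _, y = min(labeled, key=lambda p: abs(p[0] - x))   # nearest labeled neighbor
    labeled.append((x, y))
    unlabeled.remove(x)

# With well-separated clusters, all points near -2 end up with label 0.
print(sorted(set(y for x, y in labeled if x < 0)))     # [0]
```

The propagation order matters: labeling the most confident (closest) point first is what lets the two initial labels spread outward through each cluster without crossing the gap between them.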
It is interesting to note that some learning problems which seem to have strong supervision information actually suffer from insufficient supervision. For instance, in multi-label learning, each object is represented by a single instance while being associated with multiple class labels. Formally speaking, multi-label learning learns a mapping from the instance space to the power set of the label space. Therefore, it can essentially be regarded as a single-label multi-class learning problem where each label subset corresponds to a new class. Although the supervision information for each instance looks strong, the supervision information available for the whole learning problem is weak due to the huge (exponentially sized) output space.
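The exponential output space can be made concrete with a short snippet (the label names are made up for illustration): with q candidate labels, every subset of the label set is a possible output, so the induced single-label problem has 2**q "classes".

```python
from itertools import chain, combinations

# Four illustrative candidate labels for a document-tagging task.
labels = ["sports", "politics", "finance", "travel"]

# The power set of the label space: every possible label subset,
# from the empty set up to the full set.
power_set = list(chain.from_iterable(
    combinations(labels, r) for r in range(len(labels) + 1)))

print(len(power_set))   # 2**4 = 16 possible label subsets
print(2 ** 20)          # with just 20 labels: 1048576 "classes"
```

Even a modest label vocabulary therefore yields far more "classes" than any realistic training set can cover, which is why treating multi-label learning as plain multi-class learning leaves most outputs unsupervised.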
Yet another kind of weak supervision arises when the supervision information comes not from the target domain or task itself but from a related one, where the distribution of the training data (and possibly of the labels as well) differs from that of the target (main) application. Examples of such learning problems include domain adaptation, transfer learning, and multi-task learning. In domain adaptation, for example, training data is abundant in one or more source domains but scarce or nonexistent in the target domain. The challenge is how to make the best use of the training data in the source domain(s) while still accounting for the differences between the source domain(s) and the target domain.
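One classical way to reuse source-domain data under such a distribution shift is importance weighting: reweight each source sample by the density ratio p_target(x)/p_source(x). The sketch below (an illustration with known 1-D Gaussian densities, not a method prescribed by the call; in practice the ratio must itself be estimated) recovers a target-domain expectation using only source-domain samples.

```python
import math
import random

random.seed(0)

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Source and target domains share labels but differ in input distribution
# (covariate shift): source ~ N(0, 1), target ~ N(1, 1).
mu_src, mu_tgt, sigma = 0.0, 1.0, 1.0
xs = [random.gauss(mu_src, sigma) for _ in range(5000)]   # source samples only

# Importance weight w(x) = p_target(x) / p_source(x).
def weight(x):
    return gaussian_pdf(x, mu_tgt, sigma) / gaussian_pdf(x, mu_src, sigma)

# Self-normalized importance-weighted estimate of E_target[x],
# computed without drawing a single target-domain sample.
ws = [weight(x) for x in xs]
estimate = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
print(round(estimate, 2))   # close to the target mean 1.0
```

The unweighted source average would be near 0.0; the weights shift the estimate toward the target distribution, at the cost of higher variance as the two domains grow further apart.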
AIMS, SCOPE AND FORMAT
Although machine learning researchers have studied each of the above topics extensively in recent years, it would be of great interest to the community to find common techniques that work across them and to draw inspiration from one another. This workshop at ACML'12 aims to bring together researchers and practitioners who work on various aspects of learning with weak supervision, to discuss the state of the art and open problems, to share expertise and exchange ideas, and to identify promising new research directions. The workshop will feature oral presentations of the accepted contributions, invited talks, and a discussion session to allow for a more interactive and engaging experience.
TOPICS OF INTEREST (non-exhaustive list)
* Learning from data with incomplete labels
- Semi-supervised learning
- PU learning
- Multi-instance learning
- Constrained clustering
* Learning from data with multiple labels
- Multi-label learning
- Partial label learning (learning from candidate labeling sets)
- Multi-instance multi-label learning (MIML)
* Learning from data in different distributions
- Domain adaptation
- Transfer learning
- Multi-task learning
IMPORTANT DATES
* Full paper submission due : September 10, 2012
* Acceptance notification : September 30, 2012
* Camera-ready paper due : October 7, 2012
* Date of LAWS'12 workshop : November 4, 2012
SUBMISSION
Papers must be in English and formatted according to the ACML 2012 style files (available at http://acml12.comp.nus.edu.sg/uploads/Main/ACML2012.zip). The maximum length of papers is 16 pages in this format. At the time of submission, papers should not be under review or accepted for publication elsewhere.
Each paper will undergo a rigorous single-blind review by at least two reviewers, and its quality will be evaluated based on novelty of content, clarity of presentation, and thoroughness of experiments, among other aspects. Accepted papers will be included in the Working Notes of LAWS'12 and made publicly available via the workshop's website.
The submission system of LAWS'12 is managed by EasyChair. To submit your contribution, please visit https://www.easychair.org/conferences/?conf=laws12
PROGRAM COMMITTEE
* Deng Cai, Zhejiang University
* Xiaoli Li, Institute for Infocomm Research
* Sinno Jialin Pan, Institute for Infocomm Research
* Jie Wang, Arizona State University
* Dacheng Tao, University of Technology, Sydney
* Ivor Tsang, Nanyang Technological University
* Yiming Ying, University of Exeter
* Shipeng Yu, Siemens Medical Solutions USA, Inc.
* Jerry Zhu, University of Wisconsin-Madison
* Xingquan Zhu, University of Technology, Sydney
ORGANIZERS
Southeast University, China
Birkbeck, University of London, UK
Singapore Management University, Singapore