Modern representation learning techniques such as deep neural networks have had a major impact on a wide range of tasks, achieving new state-of-the-art performance on benchmarks with little or no feature engineering. However, these gains are often difficult to translate into real-world settings because they usually require massive hand-labeled training sets. Collecting such training sets by hand is often infeasible due to the time and expense of labeling data; moreover, hand-labeled training sets are static and must be completely relabeled when real-world modeling goals change.
Increasingly popular approaches for addressing this scarcity of labeled data include weak supervision---higher-level approaches to labeling training data that are cheaper and/or more efficient, such as distant or heuristic supervision, constraints, or noisy labels; multi-task learning, to effectively pool limited supervision signal; data augmentation strategies to express class invariances; and the introduction of other forms of structured prior knowledge. An overarching goal of such approaches is to use domain knowledge and data resources provided by subject matter experts, but to solicit them in higher-level, lower-fidelity, or more opportunistic ways.
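To make the weak supervision idea concrete, here is a minimal sketch in the spirit of heuristic supervision generally, not of any particular system: expert-written labeling functions cast noisy votes that are combined, here by a simple majority vote, into training labels for a downstream model. All rule and function names below are our own illustrative choices.

```python
import numpy as np

# Vote values: each labeling function (LF) either votes for a class
# or abstains. These rules are hypothetical examples for sentiment.
POS, NEG, ABSTAIN = 1, -1, 0

def lf_contains_great(text):
    # Heuristic supervision: a simple keyword rule from a domain expert.
    return POS if "great" in text.lower() else ABSTAIN

def lf_contains_terrible(text):
    return NEG if "terrible" in text.lower() else ABSTAIN

def lf_very_short(text):
    # A weaker prior: very short reviews tend to be negative.
    return NEG if len(text.split()) < 3 else ABSTAIN

LFS = [lf_contains_great, lf_contains_terrible, lf_very_short]

def weak_labels(texts):
    """Combine noisy LF votes per example by majority vote.

    Ties and all-abstain cases yield ABSTAIN (0); the resulting labels
    are noisy but cheap, and can be used to train a downstream model.
    """
    votes = np.array([[lf(t) for lf in LFS] for t in texts])
    return np.sign(votes.sum(axis=1))

docs = ["great acting, great script", "terrible", "an ok film overall"]
print(weak_labels(docs))  # -> [ 1 -1  0]
```

Majority vote is only the simplest aggregation strategy; much of the research discussed at the workshop concerns learning to reweight and de-bias such noisy sources instead.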
In this workshop, we examine these increasingly popular and critical techniques in the context of representation learning. While approaches for representation learning in the large labeled sample setting have become increasingly standardized and powerful, the same is not true in limited labeled data and/or weakly supervised settings. Developing new representation learning techniques that address these challenges is an exciting emerging direction for research [e.g., 1, 2]. Learned representations have been shown to lead to models that are robust to noisy inputs, and are an effective way of exploiting unlabeled data and transferring knowledge to new tasks where labeled data is sparse.
This workshop aims to bring together researchers approaching these challenges from a variety of angles. Specific topics of interest include:
- Learning representations to reweight and de-bias weak supervision
- Representations that enforce structured prior knowledge (e.g., invariances, logic constraints)
- Learning representations for higher-level supervision from subject matter experts
- Representations for zero- and few-shot learning
- Representation learning for multi-task learning in the limited labeled data setting
- Representation learning for data augmentation (see the sketch after this list)
- Theoretical or empirically observed properties of representations in the above contexts
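As one illustration of the data augmentation topic above, the sketch below shows hypothetical invariance-preserving transforms (horizontal flips, small translations) multiplying a small labeled image set without additional annotation. The function names and parameters are our own illustrative assumptions, not a prescribed recipe.

```python
import numpy as np

def augment(image, rng):
    """Random transforms encoding assumed class invariances: a
    horizontally flipped or slightly shifted image keeps its label."""
    if rng.random() < 0.5:
        image = np.fliplr(image)          # horizontal-flip invariance
    shift = int(rng.integers(-2, 3))      # small-translation invariance
    return np.roll(image, shift, axis=1)

def augmented_batch(images, labels, rng, copies=4):
    """Expand a small labeled set: each augmented view reuses the
    original label, adding examples at no extra labeling cost."""
    xs = [augment(img, rng) for img in images for _ in range(copies)]
    ys = [y for y in labels for _ in range(copies)]
    return np.stack(xs), np.array(ys)

rng = np.random.default_rng(0)
x = rng.random((8, 28, 28))       # toy stand-in for 8 labeled images
y = rng.integers(0, 2, size=8)    # binary labels
bx, by = augmented_batch(x, y, rng)
print(bx.shape, by.shape)         # (32, 28, 28) (32,)
```

Which transforms are label-preserving is itself domain knowledge from subject matter experts, which is what ties data augmentation to the broader weak supervision theme of the workshop.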
The second LLD workshop continues the conversation from the 2017 NeurIPS Workshop on Learning with Limited Labeled Data. Our goal is to once again bring together researchers interested in this growing field. With funding support, we are excited to again offer best paper awards for the most outstanding submissions. We will also have seven distinguished and diverse speakers representing a range of machine learning perspectives, a panel on the most promising directions for future research, and a discussion session on developing new benchmarks and other evaluations for these techniques.
The LLD workshop organizers are also committed to fostering a strong sense of inclusion for all groups at this workshop. To support this concretely, in addition to the paper awards, funding will be available for several travel awards specifically for members of traditionally underrepresented groups.
Invited Talk 1, by ...
Invited Talk 2, by ...
Contributed Talk 1, by ...
1-minute poster spotlights
Poster Session 1/Coffee break
Invited Talk 4, by ...
Invited Talk 5, by ...
Panel: “Domain Knowledge vs. Sample Efficiency: Where Should We Focus?”
1-minute poster spotlights
Poster Session 2/Coffee break
Contributed Talk 2, by ...
Invited Talk 6, by ...
Invited Talk 7, by ...
Contributed Talk 3, by ...
Contributed Talk 4, by ...
Discussion Session: New Evaluations and Benchmarks with Limited Labeled Data
ICLR 2019, New Orleans, Louisiana
Ernest N. Morial Convention Center, New Orleans
Submission and important dates
Please format your papers using the standard ICLR 2019 style files. The page limit is 4 pages (excluding references). Please do not include author information; submissions must be anonymized. All accepted papers will be presented as posters (poster dimensions: TBC), with exceptional submissions also presented as oral talks.
We are pleased to announce that our sponsors, Google, LumenAI, and SFDS, will provide both best paper awards (two awards of $500 each) and travel support for exceptional submissions.
Organizers
- Isabelle Augenstein
- Stephen Bach
- Matthew Blaschko
- Eugene Belilovsky
- Edouard Oyallon
- Anthony Platanios
- Alex Ratner
- Christopher Ré
- Xiang Ren
- Paroma Varma
We would like to thank all our reviewers.