Modern representation learning techniques like deep neural networks have had a major impact on a wide range of tasks, achieving new state-of-the-art performances on benchmarks using little or no feature engineering. However, these gains are often difficult to translate into real-world settings because they usually require massive hand-labeled training sets. Collecting such training sets by hand is often infeasible due to the time and expense of labeling data; moreover, hand-labeled training sets are static and must be completely relabeled when real-world modeling goals change.

Increasingly popular approaches for addressing this labeled data scarcity include weak supervision---higher-level approaches to labeling training data that are cheaper and/or more efficient, such as distant or heuristic supervision, constraints, or noisy labels; multi-task learning, to effectively pool limited supervision signal; data augmentation strategies to express class invariances; and the introduction of other forms of structured prior knowledge. An overarching goal of such approaches is to use domain knowledge and data resources provided by subject matter experts, but to solicit them in higher-level, lower-fidelity, or more opportunistic ways.

In this workshop, we examine these increasingly popular and critical techniques in the context of representation learning. While approaches for representation learning in the large labeled sample setting have become increasingly standardized and powerful, the same is not the case in the limited labeled data and/or weakly supervised case. Developing new representation learning techniques that address these challenges is an exciting emerging direction for research [e.g., 1, 2]. Learned representations have been shown to yield models that are robust to noisy inputs, and they are an effective way of exploiting unlabeled data and transferring knowledge to new tasks where labeled data is sparse.

In this workshop, we aim to bring together researchers approaching these challenges from a variety of angles. Specifically, this includes:

  • Learning representations to reweight and de-bias weak supervision
  • Representations to enforce structured prior knowledge (e.g., invariances, logic constraints)
  • Learning representations for higher-level supervision from subject matter experts
  • Representations for zero- and few-shot learning
  • Representation learning for multi-task learning in the limited labeled data setting
  • Representation learning for data augmentation
  • Theoretical or empirically observed properties of representations in the above contexts

The second LLD workshop continues the conversation from the 2017 NeurIPS Workshop on Learning with Limited Labeled Data. Our goal is to once again bring together researchers interested in this growing field. With funding support, we are excited to again organize best paper awards for the most outstanding submitted papers. We will also have seven distinguished and diverse speakers from a range of machine learning perspectives, a panel on the most promising directions for future research, and a discussion session on developing new benchmarks and other evaluations for these techniques.

The LLD workshop organizers are also committed to fostering a strong sense of inclusion for all groups at this workshop. To support this concretely, in addition to the paper awards, there will be funding for several travel awards specifically for members of traditionally underrepresented groups.

Our Sponsors

We warmly thank our generous sponsors for supporting this event!



Our Speakers


Anima Anandkumar



Luna Dong


Stefano Ermon

Stanford University


Chelsea Finn

Stanford University

Schedule Detail

The schedule is tentative.

  • 9.45 AM


  • 10.00 AM

    Invited Talk 1

    By Chelsea Finn
  • 10.30 AM

    Coffee Break

  • 11.00 AM

    Contributed Talk 1 by Or Litany
    SOSELETO: A Unified Approach to Transfer Learning and Training with Noisy Labels

  • 11.15 AM

    Invited Talk 2

    By Luna Dong
  • 11.45 AM

    30-second Poster Spotlights

  • 12.00 PM

    Poster Session 1 & Lunch Break

  • 3.15 PM

    Invited Talk 3

    By Anima Anandkumar
  • 3.45 PM

    Contributed Talk 2 by Suman Ravuri
    Seeing is Not Necessarily Believing: Limitations of BigGANs for Data Augmentation

  • 4.00 PM

    Coffee break

  • 4.30 PM

    Invited Talk 4

    By Stefano Ermon
  • 5.00 PM

    30-second Poster Spotlights

  • 5.15 PM

    Poster Session 2

  • 6.15 PM

    Award Ceremony

  • 6.25 PM

    Closing Remarks


ICLR 2019, New Orleans, Louisiana

Ernest N. Morial Convention Center, New Orleans

Submission and important dates

Please format your papers using the standard ICLR 2019 style files. The page limit is 4 pages (excluding references). Please do not include author information; submissions must be anonymized. All accepted papers will be presented as posters (poster dimensions: 36W x 48H inches, or 90 x 122 cm), with exceptional submissions also presented as oral talks.

We are pleased to announce that our sponsor LumenAI will provide best paper awards (two awards of $500 each), and that Google will provide travel support for exceptional submissions.

Submission deadline:

March 24, 2019, 11.59pm, GMT+1


Notification of acceptance:

April 14, 2019


Camera ready due:

May 4, 2019


There is a single registration for both the ICLR conference and workshops, available from iclr.cc.
Email organizing chairs: lld2019[at]googlegroups[dot]com
Submissions are reviewed through a confidential double-blind process.
We strongly encourage at least one author per submission to attend the workshop and present in person; however, due to registration difficulties this year, submissions with no attending authors will still be considered.

Accepted papers


SOSELETO: A Unified Approach to Transfer Learning and Training with Noisy Labels, Or Litany, Daniel Freedman, (OpenReview link), (Award!)
Unsupervised Functional Dependency Discovery for Data Preparation, Zhihan Guo, Theodoros Rekatsinas, (OpenReview link)
Split Batch Normalization: a Simple Trick for Semi-Supervised Learning under Domain Shift, Michał Zając, Konrad Żołna, Stanisław Jastrzębski, (OpenReview link)
Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables, Kate Rakelly*, Aurick Zhou*, Deirdre Quillen, Chelsea Finn, Sergey Levine, (OpenReview link), (arXiv link)
EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks, Jason Wei, Kai Zou, (OpenReview link)
Learning Entity Representations for Few-Shot Reconstruction of Wikipedia Categories, Jeffrey Ling, Nicholas FitzGerald, Livio Baldini Soares, David Weiss, Tom Kwiatkowski, (OpenReview link)
Passage Ranking with Weak Supervision, Peng Xu, Xiaofei Ma, Ramesh Nallapati, Bing Xiang, (OpenReview link)
Data Augmentation for Rumor Detection Using Context-Sensitive Neural Language Model With Large-Scale Credibility Corpus, Sooji Han, Jie Gao, Fabio Ciravegna, (OpenReview link)
Unifying semi-supervised and robust learning by mixup, Ryuichiro Hataya, Hideki Nakayama, (OpenReview link)
Data for free: Fewer-shot algorithm learning with parametricity data augmentation, Owen Lewis, Katherine Hermann, (OpenReview link)
Search-Guided, Lightly-Supervised Training of Structured Prediction Energy Networks, Amirmohammad Rooshenas, Dongxu Zhang, Gopal Sharma, Andrew McCallum, (OpenReview link)

Weakly Semi-Supervised Neural Topic Models, Ian Gemp, Ramesh Nallapati, Ran Ding, Feng Nan, Bing Xiang, (OpenReview link)
Label-Efficient Audio Classification through Multitask Learning and Self-Supervision, Tyler Lee, Ting Gong, Suchismita Padhy, Andrew Rouditchenko, Anthony Ndirango, (OpenReview link)
Online Semi-Supervised Learning with Bandit Feedback, Mikhail Yurochkin, Sohini Upadhyay, Djallel Bouneffouf, Mayank Agarwal, Yasaman Khazaeni, (OpenReview link)
Enhancing experimental signals in single-cell RNA-sequencing data using graph signal processing, Daniel B. Burkhardt, Jay S. Stanley III, Ana Luisa Pertigoto, Scott A. Gigante, Kevan C. Herold, Guy Wolf, Antonio J. Giraldez, David van Dijk, Smita Krishnaswamy, (OpenReview link)
Training Neural Networks for Aspect Extraction Using Descriptive Keywords Only, Giannis Karamanolakis, Daniel Hsu, Luis Gravano, (OpenReview link)
Disentangling Factors of Variations Using Few Labels, Francesco Locatello, Stefan Bauer, Bernhard Schölkopf, Olivier Bachem, (OpenReview link)
Learning from Samples of Variable Quality, Mostafa Dehghani, Arash Mehrjou, Stephan Gouws, Jaap Kamps, Bernhard Schölkopf, (OpenReview link)
Sub-Task Discovery with Limited Supervision: A Constrained Clustering Approach, Phillip Odom, Aaron Keech, Zsolt Kira, (OpenReview link)
Supervised Contextual Embeddings for Transfer Learning in Natural Language Processing Tasks, Mihir Kale, Aditya Siddhant, Sreyashi Nag, Radhika Parik, Anthony Tomasic, Matthias Grabmair, (OpenReview link)
Few-Shot Regression via Learned Basis Functions, Yi Loo, Swee Kiat Lim, Gemma Roig, Ngai-Man Cheung, (OpenReview link)
Learning Graph Neural Networks with Noisy Labels, Hoang NT, Jun Jin Choong, Tsuyoshi Murata, (OpenReview link)
Unsupervised Scalable Representation Learning for Multivariate Time Series, Jean-Yves Franceschi, Aymeric Dieuleveut, Martin Jaggi, (OpenReview link)


Seeing is Not Necessarily Believing: Limitations of BigGANs for Data Augmentation, Suman Ravuri, Oriol Vinyals, (OpenReview link), (Award!)
Improved Self-Supervised Deep Image Denoising, Samuli Laine, Jaakko Lehtinen, Timo Aila, (OpenReview link)
Adversarial Feature Learning under Accuracy Constraint for Domain Generalization, Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo, (OpenReview link)
Online Meta-Learning, Chelsea Finn, Aravind Rajeswaran, Sham Kakade, Sergey Levine, (OpenReview link)
De-biasing Weakly Supervised Learning by Regularizing Prediction Entropy, Dean Wyatte, (OpenReview link)
Learning Spatial Common Sense with Geometry-Aware Recurrent Networks, Hsiao-Yu Tung, Ricson Cheng, Katerina Fragkiadaki, (OpenReview link)
Learnability for the Information Bottleneck, Tailin Wu, Ian Fischer, Isaac Chuang, Max Tegmark, (OpenReview link)
Unsupervised Continual Learning and Self-Taught Associative Memory Hierarchies, James Smith, Seth Baer, Zsolt Kira, Constantine Dovrolis, (OpenReview link)
Explanation-Based Attention for Semi-Supervised Deep Active Learning, Denis Gudovskiy, Alec Hodgkinson, Takuya Yamaguchi, Sotaro Tsukizawa, (OpenReview link)
Improving Sample Complexity with Observational Supervision, Khaled Saab, Jared Dunnmon, Alexander Ratner, Daniel Rubin, Christopher Re, (OpenReview link)

Cross-Linked Variational Autoencoders for Generalized Zero-Shot Learning, Edgar Schönfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, Zeynep Akata, (OpenReview link)
Efficient Receptive Field Learning by Dynamic Gaussian Structure, Evan Shelhamer, Dequan Wang, Trevor Darrell, (OpenReview link)
Adversarial Learning of General Transformations for Data Augmentation, Saypraseuth Mounsaveng, David Vazquez, Ismail Ben Ayed, Marco Pedersoli, (OpenReview link)
Reference-based Variational Autoencoders, Adrià Ruiz, Oriol Martinez, Xavier Binefa, Jakob Verbeek, (OpenReview link)
Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment, Yifan Wu, Ezra Winston, Divyansh Kaushik, Zachary Lipton, (OpenReview link)
Spatial Broadcast Decoder: A Simple Architecture for Disentangled Representations in VAEs, Nick Watters, Loic Matthey, Chris P. Burgess, Alexander Lerchner, (OpenReview link)
Adaptive Masked Weight Imprinting for Few-Shot Segmentation, Mennatullah Siam, Boris Oreshkin, (OpenReview link)
Heuristics for Image Generation from Scene Graphs, Anonymous, (OpenReview link)
Train Neural Network by Embedding Space Probabilistic Constraint, Kaiyuan Chen, Zhanyuan Yin, (OpenReview link)
Invariant Feature Learning by Attribute Perception Matching, Yusuke Iwasawa, Kei Akuzawa, Yutaka Matsuo, (OpenReview link)
Enhancing Generalization of First-Order Meta-Learning, Mirantha Jayathilaka, (OpenReview link)
Adaptive Cross-Modal Few-Shot Learning, Chen Xing, Negar Rostamzadeh, Boris N. Oreshkin, Pedro O. Pinheiro, (OpenReview link)
Data Interpolating Prediction: Alternative Interpretation of Mixup, Takuya Shimada, Shoichiro Yamaguchi, Kohei Hayashi, Sosuke Kobayashi, (OpenReview link)
A Pseudo-Label Method for Coarse-to-Fine Multi-Label Learning with Limited Supervision, Cheng-Yu Hsieh, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama, (OpenReview link)


Our Organizers

  • Isabelle Augenstein
  • Stephen Bach
  • Matthew Blaschko
  • Eugene Belilovsky
  • Edouard Oyallon
  • Anthony Platanios
  • Alex Ratner
  • Christopher Re
  • Xiang Ren
  • Paroma Varma


We would like to thank all our reviewers.