Despite their empirical successes in computer vision, deep networks often demand "big" and carefully labeled training data. When applied to complex problems in the real visual world, their performance is limited because both data and labels can be notoriously difficult or costly to obtain, or may come in noisy, weak, or long-tailed forms. For example, collecting image data in many scientific and engineering disciplines (astronomy, materials science, geoscience, medicine, and so on) often hinges on expensive, high-stakes experiments. Further, data labeling in these applications is tedious to scale up: its complexity demands highly skilled professionals, which rules out cost-effective solutions such as crowdsourcing. Moreover, large-scale crowdsourced labeling is often infeasible for proprietary or sensitive data. Worse still, additional labels are needed whenever trained models face changes in their operating environments and must be adapted; for many problems, the labeled data required to adapt a model to a new environment approaches the amount required to train from scratch. Continuous data and label collection is therefore also needed when systems exhibit non-stationary properties or operate in varying environments.
In this tutorial, we address the grand challenge of data- and label-efficient visual learning in realistic, imperfect visual environments. We focus on a comprehensive suite of state-of-the-art techniques that tackle this problem at multiple levels, including unsupervised/self-supervised learning, weakly supervised learning, long-tailed visual recognition, domain adaptation, and meta learning. We will also demonstrate these techniques in representative computer vision applications such as (interactive) segmentation, autonomous driving, and medical image understanding. The organizers will share their extensive experience on this topic and provide links to resources such as relevant datasets and source code.
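To give a concrete flavor of one of the techniques above, the sketch below shows a minimal NumPy implementation of a contrastive self-supervised objective (the SimCLR-style NT-Xent loss). It is an illustrative sketch only, not code from the tutorial materials; the function name, dimensions, and temperature value are our own assumptions.

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z_a, z_b: (N, D) embeddings of two augmented views of the same N images.
    Positive pairs are (z_a[i], z_b[i]); every other embedding in the
    batch serves as a negative. Illustrative sketch, not tutorial code.
    """
    n = z_a.shape[0]
    z = np.concatenate([z_a, z_b], axis=0)            # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # (2N, 2N) scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # Each sample i's positive sits at index i + n (mod 2n).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

The loss is small when the two views of each image embed close together and far from all other images, which is how such methods learn representations without any labels.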
The Economist once published a story titled, "The world's most valuable resource is no longer oil, but data." However, acquiring perfect data is often inefficient, or even hopeless, in research areas and applications such as segmentation and satellite, agricultural, or medical imagery. Our aim is to efficiently leverage existing data, whether rich or scarce, and whether labeled, weakly labeled, unlabeled, noisy, or subject to domain gaps, toward learning reliable recognition models for real-world applications. We believe the topics covered by this tutorial will attract a wide range of researchers, from both academia and industry, working on unsupervised, few-shot, weakly supervised, domain-adaptive, and meta learning, as well as on label- or data-limited applications such as medical imagery.
A list of reference papers and code bases provided by the organizers: