When Labelling Hurts: Learning to Classify Large-Scale Data with Minimal Supervision

Wednesday, September 16, 2020 - 9:30am - 10:15am
Angelica Aviles-Rivero (University of Cambridge)
In this era of big data, deep learning (DL) has achieved astonishing results on a range of computer vision tasks, including image classification, detection, and segmentation, to name a few. For image classification in particular, major breakthroughs have been reported in the supervised setting. A key factor behind these impressive results is the assumption of a large corpus of labelled data. However, obtaining well-annotated labels is expensive and time-consuming, and one must also account for human bias and uncertainty, which adversely affect the classification output. These drawbacks have made deep semi-supervised learning (SSL) a focus of great interest in the community.

In recent works, we apply the concept of hybrid models by defining a classifier as a combination of a model-based functional and a deep net, with the aim of retaining some of the mathematical guarantees of model-based techniques whilst exploiting the power of deep nets. In this talk, we will address the potential of such hybrid models and discuss novel functionals with carefully selected class priors that enforce a sufficiently smooth solution, strengthen the intrinsic relation between the labelled and unlabelled data, and connect naturally to deep nets. We will show that our philosophy, despite being vastly different from the current trend in the field, readily competes with recent deep-learning approaches.
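To make the flavour of such a hybrid, model-based SSL approach concrete, the sketch below shows one standard instance of the general idea: a graph-Laplacian smoothness functional minimised over features that could come from a deep net, with the few available labels acting as a data-fidelity term. This is a generic illustration, not the speaker's exact method; the abstract does not specify the functional or class priors, and the function names (`knn_graph`, `laplacian_ssl`) and parameters (`k`, `sigma`, `lam`) are illustrative choices.

```python
# Minimal sketch, assuming a graph-Laplacian smoothness functional on
# (deep-net) features -- NOT the exact functional discussed in the talk.
import numpy as np
from scipy.spatial.distance import cdist


def knn_graph(X, k=10, sigma=1.0):
    """Weighted k-NN affinity matrix built from feature vectors
    (e.g. penultimate-layer embeddings of a pretrained network)."""
    D = cdist(X, X)                       # pairwise Euclidean distances
    W = np.exp(-D**2 / (2 * sigma**2))    # Gaussian similarity
    np.fill_diagonal(W, 0.0)
    # keep only the k strongest neighbours per node, then symmetrise
    weakest = np.argsort(-W, axis=1)[:, k:]
    for i, cols in enumerate(weakest):
        W[i, cols] = 0.0
    return np.maximum(W, W.T)


def laplacian_ssl(X, y, labelled_mask, k=10, sigma=1.0, lam=1.0):
    """Minimise  1/2 * sum_ij W_ij (u_i - u_j)^2 + lam * sum_labelled (u_i - y_i)^2,
    i.e. graph smoothness tied to the few labelled points.
    Returns hard predictions and soft class scores for every sample."""
    n = X.shape[0]
    classes = np.unique(y[labelled_mask])
    W = knn_graph(X, k=k, sigma=sigma)
    L = np.diag(W.sum(axis=1)) - W        # combinatorial graph Laplacian
    M = np.diag(lam * labelled_mask.astype(float))
    U = np.zeros((n, len(classes)))
    for c, cls in enumerate(classes):
        b = lam * (labelled_mask & (y == cls)).astype(float)
        U[:, c] = np.linalg.solve(L + M, b)   # closed-form minimiser per class
    return U.argmax(axis=1), U


if __name__ == "__main__":
    # Toy example: two Gaussian clusters, only four labelled samples.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    labelled = np.zeros(100, dtype=bool)
    labelled[[0, 1, 50, 51]] = True
    pred, _ = laplacian_ssl(X, y, labelled)
    print("accuracy:", (pred == y).mean())
```

In a hybrid setting, the feature matrix `X` would be produced by a deep net, so the model-based functional provides the smoothness and label-propagation guarantees while the network supplies the representation.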