Better Self-training for Image Classification Through Self-supervision

Published in Proceedings of the Australasian Joint Conference on Artificial Intelligence. Springer, Cham, 2022

Abstract: Self-training is a simple semi-supervised learning approach: unlabelled examples that attract high-confidence predictions are labelled with those predictions and added to the training set, and this process is repeated multiple times. Recently, self-supervision, i.e., learning without manual supervision by solving an automatically generated pretext task, has gained prominence in deep learning. This paper investigates three ways of incorporating self-supervision into self-training to improve accuracy in image classification: self-supervision as pretraining only, self-supervision performed exclusively in the first iteration of self-training, and self-supervision added to every iteration of self-training. Empirical results on the SVHN, CIFAR-10, and PlantVillage datasets, using both training from scratch and ImageNet-pretrained weights, show that applying self-supervision only in the first iteration of self-training can greatly improve accuracy, for a modest increase in computation time.
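The sketch below illustrates the general shape of the best-performing variant from the abstract: self-supervision applied only in the first self-training iteration. It is a minimal, hypothetical outline, not code from the paper; the helper callables (`fit_pretext`, `fit_supervised`, `model.predict_proba`) are illustrative placeholders, and the confidence-threshold selection rule is one common choice that may differ from the paper's exact selection criterion.

```python
def self_train(model, labelled, unlabelled,
               fit_pretext, fit_supervised,
               iterations=5, threshold=0.95):
    """Self-training with self-supervision in the first iteration only.

    labelled:   list of (image, label) pairs
    unlabelled: list of images
    All callables are illustrative placeholders, not an API from the paper.
    """
    for it in range(iterations):
        if it == 0:
            # First iteration only: solve a pretext task (e.g. rotation
            # prediction) on all images before supervised training.
            fit_pretext(model, [x for x, _ in labelled] + unlabelled)

        # Supervised training on the current (pseudo-)labelled set.
        fit_supervised(model, labelled)

        # Pseudo-label unlabelled examples that attract confident
        # predictions; keep the rest for the next iteration.
        confident, remaining = [], []
        for x in unlabelled:
            probs = model.predict_proba(x)  # class-probability vector
            if max(probs) >= threshold:
                confident.append((x, probs.index(max(probs))))
            else:
                remaining.append(x)

        labelled += confident
        unlabelled = remaining
    return model
```

In this shape, the pretext task is dropped after the first iteration, so later iterations cost no more than plain self-training, which matches the abstract's claim of a modest increase in computation time.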

Recommended citation: Sahito, Attaullah, Eibe Frank, and Bernhard Pfahringer. “Better Self-training for Image Classification through Self-supervision.” Australasian Joint Conference on Artificial Intelligence. Springer, Cham, 2022.