Neat use case for Active Learning.
Somebody shared an interesting [paper] the other day.
The paper does active learning on street sign detection. Instead of using active learning to give labelling preference to the most uncertain predictions, they give preference to examples predicted to belong to a rare class.
The approach makes a lot of sense. When you’re dealing with an imbalanced dataset and you have the budget to label more data, it’s better to make the dataset itself more balanced than to resort to resampling techniques. The paper also demonstrates that this “prefer rare labels” method of active learning outperforms active learning based on entropy.
To quote their summary:
Our main result is that we obtain very good results using a simple active learning scheme, which can be summarised as:
1. Train a neural network classifier on your imbalanced training set, using normal cross-entropy loss.
2. From the unlabeled set, select the frames with the highest probability of belonging to one of the rare classes, as estimated by the classifier. Add these to the training set.
We were surprised that this simple algorithm worked so well, since the large class imbalance leads to a very low estimated probability for most of the rare classes, even for the selected samples. To explain this surprising result we analyse a simple 2-dimensional toy model of a dataset with high class imbalance.
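To make the loop concrete, here’s a minimal sketch of that two-step scheme, using scikit-learn’s `MLPClassifier` as a stand-in for the paper’s neural network. The toy data, rare-class labels, pool size, and labelling budget are all made up for illustration; this is not the paper’s actual code.

```python
# Sketch of the "prefer rare labels" active learning round (illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy imbalanced data: class 0 is common, classes 1 and 2 are rare.
X_train = rng.normal(size=(1000, 2))
y_train = rng.choice([0, 1, 2], size=1000, p=[0.96, 0.02, 0.02])
X_pool = rng.normal(size=(5000, 2))  # unlabeled pool

# Step 1: train on the imbalanced set with ordinary cross-entropy
# (MLPClassifier minimises log loss, i.e. cross-entropy).
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Step 2: score the pool and keep the frames with the highest estimated
# probability of belonging to any rare class.
rare_classes = [1, 2]
proba = clf.predict_proba(X_pool)
rare_cols = [list(clf.classes_).index(c) for c in rare_classes]
rare_score = proba[:, rare_cols].max(axis=1)

k = 100  # labelling budget for this round (made-up number)
to_label = np.argsort(rare_score)[-k:]  # pool indices to send for labelling
```

After labelling the selected frames you’d add them to the training set and repeat, which is the loop the quoted summary describes. Note the selection only ranks by rare-class probability; as the authors observe, the absolute probabilities can stay very low and the scheme still works.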