Mohamad Amin Mohamadi
The University of British Columbia
NeurIPS 2022
Active learning: reducing the amount of labelled data required to train ML models by allowing the model to actively request annotation of specific datapoints.
We focus on pool-based active learning: the learner has access to a large pool of unlabelled datapoints and iteratively selects which of them to send to an annotator for labelling.
Most acquisition functions proposed in deep active learning can be categorized into two branches:
Our Motivation: Making look-ahead acquisition functions, which score a candidate point by the effect that retraining on it would have on the model, feasible in deep active learning (see the sketch below).
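As a concrete illustration, here is a minimal sketch of the pool-based loop with a look-ahead acquisition function; `train`, `acquisition_score`, and `oracle_label` are hypothetical placeholder names, not the paper's API.

```python
def active_learning_loop(labelled, pool, budget, batch_size):
    # labelled: list of (x, y) pairs; pool: list of unlabelled x.
    model = train(labelled)
    for _ in range(budget // batch_size):
        # Score every unlabelled candidate; a look-ahead score asks
        # "how would the model change if this point were labelled
        # and the model retrained on it?"
        scores = [acquisition_score(model, labelled, x) for x in pool]
        top = sorted(range(len(pool)), key=scores.__getitem__,
                     reverse=True)[:batch_size]
        for i in sorted(top, reverse=True):        # pop from the back first
            x = pool.pop(i)
            labelled.append((x, oracle_label(x)))  # query the annotator
        model = train(labelled)                    # retrain with new labels
    return model
```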
Retraining Engine
Basic elements in neural network training: a model $f(x; \theta)$ with parameters $\theta$, a labelled training set $\{(x_i, y_i)\}_{i=1}^n$, and a loss $\mathcal{L}(\theta) = \sum_{i=1}^n \ell(f(x_i; \theta), y_i)$.
Gradient Descent: $\theta_{t+1} = \theta_t - \eta\, \nabla_\theta \mathcal{L}(\theta_t)$
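A minimal sketch of this update in JAX; the toy two-layer model and data below are illustrative assumptions, not from the paper:

```python
import jax
import jax.numpy as jnp

def f(theta, x):                        # model output f(x; theta)
    return jnp.tanh(x @ theta["w"]) @ theta["v"]

def loss(theta, X, y):                  # L(theta) = sum_i (f(x_i) - y_i)^2
    return jnp.sum((f(theta, X) - y) ** 2)

@jax.jit
def gd_step(theta, X, y, eta=1e-2):     # theta_{t+1} = theta_t - eta * grad L
    grads = jax.grad(loss)(theta, X, y)
    return jax.tree_util.tree_map(lambda p, g: p - eta * g, theta, grads)

key = jax.random.PRNGKey(0)
theta = {"w": 0.1 * jax.random.normal(key, (3, 8)),
         "v": 0.1 * jax.random.normal(key, (8,))}
X, y = jnp.ones((5, 3)), jnp.zeros(5)
theta = gd_step(theta, X, y)            # one gradient-descent step
```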
Idea: Study neural networks in function space!
Gradient Flow (the continuous-time limit of gradient descent): $\frac{\mathrm{d}\theta_t}{\mathrm{d}t} = -\eta\, \nabla_\theta \mathcal{L}(\theta_t)$
Change in the function output (by the chain rule): $\frac{\mathrm{d}f(x; \theta_t)}{\mathrm{d}t} = \nabla_\theta f(x; \theta_t)^\top \frac{\mathrm{d}\theta_t}{\mathrm{d}t} = -\eta \sum_{i=1}^n \nabla_\theta f(x; \theta_t)^\top \nabla_\theta f(x_i; \theta_t)\, \ell'(f(x_i; \theta_t), y_i)$
Hmm, looks like we have a kernel on the right hand side!
So: $\frac{\mathrm{d}f(x; \theta_t)}{\mathrm{d}t} = -\eta \sum_{i=1}^n \Theta_t(x, x_i)\, \ell'(f(x_i; \theta_t), y_i)$,
where $\Theta_t(x, x') = \nabla_\theta f(x; \theta_t)^\top \nabla_\theta f(x'; \theta_t)$:
this is called the Neural Tangent Kernel!
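The empirical version of this kernel can be computed directly from model Jacobians; a minimal JAX sketch, reusing the toy model `f` from above (an illustration, not the paper's released implementation):

```python
import jax
import jax.numpy as jnp

def empirical_ntk(f, theta, X1, X2):
    # Theta(x, x') = grad_theta f(x; theta)^T grad_theta f(x'; theta)
    jac1 = jax.jacobian(f)(theta, X1)   # pytree of (n1, ...) arrays
    jac2 = jax.jacobian(f)(theta, X2)   # pytree of (n2, ...) arrays

    def flatten(jac, n):                # stack all parameter blocks
        leaves = jax.tree_util.tree_leaves(jac)
        return jnp.concatenate([j.reshape(n, -1) for j in leaves], axis=1)

    J1 = flatten(jac1, X1.shape[0])     # (n1, num_params)
    J2 = flatten(jac2, X2.shape[0])     # (n2, num_params)
    return J1 @ J2.T                    # (n1, n2) kernel matrix
```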
Retraining Time: the proposed retraining approximation is much faster than actually retraining the network with SGD.
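For intuition on how an NTK-based retraining approximation can work: for squared loss, training the linearized network to convergence has a closed form, so "retrain and predict" reduces to kernel regression with the empirical NTK. A sketch under those assumptions (not necessarily the paper's exact estimator), reusing `empirical_ntk` from above:

```python
import jax.numpy as jnp

def lookahead_predictions(f, theta, X_train, y_train, X_test, ridge=1e-6):
    # Closed form for the linearized network trained to convergence on
    # squared loss: f(x) + Theta(x, X) Theta(X, X)^{-1} (y - f(X)).
    K_tt = empirical_ntk(f, theta, X_train, X_train)   # (n, n)
    K_st = empirical_ntk(f, theta, X_test, X_train)    # (m, n)
    residual = y_train - f(theta, X_train)
    coeffs = jnp.linalg.solve(K_tt + ridge * jnp.eye(K_tt.shape[0]),
                              residual)
    return f(theta, X_test) + K_st @ coeffs  # approx. retrained outputs
```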
Experiments: The proposed querying strategy attains similar or better performance than the best prior pool-based AL methods on several datasets.
arXiv URL
GitHub URL