Mohamad Amin Mohamadi
The University of British Columbia
March 2023
1: Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks.
2: Lee, J., Xiao, L., Schoenholz, S., Bahri, Y., Novak, R., Sohl-Dickstein, J., and Pennington, J. Wide neural networks of any depth evolve as linear models under gradient descent.
*: Under suitable parameterization at initialization, with expectation taken over the parameters of the neural network.
with probability at least \(1-\delta\).
Active learning: reducing the amount of labelled data required to train ML models by allowing the model to "actively request annotation of specific datapoints".
We focus on Pool-Based Active Learning:
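In pool-based active learning, a model trained on a small labelled set has access to a large pool of unlabelled candidates; at each round an acquisition function scores the pool, the top-scoring points are sent to an oracle for labelling, and the model is retrained. A minimal sketch of this loop follows; the helpers `train`, `oracle_label`, and `acquisition_score` are hypothetical placeholders for illustration, not functions from the papers.

```python
import numpy as np

def pool_based_active_learning(model, labelled, pool, oracle_label, train,
                               acquisition_score, budget, batch_size):
    """Generic pool-based active learning loop (illustrative sketch).

    labelled:     list of (x, y) pairs the model starts with
    pool:         array of unlabelled candidate points
    oracle_label: callable returning the true label of a point
    """
    for _ in range(budget // batch_size):
        model = train(model, labelled)             # fit on the current labels
        scores = acquisition_score(model, pool)    # score every candidate
        chosen = np.argsort(scores)[-batch_size:]  # take the top-scoring points
        for i in chosen:
            labelled.append((pool[i], oracle_label(pool[i])))  # query the oracle
        pool = np.delete(pool, chosen, axis=0)     # remove queried points
    return train(model, labelled)
```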
Most proposed acquisition functions in deep active learning can be categorized into two branches:
Our Motivation: Making Look-Ahead acquisition functions feasible in deep active learning:
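A look-ahead acquisition function scores a candidate by how the model would change if that candidate were labelled and the model retrained, which naively requires one full retraining per candidate. The sketch below shows this naive baseline, assuming placeholder helpers `retrain_with` and `expected_change`; it is not the proposed strategy, only the cost that motivates a cheap retraining approximation.

```python
def naive_lookahead_scores(model, labelled, pool, retrain_with, expected_change):
    """Naive look-ahead acquisition: one retraining per candidate point.

    For each candidate x, retrain the model as if x were labelled (e.g. with a
    pseudo-label) and measure how much the predictions change. This costs
    O(|pool|) retrainings, which is prohibitive with SGD on deep networks.
    """
    scores = []
    for x in pool:
        retrained = retrain_with(model, labelled, x)   # expensive: full retraining
        scores.append(expected_change(model, retrained, pool))
    return scores
```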
[Diagram: Retraining Engine]
and hence, the induction step holds!
Retraining Time: The proposed retraining approximation is much faster than retraining with SGD.
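One way such an approximation can work, following the linearized-network view of [2]: with squared loss and training to convergence, the retrained linearized network's predictions have a closed form as kernel regression with the empirical NTK, so no SGD steps are needed per candidate. The sketch below implements that closed form under those assumptions; it is an illustration, not necessarily the exact approximation used in the paper.

```python
import numpy as np

def linearized_retrain_predictions(ntk_test_train, ntk_train_train,
                                   f0_test, f0_train, y_train, ridge=1e-6):
    """Closed-form predictions of a linearized network after 'retraining'.

    Following the linearization of Lee et al. [2], training the linearized
    model to convergence on squared loss gives
        f_inf(x) = f_0(x) + Theta(x, X) Theta(X, X)^{-1} (Y - f_0(X)),
    where Theta is the empirical NTK. A small ridge term is added for
    numerical stability.
    """
    n = ntk_train_train.shape[0]
    k_inv_residual = np.linalg.solve(
        ntk_train_train + ridge * np.eye(n), y_train - f0_train)
    return f0_test + ntk_test_train @ k_inv_residual
```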
Experiments: The proposed querying strategy attains performance similar to or better than the best prior pool-based AL methods on several datasets.
Active Learning NTKs on arXiv
Pseudo-NTK on arXiv