Leonardo Petrini
PhD Student @ Physics of Complex Systems Lab, EPFL Lausanne
Leonardo Petrini, Alessandro Favero, Mario Geiger, Matthieu Wyart
vs.
\(P\): training set size
\(d\): data-space dimension
What is the structure of real data?
Cat
Bruna and Mallat (2013), Mallat (2016), ...
\(S = \frac{\|f(x) - f(\tau x)\|}{\|\nabla\tau\|}\)
\(f\): network function
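The sensitivity \(S\) above can be estimated numerically. A minimal sketch, assuming `f` and `tau` are callables on flat arrays and that the deformation-gradient norm \(\|\nabla\tau\|\) is precomputed by the caller:

```python
import numpy as np

def deformation_sensitivity(f, x, tau, grad_tau_norm):
    """Sensitivity S = ||f(x) - f(tau x)|| / ||grad tau||.
    `grad_tau_norm` (the norm of the deformation gradient) is assumed
    to be supplied by the caller; this is only an illustrative sketch."""
    return np.linalg.norm(f(x) - f(tau(x))) / grad_tau_norm
```

A stable network (in the sense of Bruna and Mallat) would keep \(S\) small even for strongly deformed inputs.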
Is it true or not?
Can we test it?
Some negative results:
Azulay and Weiss (2018); Dieleman et al. (2016); Zhang (2019) ...
[Figure: sample images deformed by \(x\)- and \(y\)-translations, increasingly deformed from left to right]
\(x\): input image
\(\tau\): smooth deformation
\(\eta\): isotropic noise with \(\|\eta\| = \langle\|\tau x - x\|\rangle\)
\(f\): network function
Relative stability: an observable that quantifies whether a deep net is less sensitive to diffeomorphisms than to generic data transformations
$$R_f = \frac{\langle \|f(\tau x) - f(x)\|^2\rangle_{x, \tau}}{\langle \|f(x + \eta) - f(x)\|^2\rangle_{x, \eta}}$$
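The observable \(R_f\) can be estimated on a batch of inputs. A minimal sketch, assuming `f` maps flat arrays to flat arrays and `deform` applies a sampled diffeomorphism; the isotropic noise \(\eta\) is rescaled so its norm matches the average deformation magnitude \(\langle\|\tau x - x\|\rangle\), as in the definition above:

```python
import numpy as np

def relative_stability(f, xs, deform, seed=0):
    """Estimate R_f = <||f(tau x) - f(x)||^2> / <||f(x + eta) - f(x)||^2>.
    `f` and `deform` are illustrative callables on arrays of shape (d,);
    `xs` is a batch of shape (n, d)."""
    rng = np.random.default_rng(seed)
    deformed = np.stack([deform(x) for x in xs])
    # average deformation magnitude <||tau x - x||> over the batch
    delta = np.linalg.norm(deformed - xs, axis=1).mean()
    num = den = 0.0
    for x, dx in zip(xs, deformed):
        eta = rng.standard_normal(x.shape)
        eta *= delta / np.linalg.norm(eta)  # enforce ||eta|| = <||tau x - x||>
        num += np.sum((f(dx) - f(x)) ** 2)
        den += np.sum((f(x + eta) - f(x)) ** 2)
    return num / den
```

\(R_f < 1\) indicates the network is more stable to diffeomorphisms than to matched-norm isotropic noise.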
Results:
Deep nets learn to become stable to diffeomorphisms!
Relation with performance?
Thanks!
By Leonardo Petrini
NeurIPS Conference 2021