Method
Eigenspectra of the response covariance turn out to be power laws.
Headline claim: "universal scale-free representations".
This echoes the earlier finding of power-law spectra in V1 (Stringer et al., 2019).
such difference, much wow
How to understand the decomposition?
The latent dimensions look suspiciously like a Fourier basis...
Observation vs. explanation: consider the covariance function for neuron i with the rest of the population. If it is translationally invariant (i.e. spatially stationary for all i), the latent dimensions are exactly the Fourier basis.
Empirically, this appears to hold.
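A quick self-contained check of that claim (a minimal sketch, not the paper's analysis; the 1-D ring, grid size, and exponential covariance kernel are all illustrative assumptions): build a translation-invariant covariance matrix and verify that its eigenvectors are single-frequency Fourier modes.

```python
import numpy as np

# Translation-invariant ("spatially stationary") covariance on a 1-D ring:
# cov(i, j) depends only on the circular offset between i and j, which makes
# the covariance matrix circulant. n and the kernel are illustrative choices.
n = 256
offsets = np.minimum(np.arange(n), n - np.arange(n))   # circular distances
kernel = np.exp(-offsets / 10.0)                       # assumed covariance kernel
C = np.array([np.roll(kernel, i) for i in range(n)])   # circulant covariance

eigvals, eigvecs = np.linalg.eigh(C)

# The DFT diagonalizes any circulant matrix, so every eigenvector should be a
# single-frequency sinusoid: its FFT power concentrates in one bin (cos/sin
# pairs share an eigenvalue, hence a harmless 2-fold degeneracy).
power = np.abs(np.fft.rfft(eigvecs, axis=0)) ** 2
peak_share = power.max(axis=0) / power.sum(axis=0)
print("median share of FFT power in the dominant bin:", np.median(peak_share))  # ~1.0
```

A side effect of the same fact: the eigenvalues are the Fourier coefficients of the covariance kernel, which is what ties the shape of the spectrum to how quickly covariance decays with distance.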
Power-law covariance spectra show up at all scales.
Checked with a coarse-graining analysis: average the responses of neighboring neurons/voxels. Under coarse-graining, the large-scale (high-variance) dimensions retain essentially all of their variance, while the fine-scale dimensions retain basically none (toy check below).
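A toy version of the coarse-graining check (everything here is an assumption for illustration: synthetic 1-D responses whose Fourier-mode variances follow a power law with exponent alpha, blocks of 4 neighbors): block-averaging preserves the variance of the leading large-scale dimensions, wipes out the fine-scale ones, and leaves the surviving spectrum a power law.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_stimuli, alpha = 1024, 4000, 1.0

# Synthetic responses: Fourier mode k gets variance ~ (k + 1)^(-alpha), so the
# low-frequency (large-scale) modes are the high-variance dimensions.
n_modes = n_neurons // 2 + 1
amps = (np.arange(n_modes) + 1.0) ** (-alpha / 2.0)
coeffs = amps[:, None] * (rng.standard_normal((n_modes, n_stimuli))
                          + 1j * rng.standard_normal((n_modes, n_stimuli)))
X = np.fft.irfft(coeffs, n=n_neurons, axis=0)          # neurons x stimuli

def eigenspectrum(R):
    R = R - R.mean(axis=1, keepdims=True)
    return np.linalg.eigvalsh(R @ R.T / R.shape[1])[::-1]

# Coarse-grain: average the responses of neighboring neurons in blocks of 4.
X_coarse = X.reshape(n_neurons // 4, 4, n_stimuli).mean(axis=1)

s_full, s_coarse = eigenspectrum(X), eigenspectrum(X_coarse)
print("variance kept in the top 32 dims:", s_coarse[:32].sum() / s_full[:32].sum())  # ~1
print("variance kept overall:           ", s_coarse.sum() / s_full.sum())
# The overall deficit is mostly variance that lived in the fine-scale modes:
# the coarse data has 4x fewer dimensions, so those modes are simply gone.

ranks = np.arange(1, 200)
for name, s in (("full  ", s_full), ("coarse", s_coarse)):
    slope = np.polyfit(np.log(ranks), np.log(s[ranks - 1]), 1)[0]
    print(name, "log-log slope of the spectrum ~", round(slope, 2))   # ~ -alpha
```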
How big should my Gaussian smoothing kernel be to reduce the variance by a factor f?
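One back-of-the-envelope way to answer this, under explicit assumptions (a 1-D power-law spectrum S(k) ∝ k^(-alpha), and a Gaussian transfer function that multiplies the variance at frequency k by exp(-(sigma k)^2); frequency-unit conventions vary, so read sigma loosely): solve kept_variance(sigma) = 1/f numerically.

```python
import numpy as np
from scipy.optimize import brentq

alpha = 1.0                      # assumed power-law exponent
k = np.arange(1, 10_000)         # assumed frequency grid (units of 1/spacing)
S = k.astype(float) ** -alpha    # power-law mode variances

def kept_fraction(sigma):
    # Fraction of total variance surviving Gaussian smoothing of width sigma.
    return np.sum(S * np.exp(-(sigma * k) ** 2)) / S.sum()

f = 2.0                          # target: reduce total variance by a factor of f
sigma = brentq(lambda s: kept_fraction(s) - 1.0 / f, 1e-6, 1e3)
print(f"need sigma ~ {sigma:.4g} (in units of the neuron/voxel spacing)")
```

Note that with alpha = 1 the kept fraction falls off roughly like ln(1/sigma), so each further factor-of-2 reduction needs a multiplicatively (not additively) larger kernel: another face of scale-freeness.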
High-variance dimensions (the leading, low-rank-index ones) have larger spatial scales; low-variance dimensions have smaller ones.
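The circulant toy from above makes this rank-to-scale link explicit (kernel and grid size again illustrative): sort the eigenvectors by eigenvalue and read off each one's dominant spatial frequency; frequency climbs as variance drops, i.e. the big-variance modes are the big-scale ones.

```python
import numpy as np

# Same stationary-covariance toy as before: exponential kernel on a 1-D ring.
n = 256
offsets = np.minimum(np.arange(n), n - np.arange(n))
C = np.array([np.roll(np.exp(-offsets / 10.0), i) for i in range(n)])

eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]                 # sort by decreasing variance
dominant_freq = np.abs(np.fft.rfft(eigvecs[:, order], axis=0)).argmax(axis=0)

print("dominant frequency, top 6 eigenvectors:   ", dominant_freq[:6])   # low k, large scales
print("dominant frequency, bottom 6 eigenvectors:", dominant_freq[-6:])  # high k, small scales
```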
Universal scaling properties of visual cortex representations: consistent across species, individuals, imaging methods, and very different spatial scales.
Open question: are basically all neurons being used?