The High-Dimensional Structure of
Visual Cortex Representations
A Dissertation Proposal
Outline
Motivation
What are the statistical properties of the cortical representations themselves?
Specifically, let's look at dimensionality.
A low-dimensional theory of visual cortex
Goal
Compress high-dimensional images onto a low-dimensional manifold that supports behavior while being robust to stimulus variation
Haxby (2011), movie-viewing fMRI
Huth (2012), movie-viewing fMRI, semantic space
Lehky (2014), objects, monkey electrophysiology
A high-dimensional theory of visual cortex
Benefits
Expressive enough to capture the complexity of the real world; supports performing a variety of tasks
Stringer (2019), mouse visual cortex, ImageNet
Manley (2024), mouse cortex also scales to ~10^6 dimensions
Posani (2024), mouse cortex during behavior
How can we resolve these contradictions?
Use new large-scale, high-quality fMRI datasets!
Project 1
Universal scale-free representations in human visual cortex
[manuscript under review]
The Natural Scenes dataset
Cross-decomposition ~ cvPCA + hyperalignment
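A minimal sketch of the cvPCA idea (Stringer et al., 2019), assuming two repeats of the same stimulus set; the hyperalignment step of the actual cross-decomposition is omitted, and all names and parameters here are illustrative, not the manuscript's pipeline. PCs are estimated from one repeat only, and the cross-product of both repeats' projections cancels trial-to-trial noise in expectation, leaving the spectrum of the repeatable signal.

```python
import numpy as np

def cv_spectrum(rep1, rep2):
    """Cross-validated covariance spectrum (cvPCA-style sketch).

    rep1, rep2: (n_stimuli, n_units) responses to the same stimuli on
    two independent repeats. PCs are estimated from repeat 1 only; the
    cross-product of both repeats' projections makes noise cancel in
    expectation, leaving signal variance at each rank.
    """
    rep1 = rep1 - rep1.mean(axis=0)
    rep2 = rep2 - rep2.mean(axis=0)
    _, _, vt = np.linalg.svd(rep1, full_matrices=False)
    p1, p2 = rep1 @ vt.T, rep2 @ vt.T
    return (p1 * p2).mean(axis=0)

rng = np.random.default_rng(0)
# synthetic signal with a power-law variance profile, plus repeat noise
signal = rng.standard_normal((500, 100)) * np.arange(1, 101) ** -0.5
spec = cv_spectrum(signal + 0.5 * rng.standard_normal((500, 100)),
                   signal + 0.5 * rng.standard_normal((500, 100)))
```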
Universal scale-free covariance spectra
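A scale-free spectrum means the n-th eigenvalue falls off as a power law, λ_n ∝ n^(−α). One simple estimator of the exponent (an illustrative choice, not necessarily the manuscript's) is a least-squares line fit in log-log coordinates over an intermediate rank range:

```python
import numpy as np

def powerlaw_exponent(spectrum, lo=10, hi=500):
    """Estimate alpha in lambda_n ~ n**(-alpha) by a straight-line fit
    in log-log coordinates, restricted to ranks lo..hi to avoid the
    leading eigenvalues and any noise floor in the tail."""
    n = np.arange(lo, hi + 1)
    slope, _ = np.polyfit(np.log(n), np.log(spectrum[lo - 1:hi]), 1)
    return -slope

# synthetic spectrum with alpha = 1, the exponent reported for mouse
# visual cortex by Stringer et al. (2019)
spec = np.arange(1, 1001, dtype=float) ** -1.0
alpha = powerlaw_exponent(spec)  # ≈ 1.0
```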
Anatomical alignment is insufficient
Systematic variation across some ROIs
RSA is insensitive to high-rank dimensions
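Why RSA misses high-rank structure can be illustrated on synthetic data (the power-law spectrum and the rank-10 cutoff are illustrative assumptions): pairwise distances are dominated by the high-variance dimensions, so truncating a power-law dataset to its top 10 PCs leaves the representational dissimilarity matrix almost unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_dim = 100, 400

# responses with a power-law variance profile, lambda_k ~ 1/k
x = rng.standard_normal((n_stim, n_dim)) * np.arange(1, n_dim + 1) ** -0.5

def rdm(data):
    """Upper triangle of the Euclidean representational dissimilarity matrix."""
    sq = (data ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * data @ data.T
    return np.sqrt(np.clip(d2, 0, None))[np.triu_indices(len(data), k=1)]

# truncate to the top 10 dimensions (here the simulated dimensions are
# already variance-ordered, so column slicing approximates a rank-10
# PCA truncation)
r_full, r_top10 = rdm(x), rdm(x[:, :10])
similarity = np.corrcoef(r_full, r_top10)[0, 1]
# the two RDMs correlate strongly even though 390 dimensions were discarded
```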
Summary
Project 2
Spatial scale-invariant properties of mammalian visual cortex
[manuscript in preparation]
Similar covariance spectra in humans and mice*
Latent dimensions appear spatially structured
Covariance functions are spatially stationary
Covariance spectra are spatially scale-invariant
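One operational reading of scale invariance can be sketched as follows (an illustrative construction, not the manuscript's analysis): if a population's covariance spectrum follows a power law, the spectrum of a random subpopulation follows approximately the same power law, so the fitted exponent barely moves under subsampling.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# population covariance with power-law eigenvalues lambda_k = 1/k,
# expressed in a random orthonormal basis so that no unit is special
q, _ = np.linalg.qr(rng.standard_normal((n, n)))
cov = (q * (1.0 / np.arange(1, n + 1))) @ q.T

def exponent(c, lo=5, hi=100):
    """Power-law exponent of the eigenspectrum of covariance matrix c."""
    lam = np.sort(np.linalg.eigvalsh(c))[::-1]
    k = np.arange(lo, hi + 1)
    return -np.polyfit(np.log(k), np.log(lam[lo - 1:hi]), 1)[0]

idx = rng.permutation(n)[: n // 2]
alpha_full = exponent(cov)                    # exponent from all 400 units
alpha_half = exponent(cov[np.ix_(idx, idx)])  # exponent from a random half
```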
Some implications
How does this universal power spectrum arise?
Some possible explanations
Project 3
Characterizing the representational content of different latent subspaces
[planned; preliminary results]
What information is available at different ranks?
Fine-grained semantic annotations available
Hypothesis 1
coarse category information?
fine-grained distinctions?
Hypothesis 2
low-variance dimensions?
high-variance dimensions?
Testing these hypotheses: Proposed methods
Is SNR too low in high-rank subspaces?
Apparently not.
As a simple proof-of-concept, a nearest centroid classifier can perform pairwise instance-level classification using information in all latent subspaces.
ranks 1-10
ranks 10-100
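The proof-of-concept can be sketched on synthetic data with a power-law signal spectrum (the stimulus counts, noise level, and rank bands here are illustrative assumptions; the simulated dimensions are already variance-ordered, so slicing columns stands in for projecting held-out trials onto a PCA rank band):

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_rep, n_dim = 50, 8, 200

# stimulus-driven signal with power-law variances, plus trial noise
signal = rng.standard_normal((n_stim, n_dim)) * np.arange(1, n_dim + 1) ** -0.5
trials = signal[:, None, :] + 0.3 * rng.standard_normal((n_stim, n_rep, n_dim))

def band_accuracy(lo, hi):
    """Pairwise nearest-centroid accuracy using only ranks lo..hi."""
    x = trials[..., lo - 1:hi]
    train, test = x[:, : n_rep // 2], x[:, n_rep // 2:]
    cent = train.mean(axis=1)          # one centroid per stimulus
    correct = total = 0
    for i in range(n_stim):
        for j in range(i + 1, n_stim):
            for label, held_out in ((i, test[i]), (j, test[j])):
                d_i = np.linalg.norm(held_out - cent[i], axis=1)
                d_j = np.linalg.norm(held_out - cent[j], axis=1)
                pred = np.where(d_i < d_j, i, j)
                correct += int((pred == label).sum())
                total += len(pred)
    return correct / total

acc_low = band_accuracy(1, 10)     # low-rank band
acc_high = band_accuracy(10, 100)  # high-rank band; still above chance (0.5)
```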
Preliminary results
Timeline