Cortical representation learning & Bayes-optimal cue integration

Deperrois, N., Petrovici, M. A., Senn, W., & Jordan, J. (2021). Learning cortical representations through perturbed and adversarial dreaming. arXiv preprint arXiv:2109.04261.

Jordan, J., Sacramento, J., Wybo, W. A., Petrovici, M. A., & Senn, W. (2021). Learning Bayes-optimal dendritic opinion pooling. arXiv preprint arXiv:2104.13238.

Jakob Jordan

Department of Physiology, University of Bern, Switzerland

03.03.2022, SCN Retreat, Crans-Montana, Switzerland

  • structured neuronal representations underlie our capacity to behave successfully and our fast, flexible learning capabilities
  • latent representations emerge from unsupervised learning principles
  • here: we propose that sleep, and in particular REM dreams, implement adversarial learning

Motivation

[Illing et al., 2021]

[Goetschalckx et al., 2021]

Generative Adversarial Networks

Learning during sleep (sketch)

  • sensory experiences are encoded & stored during wakefulness
  • perturbed replay during NREM robustifies the encoding
  • "creative" replay during REM organizes the encoding space

Nicolas Deperrois

Different, but complementary objectives govern learning during wakefulness and sleep

Objectives:

  • discriminator: external
  • data reconstruction
  • latent reconstruction
  • discriminator: internal
  • generator: external
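A rough schematic of these phase-specific objectives (a sketch only, not the authors' exact implementation; module names, label conventions, and the use of PyTorch are assumptions):

# Schematic objectives for an encoder, generator, and discriminator.
# Assumes the discriminator outputs a probability of "external" (sigmoid output).
import torch
import torch.nn.functional as F

def wake_losses(encoder, generator, discriminator, x):
    """Wakefulness: reconstruct sensory input x; discriminator labels it as external."""
    z = encoder(x)
    recon = F.mse_loss(generator(z), x)                                # data reconstruction
    d_real = discriminator(x)
    d_ext = F.binary_cross_entropy(d_real, torch.ones_like(d_real))   # discriminator: external
    return recon, d_ext, z.detach()                                    # latent stored for sleep

def nrem_loss(encoder, generator, z_stored, perturb):
    """NREM: perturbed replay; the encoder should recover the stored latent."""
    x_dream = perturb(generator(z_stored))                             # e.g. occlusions of the replayed image
    return F.mse_loss(encoder(x_dream), z_stored)                      # latent reconstruction

def rem_losses(generator, discriminator, z_mix):
    """REM: adversarial dreaming from mixed/noisy latents."""
    x_dream = generator(z_mix)
    d_fake = discriminator(x_dream.detach())
    d_int = F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))   # discriminator: internal
    d_gen = discriminator(x_dream)
    g_ext = F.binary_cross_entropy(d_gen, torch.ones_like(d_gen))      # generator: external
    return d_int, g_ext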

Dreams become more realistic with training

(remark: FID uses an Inception-v3 network, hence likely focuses on local image statistics, e.g., Brendel & Bethge, 2019)
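For reference, the FID score compares Gaussian fits \((\mu_r, \Sigma_r)\) and \((\mu_g, \Sigma_g)\) to the Inception-v3 features of real and generated images:

\text{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \mathrm{Tr}\left( \Sigma_r + \Sigma_g - 2 \left( \Sigma_r \Sigma_g \right)^{1/2} \right)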

Adversarial dreaming during REM improves the structure of latent representations

Perturbed dreaming during NREM improves the robustness of latent representations

Cortical correlates of adversarial learning

  • (trained) discriminator neurons are in different activity regimes during wakefulness and REM sleep; they distinguish external from internal activity and may hence be involved in reality-monitoring systems (ACC, mPFC?); impaired discriminator function could thus lead to the formation of delusions

 

  • discriminator learning predicts opposite weight changes on feedforward synapses during wakefulness and REM sleep for identical low-level activity

 

  • generator learning predicts opposite plasticity rules on feedback synapses during wakefulness and REM sleep

(High-level) neuronal representations reflect contributions from multiple modalities

How to (optimally) combine uncertain information from different sources?

[Diagram: latent variable \(\bm z\) receives visual, auditory, and olfactory cues, each weighted by its reliability \(1/\sigma_\text{v}^2\), \(1/\sigma_\text{a}^2\), \(1/\sigma_\text{o}^2\)]
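For Gaussian cues, the Bayes-optimal combination is the reliability-weighted average; e.g., for visual and auditory estimates \(z_\text{v}, z_\text{a}\):

\hat z = \frac{z_\text{v}/\sigma_\text{v}^2 + z_\text{a}/\sigma_\text{a}^2}{1/\sigma_\text{v}^2 + 1/\sigma_\text{a}^2}, \qquad \frac{1}{\sigma_{\hat z}^2} = \frac{1}{\sigma_\text{v}^2} + \frac{1}{\sigma_\text{a}^2}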

Neurons with conductance-based synapses naturally implement probabilistic cue integration

An observation

Bayes-optimal inference

Bidirectional voltage dynamics

Membrane potential dynamics from noisy gradient ascent

\begin{array}{rl} p(u_\text{s}|W,r) =& \frac{1}{Z'} \prod_{d=0}^D p_d(u_\text{s}|W_d,r) \\ =& \frac{1}{Z} e^{-\frac{\bar g_\text{s}}{2}\left( u_\text{s} - \bar E_\text{s}\right)^2} \end{array}
\begin{array}{rl} C \dot u_\text{s} =& \frac{\partial}{\partial u_\text{s}} \log p(u_\text{s}| W,r) + \xi \\ =& \sum_{d=0}^D \left( g_d^\text{L} (E^\text{L} - u_\text{s}) + g_d^\text{E} (E^\text{E} - u_\text{s}) + g_d^\text{I} (E^\text{I} - u_\text{s}) \right) + \xi \end{array}
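Matching the Gaussian density above to these dynamics, the total conductance and the effective reversal potential (the fixed point of the voltage) are

\bar g_\text{s} = \sum_{d=0}^D \left( g_d^\text{L} + g_d^\text{E} + g_d^\text{I} \right), \qquad \bar E_\text{s} = \frac{1}{\bar g_\text{s}} \sum_{d=0}^D \left( g_d^\text{L} E^\text{L} + g_d^\text{E} E^\text{E} + g_d^\text{I} E^\text{I} \right)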
Average membrane potential = reliability-weighted opinions:

\mathbb{E}[u_\text{s}] = \bar E_\text{s}

Membrane potential variance = 1/total reliability:

\text{Var}[u_\text{s}] = \frac{1}{\bar g_\text{s}}
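A minimal numerical sketch of these two statements, for a single compartment with fixed conductances and white noise (all parameter values illustrative):

# Simulate C du/dt = sum_x g_x (E_x - u) + xi and compare the stationary mean and
# variance against E_bar and 1/g_bar (illustrative, single-compartment sketch).
import numpy as np

rng = np.random.default_rng(0)
C = 1.0                                   # membrane capacitance
g = {"L": 1.0, "E": 4.0, "I": 2.0}        # leak/excitatory/inhibitory conductances
E = {"L": -70.0, "E": 0.0, "I": -85.0}    # reversal potentials (mV)

g_bar = sum(g.values())
E_bar = sum(g[x] * E[x] for x in g) / g_bar

dt, steps = 1e-3, 500_000
u = np.empty(steps)
u[0] = E_bar
for t in range(steps - 1):
    drift = sum(g[x] * (E[x] - u[t]) for x in g)
    # noise amplitude chosen so the stationary density is p(u) ∝ exp(-g_bar/2 (u - E_bar)^2)
    u[t + 1] = u[t] + dt / C * drift + np.sqrt(2 * dt / C) * rng.standard_normal()

print(f"E_bar = {E_bar:.2f} mV,  mean(u) = {u.mean():.2f} mV")
print(f"1/g_bar = {1 / g_bar:.3f},  var(u) = {u.var():.3f}")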

Synaptic plasticity from stochastic gradient ascent

\dot W_d^\text{E/I} \propto \frac{\partial}{\partial W_d^\text{E/I}} \log p(u_\text{s}^*|W,r)
\Delta \mu^{\text{E/I}} \propto (u_\text{s}^* - \bar E_\text{s}) \left( E^\text{E/I} - \bar E_\text{s} \right)
\Delta \sigma^2 \propto \frac{1}{2} \left( \frac{1}{\bar g_\text{s}} - (u_\text{s}^* - \bar E_\text{s})^2 \right)

Synaptic plasticity modifies excitatory/inhibitory synapses

  • in approx. opposite directions to match the mean
  • in identical directions to match the variance
\dot W_d^\text{E/I} \propto \left[ \; \Delta \mu^\text{E/I} \, + \, \Delta \sigma^2 \; \right] r

\(u_\text{s}^*\): sample from target distribution \(p^*(u_\text{s})\)
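A minimal sketch of the resulting update, assuming a single dendritic compartment whose excitatory/inhibitory conductances are linear in the presynaptic rate (all values and the linear-conductance assumption are illustrative):

# Combined excitatory/inhibitory update dW^{E/I} ∝ [Δμ^{E/I} + Δσ²] r, assuming
# g^{E/I} = W^{E/I} * r; u_star is a sample from the target distribution p*(u_s).
import numpy as np

E_rev = {"L": -70.0, "E": 0.0, "I": -85.0}   # reversal potentials (mV)
g_leak = 1.0

def plasticity_step(W, r, u_star, eta=1e-4):
    g = {"L": g_leak, "E": W["E"] * r, "I": W["I"] * r}
    g_bar = sum(g.values())
    E_bar = sum(g[x] * E_rev[x] for x in g) / g_bar
    d_sigma2 = 0.5 * (1.0 / g_bar - (u_star - E_bar) ** 2)       # reliability matching
    for x in ("E", "I"):
        d_mu = (u_star - E_bar) * (E_rev[x] - E_bar)              # error correction
        W[x] = max(W[x] + eta * (d_mu + d_sigma2) * r, 0.0)       # conductances stay non-negative
    return W

# usage: drive the weights with samples from a hypothetical target distribution p*(u_s)
rng = np.random.default_rng(1)
W = {"E": 1.0, "I": 1.0}
for _ in range(200_000):
    W = plasticity_step(W, r=1.0, u_star=rng.normal(-40.0, 0.35))

g_bar = g_leak + W["E"] + W["I"]
E_bar = (g_leak * E_rev["L"] + W["E"] * E_rev["E"] + W["I"] * E_rev["I"]) / g_bar
print(f"E_bar = {E_bar:.1f} mV (target mean -40.0), 1/g_bar = {1/g_bar:.3f} (target var {0.35**2:.3f})")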

[Plot: target vs. actual membrane potential distribution]

Synaptic plasticity performs error correction and reliability matching

Learning Bayes-optimal inference of orientations from multimodal stimuli

The trained model approximates ideal observers and reproduces psychophysical signatures of experimental data

[Nikbakht et al., 2018]
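The ideal-observer benchmark is the standard reliability-weighted prediction for bimodal discrimination, e.g., for visual (v) and tactile (t) cues:

\sigma_\text{vt}^2 = \frac{\sigma_\text{v}^2 \, \sigma_\text{t}^2}{\sigma_\text{v}^2 + \sigma_\text{t}^2} \; \leq \; \min\left(\sigma_\text{v}^2, \sigma_\text{t}^2\right)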

Cross-modal suppression as reliability-weighted opinion pooling

The trained model exhibits cross-modal suppression:

  • at low stimulus intensities, the firing rate is larger in the bimodal condition
  • at high stimulus intensities, the firing rate is smaller in the bimodal condition
  • example prediction for experiments: the strength of suppression depends on the relative reliabilities of the two modalities (see the toy calculation below)

[Ohshiro et al., 2017]
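A toy calculation with the opinion-pooling formula \(\bar E_\text{s} = \sum g E / \sum g\) illustrates this (all numbers hypothetical; the second modality is assumed to provide a weaker effective drive \(E_2 < E_1\)): with \(E^\text{L}=0\), \(E_1=10\), \(E_2=5\) (a.u.) and \(g^\text{L}=1\),

  • low intensity (\(g_1=g_2=0.1\)): unimodal \(\bar E_\text{s} = 1/1.1 \approx 0.9\), bimodal \(\bar E_\text{s} = 1.5/1.2 = 1.25\) (enhancement)
  • high intensity (\(g_1=g_2=10\)): unimodal \(\bar E_\text{s} = 100/11 \approx 9.1\), bimodal \(\bar E_\text{s} = 150/21 \approx 7.1\) (suppression)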

Summary

Adversarial learning during wakefulness and sleep allows the emergence of organized cortical representations.

Single neurons with conductance-based synapses learn to be optimal cue integrators.

Deperrois, N., Petrovici, M. A., Senn, W., & Jordan, J. (2021). Learning cortical representations through perturbed and adversarial dreaming. arXiv preprint arXiv:2109.04261.

Jordan, J., Sacramento, J., Wybo, W. A., Petrovici, M. A., & Senn, W. (2021). Learning Bayes-optimal dendritic opinion pooling. arXiv preprint arXiv:2104.13238.
