Learning Bayes-optimal dendritic opinion pooling

Jakob Jordan

Department of Physiology, University of Bern, Switzerland

25.08.2022, Giessbach meeting, Giessbach, Switzerland

Jordan, J., Sacramento, J., Wybo, W. A., Petrovici, M. A., & Senn, W. (2021). Learning Bayes-optimal dendritic opinion pooling. arXiv preprint arXiv:2104.13238.

(High-level) neuronal representations reflect contributions from multiple modalities

How to combine information from uncertain sources?

[Figure: a latent variable \(\bm z\) is inferred from visual, auditory, and olfactory cues, each carrying its own "uncertainty" \(\sigma_\text{v}^2\), \(\sigma_\text{a}^2\), \(\sigma_\text{o}^2\)]

How to estimate the uncertainty of each source?

Can single cortical neurons learn to optimally integrate information from uncertain sources?

Can single model neurons with plausible dynamics learn to optimally integrate information from uncertain sources?

An observation

Bayes-optimal inference is reflected in "slow" membrane potentials: their mean and precision correspond to the mean and precision of the posterior.

Membrane potential dynamics from noisy gradient ascent

p(u_\text{s}|W,r)
\dot u_\text{s} \propto \frac{\partial}{\partial u_\text{s}} \log p(u_\text{s}| W,r) + \xi
\mathbb{E}[u_\text{s}] = \bar E_\text{s}

Membrane potential mean = mean of the posterior

Membrane potential variance = inverse precision of the posterior

\text{Var}[u_\text{s}] = \frac{1}{\bar g_\text{s}}
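A minimal numerical sketch of these two identities, simulating the noisy gradient ascent above with Euler-Maruyama steps (the Gaussian posterior parameters and the unit-strength Langevin noise are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian posterior p(u_s | W, r) with mean E_bar and precision g_bar
# (illustrative values, not fitted to anything)
E_bar, g_bar = -55.0, 2.0

dt, n_steps = 1e-2, 100_000
u = np.empty(n_steps)
u[0] = E_bar

# Langevin dynamics: du = d/du log p(u) dt + sqrt(2 dt) * N(0, 1),
# whose stationary distribution is p(u) itself.
for t in range(1, n_steps):
    drift = -g_bar * (u[t - 1] - E_bar)          # gradient of log p
    u[t] = u[t - 1] + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal()

print(u.mean(), E_bar)         # E[u_s] ≈ posterior mean
print(u.var(), 1.0 / g_bar)    # Var[u_s] ≈ inverse posterior precision
```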

Somatic membrane potential distribution

Somatic membrane potential dynamics

Synaptic plasticity from matching target distributions

\mathbb{E} \left[ \Delta \mu^{\text{E/I}} \right] \propto (\mu^* - \bar E_\text{s})
\mathbb{E} \left[ \Delta \sigma^2 \right] \propto \frac{1}{\bar g_\text{s}} - \mathbb{E} \left[ (u_\text{s}^* - \bar E_\text{s})^2 \right]
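Setting both expected updates to zero makes the fixed point of this plasticity explicit (a direct consequence of the two expressions above, not an extra assumption):

\mathbb{E} \left[ \Delta \mu^{\text{E/I}} \right] = 0 \;\Leftrightarrow\; \bar E_\text{s} = \mu^*
\mathbb{E} \left[ \Delta \sigma^2 \right] = 0 \;\Leftrightarrow\; \frac{1}{\bar g_\text{s}} = \mathbb{E} \left[ (u_\text{s}^* - \bar E_\text{s})^2 \right] = \text{Var}[u_\text{s}^*] + (\mu^* - \bar E_\text{s})^2

At the joint fixed point, \(\bar E_\text{s} = \mu^*\) and \(1/\bar g_\text{s} = \text{Var}[u_\text{s}^*]\): on average, plasticity stops exactly when the somatic membrane potential distribution matches the target distribution in mean and variance.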

Prediction for experiments:

Synaptic plasticity modifies excitatory/inhibitory synapses

  • in approx. opposite directions to match the mean
  • in identical directions to match the variance
\dot W_d^\text{E/I} \propto [ \; \Delta \mu^\text{E/I} \, + \, \Delta \sigma^2 \; ] \, r

[Figure: target vs. actual somatic membrane potential distribution]

Learning Bayes-optimal inference of orientations from multimodal stimuli

The trained model approximates ideal observers and reproduces psychophysical signatures of experimental data [Nikbakht et al., 2018].

Cross-modal suppression as reliability-weighted opinion pooling

  • low stimulus intensities: firing rate is enhanced
  • high stimulus intensities: firing rate is suppressed
  • prediction for experiments: strength of suppression depends on relative reliabilities of the two modalities (see the toy sketch below)

[Ohshiro et al., 2017]
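A toy numerical sketch of this effect under opinion pooling. The intensity-to-conductance mapping and all numbers below are assumptions for illustration only; the point is the direction of the effect: adding a second modality pulls the pooled potential toward its own opinion, which raises weak responses and lowers strong ones.

```python
import numpy as np

# Toy opinion pooling: each dendrite contributes an "opinion" E_d weighted
# by its conductance g_d; the soma settles at the conductance-weighted mean.
def pooled_potential(opinions, conductances, g_leak=1.0, E_leak=0.0):
    g, E = np.asarray(conductances), np.asarray(opinions)
    return (g_leak * E_leak + (g * E).sum()) / (g_leak + g.sum())

# Assumed mapping: stimulus intensity raises both the opinion and its reliability
def modality(intensity):
    return 10.0 * intensity, 2.0 * intensity      # (opinion E_d, conductance g_d)

for intensity in (0.1, 1.0, 5.0):
    E1, g1 = modality(intensity)    # primary modality, intensity varied
    E2, g2 = modality(0.5)          # second modality, fixed moderate intensity
    unimodal = pooled_potential([E1], [g1])
    bimodal = pooled_potential([E1, E2], [g1, g2])
    print(intensity, unimodal, bimodal)
    # low intensity: bimodal > unimodal (enhancement)
    # high intensity: bimodal < unimodal (cross-modal suppression)
```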

Summary

Single neurons with conductance-based synapses can learn to be optimal cue integrators.

Jordan, J., Sacramento, J., Wybo, W. A., Petrovici, M. A., & Senn, W. (2021). Learning Bayes-optimal dendritic opinion pooling. arXiv preprint arXiv:2104.13238.

  1. Membrane-potential dynamics compute maximum-a-posteriori estimates.
  2. Membrane conductances represent reliabilities of these estimates.
  3. Synaptic plasticity allows neurons to match target membrane potential distributions.

Neurons with conductance-based synapses naturally implement probabilistic cue integration.

Membrane potential dynamics from noisy gradient ascent

\begin{array}{rl} p(u_\text{s}|W,r) =& \frac{1}{Z'} \prod_{d=0}^D p_d(u_\text{s}|W_d,r) \\ =& \frac{1}{Z} e^{-\frac{\bar g_\text{s}}{2}\left( u_\text{s} - \bar E_\text{s}\right)^2} \end{array}
\begin{array}{rl} C \dot u_\text{s} =& \frac{\partial}{\partial u_\text{s}} \log p(u_\text{s}| W,r) + \xi \\ =& \sum_{d=0}^D \left( g_d^\text{L} (E^\text{L} - u_\text{s}) + g_d^\text{E} (E^\text{E} - u_\text{s}) + g_d^\text{I} (E^\text{I} - u_\text{s}) \right) + \xi \end{array}
\mathbb{E}[u_\text{s}] = \bar E_\text{s}

Time-averaged membrane potential = mean of the posterior

Membrane potential variance = inverse precision of the posterior

\text{Var}[u_\text{s}] = \frac{1}{\bar g_\text{s}}
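A minimal simulation sketch of these dynamics, assuming illustrative conductances and reversal potentials and a noise amplitude chosen so that the stationary variance equals \(1/\bar g_\text{s}\) (i.e., so that the dynamics sample the posterior above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Reversal potentials, capacitance, and per-dendrite conductances (illustrative)
E_L, E_E, E_I = -70.0, 0.0, -80.0          # mV
C = 1.0
g_L = np.array([0.1, 0.1])                  # leak, one entry per dendrite
g_E = np.array([0.4, 0.1])                  # excitatory
g_I = np.array([0.2, 0.3])                  # inhibitory

g_bar = (g_L + g_E + g_I).sum()             # total conductance = posterior precision
E_bar = (g_L * E_L + g_E * E_E + g_I * E_I).sum() / g_bar   # pooled "opinion"

dt, n_steps = 1e-2, 300_000
u = np.empty(n_steps)
u[0] = E_bar
noise_std = np.sqrt(2.0 * C * dt)           # makes the stationary variance 1 / g_bar
for t in range(1, n_steps):
    v = u[t - 1]
    drift = (g_L * (E_L - v) + g_E * (E_E - v) + g_I * (E_I - v)).sum()
    u[t] = v + (drift * dt + noise_std * rng.standard_normal()) / C

print(u.mean(), E_bar)          # time-averaged potential ≈ posterior mean
print(u.var(), 1.0 / g_bar)     # membrane-potential variance ≈ 1 / g_bar
```

Up to the noise scaling, the somatic potential behaves as an Ornstein-Uhlenbeck process around the conductance-weighted average of the dendritic opinions, with a spread set by the total conductance.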

Synaptic plasticity from stochastic gradient ascent

\dot W_d^\text{E/I} \propto \frac{\partial}{\partial W_d^\text{E/I}} \log p(u_\text{s}^*|W,r)
\Delta \mu^{\text{E/I}} \propto (u_\text{s}^* - \bar E_\text{s}) \left( E^\text{E/I} - \bar E_\text{s} \right)
\Delta \sigma^2 \propto \frac{1}{2} \left( \frac{1}{\bar g_\text{s}} - (u_\text{s}^* - \bar E_\text{s})^2 \right)

Prediction for experiments:

Synaptic plasticity modifies excitatory/inhibitory synapses

  • in approx. opposite directions to match the mean
  • in identical directions to match the variance
\dot W_d^\text{E/I} \propto [ \; \Delta \mu^\text{E/I} \, + \, \Delta \sigma^2 \; ] \, r

\(u_\text{s}^*\): sample from target distribution \(p^*(u_\text{s})\)

[Figure: target vs. actual somatic membrane potential distribution]
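A minimal learning sketch of this rule for a single dendrite with one excitatory and one inhibitory weight. The parameterization (conductances \(g^\text{E/I} = W^\text{E/I} r\), leak, reversal potentials, target distribution, learning rate) is assumed for illustration; the updates are the per-sample rule above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Reversal potentials, leak conductance, presynaptic rate (illustrative values)
E_L, E_E, E_I = -70.0, 0.0, -80.0
g_L, r = 0.1, 1.0

# Target membrane-potential distribution p*(u_s) (illustrative values)
mu_star, var_star = -50.0, 0.5

W_E, W_I = 0.5, 0.5     # excitatory / inhibitory weights
eta = 1e-4              # learning rate

for step in range(200_000):
    g_E, g_I = W_E * r, W_I * r
    g_bar = g_L + g_E + g_I
    E_bar = (g_L * E_L + g_E * E_E + g_I * E_I) / g_bar

    u_star = rng.normal(mu_star, np.sqrt(var_star))    # sample from the target

    # per-sample updates; max(., 0) keeps conductances non-negative
    d_sigma2 = 0.5 * (1.0 / g_bar - (u_star - E_bar) ** 2)
    W_E = max(W_E + eta * ((u_star - E_bar) * (E_E - E_bar) + d_sigma2) * r, 0.0)
    W_I = max(W_I + eta * ((u_star - E_bar) * (E_I - E_bar) + d_sigma2) * r, 0.0)

g_bar = g_L + (W_E + W_I) * r
E_bar = (g_L * E_L + W_E * r * E_E + W_I * r * E_I) / g_bar
print(E_bar, mu_star)           # learned mean ≈ target mean
print(1.0 / g_bar, var_star)    # learned variance ≈ target variance
```

During learning the mean is corrected by moving \(W^\text{E}\) and \(W^\text{I}\) in roughly opposite directions, while the variance is matched by moving them together, consistent with the prediction above.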

Synaptic plasticity performs error-correction and reliability matching.
