Aim 3: Posterior Sampling and Uncertainty

 

December 12, 2023

Testing DPS Step Size

non-Bayesian update (i.e., \(g(t)^2\) is not applied to the MLE step)

\[x_{t-1} = x_t - [f(x_t,t) - (g(t)^2s_{\hat{\theta}}(x_t, t) + \eta\nabla_{x_t}\text{MLE}(y, x_t, t))] + g(t)\,\text{d}w\]
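As a concrete reference, a minimal PyTorch sketch of this update; `score_model`, `f`, `g`, and the differentiable log-likelihood `mle` are hypothetical stand-ins, not names from the slides:

```python
import torch

def dps_step(x_t, t, y, score_model, f, g, mle, eta=1.0, dt=1.0):
    """One reverse-SDE step with the non-Bayesian DPS guidance above.

    score_model(x, t): learned score s_theta(x, t).
    f(x, t), g(t): drift and diffusion of the forward SDE.
    mle(y, x, t): differentiable log-likelihood log p(y | x_t).
    Note: eta multiplies the MLE gradient directly (no g(t)^2 factor).
    """
    x_t = x_t.detach().requires_grad_(True)
    grad_mle = torch.autograd.grad(mle(y, x_t, t), x_t)[0]
    with torch.no_grad():
        score = score_model(x_t, t)
        drift = f(x_t, t) - (g(t) ** 2 * score + eta * grad_mle)
        # g(t) dw with dw ~ sqrt(dt) N(0, I)
        noise = g(t) * torch.randn_like(x_t) * dt ** 0.5
        x_prev = x_t - drift * dt + noise
    return x_prev
```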

Testing DPS Step Size

[Figure: posterior samples (\(I_0 = 1024\)) and UQ maps for step sizes \(\eta(t) = 0.1, 1.0, 8.0\)]

Faster Sampling with DDIM

DDIM update

\[x_s^{\text{DDIM}} = \alpha_s \hat{x}_0(x_t) + \sqrt{b^2_s - \sigma^2} \cdot \epsilon_{\theta}(x_t, t) + \sigma z,~z \sim \mathcal{N}(0, \mathbb{I})\]

with

\[\sigma = \eta^{\text{DDIM}} \frac{b_s}{b_t}\sqrt{1 - \frac{\alpha_t^2}{\alpha_s^2}}\]

and forward marginals

\[x_t \mid x_0 \sim \mathcal{N}(\alpha_t x_0, b^2_t\mathbb{I})\]

DDIM-based DPS

\[x_s = x_s^{\text{DDIM}} + \eta^{\text{DPS}}\nabla_{x_t} \text{MLE}(y, x_t, t)\]
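A hedged sketch of one DDIM-based DPS step, combining the three formulas above; `eps_model`, `x0_hat`, `mle`, `alpha`, and `b` are hypothetical stand-ins for the trained networks and noise schedule:

```python
import torch

def ddim_dps_step(x_t, t, s, y, eps_model, x0_hat, mle,
                  alpha, b, eta_ddim=0.0, eta_dps=1.0):
    """One DDIM step from time t to s (< t) with DPS guidance.

    alpha(t), b(t): coefficients of x_t | x_0 ~ N(alpha_t x_0, b_t^2 I).
    eps_model(x, t): noise prediction; x0_hat(x, t): estimate of x_0.
    mle(y, x, t): differentiable log-likelihood log p(y | x_t).
    """
    x_t = x_t.detach().requires_grad_(True)
    grad_mle = torch.autograd.grad(mle(y, x_t, t), x_t)[0]
    with torch.no_grad():
        a_t, a_s, b_t, b_s = alpha(t), alpha(s), b(t), b(s)
        sigma = eta_ddim * (b_s / b_t) * (1 - a_t ** 2 / a_s ** 2) ** 0.5
        x_ddim = (a_s * x0_hat(x_t, t)
                  + (b_s ** 2 - sigma ** 2) ** 0.5 * eps_model(x_t, t)
                  + sigma * torch.randn_like(x_t))
        x_s = x_ddim + eta_dps * grad_mle  # DPS guidance on the DDIM iterate
    return x_s
```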

Samples with 200 Steps \((I_0 = 1024)\)

[Three figure slides: ground truth \(x\), measurements \(y\), posterior samples, and uncertainty maps \(\hat{u} - \hat{l}\)]

Baselines: FBP, FBP + QR U-net

[Figure: \(x\), FBP reconstruction, point estimate \(\hat{x}\), bounds \(\hat{l}\), \(\hat{u}\), and uncertainty map \(\hat{u} - \hat{l}\)]

Quantitative Comparison

Calibration Results

[Figure: calibration results for U-net, Gaussian, and Poisson]

Semantic Calibration

From

\[\mathcal{I}(y)_j = [\hat{l}_j - \lambda_j, \hat{u}_j + \lambda_{j}]\]

to

\[\mathcal{I}(y)_j = [\hat{l}_j - \lambda_{c_j}, \hat{u}_j + \lambda_{c_j}]\]

where

\[c_j = \underset{k \in [C]}{\arg\max}~p(c_j = k \mid y)\]

is the posterior semantic segmentation of pixel \(j\)

\[\downarrow\]

testing with background, body, and lungs
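For concreteness, a minimal NumPy sketch of assembling the class-dependent intervals once the per-class \(\lambda_k\) have been calibrated (e.g., with RCPS); all names here are hypothetical:

```python
import numpy as np

def semantic_intervals(l_hat, u_hat, seg_probs, lambdas):
    """Per-pixel intervals with class-dependent calibration offsets.

    l_hat, u_hat: (H, W) lower/upper quantile estimates.
    seg_probs: (C, H, W) posterior class probabilities p(c_j = k | y).
    lambdas: (C,) calibrated offset per semantic class
             (e.g., background, body, lungs).
    """
    c = seg_probs.argmax(axis=0)  # c_j = argmax_k p(c_j = k | y)
    lam = lambdas[c]              # lambda_{c_j}, broadcast per pixel
    return l_hat - lam, u_hat + lam
```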

Semantic Calibration

[Figure: conformalized uncertainty maps and calibrated \(\lambda\) for RCPS, K-RCPS, and sem-RCPS across background, body, and lungs]

Beyond Gaussian Diffusion

Poisson DPS

\[p(y \mid x_t) \approx p(y \mid \hat{x}_0(x_t)) = \text{Pois}(y;I_0e^{-A\hat{x}_0(x_t)})\]

\[\downarrow\]

QUESTION

given \(y_0\) measured at dose \(I_0 < I_{\text{max}}\), can we sample \(y_{\text{max}}\) at \(I_{\text{max}}\)?

\[\downarrow\]

\[y_t \mid \mu \sim \text{Pois}(I(t)e^{-A\mu})\]

is a Poisson Point Process

Stochastic Localization [Montanari, '23]

If

\[(Y_{t})_{t \geq 0}\big\lvert_{x} \sim \text{PPP}(x~\text{d}t) \implies Y_t \sim \text{Pois}(tx),\]

then the probability of not adding 1 is

\[\mathbb{P}[Y_{t + \delta} = y \mid Y_t = y] = 1 - \delta m(y,t) + o(\delta)\]

and the probability of adding 1 is

\[\mathbb{P}[Y_{t + \delta} = y + 1 \mid Y_t = y] = \delta m(y,t) + o(\delta),\]

where

\[m(y,t) = \mathbb{E}[X \mid Y_t = y]\]

is the posterior mean of \(X\)

Stochastic Localization [Montanari, '23]

Learn \(m_{\theta}(y,t)\) with

\[\hat{\theta} = \underset{\theta}{\arg\min}~\mathbb{E}_{\mu, t, y \sim \text{Pois}(te^{-A\mu})}[\| e^{-A\mu} - m_{\theta}(y,t)\|^2]\]
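A minimal PyTorch sketch of one draw of this regression loss, assuming a callable forward operator `A` (e.g., a Radon transform) and a hypothetical `m_model(y, t)`:

```python
import torch

def sl_training_loss(mu, A, m_model, t_max=64.0):
    """One Monte Carlo draw of the stochastic-localization loss above.

    mu: batch of images; target is x = exp(-A mu).
    A random time t is drawn, y ~ Pois(t x) is simulated,
    and m_model(y, t) regresses onto x.
    """
    x = torch.exp(-A(mu))
    t = torch.rand(mu.shape[0], device=mu.device) * t_max
    t_b = t.view(-1, *([1] * (x.dim() - 1)))  # broadcast t over pixels
    y = torch.poisson(t_b * x)
    return ((x - m_model(y, t)) ** 2).mean()
```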

given \(y_0\), sample \(y_T\) with

\[y_{t + \text{d}t} = y_t + \text{Pois}(m_{\hat{\theta}}(y_t, t)~\text{d}t)\]

obtain \(x_T\) via FBP on \(y_T\)

🤔
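A minimal NumPy sketch of the sampling recursion above, assuming a trained `m_model` for \(m_{\hat{\theta}}(y, t)\) and identifying time with dose so that integration runs from \(I_0\) up to \(I_{\text{max}}\) (names hypothetical):

```python
import numpy as np

def localize(y0, t0, T, n_steps, m_model, rng=np.random.default_rng()):
    """Euler discretization of y_{t+dt} = y_t + Pois(m(y_t, t) dt).

    y0: counts measured at time (dose) t0.
    m_model(y, t): learned posterior-mean estimate m_theta(y, t).
    Counts accumulate from t0 to T; the final sinogram y_T can then
    be reconstructed with FBP (not shown).
    """
    y, t = y0.copy(), t0
    dt = (T - t0) / n_steps
    for _ in range(n_steps):
        rate = np.clip(m_model(y, t), 0.0, None) * dt  # intensity m(y,t) dt
        y = y + rng.poisson(rate)                      # counts only accumulate
        t += dt
    return y
```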

Stochastic Localization [Montanari, '23]

Possible things that are going wrong:

1. The exponential range of \(e^{-A\mu}\) makes the optimization problem hard

 

2. The PPP is a discrete process: once a count has appeared, it is difficult for it to go away

(ideas to fix: add and remove counts? smooth the process with additive Gaussian noise?)

 

3. The Tweedie estimate for Poisson noise lives in log space, so things are not as nice as in Gaussian diffusion

(there is no score here)

By Jacopo Teneggi