Importance Sampling with Path Guiding: A Project Proposal

Phillip Thomas

Motivation

L_o(x,\omega_o) = L_e(x,\omega_o) + \underbrace{\int_{\mathcal{S}^2}f(x,\omega_o,\omega_i) L(x,\omega_i) |\cos\theta_i| \text{d}\omega_i}_{=L_r(x,\omega_o)\text{, estimated with a Monte Carlo estimator}}
L_r(x,\omega_o) = \frac{1}{N} \sum_{i=1}^{N}\frac{f(x,\omega_o,\omega_i) L(x,\omega_i) |\cos\theta_i|}{p(\omega_i)}

The variance of this estimator is \(\mathcal{O}(\frac{1}{N})\). We want to reduce the variance of the Monte Carlo estimator, i.e., the image noise we see when only a finite number of samples is available.


By drawing the random samples \(\omega_i\) from a probability distribution \(p\) whose shape resembles the integrand, we can reduce the variance of the estimator. This is importance sampling.

Basic idea

Approximating the incoming radiance term by using information collected from past samples is called path guiding.

L_r(x,\omega_o) = \frac{1}{N} \sum_{i=1}^{N}\frac{f(x,\omega_o,\omega_i) L(x,\omega_i) |\cos\theta_i|}{p(\omega_i)}

If we can construct a probability distribution \(p\) whose shape is similar to even one term of the numerator, we eliminate some of the variance in the estimator.

\text{samples per pixel} = \overbrace{2, ~4, ~8, ~16, ~32, ~64, ~128, ~256}^{\text{training iterations}}, \overbrace{512}^{\text{render}}

Motivation

Reduce variance of MC estimator.

Basic idea

Learn an approximation of the incoming radiance terms.

My project

Implement this idea (and the details described in the paper) in C++.

Reproduce the paper's rendered results.

Two time-permitting extensions

1. Implement another paper's technique of learning direct illumination separately.
L(p_1 \rightarrow p_0) = \underbrace{P(\overline{p}_1)}_{\text{emission}} + \underbrace{P(\overline{p}_2)}_{\text{direct illumination}} + \underbrace{\sum_{i=3}^\infty P(\overline{p}_i)}_{\text{indirect illumination}},

where \(P ( \overline{p}_i)\) gives the amount of radiance scattered over a path \(\overline{p}_i\) with \(i+1\) vertices.

P(\overline{p}_2) = L_d(x,\omega_o) = \frac{1}{N} \sum_{i=1}^{N} \frac{L_e(y_i\rightarrow x)V(x,y_i)f(x,y_i,\omega_o)G(x,y_i)}{p(y_i|x,\omega_o)}

2. Extend the SD-trees to be useful in animated sequences.
