Matthew Berger, Jixian Li, and Joshua A. Levine
Similar in spirit to InSituNet
The general idea is to take a "view" and a discretized transfer function, and produce a volume rendered image.
Learns a representation of the volume (one network is trained per dataset)
Volume Rendering
Generative Adversarial Networks (GANs) again
\(\min_G \max_D \, L_{adv}(G, D)\)
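As a reminder, the minimax objective pits the generator G against the discriminator D. A minimal numpy sketch of the classic adversarial loss, assuming D outputs probabilities in (0, 1):

```python
import numpy as np

def adv_loss(d_real, d_fake):
    # Classic minimax adversarial loss:
    #   L_adv = E[log D(x)] + E[log(1 - D(G(z)))]
    # D maximizes this; G minimizes it (in practice a non-saturating
    # variant is often used for the generator update).
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
```

With a perfectly confused discriminator (all outputs 0.5), the loss is 2·log(0.5) ≈ −1.386, the well-known equilibrium value.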
Opacity GAN
Color GAN
Composing these gives the full generator
input: view parameters and a discretized transfer function (opacity and color)
output: the volume-rendered image
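The composition can be sketched as follows. Note that `opacity_generator` and `color_generator` are hypothetical stand-ins (the actual networks are deep convolutional generators); the sketch only shows how the opacity image conditions the color stage:

```python
import numpy as np

def opacity_generator(view, opacity_tf):
    # Stand-in for the opacity GAN's generator.
    # view: (5,) camera parameters; opacity_tf: (256,) discretized opacity TF.
    # Returns a single-channel "opacity image" (64x64 here, purely illustrative).
    feats = np.tanh(view @ np.ones((5, 64)))
    return np.outer(feats, opacity_tf[:64]).reshape(64, 64)

def color_generator(view, color_tf, opacity_image):
    # Stand-in for the color GAN's generator: the final RGB image is
    # conditioned on the predicted opacity image.
    rgb = np.stack([opacity_image * c for c in color_tf[:3]], axis=-1)
    return np.clip(rgb, 0.0, 1.0)

def full_generator(view, opacity_tf, color_tf):
    # Composing the two GANs gives the full view + TF -> image generator.
    alpha = opacity_generator(view, opacity_tf)
    return color_generator(view, color_tf, alpha)
```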
Why two GANs?
Opacity GAN: predicts an opacity image (the shape/silhouette) from the view and opacity transfer function.
Color GAN: produces the final color image, conditioned on the opacity image.
Splitting the problem lets the harder color prediction build on the simpler opacity prediction.
Training Data
Acquiring training data is relatively easy: render the volume under many sampled views and transfer functions.
view: (azimuth sin, azimuth cos, elevation, in-plane rotation, distance to camera)
200,000 samples generated per model
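Encoding the azimuth as a (sin, cos) pair avoids the wrap-around discontinuity at 2π that a raw angle would have. A minimal sketch of building the 5-D view vector (the function name `encode_view` is an assumption, not from the paper):

```python
import math

def encode_view(azimuth, elevation, roll, distance):
    """Encode camera parameters as the 5-D view vector:
    (sin(azimuth), cos(azimuth), elevation, in-plane rotation, distance)."""
    return (math.sin(azimuth), math.cos(azimuth), elevation, roll, distance)
```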
Procedure
The training procedure itself is standard
Opacity GAN:
Color GAN:
Transfer function sensitivity
Latent space exploration
The latent codes are high-dimensional points; use t-SNE to reduce them to 2-D for visualization
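A minimal sketch of the projection step using scikit-learn's `TSNE`; the helper name `embed_2d` and its parameter choices are illustrative assumptions:

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_2d(latents, perplexity=10, seed=0):
    """Project high-dimensional latent vectors to 2-D for visualization."""
    tsne = TSNE(n_components=2, perplexity=perplexity,
                random_state=seed, init="pca")
    return tsne.fit_transform(latents)
```

Each latent vector becomes a 2-D point; nearby points in the embedding tend to correspond to similar renderings, which is what makes the scatter plot useful for exploration.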