Francois Lanusse (CNRS/CEA Saclay)
Trained under an l2 loss:
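The loss is not written out on the slide; for a network f_θ mapping an input x to a target y, an ℓ2 (mean squared error) objective typically reads (our notation, added for clarity):

\mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \left\| y_i - f_\theta(x_i) \right\|_2^2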
See Benjamin's poster here
Given the extra dimension, we cannot maintain the same spatial resolution in our 3D model due to the limited memory on the GPU. We resample the input data to an array of shape 64×64×32 in position-position-velocity (PPV) space.
Application of Convolutional Neural Networks to Identify Stellar Feedback Bubbles in CO Emission, Xu et al. 2020
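As an illustration of the kind of preprocessing described above (not the exact pipeline of Xu et al. 2020), a large PPV cube can be block-averaged down to a 64×64×32 array before being fed to the 3D network; the function name, input size and averaging choice below are hypothetical.

import numpy as np

def rebin_ppv(cube, target_shape=(64, 64, 32)):
    # Block-average a position-position-velocity cube down to a coarser grid.
    # Assumes each axis of `cube` is an integer multiple of the target axis.
    f0, f1, f2 = (s // t for s, t in zip(cube.shape, target_shape))
    reshaped = cube.reshape(target_shape[0], f0,
                            target_shape[1], f1,
                            target_shape[2], f2)
    return reshaped.mean(axis=(1, 3, 5))

# e.g. a (256, 256, 128) PPV cube -> (64, 64, 32)
small_cube = rebin_ppv(np.random.rand(256, 256, 128))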
Limited by the size of GPU memory, and because 3D data consume more memory than lower-dimensional tasks, we cannot feed whole simulations into the GPU during training and testing.
AI-assisted super-resolution cosmological simulations, Li et al. 2020
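Since a full simulation cannot be held in GPU memory, training and testing are typically done on smaller sub-volumes; the crop size and helper below are illustrative, not taken from Li et al. 2020.

import numpy as np

def random_subvolume(field, size=64, rng=np.random.default_rng()):
    # Extract a random cubic crop of side `size` from a large 3D field.
    ix, iy, iz = (rng.integers(0, s - size + 1) for s in field.shape)
    return field[ix:ix + size, iy:iy + size, iz:iz + size]

# e.g. train on 64^3 crops drawn from a 512^3 density field
crop = random_subvolume(np.random.rand(512, 512, 512), size=64)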
FlowPM, TensorFlow N-body solver
try me out here
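For a flavor of what running FlowPM looks like, here is a minimal sketch loosely based on the project README; the exact signatures of linear_field, lpt_init, nbody and cic_paint vary between FlowPM versions, and ipklin (the linear matter power spectrum) is assumed to be defined elsewhere.

import numpy as np
import tensorflow as tf
import flowpm

# Scale factors at which particle-mesh steps are taken
stages = np.linspace(0.1, 1.0, 10, endpoint=True)

# Gaussian initial conditions on a 32^3 mesh in a 100 Mpc/h box
initial_conditions = flowpm.linear_field(32, 100., ipklin, batch_size=1)

state = flowpm.lpt_init(initial_conditions, a0=0.1)    # LPT displacement of particles
final_state = flowpm.nbody(state, stages, 32)          # PM evolution down to a=1
final_field = flowpm.cic_paint(tf.zeros_like(initial_conditions), final_state[0])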
CosmicRIM: Recurrent Inference Machine for initial condition reconstruction
(Modi, Lanusse, Seljak, Spergel, Perreault-Levasseur 2021)
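For context, a Recurrent Inference Machine iteratively refines its estimate of the initial conditions using the gradient of the data likelihood and a recurrent hidden state; schematically (generic RIM notation after Putzky & Welling 2017, not the exact CosmicRIM equations):

x_{t+1} = x_t + g_\phi\big(x_t,\ \nabla_{x_t}\log p(d \mid x_t),\ s_t\big), \qquad s_{t+1} = h_\phi\big(x_t,\ \nabla_{x_t}\log p(d \mid x_t),\ s_t\big)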
Pipeline parallelism
import mesh_tensorflow as mtf

# A simple 2-layer fully connected network:
#   y = relu(x w + bias) v
# (simplified notation; b, d_io, d_h are Python integers, x is an existing mtf.Tensor)

# Define named dimensions
batch = mtf.Dimension("batch", b)
io = mtf.Dimension("io", d_io)
hidden = mtf.Dimension("hidden", d_h)

# x.shape == [batch, io]
# Get the variables
w = mtf.get_variable("w", shape=[io, hidden])
bias = mtf.get_variable("bias", shape=[hidden])
v = mtf.get_variable("v", shape=[hidden, io])

# Define operations: einsum generalizes matrix multiplication over named dimensions
h = mtf.relu(mtf.einsum(x, w, output_shape=[batch, hidden]) + bias)
y = mtf.einsum(h, v, output_shape=[batch, io])
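The parallelism comes from the layout: Mesh TensorFlow names every tensor dimension and splits chosen dimensions across a mesh of processors. The mesh shape and layout rules below are a sketch in the spirit of the Mesh-TensorFlow examples, not the exact configuration used here.

# Splitting the "hidden" dimension across processors gives model parallelism;
# splitting "batch" instead would give ordinary data parallelism.
mesh_shape = [("all_processors", 8)]
layout_rules = [("hidden", "all_processors")]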
2048^3 simulation of the Universe, distributed on 256 GPUs
=> Runs in ~3 minutes
=> multiple TB of RAM
Deep Learning has revolutionized structure identification; Stella's talk is an excellent illustration of that.
U-Nets are an extremely versatile and successful architecture.
Deep Learning with 3D data is still a challenge in 2021 because our needs in (astro-)physics are unique.