Putting the Cosmic Large-scale Structure on the Map: Theory Meets Numerics
Sept. 2025
Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM
Implicit Simulation-Based Inference
Explicit Simulation-Based Inference
a.k.a:
a.k.a:
A few things to note:
Dr. Justine Zeghal
(now at U. of Montreal)
Implicit Inference is easier, cheaper, and yields the same results as Explicit Inference...
But Explicit Inference is cooler, so let's try to do it anyway!
Credit: Yuuki Omori, Chihway Chang, Justine Zeghal, EiffL
More seriously, Explicit Inference has some advantages:
Work led by Hugo Simon (CEA Paris-Saclay)
In collaboration with Arnaud de Mattia
<- On the job market! Don't miss out!
\(\Omega := \{ \Omega_m, \Omega_\Lambda, H_0, \sigma_8, f_\mathrm{NL}, \dots \}\)
Forward model: \(\Omega \xrightarrow{\text{linear matter spectrum}} \delta_L \xrightarrow{\text{structure growth}} \delta_g\), with inference running in the opposite direction.
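Explicit inference then targets a joint posterior of the form \(\mathrm p(\Omega, \delta_L \mid \delta_g) \propto \mathrm p(\delta_g \mid \delta_L, \Omega)\, \mathrm p(\delta_L \mid \Omega)\, \mathrm p(\Omega)\), sampled directly in the high-dimensional space of the latent field \(\delta_L\).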
\(128^3\) PM on 8 GPUs:
4h MCLMC vs. \(\geq\) 80h HMC
Fast and differentiable model thanks to \(\texttt{NumPyro}\) and \(\texttt{JaxPM}\)
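To make this concrete, here is a minimal, self-contained sketch of the \(\Omega \to \delta_L \to \delta_g\) model as a NumPyro program. The power spectrum and growth steps are toy stand-ins (the real model uses a JaxPM particle-mesh simulation), so all helper names below are illustrative:

```python
# Toy sketch of the Omega -> delta_L -> delta_g generative model as a NumPyro
# program; the spectrum and "growth" below are stand-ins for the real PM model.
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist

N = 32  # toy mesh size

def toy_linear_power(k, Omega_m, sigma8):
    # Hypothetical power-law stand-in for the linear matter spectrum
    return sigma8**2 * (k + 1e-3) ** (-2.0 * Omega_m)

def model(obs=None):
    # Cosmological parameters Omega (a subset, for illustration)
    Omega_m = numpyro.sample("Omega_m", dist.Uniform(0.1, 0.5))
    sigma8 = numpyro.sample("sigma8", dist.Uniform(0.5, 1.2))

    # Latent white noise -> linear field delta_L via the linear spectrum
    eps = numpyro.sample("eps", dist.Normal(jnp.zeros((N, N, N)), 1.0))
    kx, ky, kz = jnp.meshgrid(*[jnp.fft.fftfreq(N)] * 3, indexing="ij")
    kvec = jnp.sqrt(kx**2 + ky**2 + kz**2)
    delta_L = jnp.fft.ifftn(
        jnp.fft.fftn(eps) * jnp.sqrt(toy_linear_power(kvec, Omega_m, sigma8))
    ).real

    # Structure growth: toy nonlinearity (a PM solver in the real model)
    delta_g = delta_L + 0.5 * delta_L**2

    # Gaussian likelihood for the observed galaxy field
    numpyro.sample("obs", dist.Normal(delta_g, 0.1), obs=obs)
```

Sampling the joint posterior over \((\Omega_m, \sigma_8)\) and the latent field then proceeds with standard NumPyro machinery (e.g. \(\texttt{numpyro.infer.MCMC}\)).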
Samplers compared: NUTS within Gibbs; auto-tuned HMC (NUTS); adjusted MCHMC; unadjusted Langevin MCHMC (MCLMC)
10× fewer model evaluations required
The unadjusted microcanonical sampler outperforms every adjusted sampler
Hamiltonian Monte Carlo (e.g. Neal 2011)
Recipe😋 to sample from \(\mathrm p \propto e^{-U}\):
the gradient guides the particle toward high-density sets
but one must average over all energy levels, which scales poorly with dimension... let's try avoiding that
MicroCanonical HMC (Robnik+2022)
a single energy/speed level
reducing the stepsize rapidly brings the bias under the Monte Carlo error
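As a concrete illustration of the recipe, here is a minimal hand-rolled HMC step in JAX on a toy Gaussian target (illustrative only, not the tuned samplers benchmarked here). MCLMC differs by staying on a single energy/speed level and skipping the Metropolis adjustment, which is roughly where the savings in evaluations come from:

```python
# Minimal hand-rolled HMC in JAX for a toy Gaussian target, p(q) ∝ exp(-U(q)).
import jax
import jax.numpy as jnp

def U(q):
    # Toy potential: standard Gaussian target
    return 0.5 * jnp.sum(q**2)

def leapfrog(q, p, step_size, n_steps):
    # grad(U) guides the particle toward high-density regions
    p = p - 0.5 * step_size * jax.grad(U)(q)   # half momentum kick
    for _ in range(n_steps - 1):
        q = q + step_size * p                  # position drift
        p = p - step_size * jax.grad(U)(q)     # full momentum kick
    q = q + step_size * p
    p = p - 0.5 * step_size * jax.grad(U)(q)   # final half kick
    return q, p

def hmc_step(rng_key, q, step_size=0.1, n_steps=20):
    key_mom, key_acc = jax.random.split(rng_key)
    # Resampling the momentum is what averages over energy levels
    p = jax.random.normal(key_mom, q.shape)
    q_new, p_new = leapfrog(q, p, step_size, n_steps)
    # Metropolis adjustment: accept/reject based on the energy error
    dH = U(q_new) + 0.5 * jnp.sum(p_new**2) - U(q) - 0.5 * jnp.sum(p**2)
    accept = jax.random.uniform(key_acc) < jnp.exp(-dH)
    return jnp.where(accept, q_new, q)

q = jnp.zeros(3)
for key in jax.random.split(jax.random.PRNGKey(0), 100):  # jax.lax.scan in practice
    q = hmc_step(key, q)
```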
\(128^3\) PM on 8 GPUs:
4h MCLMC vs. \(\geq\) 80h NUTS
Only mildly dependent on the formation model and volume
Probing smaller scales could be harder
Work led by Wassim Kabalan (IN2P3/APC)
In collaboration with Alexandre Boucaud
<- On the job market! Don't miss out!
Differentiable Lensing Lightcone (DLL) - FlowPM
=> the limit of what fits on a conventional GPU circa 2022
without halo exchange
with halo exchange
Jean Zay Supercomputer
pip install jaxpm
Multi-GPU and multi-node simulations with distributed domain decomposition (successfully ran \(2048^3\) on 256 GPUs), built on top of jaxdecomp
End-to-end differentiability, including force computation and interpolation
Memory-efficient gradients via a custom JAX-compatible reverse-adjoint solver (Diffrax solvers are also supported); see the sketch below
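For flavor, a condensed example of a differentiable PM simulation with JaxPM, adapted from the JaxPM README; function names and signatures may have evolved across versions (especially in the distributed one), so treat this as a sketch:

```python
# Differentiable PM simulation sketch, adapted from the JaxPM README example.
import jax
import jax.numpy as jnp
import jax_cosmo as jc
from jax.experimental.ode import odeint
from jaxpm.painting import cic_paint
from jaxpm.pm import linear_field, lpt, make_ode_fn

mesh_shape = [64, 64, 64]
box_size = [64., 64., 64.]  # Mpc/h

@jax.jit
def run_simulation(omega_c, sigma8):
    # Linear matter power spectrum from jax-cosmo
    k = jnp.logspace(-4, 1, 128)
    pk = jc.power.linear_matter_power(jc.Planck15(Omega_c=omega_c, sigma8=sigma8), k)
    pk_fn = lambda x: jc.scipy.interpolate.interp(x.reshape([-1]), k, pk).reshape(x.shape)

    # Gaussian initial conditions delta_L
    initial_conditions = linear_field(mesh_shape, box_size, pk_fn,
                                      seed=jax.random.PRNGKey(0))

    # Uniform particle grid, displaced with LPT at a=0.1
    particles = jnp.stack(jnp.meshgrid(*[jnp.arange(s) for s in mesh_shape]),
                          axis=-1).reshape([-1, 3])
    cosmo = jc.Planck15(Omega_c=omega_c, sigma8=sigma8)
    dx, p, _ = lpt(cosmo, initial_conditions, particles, 0.1)

    # N-body evolution to a=1 with an ODE solver
    res = odeint(make_ode_fn(mesh_shape), [particles + dx, p],
                 jnp.linspace(0.1, 1.0, 2), cosmo, rtol=1e-5, atol=1e-5)

    # Paint particles back onto the mesh -> final density field
    return cic_paint(jnp.zeros(mesh_shape), res[0][-1])

# Everything is JAX, so gradients flow end to end:
# jax.grad(lambda om: run_simulation(om, 0.8).sum())(0.25)
```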
Work led by Arne Thomsen (ETH Zurich)
In collaboration with Tilman Troester, Chihway Chang, Yuuki Omori
Accepted at the NeurIPS 2025 Workshop on Machine Learning and the Physical Sciences
Dr. Denise Lanzieri
(now at Sony CSL)
Comparison: CAMELS N-body vs. PM vs. PM+NN, where the NN is a particle-wise multilayer perceptron correcting the PM dynamics (sketch below)
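A hedged sketch of the idea (not the exact architecture from this work): a small per-particle MLP producing a residual correction on top of the PM force, so the hybrid PM+NN model stays differentiable end to end:

```python
# Hypothetical particle-wise MLP correction on top of PM forces.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(4, 32, 32, 3)):
    # Simple dense layers: (in_features, ..., out_features)
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = jax.nn.relu(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def hybrid_force(params, pm_force, features):
    # pm_force: (N, 3) forces from the PM solver
    # features: (N, 4) per-particle inputs (e.g. local density, scale factor, ...)
    correction = jax.vmap(lambda f: mlp(params, f))(features)
    return pm_force + correction

params = init_mlp(jax.random.PRNGKey(0))
corrected = hybrid_force(params, jnp.zeros((128, 3)), jnp.ones((128, 4)))
```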
Just food for thought, not a full project