[Diagram: input $x$ → neural network $f$ → representation (summary statistic) $r = f(x)$ → output cosmological parameters $\Omega_M$, $\Omega_\Lambda$, $\sigma_8$]
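As a minimal sketch (hypothetical layer sizes and random weights, not the trained model from the talk), the compressor $f$ can be pictured as a small fully connected network mapping a data vector to a three-dimensional summary, one entry per target parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_summary(x, weights):
    """Forward pass of a toy fully connected network: r = f(x)."""
    h = x
    for W, b in weights[:-1]:
        h = np.tanh(h @ W + b)     # hidden layers with tanh nonlinearity
    W, b = weights[-1]
    return h @ W + b               # linear output layer: the summary r

# Hypothetical sizes: x is a flattened field of 256 values; r has one entry
# per parameter (Omega_M, Omega_Lambda, sigma_8).
dims = [256, 64, 64, 3]
weights = [(rng.normal(size=(m, n)) / np.sqrt(m), np.zeros(n))
           for m, n in zip(dims[:-1], dims[1:])]

x = rng.normal(size=256)
r = mlp_summary(x, weights)        # learned summary statistic
print(r.shape)                     # (3,)
```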
Modelling cross-correlations
$$P(\theta \mid x_\mathrm{obs})$$
Implicit likelihood inference with normalising flows

$$x = f(z), \quad z = f^{-1}(x)$$
$$p(\mathbf{x}) = p_z\!\left(f^{-1}(\mathbf{x})\right) \left\vert \det J(f^{-1}) \right\vert$$
No assumptions about the form of the likelihood (real likelihoods are rarely Gaussian!)
No expensive MCMC chains are needed to estimate the posterior
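A minimal sketch of the change-of-variables rule above, assuming a single invertible affine map in place of the learned stack of flow layers a real normalising flow would use:

```python
import numpy as np

a, b = 2.0, 1.0                       # toy (assumed, not learned) flow parameters

def f_inverse(x):
    return (x - b) / a                # z = f^{-1}(x)

def log_p_z(z):
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi)   # standard normal base density

def log_p_x(x):
    # log p(x) = log p_z(f^{-1}(x)) + log|det J(f^{-1})|; here the Jacobian is 1/a
    return log_p_z(f_inverse(x)) + np.log(abs(1.0 / a))

# The density is exact and cheap to evaluate, so no MCMC is needed; sampling is
# just z ~ p_z followed by x = f(z) = a*z + b.
x = np.linspace(-8.0, 10.0, 10001)
p = np.exp(log_p_x(x))
print(np.sum(p) * (x[1] - x[0]))      # ~1.0: a properly normalised density
```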





$$P(\mathcal{C} \mid \mathrm{GNN}(G))$$
$$\mathcal{L} = - \frac{1}{N} \sum_i \log P\!\left(\mathcal{C}_i \mid \mathrm{GNN}(G_i)\right)$$
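A minimal sketch of this cross-entropy objective, assuming the GNN's class probabilities are already given as an array:

```python
import numpy as np

def nll_loss(probs, labels):
    """Mean negative log-likelihood: L = -(1/N) * sum_i log P(C_i | GNN(G_i))."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

# Hypothetical GNN outputs for N = 4 graphs and 2 classes
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.7, 0.3],
                  [0.6, 0.4]])
labels = np.array([0, 1, 0, 1])      # true classes C_i
print(nll_loss(probs, labels))
```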
[Diagram: input point catalogue turned into a graph $G(r_\mathrm{max})$ with rotation- and translation-invariant edge features $(r_{ij}, \theta_{ij}, \phi_{ij})$; output class probabilities]
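A minimal sketch of building $G(r_\mathrm{max})$ from a toy point catalogue; the positions, radius, and use of raw spherical angles here are illustrative assumptions (relative separations give translation invariance, while full rotation invariance requires angles defined in a frame set by the points themselves):

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 100.0, size=(500, 3))   # toy 3D galaxy positions
r_max = 10.0                                    # linking radius of G(r_max)

edges, feats = [], []
for i in range(len(pos)):
    for j in range(len(pos)):
        if i == j:
            continue
        d = pos[j] - pos[i]                     # relative separation: translation invariant
        r = np.linalg.norm(d)
        if r < r_max:
            theta = np.arccos(d[2] / r)         # polar angle of the pair separation
            phi = np.arctan2(d[1], d[0])        # azimuthal angle
            edges.append((i, j))
            feats.append((r, theta, phi))       # edge feature (r_ij, theta_ij, phi_ij)

print(len(edges), "directed edges within r_max")
```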
Summarising with graph neural networks
$$\mathcal{G}: \left(h^{L}_i, e^{L}_{ij}\right) \rightarrow \left(h^{L+1}_i, e^{L+1}_{ij}\right)$$
$$e^{L+1}_{ij} = \phi_e\!\left(e^L_{ij}, h^L_i, h^L_j\right) \quad \text{(edge embedding)}$$
$$h^{L+1}_{i} = \phi_h\!\left(h^L_i, \mathcal{A}_j\, e^{L+1}_{ij}\right) \quad \text{(node embedding)}$$
$$S = \phi_s\!\left(\mathcal{A}_i\, h^L_i\right) \quad \text{(summary statistic)}$$
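A minimal sketch of one such layer, with toy hand-written functions standing in for the learned $\phi_e$, $\phi_h$, $\phi_s$ and a sum as the permutation-invariant aggregation $\mathcal{A}$:

```python
import numpy as np

def gnn_layer(h, e, edges):
    """One update: e_ij <- phi_e(e_ij, h_i, h_j); h_i <- phi_h(h_i, sum_j e_ij)."""
    e_new = {}
    for (i, j), e_ij in zip(edges, e):
        e_new[(i, j)] = np.tanh(e_ij + h[i] + h[j])   # phi_e: a toy choice
    msg = np.zeros_like(h)
    for (i, j), e_ij in e_new.items():
        msg[i] += e_ij                                # A_j: sum over neighbours
    h_new = np.tanh(h + msg)                          # phi_h: a toy choice
    return h_new, list(e_new.values())

def summary(h):
    return np.tanh(h.sum(axis=0))                     # S = phi_s(A_i h_i)

h = np.ones((4, 8))                                   # 4 nodes, 8 features each
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
e = [np.zeros(8) for _ in edges]
h, e = gnn_layer(h, e, edges)
print(summary(h).shape)                               # (8,)
```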

Choosing fixed (non-learned) functions for $\phi_e$, $\phi_h$, $\phi_s$ recovers classical summary statistics as a special case:

edge embedding: $e^{L+1}_{ij} = \phi_e\!\left(e^L_{ij}, h^L_i, h^L_j\right)$ $\rightarrow$ $e^{L+1}_{ij} = \mathrm{searchsorted}(r_{ij})$
node embedding: $h^{L+1}_{i} = \phi_h\!\left(h^L_i, \mathcal{A}_j\, e^{L+1}_{ij}\right)$ $\rightarrow$ $h^{L+1}_{i} = \sum_j e^{L+1}_{ij}$
summary statistic: $S = \phi_s\!\left(\mathcal{A}_i\, h^L_i\right)$ $\rightarrow$ $S = \sum_i h^L_i$
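A minimal sketch of this special case, assuming the searchsorted bin index is treated as a one-hot vector so the nested sums collapse to pair counts per separation bin, the ingredient of the two-point correlation function:

```python
import numpy as np

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 100.0, size=(300, 3))    # toy 3D positions
bins = np.linspace(0.0, 20.0, 11)               # 10 separation bins

# r_ij for every pair, then its bin index: the fixed "edge embedding"
r_ij = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
idx = np.searchsorted(bins, r_ij) - 1

# summing over j and then over i reduces to a histogram of pair separations
i_up, j_up = np.triu_indices(len(pos), k=1)     # each pair counted once
k = idx[i_up, j_up]
k = k[(k >= 0) & (k < len(bins) - 1)]           # keep pairs inside the binning
S = np.bincount(k, minlength=len(bins) - 1)
print(S)                                        # pair counts per distance bin
```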

