Gaussian process prediction with uncertainty: the correlation length-scale is estimated from the data via the kernel.
Idea: consistent comparison of theories to all available data
\(\mathcal{L} = \mathcal{L}_\mathsf{collider} \times \mathcal{L}_\mathsf{Higgs} \times \mathcal{L}_\mathsf{DM} \times \mathcal{L}_\mathsf{EWPO} \times \mathcal{L}_\mathsf{flavour} \times \ldots\)
[GAMBIT, 1705.07919]
Goal
Fast estimates of SUSY (strong) production cross sections at NLO, together with their uncertainties
Processes
$$ pp\to\tilde g \tilde g,\ \tilde g \tilde q_i,\ \tilde q_i \tilde q_j,\ \tilde q_i \tilde q_j^{*},\ \tilde b_i \tilde b_i^{*},\ \tilde t_i \tilde t_i^{*} $$
at \(\mathsf{\sqrt{s}=7/8/13/14}\) TeV
Method
Pre-trained, distributed Gaussian processes
Interface
Stand-alone Python code, also implemented in GAMBIT
Soon public on GitHub!
[Figures: draws from the GP prior and posterior for a Radial Basis Function kernel and a Matérn kernel. Prior: distribution over all functions with the estimated smoothness. Posterior: distribution over functions with updated \(m(\vec x)\), conditioned on the data.]
Regression problem, with 'measurement' noise:
\(y=f(\vec x) + \varepsilon, \ \varepsilon\sim \mathcal{N}(0,\sigma_\varepsilon^2) \quad \rightarrow \quad \) infer \(f\), given data \(\mathcal{D} = \{\vec X, \vec y\}\)
Consider the data \(\vec X = [\vec x_1, \vec x_2, \ldots]\), \(\vec y = [y_1, y_2, \ldots]\) as a sample from a multivariate Gaussian distribution.
Assume a covariance structure expressed by a kernel function, e.g. a squared-exponential signal kernel plus a white-noise kernel,
$$ k(\vec x, \vec x') = \underbrace{\sigma_f^2 \exp\left(-\frac{\|\vec x - \vec x'\|^2}{2l^2}\right)}_{\text{signal kernel}} + \underbrace{\sigma_\varepsilon^2\,\delta_{\vec x \vec x'}}_{\text{white-noise kernel}} $$
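As a concrete illustration, a toy sketch of building the training covariance matrix under such a kernel (the data, hyperparameter values and helper names below are made up for the example, not taken from the actual tool):

```python
import numpy as np

def rbf_kernel(x1, x2, sigma_f=1.0, length=1.0):
    """Squared-exponential (RBF) signal kernel between two points."""
    return sigma_f**2 * np.exp(-0.5 * np.sum((x1 - x2) ** 2) / length**2)

# Toy training data: n points in d dimensions, noisy targets y = f(x) + eps
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(20, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + rng.normal(0.0, 0.1, size=20)

sigma_eps = 0.1  # assumed 'measurement' noise level
# Full covariance: signal kernel on all pairs + white-noise kernel on the diagonal
K = np.array([[rbf_kernel(xi, xj) for xj in X] for xi in X]) + sigma_eps**2 * np.eye(len(X))
```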
Training: optimise kernel hyperparameters by maximising the marginal likelihood
Posterior predictive distribution at a new point \(\vec x_*\): \(p(y_*\,|\,\vec x_*, \mathcal{D}) = \mathcal{N}(\mu_*, \sigma_*^2)\), with
$$ \mu_* = \vec k_*^{\,T} K^{-1} \vec y, \qquad \sigma_*^2 = k(\vec x_*, \vec x_*) - \vec k_*^{\,T} K^{-1} \vec k_*, $$
where \(K_{ij} = k(\vec x_i, \vec x_j)\) is the covariance matrix of the training data and \((\vec k_*)_i = k(\vec x_*, \vec x_i)\).
Implicit integration over points not in \(\vec X\)
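Continuing the toy sketch above (reusing the assumed rbf_kernel, X, y, K and sigma_eps; an illustration, not the actual implementation), the predictive mean and variance can be computed with a Cholesky factorisation instead of an explicit matrix inverse:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_predict(x_star, X, y, K, sigma_f=1.0, length=1.0, sigma_eps=0.1):
    """Posterior predictive mean and variance at a single new point x_star."""
    # Covariances between the new point and the training points (signal kernel only,
    # since the white-noise term vanishes for x_star != x_i)
    k_star = np.array([rbf_kernel(x_star, xi, sigma_f, length) for xi in X])
    L = cho_factor(K)                 # O(n^3) factorisation, reusable for every new point
    alpha = cho_solve(L, y)           # K^{-1} y
    mu = k_star @ alpha               # predictive mean
    v = cho_solve(L, k_star)          # K^{-1} k_star
    var = rbf_kernel(x_star, x_star, sigma_f, length) + sigma_eps**2 - k_star @ v
    return mu, var

mu, var = gp_predict(np.array([2.0, 1.0]), X, y, K)
```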
GPs allow us to use probabilistic inference to learn a function from data, in an interpretable, analytical, yet non-parametric Bayesian framework.
A GP model is fully specified once the mean function, the kernel, and its hyperparameters (e.g. correlation length scales) are chosen.
The probabilistic interpretation only holds under the assumption that the chosen kernel accurately describes the true correlation structure.
The choice of kernel allows for great flexibility. But once chosen, it fixes the type of functions likely under the GP prior and determines the kind of structure captured by the model, e.g., periodicity and differentiability.
For our multi-dimensional case of cross-section regression, we get good results by multiplying Matérn (\(\nu = 3/2\)) kernels over the different mass dimensions:
$$ k(\vec x, \vec x') = \sigma_f^2 \prod_d \left(1 + \frac{\sqrt{3}\,|x_d - x'_d|}{l_d}\right) \exp\left(-\frac{\sqrt{3}\,|x_d - x'_d|}{l_d}\right) $$
The individual lengthscale parameters \(l_d\) provide automatic relevance determination for each feature: features over which the latent function varies strongly are assigned short lengthscales (short-range correlations), while less relevant features end up with long ones.
This is an anisotropic, stationary kernel. It allows for functions that are less smooth than with the standard squared-exponential kernel.
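A minimal NumPy sketch of such a product kernel (function name, hyperparameter values and example masses are illustrative, not those of the trained tool):

```python
import numpy as np

def matern32_product_kernel(x1, x2, length_scales, sigma_f=1.0):
    """Product of 1D Matern(nu=3/2) kernels, one per feature, with ARD length scales l_d."""
    r = np.sqrt(3.0) * np.abs(np.asarray(x1) - np.asarray(x2)) / np.asarray(length_scales)
    return sigma_f**2 * np.prod((1.0 + r) * np.exp(-r))

# Example: covariance between two points in a 3-dimensional mass space (GeV)
k_val = matern32_product_kernel([500.0, 800.0, 1200.0],
                                [520.0, 790.0, 2000.0],
                                length_scales=[200.0, 300.0, 500.0])
```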
Short lengthscale, small noise: overfitting of fluctuations, which can lead to large uncertainties!
Long lengthscale, large noise: underfitting, an almost linear fit.
Typically, kernel hyperparameters are estimated by maximising the (log) marginal likelihood \(p( \vec y\ |\ \vec X, \vec \theta) \), aka the empirical Bayes method.
Alternative: MCMC integration over a range of \(\vec \theta\).
Gradient-based optimisation is widely used, but can get stuck in local optima and plateaus. Multiple initialisations can help, or global optimisation methods like differential evolution.
The global optimum of the marginal likelihood typically lies somewhere in between these two extremes.
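As an illustration of maximising the log marginal likelihood with random restarts, using scikit-learn rather than the actual training pipeline (note that sklearn's anisotropic Matern kernel scales the Euclidean distance per dimension instead of taking a product of 1D Matérn kernels, and all data and hyperparameter values below are placeholders):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern, WhiteKernel

rng = np.random.default_rng(0)
X_train = rng.uniform(100.0, 3000.0, size=(200, 3))                      # toy 'mass' inputs
y_train = np.log(1.0 / X_train.sum(axis=1)) + rng.normal(0, 0.05, 200)   # toy targets

# Signal kernel (amplitude x anisotropic Matern nu=3/2) + white-noise kernel
kernel = (ConstantKernel(1.0) * Matern(length_scale=[500.0] * 3, nu=1.5)
          + WhiteKernel(noise_level=0.01))

gp = GaussianProcessRegressor(kernel=kernel,
                              n_restarts_optimizer=10,   # restarts mitigate local optima
                              normalize_y=True)
gp.fit(X_train, y_train)                    # maximises the log marginal likelihood
print(gp.kernel_)                           # optimised hyperparameters
print(gp.log_marginal_likelihood_value_)
```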
The standard empirical Bayes approach systematically underestimates prediction errors: after accounting for the additional uncertainty from learning the hyperparameters, the predicted errors increase far from the training points.
[Wågberg+, 1606.03865]
Other tricks to improve the numerical stability of training:
Sometimes, a curious problem arises: negative predictive variances!
It is due to numerical errors when computing the inverse of the covariance matrix \(K\). When \(K\) is built from many training points, there is a good chance that some of them are nearly identical, giving nearly equal columns in \(K\).
Nearly equal columns make \(K\) ill-conditioned: one or more eigenvalues \(\lambda_i\) are close to zero and \(K\) can no longer be inverted reliably. The number of significant digits lost is roughly the \(\log_{10}\) of the condition number \(\kappa = \lambda_\mathsf{max}/\lambda_\mathsf{min}\).
This becomes problematic when \(\kappa \gtrsim 10^8\). In the worst-case scenario of (nearly) coincident training points, \(\kappa \approx n\,\sigma_f^2/\sigma_\varepsilon^2\): it grows with the number of points \(n\) and with the squared signal-to-noise ratio \(\sigma_f/\sigma_\varepsilon\).
[Mohammadi+, 1602.00853]
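One standard remedy (a generic trick, not necessarily the one used in the tool) is to add a small 'jitter' to the diagonal and to work with a Cholesky factorisation instead of an explicit inverse, which also helps keep the predictive variance non-negative:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def stable_gp_predict(K, k_star, k_ss, y, jitter=1e-8):
    """Numerically stable GP prediction: jitter on the diagonal + triangular solves."""
    n = K.shape[0]
    L = cholesky(K + jitter * np.eye(n), lower=True)    # fails loudly if still not positive definite
    alpha = solve_triangular(L, y, lower=True)
    alpha = solve_triangular(L.T, alpha, lower=False)   # alpha = K^{-1} y
    mu = k_star @ alpha                                 # predictive mean
    v = solve_triangular(L, k_star, lower=True)         # v = L^{-1} k_star
    var = k_ss - v @ v                                  # non-negative up to rounding error
    return mu, max(var, 0.0)
```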
Workflow:
Generating data: random sampling of input parameters → SUSY spectrum → cross sections
Training GPs: optimise kernel hyperparameters
GP predictions: input parameters → compute covariances → linear algebra → cross-section estimates
Training scales as \(\mathcal{O}(n^3)\), prediction as \(\mathcal{O}(n^2)\)
Mix of random samples with different priors in mass space
Trade-off between sample coverage and evaluation speed: we need to cover a large parameter space, but training and prediction costs grow quickly with the number of points.
Distributed Gaussian processes
[Liu+, 1806.00720]
Aggregation model for dealing with large datasets
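A minimal sketch of the idea behind such aggregation (a plain product-of-experts combination for illustration; the generalized robust Bayesian committee machine of [Liu+, 1806.00720] adds expert weights and uses the prior variance):

```python
import numpy as np

def product_of_experts(means, variances):
    """Combine independent GP expert predictions by precision-weighted averaging."""
    means = np.asarray(means)
    precisions = 1.0 / np.asarray(variances)
    agg_var = 1.0 / precisions.sum()                 # aggregated variance
    agg_mean = agg_var * (precisions * means).sum()  # aggregated mean
    return agg_mean, agg_var

# Example: three experts, each trained on a different subset of the data
mu, var = product_of_experts(means=[1.02, 0.98, 1.05], variances=[0.04, 0.09, 0.02])
```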