Exploring Gravitational-Wave Detection and Parameter Inference using Deep Learning

He Wang (王赫)

Institute of Theoretical Physics, CAS

Beijing Normal University

on behalf of the KAGRA collaboration

The 8\(^\text{th}\) KAGRA International Workshop, 14:30-14:45 KST on July 9\(^\text{th}\), 2021

Part of this work is available as a preprint.

hewang@mail.bnu.edu.cn / hewang@itp.ac.cn

Collaborators:

Zhoujian Cao (BNU)

Yue Zhou (PCL)

Zong-Kuan Guo (ITP-CAS)

Zhixiang Ren (PCL)

Outline

  • Opportunities in Gravitational-Wave Data Analysis
  • Trends in Gravitational-Wave Signal Detection
  • Curse of Dimensionality
  • Motivation: Prior Sampling
  • Normalizing Flow
  • Results

Opportunities in GW Data Analysis

Classification

Feature extraction

Convolutional Neural Network (ConvNet or CNN)

Matched-filtering for waveform features

  • Machine learning algorithms present features that make them particularly suitable for gravitational-wave astrophysics.


 

  • Highlight:
    • We can now extract our target features from the GW feature space using the matched-filtering technique or machine learning (see the toy sketch below).



 

  • Review article: Enhancing Gravitational-Wave Science with Machine Learning, Machine Learning: Science and Technology, arXiv:2005.03745 (also in Survey4GWML)
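As a toy illustration of the matched-filtering idea mentioned above (not the MFCNN implementation itself), the sketch below slides a unit-norm template across whitened data; the function name, signal shapes, and amplitudes are illustrative placeholders.

import numpy as np

def snr_timeseries(whitened_data, template):
    """Correlate a unit-norm template against whitened strain data.
    For white noise of unit variance, this correlation is (up to
    convention) the matched-filter SNR time series."""
    h = template / np.linalg.norm(template)
    return np.correlate(whitened_data, h, mode="valid")

# Toy usage: hide a chirp-like template in Gaussian noise and recover it.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4096)
template = np.sin(2 * np.pi * (30.0 + 60.0 * t) * t) * np.exp(-((t - 1.0) ** 2) / 0.1)
data = rng.normal(size=16384)
data[6000:6000 + template.size] += 10.0 * template / np.linalg.norm(template)
snr = snr_timeseries(data, template)
print("peak ~", round(float(snr.max()), 1), "at sample", int(snr.argmax()))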

Opportunities in GW Data Analysis

 MFCNN

GW150914

  • Machine learning algorithms present features that make them particularly suitable for gravitational-wave astrophysics.


 

  • Highlight:
    • We can now extract our target features from the GW feature space using the matched-filtering technique or machine learning.



 

  • Review article: Enhancing Gravitational-Wave Science with Machine Learning, Machine Learning: Science and Technology, arXiv:2005.03745 (also in Survey4GWML)

Trends in GW Signal Detection

  • Machine learning algorithms present features that make them particularly suitable for gravitational-wave astrophysics.


 

  • Highlight:
    • We can now extract our target features from the GW feature space using the matched-filtering technique or machine learning.
    • Can we achieve more by exploiting the whole feature space for parameter estimation?


 

  • Review article: Enhancing Gravitational-Wave Science with Machine Learning, Machine Learning: Science and Technology, arXiv:2005.03745 (also in Survey4GWML)

 

Proof-of-principle studies

Production search studies

Detection of early inspiral of GW

 

Curse of Dimensionality

  • Use Bayes' theorem to estimate the posterior distribution of the source parameters (see the formula after this list).
  • Stochastically sampling the posterior is a time-consuming process:
    • One parameter-estimation run requires collecting tens of thousands of samples from the posterior.
    • Each sample is found by conducting thousands of random walks.
    • Each random walk involves one waveform generation.
    • In total, one parameter-estimation run requires millions of waveform evaluations.
  • It may take several days to several weeks to run parameter estimation on a single event.
  • Traditional statistical methods scale poorly in time / accuracy for datasets described by many parameters.
   
  • Machine learning algorithms perform very well at exploiting correlations across a large number of dimensions.
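For reference, the Bayes' theorem used in the first bullet above reads

\[
p(\theta \mid d) \;=\; \frac{p(d \mid \theta)\, p(\theta)}{p(d)},
\]

where \(\theta\) are the source parameters, \(d\) is the strain data, \(p(d \mid \theta)\) is the likelihood (each evaluation of which requires generating a waveform), \(p(\theta)\) is the prior, and \(p(d)\) is the evidence.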

 

  • We tackle this issue by thinking outside the box and directly using information from the "future", in contrast to other works that focus on exploring network structures.

Motivation: Prior Sampling

  • In many high-dimensional cases, the dataset is limited relative to the prior physical dimensions, and the feature space cannot be sufficiently covered by data points.
  • Without suitable representations, it would be challenging, or even impossible, to learn the subtle features.
  • Therefore, the key question is how to sample the feature space effectively.
  • This is essentially equivalent to incorporating physical domain knowledge into the high-dimensional training data.
  • In our case, we use an interim distribution derived from a Monte Carlo method as a representation of the prior physical knowledge.
  • This is equivalent to drawing more data points to cover the more important regions of the feature space for better Bayesian inference.
  • We draw the training samples from the interim distribution with probability \(\alpha\) and from a uniform distribution with probability \(1-\alpha\) (see the sketch after this list).
  • Eventually, the training dataset consists of samples from the interim distribution and from a general distribution without any prior knowledge.
  • Therefore, the prior knowledge can be utilized effectively in a more controlled way.
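A minimal sketch of the mixture sampling described above; the function name, array shapes, and prior ranges are illustrative placeholders rather than our production code.

import numpy as np

def draw_training_parameters(n, alpha, interim_samples, prior_low, prior_high, seed=None):
    """With probability alpha, reuse a point from the interim (Monte Carlo
    derived) distribution; otherwise draw uniformly over the prior ranges."""
    rng = np.random.default_rng(seed)
    use_interim = rng.random(n) < alpha                  # which strategy to use per sample
    idx = rng.integers(0, len(interim_samples), size=n)  # random indices into interim samples
    uniform = rng.uniform(prior_low, prior_high, size=(n, len(prior_low)))
    return np.where(use_interim[:, None], interim_samples[idx], uniform)

# Toy usage with a 2-dimensional parameter space:
interim = np.random.default_rng(1).normal([1.0, 2.0], 0.1, size=(10_000, 2))
batch = draw_training_parameters(64, alpha=0.1, interim_samples=interim,
                                 prior_low=[0.0, 0.0], prior_high=[4.0, 4.0], seed=0)
print(batch.shape)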

Feature space

Physical parameter space

Normalizing Flow

  • In addition to tweaking the data, we also employ a more recent machine-learning method, dubbed the normalizing flow, instead of traditional Bayesian sampling approaches, in our high-dimensional GW data analysis.
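As a reminder (this is the standard normalizing-flow formulation, not a statement about our specific architecture): an invertible, data-conditioned map \(f_d\) with a tractable Jacobian turns a simple base density \(\pi\) (e.g., a standard Gaussian) into a model of the posterior,

\[
q(\theta \mid d) \;=\; \pi\!\left(f_d(\theta)\right)\left|\det \frac{\partial f_d(\theta)}{\partial \theta}\right|,
\]

so that sampling only requires drawing \(z \sim \pi\) and evaluating the inverse map \(\theta = f_d^{-1}(z)\).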

 

 

  • Although a similar idea of applying the normalizing-flow technique to characterize the posterior distribution exists in previous works [Green & Gair (2020), arXiv:2106.12594], our method still differs from those recent GW studies:
    1. Improved data preparation
      (sampling all 15 dimensions instead of the extrinsic parameters only)
    2. Prior sampling to construct the training dataset with domain knowledge using the SMOTETomek technique (see the generic usage sketch below).
      (relatively better performance than [Green & Gair (2020)])
    3. Effective and efficient Bayesian inference on ultra-high-dimensional gravitational-wave data.
      (8 s for 50,000 posterior samples)
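Point 2 above mentions the SMOTETomek technique; the snippet below is only a generic usage sketch of the imbalanced-learn package, with placeholder arrays X and y rather than our actual training data.

from imblearn.combine import SMOTETomek  # pip install imbalanced-learn
import numpy as np

# Placeholder data: an imbalanced two-class toy set with 15 features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (900, 15)), rng.normal(2.0, 1.0, (100, 15))])
y = np.array([0] * 900 + [1] * 100)

# Combined over-sampling (SMOTE) and cleaning of Tomek links.
X_res, y_res = SMOTETomek(random_state=0).fit_resample(X, y)
print(X_res.shape, np.bincount(y_res))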

Results

  • This implies that incorporating roughly 10% of the physical prior knowledge is enough for accurate Bayesian inference of the high-dimensional gravitational-wave data.

Results

  • The results from our approach are indistinguishable, both qualitatively and quantitatively, from the ground truth.

 

 

 

 

 

 

 

  • Our method may show highly scalable performance.
    (work in progress)
  • Marginalized one-dimensional (with 1-σ credible intervals) and two-dimensional posterior distributions for the benchmark event (GW150914) over the full 15 physical parameters, with our approach (orange) and the ground truth (blue).
  • The JS divergences are shown at the top (defined below).
Preliminary
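For reference, the Jensen-Shannon (JS) divergence quoted in the figure is defined as

\[
D_{\mathrm{JS}}(P \,\|\, Q) \;=\; \tfrac{1}{2} D_{\mathrm{KL}}(P \,\|\, M) + \tfrac{1}{2} D_{\mathrm{KL}}(Q \,\|\, M),
\qquad M = \tfrac{1}{2}(P + Q),
\]

where smaller values indicate closer agreement between our marginal posteriors and the ground truth.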

num_of_audiences = 100  # assumed placeholder value so the snippet runs
for _ in range(num_of_audiences):
    print('Thank you for your attention! 🙏')

Exploring Gravitational-Wave Detection and Parameter Inference using Deep Learning

By He Wang

Exploring Gravitational-Wave Detection and Parameter Inference using Deep Learning

The 8th KAGRA International Workshop, 14:30-14:45 KST on July 9th, 2021
