Francois Lanusse
Accepted at NeurIPS 2024 🎉
Image credit: Peter Melchior
astro-ph abstracts mentioning Deep Learning, CNN, or Neural Networks
The vast majority of these results have relied on supervised learning and networks trained from scratch.
=> In practice, this limits the ease of using deep learning for analysis and discovery
SCIENTIFIC ADVISORY GROUP
Colm-Cille Caulfield (University of Cambridge)
Leslie Greengard (Flatiron Institute, New York University)
David Ha (Sakana AI)
Yann LeCun (Meta AI, New York University)
Stephane Mallat (École Normale Supérieure, Collège de France, Flatiron Institute)
David Spergel (Simons Foundation)
Olga Troyanskaya (Flatiron Institute, Princeton University)
Laure Zanna (New York University)
Credit: Melchior et al. 2021
Credit: DESI collaboration/DESI Legacy Imaging Surveys/LBNL/DOE & KPNO/CTIO/NOIRLab/NSF/AURA/unWISE
Most General
Most Specific
Independent models for every type of observation
Single model capable of processing all types of observations
Bytes Are All You Need (Horton et al. 2023)
AstroCLIP
Project led by Francois Lanusse, Liam Parker, Leopoldo Sarra, Siavash Golkar, Miles Cranmer
Accepted contribution at the NeurIPS 2023 AI4Science Workshop
Published in the Monthly Notices of the Royal Astronomical Society
Contrastive Language-Image Pretraining (CLIP)
(Radford et al. 2021)
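As a rough illustration of the CLIP objective used to align the two modalities, a symmetric InfoNCE loss over paired image and spectrum embeddings might look like the minimal sketch below (PyTorch; the encoders, temperature value, and batching in the actual AstroCLIP code will differ, and the names here are placeholders).

```python
# Minimal sketch of a CLIP-style symmetric contrastive (InfoNCE) loss,
# assuming matched (image, spectrum) embedding pairs from two encoders.
import torch
import torch.nn.functional as F

def clip_loss(image_emb: torch.Tensor, spectrum_emb: torch.Tensor, temperature: float = 0.07):
    # L2-normalize so dot products are cosine similarities
    image_emb = F.normalize(image_emb, dim=-1)
    spectrum_emb = F.normalize(spectrum_emb, dim=-1)
    # (batch, batch) similarity matrix; diagonal entries are the true pairs
    logits = image_emb @ spectrum_emb.T / temperature
    targets = torch.arange(logits.shape[0], device=logits.device)
    # cross-entropy in both directions (image -> spectrum and spectrum -> image)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))

# example with random embeddings standing in for encoder outputs
loss = clip_loss(torch.randn(32, 512), torch.randn(32, 512))
```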
Flamingo: a Visual Language Model for Few-Shot Learning (Alayrac et al. 2022)
Hierarchical Text-Conditional Image Generation with CLIP Latents (Ramesh et al. 2022)
Cosine similarity search
Shared physical information about galaxies between images and spectra
=> We are building summary statistics for the physical parameters describing an object in a completely data-driven way
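A minimal sketch of what cosine-similarity search in this shared embedding space looks like in practice (illustrative NumPy; the array names and sizes are placeholders, not the actual AstroCLIP retrieval pipeline).

```python
# Hedged sketch: rank an embedding bank by cosine similarity to a query embedding.
import numpy as np

def cosine_search(query: np.ndarray, bank: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k most similar embeddings in `bank` to `query`."""
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = b @ q                      # cosine similarities
    return np.argsort(-sims)[:k]      # indices of top-k matches

# e.g. retrieve objects whose spectrum embeddings are closest to an image embedding
bank = np.random.randn(10_000, 512)
nearest = cosine_search(np.random.randn(512), bank)
```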
Supervised baseline
We use estimates of galaxy properties from the PROVABGS catalog (Hahn et al. 2023), obtained from Bayesian spectral energy distribution (SED) modeling of DESI spectroscopy and photometry
R² of regression
Negative Log Likelihood of Neural Posterior Inference
Classification Accuracy
We test a galaxy morphology classification task, using the GZ-5 dataset (Walmsley et al. 2021) as labels
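One simple way to probe how much physical information the frozen embeddings carry is a k-nearest-neighbours regression of a PROVABGS-style property. The sketch below uses scikit-learn on random stand-in data; it is not the evaluation code used in the paper, and the variable names are illustrative.

```python
# Hedged sketch: k-NN regression of a galaxy property from frozen embeddings.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5_000, 512))   # stand-in for frozen AstroCLIP-style embeddings
log_mstar = rng.normal(size=5_000)           # stand-in for a PROVABGS property label

knn = KNeighborsRegressor(n_neighbors=16, weights="distance")
knn.fit(embeddings[:4_000], log_mstar[:4_000])
pred = knn.predict(embeddings[4_000:])
print("R^2:", r2_score(log_mstar[4_000:], pred))
```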
PCA of patch features
Dense Semantic Segmentation
Dense Depth Estimation
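For the "PCA of patch features" visualization above, one common recipe is to project per-patch token embeddings onto their first three principal components and display them as an RGB map. A minimal sketch under assumed shapes and names, not the actual plotting code:

```python
# Hedged sketch: PCA of per-patch features for visualization.
# `patch_tokens` stands in for the transformer patch embeddings of one image.
import numpy as np
from sklearn.decomposition import PCA

patch_tokens = np.random.randn(16 * 16, 768)   # (num_patches, embed_dim), stand-in

pca = PCA(n_components=3)
components = pca.fit_transform(patch_tokens)    # (num_patches, 3)

# rescale each component to [0, 1] and reshape to the patch grid as an RGB map
components -= components.min(axis=0)
components /= components.max(axis=0)
rgb_map = components.reshape(16, 16, 3)         # ready for plt.imshow(rgb_map)
```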
Project led by Alice Desmons, Francois Lanusse, Sarah Brough
Image Similarity
Spectral Similarity
Image-Spectral Similarity
"Massively Multi-Modal Large Data Model for Astrophysics"
Flamingo: a Visual Language Model for Few-Shot Learning (Alayrac et al. 2022)
Chameleon: Mixed-Modal Early-Fusion Foundation Models (Chameleon team, 2024)
Galaxy Image Segmentation
Walmsley & Spindler (2023)
Galaxy Image Deblending
=> Foundation Models that build a deep understanding of the data at the pixel level.
Credit: Melchior et al. 2021
Multiband images from the Legacy Survey
=> Official release October 2024
Accepted at NeurIPS 2024 🎉
Input
Reconstructed
Our strategy:
Field Embedding Strategy Developed for Multiple Physics Pretraining (McCabe et al. 2023)
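The field-embedding idea, very roughly: each named physical field is mapped by its own learned projection into a shared token space, so systems exposing different sets of fields can feed the same backbone. The sketch below is a loose interpretation in PyTorch, not the actual MPP implementation; the class and field names are made up.

```python
# Hedged sketch of a per-field embedding: one learned projection per physical field,
# summed into a shared representation regardless of which fields a system provides.
import torch
import torch.nn as nn

class FieldEmbedding(nn.Module):
    def __init__(self, field_names, dim):
        super().__init__()
        # one learned 1x1 projection per known physical field (e.g. density, pressure)
        self.proj = nn.ModuleDict({name: nn.Conv2d(1, dim, kernel_size=1) for name in field_names})

    def forward(self, fields: dict) -> torch.Tensor:
        # fields: {name: tensor of shape (batch, 1, H, W)}; any subset of the known
        # fields is projected and summed into the same shared space
        return sum(self.proj[name](x) for name, x in fields.items())

embed = FieldEmbedding(["density", "pressure", "velocity_x", "velocity_y"], dim=96)
tokens = embed({"density": torch.randn(2, 1, 64, 64), "pressure": torch.randn(2, 1, 64, 64)})
```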
Jean Zay engineering team visiting Flatiron for a hackathon
Project led by Michael McCabe, Bruno Régaldo, Liam Parker, Ruben Ohana, Miles Cranmer
Best paper award at the NeurIPS 2023 AI4Science Workshop, accepted at NeurIPS 2024
Navier-Stokes (incompressible and compressible)
Shallow Water
Diffusion-Reaction
(Takamoto et al. 2022)
Can we improve the performance of surrogate models by pretraining on large quantities of easily simulated systems?
Context size: 16 frames
Compressible Navier-Stokes at M = 0.1 and M = 1.0
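To make the "context size: 16 frames" setting concrete, a generic autoregressive rollout loop might look like this sketch, assuming a next-frame surrogate; the toy model and tensor shapes are placeholders, not the MPP code.

```python
# Hedged sketch: autoregressive rollout with a sliding 16-frame context window.
import torch

def rollout(model, context: torch.Tensor, n_steps: int) -> torch.Tensor:
    """context: (batch, 16, C, H, W); returns predicted frames (batch, n_steps, C, H, W)."""
    frames = []
    for _ in range(n_steps):
        next_frame = model(context)                                        # (batch, C, H, W)
        frames.append(next_frame)
        # drop the oldest frame and append the prediction to keep 16 frames of context
        context = torch.cat([context[:, 1:], next_frame.unsqueeze(1)], dim=1)
    return torch.stack(frames, dim=1)

# toy stand-in model: predicts the mean of the context window
toy_model = lambda ctx: ctx.mean(dim=1)
preds = rollout(toy_model, torch.randn(1, 16, 4, 32, 32), n_steps=8)
```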
PDEBench
=> Available in October 2024
Accepted at NeurIPS 2024 🎉
Thank you for listening!