federica bianco PRO
astro | data science | data for good
University of Delaware
Department of Physics and Astronomy
Biden School of Public Policy and Administration
Data Science Institute
LSST Survey Scientist
federica b. bianco
she/her
Grad student
Since 2019 we study the sky (and more!) mostly with AI
Postdoc
experiment driven science -∞:1900
theory driven science 1900-1950
data driven science 1990-2010
the fourth paradigm - Jim Gray, 2009
computationally driven science 1950-1990
AI driven science? 2010...
Frank Rosenblatt, 1958
The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.
The embryo - the Weather Bureau's $2,000,000 "704" computer - learned to differentiate between left and right after 50 attempts in the Navy demonstration
July 8, 1958
what year did the first Neural Network in astronomy review come out?
Join at
slido.com
# 1366553
AI for classification
1988
CSP: Constraint Satisfaction Problems
1988
number of arXiv:astro-ph submissions with abstracts containing one or more of the strings: ‘machine learning’, ‘ML’, ‘artificial intelligence’, ‘AI’, ‘deep learning’ or ‘neural network’.
"In 1994, Ofer Lahav, an early trailblazer, wryly identified the ‘neuro-skeptics’—those resistant to the use of such techniques in serious astrophysics research—and argued that ANNs ‘should be viewed as a general statistical framework, rather than as an esoteric approach’ [8]. Unfortunately, this scepticism has persisted. This is despite the recent upsurge in the use of neural networks (and machine learning in general) in the field [...] Most of the criticism of machine learning techniques, and deep learning in particular, is levelled at the perceived ‘black box’ nature of the methodology."
physics: Input (observables) x → Output (observable) y
machine learning setting: Input (observables) x → Output (observable) y, but the mapping is unknown (??)
Machine Learning: learn a prediction function that maps Data (x) to a Prediction (y)
x → y: the physics is replaced by a fitted line y = m x + b (m: slope, b: intercept)
Data → Prediction, compared through a loss function (e.g. χ2, MSE)
the parameters (m, b) are what we learn from the data (x, y)
goal: find the right m and b that turn x into y
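The "learn" step can be sketched in a few lines: a minimal gradient-descent fit of m and b that minimizes the MSE loss (illustrative only; the synthetic data, learning rate, and iteration count are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.5 * x + 1.0 + rng.normal(0, 0.3, 200)  # true m=2.5, b=1.0, plus noise

m, b = 0.0, 0.0   # parameters to learn
lr = 0.01         # learning rate
for _ in range(2000):
    resid = (m * x + b) - y        # prediction minus data
    dm = 2 * np.mean(resid * x)    # gradient of the MSE loss w.r.t. m
    db = 2 * np.mean(resid)        # gradient of the MSE loss w.r.t. b
    m -= lr * dm
    b -= lr * db
```

After the loop, m and b land close to the slope and intercept that generated the data.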
https://symposia.obs.carnegiescience.edu/series/symposium2/ms/freedman.ps.gz
"Regression toward Mediocrity":
Sir Francis Galton, 1885-86
"Linear Regression":
Karl Pearson 1908-1911
Input
x
y
output
physics
non-linear modification to a linear function
Data
Prediction
Tree models
(the basis of Random Forests and Gradient Boosted Trees)
Machine Learning
p(class)
extracted
features vector
p(class)
pixel values tensor
GPT-3
175 Billion Parameters
3,640 PetaFLOPs days
Kaplan+ 2020
compute-optimal frontier
x
y
Input
output
hidden layers
latent space
A Neural Network is a kind of function that maps input to output accurately
by generating "interesting" internal representations of the input
loss function (e.g. χ2, MSE)
This latent space is only going to be as good as it needs to be to solve a particular task
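To make "hidden layers" and "latent space" concrete, here is a minimal untrained network in plain NumPy (all sizes and weights are arbitrary assumptions, just to show the shapes): the activations of the last hidden layer are the latent representation, and an MSE loss compares output to target.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(0.0, z)

# random weights for a 1 -> 8 -> 8 -> 1 network (untrained sketch)
W1, b1 = rng.normal(size=(1, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

x = np.linspace(-1, 1, 50).reshape(-1, 1)   # input
h1 = relu(x @ W1 + b1)                      # hidden layer 1
h2 = relu(h1 @ W2 + b2)                     # hidden layer 2: the latent representation
y_pred = h2 @ W3 + b3                       # output

y_true = np.sin(np.pi * x)                  # a target to compare against
mse = np.mean((y_pred - y_true) ** 2)       # loss function (MSE)
```

Training would adjust the weights to shrink the loss, and the latent space would reorganize only as much as that task requires.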
Sam Altman:
(paraphrasing NPR interview 2025):
Well, what we learned was that we needed to get the model coherent and capable first. The content moderation, the safety stuff—that came after.
What are they optimizing for?
back to astro......
trained extensively on large amounts of data to solve generic problems
Foundational AI models
We use the ILSVRC-2012 ImageNet dataset with 1k classes and 1.3M images, its superset ImageNet-21k with 21k classes and 14M images, and JFT with 18k classes and 303M high-resolution images.
Typically, we pre-train ViT on large datasets, and fine-tune to (smaller) downstream tasks. For this, we remove the pre-trained prediction head and attach a zero-initialized D × K feedforward layer, where K is the number of downstream classes.
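The fine-tuning recipe in the quote can be sketched as: keep the pretrained backbone as a fixed feature extractor and attach a fresh, zero-initialized D × K head. A NumPy stand-in (the "backbone" here is a random fixed map, purely illustrative, not a real ViT):

```python
import numpy as np

rng = np.random.default_rng(4)

D, K = 16, 3   # feature dimension D, number of downstream classes K

# stand-in for a pretrained backbone: any fixed map from pixels to D features
W_backbone = rng.normal(size=(64, D))
def features(images):
    """images: (N, 64) flattened pixels -> (N, D) feature vectors."""
    return np.tanh(images @ W_backbone)

# fine-tuning setup: discard the old head, attach a zero-initialized D x K layer
W_head = np.zeros((D, K))

images = rng.normal(size=(5, 64))
logits = features(images) @ W_head   # all-zero before any fine-tuning steps
```

Only W_head (and optionally the backbone) would then be updated on the downstream task.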
why not images too?
lightcurve latent space rep
image
latent space rep
SN 2018cow
Perley+2018
late layers learn complex aggregate specialized features
early layers learn simple generalized features (like lines for CNN)
prediction "head"
original data
trained extensively on large amounts of data to solve generic problems
Foundational AI models
Adrian Crawford
UVA Jefferson fellow, PEO scholar
Beyond classification: Can Maven truly answer questions it was not designed to answer?
can it tell us if a supernova is double peaked... even if we did not sample the peak?
FASTlab Flash highlight
Testing the performance of MetaAI SAM on astronomical objects
Instead of building our own specialized AI, can we adapt the models that the industry produces?
That would save a lot of computational resources and computational resources have an environmental footprint!
Award #2123264
ADS astronomy articles that mention "foundation model" in the abstract
LSST:The Vera C. Rubin Observatory Legacy Survey of Space and Time
20 TB of data every night. That is equivalent to:
8,000 high definition movies
4,000 hours of tiktok videos
every night for 10 years
what's in a name?
The first ground-based national US observatory named after a woman, Dr. Vera C. Rubin
Building an unprecedented catalog of Solar System Objects
LSST Science Drivers
Mapping the Milky Way and Local Volume
Probing Dark Energy and Dark Matter
Exploring the Transient Optical Sky
Objective: to provide a science-ready dataset to transform the 4 key science areas
To accomplish this, we need:
1) Dark skies - Cerro Pachón, Chile
2) A large telescope mirror to be sensitive - 8m (6.7m effective)
May 2022 - Telescope Mount Assembly
3.2 Gigapixels:
We built the largest (declassified) digital camera ever
to look farther and wider into the sky than ever before
1996-1998 Tony Tyson, Roger Angel
How it started
with Zohran Mamdani,
Astronaut Reid Wiseman,
Activist Zabib Musa Loro,
Pope Leo XIV,
Olympic medalist Alysa Liu,
Benicio Del Toro.......
2008
2017
Are We There YET????!!!!
Eye to the sky…on-sky engineering tests have begun at
Rubin Observatory using the world’s largest digital camera!
June 23 2025
First Look party here at UD with 213 people signed up!
678 separate images taken in just over seven hours of observing time. Trifid nebula (top right) and the Lagoon nebula, which are several thousand light-years away from Earth. | NSF-DOE Vera C. Rubin Observatory
June 30, 2025
DP1 release!
The Vera C. Rubin Observatory Data Preview 1
HELL YEAH!
2025
edge computing
Will we get more data???
SKA
(2025)
17B stars (x10) Ivezic+19
~10 million QSO (x10) Mary Loli+21
~50k Tidal Disruption Events (from ~150) Bricman+ 2020
~10k Superluminous Supernovae (from ~200) Villar+ 2018
~400 strongly lensed SN Ia (from 10) Arendse+24
~50 kilonovae (from 2) Setzer+19, Andreoni+19 (+ ToO)
> 10 Interstellar Objects (from 2...?)
True Novelties!
"BUT BIG DATA DOES NOT MEAN BIG SCIENCE"
Yang Huang,
University of Chinese Academy of Sciences
SpecCLIP talk @UNIVERSAI
IAU workshop Greece June 2025
survey optimization
Challenge
Current plan: rolling 8 out of the 10 years
Discovery Engine
10M alerts/night
Community Brokers
target observation managers
BABAMUL
Rubin has involved the community in survey design to an unprecedented level - this is a uniquely "democratic" process!
Survey Cadence Optimization Committee
(figure: community input to survey strategy - 2017, 2019, 2023, 2024)
subverting the time domain astronomy process
Challenge
10 stars explode in the universe every second
Until the 1900s we would see 1 in a century
Until the 1980s we would see 1 in a decade
Until the 2010s we would see 1 in a month
With the Vera C. Rubin Observatory we will see 1000 every night !
in 60 seconds:
Difference Image Analysis
template
difference image
Improving the efficiency of transient detections with Neural Networks
search − template = difference
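Difference Image Analysis boils down to: subtract the template from the search image, then flag pixels far above the noise. A toy sketch with an injected transient (the threshold and noise estimate are deliberately simplistic assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# template: a static patch of sky (background + noise)
template = 100.0 + rng.normal(0, 1.0, size=(64, 64))

# search image: the same sky, plus a new point source (the transient)
search = 100.0 + rng.normal(0, 1.0, size=(64, 64))
search[30:33, 40:43] += 50.0   # inject a 3x3-pixel transient

difference = search - template   # difference image

# candidate pixels: anything well above the noise in the difference
noise = np.std(difference[:10, :10])          # rough noise estimate from a corner
candidates = np.argwhere(difference > 10 * noise)
```

In real pipelines the subtraction also requires PSF matching and flux scaling; the neural-network classifiers described next decide which of the surviving candidates are real.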
Saliency maps: what pixels matter?
search
template
difference
95% accurate
Acero-Cuellar et al. DESC submitted
Tatiana Acero-Cuellar
UNIDEL fellow,
LSST Data Science Fellow
FASTlab Flash highlight
The Rubin LSST ML-Reliability Score (aka real-bogus)
accuracy 98.06%, purity 97.87%, completeness 98.27%
- requires instantaneous inference
- limited computational resources (CPU)
- evolving data quality
- limited ground truth data (e.g. no variable stars in training)
FASTlab Flash highlight
Challenge
data encoding
well... it depends
2025
(2026)
edge computing
Is the data also gonna be better?
LOW RES SIM
HIGH RES SIM
AI-AIDED HIGH RES
visualization and concept credit: Alex Razim
Kaicheng Zhang et al 2016 ApJ 820 67
SN 2011fe
deSoto+2024
Boone 2017
7% of LSST data
The rest
Lochner et al 2018
Dr. Somayeh Khakpash
LSST Catalyst Fellow
Lehigh University Visiting Prof.
FASTlab Flash highlight
we introduce
Gaussian process Optimized Photometric Regression of Extragalactic Archival Ultraviolet-infrared eXplosions, a.k.a GOPREAUX—
a Python package for Gaussian Process Regression of multi-wavelength transient photometry. [...]
This allows for predictions of light curves [...] at higher redshifts, where the rest-frame UV emission is redshifted into the observer-frame optical or infrared.
Gaussian processes work by imposing a kernel that represents the covariance in the data (how data depend on time, or on time and wavelength). Imposing the same kernel for different time-domain phenomena is in principle incorrect
=> bias toward known classes
Methodological issues with these approaches
Neural processes replace the imposed kernel with a learned one
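The "imposed kernel" critique refers to setups like this minimal Gaussian Process regression sketch, where a squared-exponential kernel is chosen by hand (function names and parameter values here are my own illustrative assumptions, not from GOPREAUX):

```python
import numpy as np

def rbf_kernel(t1, t2, amp=1.0, scale=5.0):
    """Squared-exponential covariance between two sets of times."""
    d = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-0.5 * (d / scale) ** 2)

# sparse, noisy "light curve" observations
t_obs = np.array([0.0, 3.0, 7.0, 12.0, 20.0])
y_obs = np.sin(t_obs / 5.0)
sigma = 0.05                                  # assumed photometric noise

# GP posterior mean on a dense time grid
t_grid = np.linspace(0, 20, 101)
K = rbf_kernel(t_obs, t_obs) + sigma**2 * np.eye(len(t_obs))
K_star = rbf_kernel(t_grid, t_obs)
mean = K_star @ np.linalg.solve(K, y_obs)     # interpolated light curve
```

The amp and scale hyperparameters encode assumptions about the variability; a neural process instead learns that covariance structure from data.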
NASA FINESST Fellow
Siddharth Chaini
FASTlab Flash highlight
Challenge
follow up
Rubin will see ~1000 SN every night!
Credit: Alex Gagliano IAIFI fellow MIT/CfA
When they go high, we go low... spectra classification at low resolution
Astrophysical spectra require the capture of enough photons at each wavelength:
large telescopes
long exposure times
bright objects
Willow Fox Fortino
UDelaware
When they go high, we go low
Classification power vs spectral resolution for SNe subtypes
FASTlab Flash highlight
Adapting Transformer architecture (Vaswani et al. 2017)
Classification from sparse data: Lightcurves
Vaswani+ 2017 Attention is all you need
AI was transformed in 2017 by this paper
data embedding
classification head
As seen in Muthukrishna+2019
A new AI-based classifier for SN spectra at low resolution
anomaly detection
Challenge
Most classifiers for variable stars use Random Forest (not distance based)
In distance based classification, optimal distances can be found for the class of interest: flexible, customizable, efficient
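A hedged sketch of the idea (not the actual method in the paper): nearest-centroid classification under a tunable weighted distance, where objects far from every known class get flagged as anomalies.

```python
import numpy as np

rng = np.random.default_rng(3)

# two known classes in a toy 2-feature space
class_a = rng.normal([0.0, 0.0], 0.3, size=(100, 2))
class_b = rng.normal([3.0, 3.0], 0.3, size=(100, 2))
centroids = {"A": class_a.mean(axis=0), "B": class_b.mean(axis=0)}

def weighted_dist(x, c, w=np.array([1.0, 1.0])):
    # the weights w are the flexible part: they can be optimized per class
    return np.sqrt(np.sum(w * (x - c) ** 2))

def classify(x, threshold=2.0):
    dists = {k: weighted_dist(x, c) for k, c in centroids.items()}
    label = min(dists, key=dists.get)
    # if even the nearest class is far away, flag an anomaly
    return "anomaly" if dists[label] > threshold else label
```

Because the distance itself is customizable per class, it can be tuned to the class of interest, and large distances to all classes are a natural out-of-sample anomaly signal.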
https://arxiv.org/pdf/2403.12120.pdf
Astronomy and computing
FASTlab Flash highlight
NASA FINESST Fellow
Siddharth Chaini
UDelaware
Are we prepared to discover new physics?
This ensemble distance method excels at identifying out-of-sample anomalies!
Chaini, Bianco, and Mahabal
submitted to MLPS NEURIPS 2025
NSF award
2219731
Will AI replace us (astrophysicists)?
US-TOPIA
is a word I am borrowing from Margaret Atwood to describe the fact that the future is us.
However loathsome or loving we are, so will we be.
Whereas utopias are the stuff of dreams and dystopias are the stuff of nightmares, ustopias are what we create together when we are wide awake
thank you!
University of Delaware
Department of Physics and Astronomy
Biden School of Public Policy and Administration
Data Science Institute
federica bianco
fbianco@udel.edu
Opportunity
foundational models
AI
ISN'T FREE
ethics of AI
Challenge + Opportunity
Knowledge is power
With great power comes great responsibility
"Sharing is caring"
the butterfly effect
We use astrophysics as a neutral and safe sandbox to learn how to develop and apply powerful tools.
Deploying these tools in the real world can do harm.
Ethics of AI is essential training that all data scientists should receive.
Why does this AI model whiten Obama's face?
Simple answer: the data is biased. The algorithm is fed more images of white people.
But really, would the opposite have been acceptable? The bias is in society.
Joy Buolamwini
Challenge
ecological AI
stellar explosions
stellar eruptions
stellar variability
Limited Field of View: Space telescopes often have smaller fields of view compared to ground-based surveys.
Data Latency: Delays in data transmission and processing can affect rapid follow-up.
Resource Allocation: Competition for telescope time can limit observations of certain transients.... LET'S NOT TRIGGER 3 ToOs ON THE SAME TRANSIENT!!
(Racusin et al., 2008)
GRB 080319B, the brightest optical burst ever observed
SWIFT
rapid response
SWIFT
HST, Chandra, SPITZER
...
Kepler, K2, TESS
high precision dense time series
Kepler satellite EB
LSST (simulated) EB
is transient data AI ready?
The PLAsTiCC challenge winner, Kyle Boone, was a grad student at Berkeley, and did not use a Neural Network!
Hlozek et al, 2020
DATA CURATION IS THE BOTTLENECK
models contributed by the community were in
- different formats (spectra, lightcurves, theoretical, data-driven)
- the people who contributed the models were included in 1 paper at best
- incompleteness
- systematics
- imbalance
Khakpash+ 2024 showed that the models were biased for SN Ibc
AVOCADO, SCONE: all these models are trained on a biased dataset and are currently being used for classification
Ibc data-driven templates vs PLAsTiCC
Dr. Somayeh Khakpash
LSSTC Catalyst Fellow, Rutgers
Visiting Faculty, Lehigh
By federica bianco
Opportunities and Challenges of Machine Learning and AI for the next-generation time domain astronomical survey