Mechanical Engineering, Carnegie Mellon University
Advisors: Prof. Burak Kara, Prof. Jessica Zhang
Offline stage (compute cost immaterial): collect high-fidelity data, precompute the reduced model.
Online stage (must be cheap to run): evaluate the reduced model.
We are concerned with
Target application
Extensive literature on ROM (traditional and ML approaches)
Landscape of modeling approaches, from mesh-based to neural-ansatz discretizations and from PDE-based to data-driven models: FEM, FVM, IGA, and spectral methods; physics-informed NNs; implicit neural representations; Fourier neural operators; DeepONet; convolutional and graph NNs; and hybrid physics/ML approaches such as neural ODEs, universal differential equations, and reduced order modeling. (Adapted from Núñez, CEMRACS 2023.)
Spectrum of ROM approaches with increasing inductive bias, from data-driven / black-box to physics-based: the latent space may come from PCA or an ML-based encoding, the dynamics may be learned (neural surrogate) or physics-based, and closure modeling sits at the most physics-informed end.
Singular Value Decomposition
On closures for reduced order models—A spectrum of first-principle to machine-learned avenues, Ahmed et al. Phys of Fluids, 2021
Apply the learned mapping \(u(x, t) \approx \Phi_n \tilde{u}(t)\) to the PDE
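To make the linear (PCA/POD) end of this spectrum concrete, here is a minimal sketch in NumPy; the snapshot matrix, grid size, and mode count are illustrative placeholders rather than values from the experiments discussed here.

```python
# Minimal POD/PCA sketch: build Phi_n from snapshots and project onto it.
import numpy as np

def pod_basis(snapshots, n_modes):
    """Leading left-singular vectors of the snapshot matrix form the basis Phi_n."""
    Phi, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return Phi[:, :n_modes]

U = np.random.rand(1024, 200)        # placeholder snapshots, shape (n_space, n_time)
Phi_n = pod_basis(U, n_modes=16)     # (n_space, n_modes)
u_tilde = Phi_n.T @ U                # reduced coordinates for every snapshot
U_rec = Phi_n @ u_tilde              # reconstruction: u(x, t) ~ Phi_n @ u_tilde(t)
```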
Encoding Architectures
- (Variational) auto-encoder
- (Variational) auto-decoder
- Implicit neural representations
- Dynamic weights
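A minimal sketch contrasting the first two encoding strategies above (PyTorch; the layer sizes and data are illustrative assumptions): an auto-encoder produces the latent code with an encoder network, while an auto-decoder treats the per-snapshot codes as free parameters optimized jointly with the decoder.

```python
# Auto-encoder vs. auto-decoder: two ways to obtain a latent code for each snapshot.
import torch
import torch.nn as nn

n_x, n_z, n_samples = 1024, 16, 200
u = torch.randn(n_samples, n_x)                 # placeholder snapshot data

# Auto-encoder: encoder network maps snapshot -> latent code.
encoder = nn.Sequential(nn.Linear(n_x, 128), nn.ReLU(), nn.Linear(128, n_z))
decoder = nn.Sequential(nn.Linear(n_z, 128), nn.ReLU(), nn.Linear(128, n_x))
loss_ae = ((decoder(encoder(u)) - u) ** 2).mean()

# Auto-decoder (DeepSDF-style): latent codes are free parameters, no encoder at all.
codes = nn.Parameter(torch.zeros(n_samples, n_z))
loss_ad = ((decoder(codes) - u) ** 2).mean()
opt = torch.optim.Adam(list(decoder.parameters()) + [codes], lr=1e-3)
```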
Results from 2D Burgers' problem, \(\mathit{Re} = 10,000\) (\(11\times\) speedup)
Shallow, masked auto-encoder
In follow-up work (May 2023), the authors combined shallow encoders with domain-decomposition methods.
ML-based nonlinear-manifold reduced-order modeling: nonlinear-manifold latent space learned with ML, Petrov-Galerkin time-evolution
| Challenge | Possible Research Directions |
|---|---|
| Use of small autoencoders | Use novel encoding methods (implicit neural representations, INRs) |
| Fixed latent-space size | Develop a Fourier-based auto-encoder that can adjust the latent-space size; behaves similarly to a convolutional autoencoder |
| Fixed hyper-reduction points | INRs allow sampling at any point |
| Autoencoder training too expensive | Attempt a pre-training / fine-tuning methodology; attempt auto-decoder architectures |
| Reduce data dependence | Explore over-fitting with dynamic-weights methods |
| Paper | Method | Description |
|---|---|---|
| NM-ROM Deep CAE (Carlberg, 2020) | Latent space: deep conv auto-encoder. Time: equation-based (PG-HR) | First work combining an ML latent space with physics-based dynamics. Slow online solve due to the high cost of Jacobian computation. Examples: 1D inviscid Burgers |
| NM-ROM Shallow CAE (Kim, 2022) | Latent space: shallow conv auto-encoder. Time: equation-based (PG-HR) | Replaces the deep CAE with a shallow CAE to speed up the online solve. Examples: 1D/2D viscous Burgers, Re = 10k |
| Neural Implicit Flow (Brunton, 2022) | Space/time: implicit neural field | Full surrogate model. Examples: 1D KS, 2D NS (Rayleigh-Taylor instability), isotropic turbulence |
| Continuous ROM (Chen, Carlberg, 2023) | Latent space: implicit neural field. Time: equation-based (PG-HR) | No online dependence on full-order-model size. Examples: 1D inviscid Burgers, NS (vortex shedding), nonlinear elasticity |
| Dynamic Weights (Núñez, 2023) | Latent space: dynamic weights. Time: data-driven | Latent space given by the weights of an NN; a neural ODE is learned to evolve the network weights. Examples: 1D Burgers, 1D KdV, 1D KS |
| Deep-HyROMnet (Manzoni, 2023) | Latent space: implicit neural field. Time: equation-based (PG-HR) | Trains a neural network to do hyper-reduction |
Lee, Carlberg, Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders, J. Comput. Phys., 2020
| Challenges | Possible Research Directions |
|---|---|
| Expensive training | Use novel encoding methods in place of the auto-encoder; attempt a pre-training / fine-tuning methodology; attempt auto-decoder architectures |
| Reduce data dependence | Explore over-fitting with dynamic-weights methods |
| Fixed latent-space size | Develop a Fourier-based auto-encoder that can adjust the latent-space size |
| Paper | Method | Description |
|---|---|---|
| NM-ROM Deep CAE (Carlberg, 2020) | Latent space: deep CAE. Time: equation-based (PG) | Solve a dynamical system whose space discretization is given by an NN. Online solve depends on FOM size. |
| NM-ROM Shallow CAE (Kim, 2022) | Latent space: shallow CAE. Time: equation-based (PG-HR) | Replace the deep CAE in NM-ROM Deep CAE with a shallow network. Online solve independent of FOM size thanks to hyper-reduction. |
| DL-ROM (Manzoni, 2020) | Latent space: CAE. Time: deep NN | Surrogate model inside an AE bottleneck. Can be queried at any point without time-evolution. |
| Deep-HyROMnet (Manzoni, 2023) | Latent space: implicit neural field. Time: equation-based (PG-HR) | Train a neural network to do hyper-reduction. |
| Neural Implicit Flow (Brunton, 2022) | Space/time: implicit neural field | Full surrogate model; can be queried at any time without requiring a dynamical-system solve. |
| Continuous ROM (Chen, Carlberg, 2023) | Latent space: implicit neural field. Time: equation-based (PG-HR) | No online dependence on full-order-model size. |
| Dynamic Weights (Núñez, 2023) | Latent space: dynamic weights. Time: data-driven | Latent space given by the weights of an NN; a neural ODE is learned to evolve the network weights. |
| Proposal 1 | Latent space: Fourier AE, INR. Time: equation-based (PG-HR) | Fourier-based encoder, INR-based decoder. |
| | NM ROM D | NM ROM S | DL-ROM | Deep-HyROM | NIF | CROM | DW | Proposal 1 |
|---|---|---|---|---|---|---|---|---|
| Nonlinear latent space | Y | Y | Y | Y | Y | Y | Y | Y |
| Data => latent space | Y | Y | Y | Y | Y | Y | Y | Y |
| Expensive training | Y | Y | Y | Y | Y | Y | Y | Y |
| Autoencoder slows down online solve | Y | Y | N | N | N | N | N | N |
| Fixed latent-space size | Y | Y | Y | Y | Y | Y | Y | N |
| Equation-based dynamics | Y | Y | N | Y | N | Y | N | Y |
| Fixed hyper-reduction points | - | N | - | N | - | Y | - | Y |
Title: Fourier-based auto-encoders for Nonlinear Manifold Reduced Order Modeling.
Possible new contributions
Plan
Title: Fourier-based auto-encoders for Nonlinear Model Order Reduction.
Possible new contributions
Updates
Notes from discussion
Plan
Training Trajectory
Test Trajectory
Kim et al, A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. J. of Comp. Phys
Title: Fourier-based auto-encoders for Nonlinear Model Order Reduction.
Learned latent space with auto-encoders, equation-based time-evolution
Literature Review
Possible new contributions
Updates
Topology optimization: Lit review on level set method. Meeting with Prof. Kara.
Plan
Kim et al, A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. J. of Comp. Phys
Training Trajectory
Test Trajectory
Nonlinear Reduction
Would be interesting to look at multilevel NNs on SIRENs.
Is the equivalent of getting additional PCA modes training a new autoencoder with boosting?
Topology Optimization
Topic: Learned latent space with auto-encoders, equation-based time-evolution
Title: Fourier-based auto-encoders for Nonlinear Model Order Reduction
Literature Review
Possible new contributions
Updates
Topology optimization
Plan
Kim et al, A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. J. of Comp. Phys
Principal Component Analysis (Traditional)
Singular Value Decomposition
Convolutional Autoencoders
On closures for reduced order models—A spectrum of first-principle to machine-learned avenues, Ahmed et al. Phys of Fluids, 2021
Results grid for each method at \(\mu=0.6\) (train), \(\mu=0.7\) (test), \(\mu=1.2\) (train), \(\mu=1.1\) (test):
- Implicit Neural Representation: encoder/decoder (60k, 3k params), latent-space size 4
- Convolutional Auto-Encoder: encoder/decoder (60k, 60k params), latent-space size 16
- Principal Component Analysis: 32 projection modes
- Principal Component Analysis: 16 projection modes
CROM: Continuous Reduced-Order Modeling of PDEs using Implicit Neural Representations, Int'l Conference on Learning Representations, 2023
Implicit Neural Representations allow for a smaller latent-space size with equivalent representation capacity.
Park et al., DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation, 2019
CROM: Continuous Reduced-Order Modeling of PDEs using Implicit Neural Representations, Int'l Conference on Learning Representations, 2023
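A minimal implicit-neural-representation decoder is sketched below (PyTorch; a 1D problem and illustrative sizes are assumed, and this is not the exact CROM architecture): the network maps a query point \(x\) and a small latent code to the field value, so the field can be sampled at any location.

```python
# Implicit neural representation: u(x) = decoder(x, z), queryable at arbitrary x.
import torch
import torch.nn as nn

class INRDecoder(nn.Module):
    def __init__(self, dim_x=1, dim_z=4, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_z, width), nn.GELU(),
            nn.Linear(width, width), nn.GELU(),
            nn.Linear(width, 1),
        )

    def forward(self, x, z):
        # x: (n_points, dim_x); z: (dim_z,) shared latent code for the whole field
        zb = z.unsqueeze(0).expand(x.shape[0], -1)
        return self.net(torch.cat([x, zb], dim=-1))

decoder = INRDecoder()
x = torch.linspace(0.0, 1.0, 256).unsqueeze(-1)   # arbitrary query points
u = decoder(x, torch.zeros(4))                    # field evaluated anywhere in space
```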
Topic: Learned latent space with auto-encoders, equation-based time-evolution
Title: Making Nonlinear Model-Order Reduction robust and cheap
Literature Review
Possible new contributions
Updates
Topology optimization
Plan
| Paper | Method | Contribution | Model training | Online solve |
|---|---|---|---|---|
| Conventional approach | Latent space: PCA applied to data. Time: equation-based (PG-HR) | Learn a (linear) reduced representation via PCA; evolve a compact dynamical system | Apply SVD to the data matrix. Time: < 2 s | Evolve the dynamical system by minimizing the residual at selected points (hyper-reduction) |
| NM-ROM Deep CAE (Carlberg, 2020) | Latent space: deep CAE. Time: equation-based (PG) | Solve a dynamical system whose space discretization is given by an NN | Deep conv. auto-encoders; CNN => numerical artifacts. 1K epochs (~60K / 60K params). Time: ~5 min | Online solve depends on FOM size |
| Continuous ROM (Chen, Carlberg, 2023) | Latent space: implicit neural field. Time: equation-based (PG-HR) | Use an implicit neural field in place of conv AEs | Conv encoder, MLP decoder; no artifacts, smooth output. 180K epochs (~20K / 10K params). Time: ~2 hrs | No online dependence on full-order-model size |
| Proposal | Latent space: implicit neural field. Time: equation-based (PG-HR) | Reduce training cost via an encoder-free training approach; increase latent-space size by learning a correction model to the original model | No encoder, MLP decoder; no artifacts, smooth output. 1K epochs, ~10K params. Training time: ~2 min | No online dependence on full-order-model size |
Training statistics are from my model implementation trained on Burgers' data.
Kim et al, A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. J. of Comp. Phys
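For intuition on the equation-based (Petrov-Galerkin) time-evolution used by these methods, here is a hedged sketch of latent dynamics through the decoder Jacobian; the `decoder` and `rhs` below are toy placeholders, and a practical implementation would evaluate the residual only at hyper-reduction points.

```python
# Evolve the latent state z so the decoded field follows du/dt = f(u) in a least-squares sense.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 128))  # z -> u (toy)
rhs = lambda u: -u * torch.roll(u, 1)                                     # toy semi-discrete f(u)

def latent_rhs(z):
    J = torch.autograd.functional.jacobian(decoder, z)    # (n_full, n_latent)
    f = rhs(decoder(z))                                    # (n_full,)
    # dz/dt is the least-squares solution of J dz/dt = f
    return torch.linalg.lstsq(J, f.unsqueeze(-1)).solution.squeeze(-1)

z = torch.zeros(3)
z_next = z + 1e-3 * latent_rhs(z)    # one explicit Euler step in latent space
```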
Results grid for each method at \(\mu=0.6\) (train), \(\mu=0.7\) (test), \(\mu=1.2\) (train), \(\mu=1.1\) (test):
- Implicit Neural Representation: encoder/decoder (60k, 12k params), latent-space size 4
- Convolutional Auto-Encoder: encoder/decoder (60k, 60k params), latent-space size 16
- Principal Component Analysis: 32 projection modes
- Principal Component Analysis: 16 projection modes
CROM: Continuous Reduced-Order Modeling of PDEs using Implicit Neural Representations, Int'l Conference on Learning Representations, 2023
From the experiment: implicit neural representations allow for a smaller latent-space size with equivalent representation capacity.
Park et al., DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation, 2019
CROM: Continuous Reduced-Order Modeling of PDEs using Implicit Neural Representations, Int'l Conference on Learning Representations, 2023
Results from auto-decoder training on Burgers' data
Decoder: 12K parameters
Latent space size: 3
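A minimal sketch of the auto-decoder workflow behind these results (PyTorch; the decoder, sizes, and snapshot are illustrative assumptions): since there is no encoder, the latent code for a new snapshot is recovered by optimizing it against the frozen decoder.

```python
# Auto-decoder inference: fit a latent code to a new snapshot with the decoder frozen.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 128))  # assume trained offline
for p in decoder.parameters():
    p.requires_grad_(False)

def encode_by_optimization(u_new, n_z=3, steps=200, lr=1e-2):
    z = torch.zeros(n_z, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((decoder(z) - u_new) ** 2).mean()
        loss.backward()
        opt.step()
    return z.detach()

z0 = encode_by_optimization(torch.randn(128))   # latent code for an unseen snapshot
```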
Topic: Learned latent space with auto-encoders, equation-based time-evolution
Title: Making Nonlinear Model-Order Reduction robust and cheap
Challenges
Possible new contributions
Updates
Topology optimization
Plan
| Paper | Model training | Trade-off / Result |
|---|---|---|
| Conventional approach. Latent space: PCA applied to data. Time: equation-based (PG-HR) | Apply SVD to the data matrix. Time: < 2 s | Extremely fast training; the large latent-space size makes the online solve expensive. Result: PCA with 32 projection modes |
| NM-ROM Deep CAE (Carlberg, 2020). Latent space: deep CAE. Time: equation-based (PG) | Deep conv. auto-encoders; CNN => numerical artifacts. 1K epochs (~60K / 60K params). Time: ~5 min | Train a large encoder/decoder model with many tunable parameters. Results not very accurate: MSE ~ 5e-3. Relatively large latent-space size. Online solve of the same order as the full-order calculation |
| Continuous ROM (Chen, Carlberg, 2023). Latent space: implicit neural field. Time: equation-based (PG-HR) | Conv encoder, MLP decoder; no artifacts, smooth output. 180K epochs (~30K / 10K params). Time: est. ~9 hrs | Extremely long training; small batch size, stochastic training; small oscillations in the prediction take a long time to die out. Fast online solve, SOTA results. Result: trained for 6K epochs; expected MSE ~ 2e-5 after 180K epochs |
| Proposal. Latent space: implicit neural field. Time: equation-based (PG-HR) | No encoder, MLP decoder; no artifacts, smooth output. 5K epochs, ~3K params. Training time: ~30 min | Encoder-free training; small NNs make BFGS / Gauss-Newton possible; fast online solve. Training: Adam (6K epochs), BFGS (5K epochs). Got MSE ~ 7e-6; training time < 30 min; no artifacts; the shock is perfectly captured |
Topic: Learned latent space with auto-encoders, equation-based time-evolution
Title: Making Nonlinear Model-Order Reduction robust and cheap
Challenges
Possible new contributions
Updates from the week
Proposal
Plan
Topic: Learned latent space with auto-encoders, equation-based time-evolution
Title: Making Nonlinear Model-Order Reduction robust and cheap
Challenges
Possible new contributions
Updates from the week
Plan
Topic: Learned latent space with auto-encoders, equation-based time-evolution
Title: Making Nonlinear Model-Order Reduction robust and cheap
Challenges
Possible new contributions
Updates from last week
Plan
Topic: Learned latent space with auto-encoders, equation-based time-evolution
Title: Making Nonlinear Model-Order Reduction robust and cheap
Challenges
Possible new contributions
Updates from this week
Plan
Next Steps
Topic: Learned latent space with auto-encoders, equation-based time-evolution
Title: Making Nonlinear Model-Order Reduction robust and cheap
Challenges
Possible new contributions
Updates from this week
Plan
Next Steps
Topic: Learned latent space with auto-encoders, equation-based time-evolution
Title: Making Nonlinear Model-Order Reduction robust and cheap
Challenges
Possible new contributions
Updates from this week
Plan
With L2-Regularization, \(\lambda=1\)
No regularization
Hidden layer width 32
Hidden layer width 256
Topic: Learned latent space with auto-encoders, equation-based time-evolution
Title: Making Nonlinear Model-Order Reduction robust and cheap
Challenges
Possible new contributions
Updates from this week
Plan
Solution, first derivative
Predicted solution matches the data
\( \partial_t u + c\cdot\partial_x u = 0\)
Advection Equation
Predicted solution matches the data
Kim, Choi, Widemann, Zohdi, 2020
Problems
Contributions
Topic: Learned latent space with auto-encoders, equation-based time-evolution
Title: Making Nonlinear Model-Order Reduction robust and cheap
Possible new contributions
Updates from this week
Plan
Challenge:
Topic: Learned latent space with auto-encoders, equation-based time-evolution
Title: Making Nonlinear Model-Order Reduction robust and cheap
Possible new contributions
Updates from this week
Plan
\(\lambda = 0 \) (no regularization)
\(\lambda = 0.5\)
Topic: Learned latent space with auto-encoders, equation-based time-evolution
Title: Making Nonlinear Model-Order Reduction robust and cheap
Possible new contributions
Updates from this week
Plan
Initial condition
Ours
Baseline
Nonlinear Manifold Reduced Order Modeling with Neural Fields
Existing methods
New Contributions
Progress
Writing Plan
Solution (red), Data (black)
Error at last time-step
Nonlinear Manifold Reduced Order Modeling with Neural Fields
Existing methods
New Contributions
Progress
Writing Plan
Kuramoto-Sivashinsky Equation
2D Burgers problem
Nonlinear Manifold Reduced Order Modeling with Neural Fields
Existing methods
New Contributions
Progress
Writing Plan
2D Burgers problem
Nonlinear Manifold Reduced Order Modeling with Neural Fields
Existing methods
New Contributions
Progress
Writing Plan
Nonlinear Manifold Reduced Order Modeling with Neural Fields
Vedant Puri
| Orthogonal Functions | Deep Neural Networks |
|---|---|
| Curse of dimensionality | Dimension independent; model size scales only with the complexity of the signal |
Landscape of modeling approaches (mesh-based vs. neural ansatz, PDE-based vs. data-driven; adapted from Núñez, CEMRACS 2023): FEM, FVM, IGA, and spectral methods; physics-informed NNs; implicit neural representations; Fourier neural operators; DeepONet; convolutional and graph NNs; neural ODEs; universal differential equations; reduced order modeling.
Nonlinear Manifold Reduced Order Modeling with Neural Fields
Existing methods
New Contributions
Progress this week
Writing Plan
Smooth Neural Field Weight Regularization
Smooth Neural Field Lipschitz Regularization
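One possible form of such a regularizer is sketched below (PyTorch; the penalty weight, norm choice, and network are illustrative assumptions, not necessarily the formulation used in these experiments): the product of per-layer weight norms bounds the Lipschitz constant of an MLP with 1-Lipschitz activations, so penalizing it encourages a smoother neural field.

```python
# Lipschitz-style penalty: product of per-layer weight norms as a smoothness regularizer.
import torch
import torch.nn as nn

def lipschitz_penalty(mlp):
    bound = torch.tensor(1.0)
    for layer in mlp:
        if isinstance(layer, nn.Linear):
            bound = bound * layer.weight.abs().sum(dim=1).max()   # infinity-norm bound per layer
    return bound

field = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
x = torch.linspace(0.0, 1.0, 128).unsqueeze(-1)
u_target = torch.sin(2 * torch.pi * x)                            # placeholder training data
loss = ((field(x) - u_target) ** 2).mean() + 1e-4 * lipschitz_penalty(field)
```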
Nonlinear Manifold Reduced Order Modeling with Neural Fields
Existing methods
New Contributions
Progress this week
Writing Plan
Training error
Evolution error
Problem:
Fix:
(Cho et al. 2023)
(Berman et al. 2024)
Nonlinear Manifold Reduced Order Modeling with Neural Fields
Existing methods
New Contributions
Progress this week
Writing Plan
Parameterized PDE problem in \((\vec{x}, t, \vec{\mu})\): a hypernetwork maps the time and parameters \((t, \vec{\mu})\) to the latent coordinates \(\tilde{u}\); a decoder network maps the physical coordinates \(\vec{x}\) together with \(\tilde{u}\) to the solution \(\vec{u}(\vec{x}, t; \vec{\mu})\).
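A hedged sketch of this hypernetwork-plus-decoder layout (PyTorch; the class name, widths, and dimensions are illustrative assumptions): the hypernetwork maps \((t, \vec{\mu})\) to the latent coordinates \(\tilde{u}\), and the decoder maps \((\vec{x}, \tilde{u})\) to \(\vec{u}(\vec{x}, t; \vec{\mu})\).

```python
# Hypernetwork produces the latent code from (t, mu); decoder evaluates the field at x.
import torch
import torch.nn as nn

class HyperDecoder(nn.Module):
    def __init__(self, dim_mu=1, dim_x=1, dim_z=8, width=64):
        super().__init__()
        self.hyper = nn.Sequential(nn.Linear(1 + dim_mu, width), nn.GELU(),
                                   nn.Linear(width, dim_z))
        self.decoder = nn.Sequential(nn.Linear(dim_x + dim_z, width), nn.GELU(),
                                     nn.Linear(width, width), nn.GELU(),
                                     nn.Linear(width, 1))

    def forward(self, x, t, mu):
        z = self.hyper(torch.cat([t, mu], dim=-1))        # latent coordinates u_tilde
        z = z.expand(x.shape[0], -1)                      # broadcast over query points
        return self.decoder(torch.cat([x, z], dim=-1))    # u(x, t; mu)

model = HyperDecoder()
x = torch.linspace(0.0, 1.0, 128).unsqueeze(-1)
u = model(x, t=torch.tensor([[0.5]]), mu=torch.tensor([[1.0]]))
```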
Notes
Training error
Evolution error
Nonlinear Manifold Reduced Order Modeling with Neural Fields
Existing methods
New Contributions
Progress this week
Writing Plan
Old
New