Summary deck:

How to do physics with computers?

 

APR 23, 2025

Vedant Puri

Motivation: understanding turbulence is critical for energy systems

(Figure: scales from mesosphere \( \sim 1000\,\mathrm{km} \), to wind farm \( \sim 10\,\mathrm{km} \), to turbine \( \sim 100\,\mathrm{m} \), to blade \( \sim 10\,\mathrm{m} \).)

Phase 1: High-order finite elements on large meshes


\partial_t \vec{v} + (\vec{v}\cdot\nabla)\vec{v} = -\nabla p + \frac{1}{\text{Re}}\Delta \vec{v} + f\\ \nabla\cdot\vec{v} = 0

Navier-Stokes Equations

(Flow past bluff body \( Re = 3900 \))

Need a high-quality function representation over (complex) geometry

Main operations: \(\nabla, \, \int_\Omega\)

High-order polynomial interpolation is the underlying technology

(Panels: interpolation, differentiation, integration.)
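A minimal single-element sketch of these operations, assuming Gauss-Legendre nodes and a standard Lagrange differentiation matrix (the order \( n = 8 \) and the test function are illustrative, not from the deck):

```python
# Single high-order element: differentiate and integrate a sampled function
# via Lagrange interpolation on Gauss-Legendre nodes.
import numpy as np

n = 8                                            # polynomial order (illustrative)
x, w = np.polynomial.legendre.leggauss(n + 1)    # nodes and quadrature weights

def diff_matrix(x):
    """D[i, j] = l_j'(x_i) for the Lagrange basis {l_j} on nodes x."""
    m = len(x)
    c = np.array([np.prod(x[i] - np.delete(x, i)) for i in range(m)])
    D = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if i != j:
                D[i, j] = (c[i] / c[j]) / (x[i] - x[j])
    D[np.diag_indices(m)] = -D.sum(axis=1)       # each row annihilates constants
    return D

D = diff_matrix(x)
f = np.exp(x)                 # smooth test function on [-1, 1]
df = D @ f                    # gradient: spectrally accurate derivative ~ exp(x)
integral = w @ f              # integral over the element: Gauss quadrature ~ e - 1/e
```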

  • Prohibitively expensive
  • Challenges with meshing
  • Requires tailoring the solution to the problem

Newsflash: ML beats the curse of dimensionality-ish


Orthogonal functions:

  • Fast, accurate differentiation
  • Fast, accurate integration
  • Convergence guarantee: \( f = \tilde{f} + \mathcal{O}(h) \)
  • Model size scales exponentially with dimension

Deep neural networks:

  • Fast, accurate differentiation
  • Approximate integration
  • Model size scales with signal complexity (Weinan, 2020)

Phase 2: Hybrid ML + FEM systems for closure modeling


Reduced order modeling for accelerating PDE solves


2D Viscous Burgers problem \( (\mathit{Re} = 1\text{k})\)

Reduced order modeling with smooth neural fields

  • Use neural networks in place of mesh-based finite elements
  • Combine known physics (the PDE) with data
  • Hopefully speed up large calculations without sacrificing accuracy
  • Reduce degrees of freedom from 500k to 2
  • Speed up the calculation by 200x
  • Less than 1% error anywhere
(Figure: 2D Burgers solutions compared across \( \text{FOM} \), \( \text{POD-ROM} \), \( \text{CAE-ROM} \), \( \text{SNFL-ROM} \), \( \text{SNFW-ROM} \).)

(Figure: baseline method vs. our approaches for \( \mathrm{NN}(x) \), \( \frac{\mathrm{d}}{\mathrm{d}x} \mathrm{NN}(x) \), \( \frac{\mathrm{d}^2}{\mathrm{d}x^2} \mathrm{NN}(x) \): the baseline's first derivative shows high-frequency noise and its second derivative is non-differentiable, while smooth neural fields accurately capture the dynamics.)

\mathrm{NN}(x) \approx u(x) \;\not\Rightarrow\; \dfrac{\mathrm{d}^k}{\mathrm{d}x^k} \mathrm{NN}(x) \approx u^{(k)}(x)

Primer on model order reduction

\frac{\partial \boldsymbol{u}}{\partial t} = \mathcal{L}(\boldsymbol{x}, t, \boldsymbol{u}; \boldsymbol{\mu})


Full order model (FOM):

\boldsymbol{u}(\boldsymbol{x}, t; \boldsymbol{\mu}) \approx g_\text{FOM}(\boldsymbol{x}, \textcolor{red}{\bar{u}(t; \boldsymbol{\mu})}) = \mathbf{\Phi} \cdot \textcolor{red}{\bar{u}(t; \boldsymbol{\mu})}

Linear POD-ROM:

\textcolor{red}{\bar{u}(t; \boldsymbol{\mu})} \approx g'_\text{ROM}(\textcolor{orange}{\tilde{u}(t; \boldsymbol{\mu})}) = \bar{u}_0 + \mathbf{P} \cdot \textcolor{orange}{\tilde{u}(t; \boldsymbol{\mu})}

Nonlinear ROM:

\boldsymbol{u}(\boldsymbol{x}, t; \boldsymbol{\mu}) \approx g_\text{ROM}(\boldsymbol{x}, \textcolor{blue}{\tilde{u}(t; \boldsymbol{\mu})}) = \mathrm{NN}_\theta\left(\boldsymbol{x}, \textcolor{blue}{\tilde{u}(t; \boldsymbol{\mu})} \right)

Learn low-order spatial representations

Time-evolution of reduced representation with Galerkin projection

(Diagram: manifold projection takes \( \bar{u}(t=0) \in \mathbb{R}^{N_\text{FOM}} \) to \( \tilde{u}(t=0) \); model inference \( h_\text{ROM} \) evolves \( \tilde{u}(t=0) \to \tilde{u}(t=T) \) on the ROM manifold \( \mathcal{M} \); \( g_\text{ROM} \) reconstructs \( \bar{u}(t=T) \).)

\frac{\mathrm{d} \bar{u}}{\mathrm{d} t} = \mathcal{L}(\bar{u}, t) \qquad \mathbf{J}_g\frac{\mathrm{d} \tilde{u}}{\mathrm{d} t} = \mathcal{L}(g_\text{ROM}(\tilde{u}), t)

\textcolor{blue}{N_\text{Nl-ROM}} < \textcolor{orange}{N_\text{Lin-ROM}} \ll \textcolor{red}{N_\text{FOM}}
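The Galerkin-projected dynamics above can be stepped directly in the latent space. Below is a hedged sketch with a toy decoder and a toy right-hand side standing in for \( g_\text{ROM} \) and \( \mathcal{L} \); every name and dimension is an illustrative assumption:

```python
# Latent-space time stepping: solve J_g du~/dt = L(g_ROM(u~), t) in the
# least-squares sense, i.e. du~/dt = J_g^+ . f_RHS.
import jax, jax.numpy as jnp

N_FOM, N_ROM = 128, 2
W = jax.random.normal(jax.random.PRNGKey(0), (N_FOM, N_ROM)) / jnp.sqrt(N_ROM)

def g_rom(u_tilde):
    # toy nonlinear decoder R^{N_ROM} -> R^{N_FOM}, stand-in for NN_theta
    return jnp.tanh(W @ u_tilde)

k = 2 * jnp.pi * jnp.fft.fftfreq(N_FOM, d=2 * jnp.pi / N_FOM)

def f_rhs(u_bar, t):
    # toy FOM right-hand side: periodic heat equation du/dt = u_xx
    return jnp.fft.ifft(-(k**2) * jnp.fft.fft(u_bar)).real

def latent_rhs(u_tilde, t):
    J = jax.jacobian(g_rom)(u_tilde)              # (N_FOM, N_ROM)
    return jnp.linalg.lstsq(J, f_rhs(g_rom(u_tilde), t))[0]

u_tilde, dt = jnp.array([0.5, -0.3]), 1e-3
for _ in range(100):                              # explicit Euler in latent space
    u_tilde = u_tilde + dt * latent_rhs(u_tilde, 0.0)
```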

SNF-ROM: smooth latent space traversal


(Diagram: \( \Xi_\varrho \) maps parameters and time \( (t, \boldsymbol{\mu}) \) to the latent state \( \tilde{u}(t; \boldsymbol{\mu}) \).)

\varrho, \, \theta = \argmin_{\varrho, \, \theta}\left\{ \sum_{\boldsymbol{x}, \, t, \, \boldsymbol{\mu}} || \boldsymbol{u}(\boldsymbol{x}, t; \boldsymbol{\mu}) - g_\theta(\boldsymbol{x}, \Xi_\varrho(t; \boldsymbol{\mu})) ||_2^2 \right\}

Q. What prior to place on the latent space to ensure smooth/accurate traversal?

\frac{\mathrm{d}\textcolor{blue}{\tilde{u}}}{\mathrm{d} t} = \mathbf{J}_g^+ \cdot \textcolor{red}{\bar{f}_\text{RHS}}

Control the complexity of latent trajectories.

(Diagram: the latent state is evolved from \( \tilde{u}(0; \boldsymbol{\mu}) \) to \( \tilde{u}(T; \boldsymbol{\mu}) \) in \( \mathbb{R}^{N_\text{ROM}} \) by integrating \( \mathbf{J}_g^+ \cdot \textcolor{red}{\bar{f}_\text{RHS}} \).)

Supervised learning problem: \((\boldsymbol{x}, t; \boldsymbol{\mu}) \to \boldsymbol{u}(\boldsymbol{x}, t; \boldsymbol{\mu})\).

(Architecture diagram: the PDE problem supplies coordinates \( (\boldsymbol{x}, t, \boldsymbol{\mu}) \). Parameters and time \( (t, \boldsymbol{\mu}) \) are mapped to latent coordinates \( \tilde{u}(t; \boldsymbol{\mu}) \) on the intrinsic ROM manifold \( \tilde{\mathcal{U}} = \left\{ \tilde{u}(t; \boldsymbol{\mu}) \,|\, t,\, \boldsymbol{\mu} \right\} \); the smooth neural field MLP \( g_\theta \) maps \( (\boldsymbol{x}, \tilde{u}) \) to \( \boldsymbol{u}(\boldsymbol{x}, t; \boldsymbol{\mu}) \); backpropagating the loss \( L \) yields \( \nabla_\theta L \) and \( \nabla_\varrho L \).)

Learn \((t; \boldsymbol{\mu}) \to \tilde{u}(t; \boldsymbol{\mu})\) directly
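A hedged sketch of the joint training objective above: fit \( \Xi_\varrho \) and \( g_\theta \) to snapshot data by minimizing the summed squared reconstruction error. Network shapes and the toy data below are illustrative assumptions:

```python
# Joint training of the latent map xi (Xi_rho) and neural field g (g_theta).
import jax, jax.numpy as jnp

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
d_x, d_tmu, d_lat, d_hid = 1, 2, 2, 32

rho = {"W": jax.random.normal(k1, (d_lat, d_tmu)), "b": jnp.zeros(d_lat)}
theta = {"W1": jax.random.normal(k2, (d_hid, d_x + d_lat)),
         "b1": jnp.zeros(d_hid),
         "W2": jax.random.normal(k3, (1, d_hid)) / jnp.sqrt(d_hid)}

def xi(rho, tmu):                    # Xi_rho: (t, mu) -> u~
    return jnp.tanh(rho["W"] @ tmu + rho["b"])

def g(theta, x, utilde):             # g_theta: (x, u~) -> u, sine-activated MLP
    h = jnp.concatenate([x, utilde])
    return (theta["W2"] @ jnp.sin(theta["W1"] @ h + theta["b1"]))[0]

def loss(rho, theta, xs, tmus, us):  # sum ||u - g_theta(x, Xi_rho(t; mu))||^2
    preds = jax.vmap(lambda x, tmu: g(theta, x, xi(rho, tmu)))(xs, tmus)
    return jnp.sum((us - preds) ** 2)

xs = jnp.linspace(-1.0, 1.0, 64)[:, None]         # toy snapshot data
tmus = jnp.tile(jnp.array([0.1, 1.0]), (64, 1))
us = jnp.sin(jnp.pi * xs[:, 0])

# One backward pass yields both gradients, matching the diagram's
# grad_rho L and grad_theta L arrows.
grad_rho, grad_theta = jax.grad(loss, argnums=(0, 1))(rho, theta, xs, tmus, us)
```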

Neural function smoothing for accurate differentiation


\mathrm{NN}(x) \approx u(x) \;\not\Rightarrow\; \dfrac{\mathrm{d}^k}{\mathrm{d}x^k} \mathrm{NN}(x) \approx u^{(k)}(x)

Derivative calculation is carried out with automatic differentiation, making the dynamics evaluation non-intrusive.
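For example, in JAX the needed derivatives of a (stand-in) scalar neural field fall out of nested `jax.grad` calls:

```python
# Exact derivatives of a neural field via automatic differentiation.
import jax, jax.numpy as jnp

def nn(x):                      # stand-in for a trained scalar field NN(x)
    return jnp.sin(3.0 * x) + 0.5 * jnp.tanh(x)

d1 = jax.grad(nn)               # d/dx NN(x)
d2 = jax.grad(d1)               # d^2/dx^2 NN(x)
d4 = jax.grad(jax.grad(d2))     # d^4/dx^4 NN(x), e.g. for the KS term below
print(d4(0.3))
```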

SNF-ROM with Lipschitz regularization (SNFL-ROM)

\(\text{Penalize the \textcolor{blue}{Lipschitz constant} of the MLP [arXiv:2202.08345]}\)

||f(x_2) - f(x_1)||_p \leq \textcolor{blue}{c}\,||x_2 - x_1||_p
\text{(change in output bounded by } \textcolor{blue}{c} \text{ times change in input) [enwiki:1230354413]}

\text{For a single layer: } \textcolor{blue}{c_l} = ||W_l||_p

\text{For the MLP: } c_\theta \leq \textcolor{blue}{\bar{c}(\theta)} = \prod_{l=1}^L ||W_l||_p

\varrho, \, \theta = \argmin_{\varrho, \, \theta}\left\{ L_\text{data}(\varrho, \theta) + \textcolor{blue}{\alpha \bar{c}(\theta)} \right\}
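A hedged sketch of this penalty; the norm choice (\( p = 2 \), spectral) and the weighting \( \alpha \) below are illustrative, not the paper's exact parameterization:

```python
# SNFL: upper-bound the MLP's Lipschitz constant by the product of
# per-layer operator norms and add alpha * c_bar(theta) to the data loss.
import jax.numpy as jnp

def lipschitz_bound(weights):
    # c_bar(theta) = prod_l ||W_l||_2 (largest singular value per layer)
    norms = jnp.array([jnp.linalg.norm(W, ord=2) for W in weights])
    return jnp.prod(norms)

def snfl_loss(weights, data_loss, alpha=1e-4):
    return data_loss + alpha * lipschitz_bound(weights)
```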

SNF-ROM with weight regularization (SNFW-ROM)

\(\text{Directly penalize \textcolor{red}{high-frequency components} in }\dfrac{\text{d}}{\text{d} x}\text{NN}_\theta(x)\)

\frac{\mathrm{d}}{\mathrm{d} x} \mathrm{NN}_\theta(x) = \left( \prod_{l=2}^L W_l \cdot \mathrm{diag}(\textcolor{red}{\sigma'(z_{l-1})}) \right) \cdot W_1, \qquad \text{with sine layers: } \textcolor{red}{\sigma'(z_{l-1}) = \cos\left( W_l z_{l-1} + b_l \right)}

\varrho, \, \theta = \argmin_{\varrho, \, \theta}\left\{ L_\text{data}(\varrho, \theta) + \textcolor{red}{ \frac{\gamma}{2} \sum_{l=1}^L \sum_{i,j} \left| W_l^{ij} \right|^2 } \right\}
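The corresponding SNFW penalty in the same style, with an illustrative \( \gamma \):

```python
# SNFW: (gamma/2) * sum of squared weight entries added to the data loss,
# directly damping high-frequency components of d/dx NN_theta(x).
import jax.numpy as jnp

def snfw_loss(weights, data_loss, gamma=1e-2):
    penalty = sum(jnp.sum(W ** 2) for W in weights)
    return data_loss + 0.5 * gamma * penalty
```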

We present two approaches to learn inherently smooth and accurately differentiable neural field MLPs.

(Figure: \( u(x) \) vs. \( \mathrm{NN}(x) \), \( \frac{\mathrm{d}}{\mathrm{d}x} \mathrm{NN}(x) \), and \( \frac{\mathrm{d}^2}{\mathrm{d}x^2} \mathrm{NN}(x) \) for SNFL and SNFW. Standard neural field MLPs show high-frequency noise in the first derivative and are effectively non-differentiable at second order; SNFL and SNFW remain smooth.)

Takeaways

  • Numerical methods research is incredibly fun
  • You can do it too

Experiment: 1D Kuramoto-Sivashinsky problem


\frac{\partial {u}}{\partial t} + u\frac{\partial {u}}{\partial x} + \frac{\partial^2 {u}}{\partial x^2} + \nu\frac{\partial^4 {u}}{\partial x^4} = 0
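To ground the experiment, here is a minimal periodic pseudo-spectral solver sketch for this equation. The resolution, domain length \( L = 32\pi \), \( \nu = 1 \), and the small explicit RK4 step are all assumptions; stiff exponential integrators such as ETDRK4 are the usual production choice:

```python
# 1D Kuramoto-Sivashinsky on a periodic domain via FFT + classical RK4.
import jax.numpy as jnp
from jax import jit

N, L, nu, dt = 256, 32.0 * jnp.pi, 1.0, 5e-4
x = jnp.linspace(0.0, L, N, endpoint=False)
k = 2.0 * jnp.pi * jnp.fft.fftfreq(N, d=L / N)    # angular wavenumbers

def rhs(u):
    u_hat = jnp.fft.fft(u)
    u_x = jnp.fft.ifft(1j * k * u_hat).real
    u_xx = jnp.fft.ifft(-(k**2) * u_hat).real
    u_xxxx = jnp.fft.ifft((k**4) * u_hat).real
    return -u * u_x - u_xx - nu * u_xxxx

@jit
def step(u):                                       # classical RK4 stage sweep
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = jnp.cos(x / 16.0) * (1.0 + jnp.sin(x / 16.0))  # common KS initial condition
for _ in range(2000):
    u = step(u)
```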

Both Lipschitz regularization (SNFL) and weight regularization (SNFW) capture the fourth-order derivative accurately.

(Plots: relative error at \( \Delta t = \Delta t_0 \) and \( \Delta t = 10\Delta t_0 \). Annotations: "oscillations due to variation in projection error"; "highly diffusive; even POD with 2 modes".)

Experiment: 2D Viscous Burgers problem \( (\mathit{Re} = 1\text{k}) \)

\frac{\partial \boldsymbol{u}}{\partial t} + \boldsymbol{u} \cdot \boldsymbol{\nabla}\boldsymbol{u} = \nu \Delta \boldsymbol{u}


(Panels: CAE-ROM, SNFL-ROM, SNFW-ROM.)

SNFL-ROM and SNFW-ROM effectively capture the traveling shock.

Phase 3: Learning on evolving geometry


The challenge is geometry tokenization!

Challenge: no consistent geometry representation


  • No consistent geometry representation: the part evolves during the 3D printing process
  • Where should the neural network be deployed?

We learn an attention-based encoding scheme for tokenizing unstructured data that can be deployed on arbitrary point clouds.
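The deck does not specify the encoder, but one plausible shape is Perceiver-style cross-attention from learned query tokens onto the raw point cloud; everything below (names, dimensions, single-head attention) is an illustrative assumption, not the author's method:

```python
# Tokenize a variable-size point cloud into a fixed set of latent tokens
# via single-head cross-attention with learned queries.
import jax, jax.numpy as jnp

def cross_attention_tokens(points, queries, Wq, Wk, Wv):
    """points: (n_pts, d_in) unstructured cloud; queries: (n_tok, d_model)."""
    q = queries @ Wq                      # (n_tok, d_k)
    k = points @ Wk                       # (n_pts, d_k)
    v = points @ Wv                       # (n_pts, d_model)
    attn = jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]), axis=-1)
    return attn @ v                       # (n_tok, d_model): fixed-size tokens

key = jax.random.PRNGKey(0)
d_in, d_model, d_k, n_tok = 3, 64, 32, 16
k1, k2, k3, k4, k5 = jax.random.split(key, 5)
queries = jax.random.normal(k1, (n_tok, d_model))
Wq = jax.random.normal(k2, (d_model, d_k)) / jnp.sqrt(d_model)
Wk = jax.random.normal(k3, (d_in, d_k)) / jnp.sqrt(d_in)
Wv = jax.random.normal(k4, (d_in, d_model)) / jnp.sqrt(d_in)

cloud = jax.random.normal(k5, (5000, d_in))  # any number of points works
tokens = cross_attention_tokens(cloud, queries, Wq, Wk, Wv)  # (16, 64)
```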
