Mechanical Engineering, Carnegie Mellon University
Advisors: Prof. Burak Kara, Prof. Jessica Zhang
Boundary-reps
NURBS
Exact geometry
Volume mesh
Linear tets, hexes
Inexact geometry
Automated optimization loop
manual
error-prone
nonlinear
Mesh Iteration
Goal: Alleviate pain of translating between geometry representations
Implicit Geometry
Boundary-rep (NURBS)
Volume mesh (tets, hexes)
Q. What representation do neural networks like?
DistMesh (Persson, 2004)
Implicit Geometry
Volume Mesh
Method
Equivalent to a Laplacian smoothing of point distribution
# Update mesh node positions with gradient descent
sdf = load_geom()                  # implicit geometry (signed distance function)
x, y = generate_rand_pts(sdf)      # initial point distribution inside the domain
Nepochs = 100
learning_rate = 0.01
tol = 1e-6

function loss(mesh)
    # enter mesh-quality heuristics here
    return loss_val
end

for i in 1:Nepochs
    mesh = triangulate(x, y)       # retriangulate the current point set
    resi = loss(mesh)
    resi < tol && break
    # sensitivities of the loss w.r.t. node positions
    dx, dy = gradient((x, y) -> loss(triangulate(x, y)), x, y)
    x = x - dx * learning_rate     # descend the loss
    y = y - dy * learning_rate
    x, y = project_boundary_nodes(x, y, sdf)   # snap boundary nodes back onto the geometry
end
Update node locations with gradient descent
Heuristics:
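As an illustration only (not the heuristics used in this work), one plausible mesh-quality term for loss(mesh) penalizes edge lengths deviating from a target length h0:

using LinearAlgebra

# Hypothetical heuristic: penalize edge lengths deviating from target h0.
# `pts` are 2D node coordinates, `edge_list` holds node-index pairs.
function edge_length_loss(pts::Vector{Vector{Float64}}, edge_list, h0)
    sum((norm(pts[i] - pts[j]) - h0)^2 for (i, j) in edge_list)
end

pts = [[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]]
edge_list = [(1, 2), (2, 3), (3, 1)]
edge_length_loss(pts, edge_list, 0.9)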
Implicit Geometry
Volume Mesh
Motivation
Goal
| | Orthogonal Functions | Deep Neural Networks |
|---|---|---|
| Discretization | \( N \) points | \( N \) parameters, \( M \) sample points |
| Error | \( h \sim N^{-c/d} \) | \( h \sim 1 / N \) (for 2-layer networks) (Weinan, 2020) |
| Differentiation | \( \dfrac{d}{dx} \tilde{f} \sim \mathcal{O}(N^2) \) (exact) | \( \dfrac{d}{dx} \tilde{f} \sim \mathcal{O}(N) \) (exact, AD) |
| Integration | \( \int_\Omega \tilde{f}\, dx \sim \mathcal{O}(N) \) (quadrature, exact) | \( \int_\Omega \tilde{f}\, dx \sim \mathcal{O}(M) \) (Monte-Carlo, approx) |
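A small sketch of the right-hand column (assuming Zygote for AD; the two-layer "network" and sample counts are placeholders): exact pointwise derivatives via AD, and a Monte-Carlo estimate of the integral.

using Zygote

W1, b1, W2 = randn(8), randn(8), randn(8)
ftilde(x) = sum(W2 .* tanh.(W1 .* x .+ b1))     # tiny 2-layer "network"

dfdx(x) = Zygote.gradient(ftilde, x)[1]         # exact derivative via AD, O(N) work

M = 10_000
integral_mc = sum(ftilde, rand(M)) / M          # Monte-Carlo integral over [0, 1], O(M), approximate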
Mesh
\(A\underline{u} = M\underline{f} \)
Domain
Governing Equation
Boundary Constraint
\( NN_\theta \)
Discretization
\( \dfrac{d}{dt} \underline{u} = f(\underline{u}) \)
Solving
Discrete Problems
\(u(\underline{x},t) \)
\(u(\underline{x}) \)
Solution
Loss
Backpropagation
Data
\( NN_\theta \)
\( NN_\theta \)
Linear Solver
Time-Stepper
Need a high-level, fast, AD-compatible software ecosystem!
Strategy: build out a unified PDE ecosystem for Julia SciML!
Abstractions
Composable
AD support
Fast solvers
DL ecosystem
Large (NNs + solvers)
Multiple discretizations
Wants
Fast adjoints
Method
Interoperability
Optimized methods
High performance
High level
Unified PDE interface
Wrap SOTA solvers
ML-based discretizations
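A minimal sketch of the composability being targeted (assuming OrdinaryDiffEq, SciMLSensitivity, and Zygote; the toy ODE is a placeholder): an AD gradient taken straight through a differential-equation solve.

using OrdinaryDiffEq, SciMLSensitivity, Zygote

decay(u, p, t) = -p[1] .* u                        # toy ODE: du/dt = -p*u
prob = ODEProblem(decay, [1.0], (0.0, 1.0), [2.0])

loss(p) = sum(abs2, solve(prob, Tsit5(); p = p, saveat = 0.1).u[end])
dldp = Zygote.gradient(loss, [2.0])[1]             # adjoint sensitivities through the solver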
Closure Model
Deterministic
Shocks
Energy Cascade
Eddy Viscosity Model
Vanilla Neural Network
Problem: DNNs produce noisy output.
Solution: SOTA architectures use geometry-respecting convolutions
Graph NN
Convolution NN
Convolution autoencoder
Latent space embedding
Convolution decoder
Field output
Fourier Neural Operators
For PDEs on meshes,
(If you know the convolution, the output is guaranteed to be smooth)
Partition of unity
Smooth basis
Convolution
Latent space embedding
DNN
DNN
DNN
Softmax
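A minimal sketch of the partition-of-unity idea (plain Julia, no ML framework; the Fourier basis and latent vector are stand-ins): softmax weights over a smooth basis, so the reconstructed field is smooth by construction.

softmax(z) = exp.(z) ./ sum(exp.(z))

smooth_basis(x) = [1.0, sin(pi * x), cos(pi * x), sin(2pi * x)]   # assumed smooth basis on [0, 1]

# `latent` stands in for a DNN's latent-space output (one weight per basis function)
function pou_field(latent, x)
    w = softmax(latent)            # partition of unity: weights are positive and sum to 1
    sum(w .* smooth_basis(x))      # smooth field value at x
end

pou_field(randn(4), 0.3)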
(Deep) Neural Network
Neural Network
NN = Chain(
Dense(N, N, tanh),
Dense(N, N, tanh),
Dense(N, N, tanh),
Dense(N, N),
)
function fwd_pass(params, x, f)
    AinvM = NN(params)(x)     # network predicts the inverse operator A⁻¹M
    upred = AinvM * f
    return upred
end
Details
N = 16
NN = Chain(
Dense(N, N, tanh),
Dense(N, N, tanh),
Dense(N, N, tanh),
Dense(N, N),
)
function loss(params, x, f, utrue)
upred = fwd_pass(params, x, f)
norm(upred - utrue, 2)
end
function fwd_pass(params, x, f)
    d = NN(params)(x)         # predict only the diagonal of the inverse operator
    D = Diagonal(d)
    upred = D * f
    return upred
end
@time p = train(_loss, p; opt = Adam(1f-2), E = 500)
@time p = train(_loss, p; opt = Adam(1f-3), E = 7500)
@time p = train(_loss, p; opt = Adam(1f-4), E = 2000)
TEST: LOSS: 258.62992872, meanAE: 0.53594627, maxAE: 1.93666885
### TRAIN LOOP 1 ###
Iter 500: LOSS: 1.26371044, meanAE: 0.00199481, maxAE: 0.01653739
10.948193 seconds (23.98 M allocations: 16.016 GiB, 5.34% gc time, 60.50% compilation time)
### TRAIN LOOP 2 ###
Iter 7000: LOSS: 0.10834251, meanAE: 0.00021605, maxAE: 0.000802
62.686785 seconds (3.72 M allocations: 203.839 GiB, 8.89% gc time)
### TRAIN LOOP 3 ###
Iter 2500: LOSS: 0.01370436, meanAE: 2.405e-5, maxAE: 0.00021251
21.679660 seconds (1.33 M allocations: 72.838 GiB, 8.30% gc time)
### TEST STATS ###
TEST: LOSS: 0.02013022, meanAE: 3.059e-5, maxAE: 0.00086486
N = 8
N2 = N * N
NN = Chain( # predict full matrix
Dense(N, N2, tanh),
Dense(N2, N2, tanh),
Dense(N2, N2, tanh),
Dense(N2, N2),
)
function loss(params, x, f, utrue)
upred = fwd_pass(params, x, f)
norm(upred - utrue, 2)
end
function fwd_pass(params, x, f)
    m = NN(x, params, st)[1]                  # Lux-style call returns (output, state)
    M = reshape(m, (N, N, K))                 # one N×N operator per sample in the batch of K
    @tullio u[i, k] := M[i, j, k] * f[j, k]   # batched matrix-vector product
end
@time p = train(_loss, p; opt = Adam(1f-2), E = 500)
@time p = train(_loss, p; opt = Adam(1f-3), E = 7000)
@time p = train(_loss, p; opt = Adam(1f-4), E = 2500)
TEST: LOSS: 298.75399229, meanAE: 0.83071813, maxAE: 5.42661399
### TRAIN LOOP 1 ###
Iter 500: LOSS: 33.31564424, meanAE: 0.0878617, maxAE: 0.68316
17.532605 seconds (3.07 M allocations: 56.414 GiB, 1.97% gc time, 3.49% compilation time)
### TRAIN LOOP 2 ###
Iter 7000: LOSS: 1.41257299, meanAE: 0.0039907, maxAE: 0.02882316
194.406308 seconds (4.33 M allocations: 645.357 GiB, 2.68% gc time)
### TRAIN LOOP 3 ###
Iter 2500: LOSS: 0.31676085, meanAE: 0.00084305, maxAE: 0.00857081
71.722907 seconds (1.56 M allocations: 237.522 GiB, 2.44% gc time)
### TEST STATS ###
TEST: LOSS: 3.96048961, meanAE: 0.00304775, maxAE: 0.26578036
Learning \( A^{-1} \) as a full matrix for a 1D Fourier discretization using a vanilla network
Neural Network
Geometry Operator Learning - presented initial implementation on Wednesday (5/26).
Solve lapl(u) = f over a geometry parameterized by x by predicting the inverse discretized Laplace operator on the deformed geometry, u = A(x)^{-1} f, with A^{-1} = NN(x). Here the NN predicts A^{-1} as a full matrix for a 1D Fourier discretization using a vanilla network, and the reference operator on the deformed geometry is assembled as J \ Dr * J \ Dr (chain rule through the geometric mapping).
Code for n-dimensions is here. I am cleaning this code, and the domain deformation interface, in this PR.
Literature Review on Neural Operators
JUN 7, 2023
Vedant Puri
https://github.com/vpuri3
(Surrogate modeling) find the parameters \( \rightarrow \) solution map from given samples
Given a parametric PDE, \( \mathcal{L}_a(u) = f \),
(Training) for a family of parameterized functions \(\Psi_\theta: \mathcal{A} \times \Theta \rightarrow \mathcal{U} \), find \( \theta^\ast = \arg\min_\theta \sum_k \| \Psi_\theta(a_k) - u_k \|_\mathcal{U} \)
Fix a grid
Learn a finite-dimensional neural network model \(\mathbb{R}^{N'} \rightarrow \mathbb{R}^N \)
Deep Neural Network
Disadvantages:
PINNs, on the other hand, do not depend on training meshes.
But they represent the solution \(\tilde{\Psi}(a)\) only for one instance of \(a\)
Map between (approximations of) Banach spaces
A single set of network parameters describes any \( \mathcal{A}^{h_k} \rightarrow \mathcal{U}^{h_l}\) mapping
For an elliptic equation \( \mathcal{L}_a \) parameterized by \(a\), let \( G_a(x, y) \) be the associated Green's function.
Then \( \mathcal{L}_a(u) = f \) is solved by \( u(x) = \int_\Omega G_a(x, y)\, f(y)\, \mathrm{d}y \)
Proof: apply \( \mathcal{L}_a \) under the integral and use \( \mathcal{L}_a G_a(\cdot, y) = \delta(\cdot - y) \) with the sifting property of the delta function.
Global convolution
Pointwise transformation
Lifting operation
\( \kappa(x, y, a(x), a(y))\)
pointwise transformation
\(G^6 \leftarrow G^3 \) interpolation
Lifting Operator
Projection Operator
Linear transform on first \(K\) Fourier modes
Local linear transform
ML crash course - done
Approximate Green's function with NN involving only pointwise evaluations, global convolutions
As the (training) discretization is refined, the model approximates the continuum operator.
For linear equations, e.g. Poisson, the dependence on \( f \) and \( g \) is described by a linear convolution with the Green's function (see the representation formula below).
Recap (last week):
Neural operators literature review
Fourier Neural Operator best in class model
Combine local/global operations
Resolution invariant, but limited usability
Target problem: surrogate modeling
Want to develop complex geometry analog
Separate out training for BCs, forcing
This week:
Reread neural operator papers
Set up first experiment to test idea
Nonlinear in \(a\), linear in \(g, \,f\)
Separate NNs
Problem: Find map \(\nu \longrightarrow u\)
Superimpose distinct neural networks
\(N_\nu = 100, \, N_f = 100\) unique fields \( \implies 10k\) unique trajectories
\( N = 128 \) point Fourier discretization \( \implies 1.28\,\text{M}\) datapoints
Trained \(u_{\{\nu\},f_0}\) dense neural network (20k epochs, 60k params)
meanRE: 0.00029135, maxRE: 0.00173817
Solution much more dependent on \(f\) than on \(\nu\)
meanRE: 0.00348382, maxRE: 0.0233049
Trained \(u_{\{a\},f_0}\) dense neural network (20k epochs, 80k params)
WIP: replace DNN with linear Fourier Neural Operator
meanRE: 0.00500959, maxRE: 1.953022
meanRE: 1.32767527, maxRE: 1608.9298091
Linear convolutions in \(g\), \(f\)
Independent training
Separate NNs
\(N_a\) samples
\(N_a\cdot N_g\) samples
\(N_a\cdot N_f\) samples
Goal
Find (surrogate) map \( a \longrightarrow u \) that is
Updates from this week
Next Steps
Convolution Kernel
\( Wx\)
\(\mathcal{F}^{-1}W \widehat{x} \)
Local Kernel
Problem: Find map \((\nu,f) \longrightarrow u\)
Generate data with \( N = 128 \) point Fourier discretization
# Trajectories | 100 | 100 | 50 * 50 |
# Points | 12,800 | 12,800 | 320,000 |
ML models
Pointwise DNN
Sequence-to-sequence DNN
Fourier Neural Operator
Seq-2-seq DNN
Fourier Neural Operator
Seq-2-seq DNN
Fourier Neural Operator
Seq-2-seq DNN
Fourier Neural Operator
Seq-2-seq DNN
Fourier Neural Operator
Fourier Neural Operator
Pointwise DNN
Fourier Neural Operator
Pointwise DNN
Discussion
Next Steps
Operator Kernel
Operator Kernel
Bilinear
Operator Kernel
Affine
Linear
Bilinear Convolution Kernel
\( x^TWy\)
\(\mathcal{F}^{-1}(\widehat{x}^T W \widehat{y}) \)
Bilinear Local Kernel
Target Problem
Find (surrogate) map \( a \longrightarrow u \) that is
Updates
Next Steps
Convolution Kernel
\( Wx\)
\(\mathcal{F}^{-1}W \widehat{x} \)
Local Kernel
Operator Kernel
Operator Kernel
Bilinear
Operator Kernel
Affine
Linear
Affine
Affine
Affine
Affine
Affine
Affine
Bilinear Convolution Kernel
\( x^TW_\text{loc}y\)
\(\mathcal{F}^{-1}(\widehat{x}^T W_\text{conv} \widehat{y}) \)
Bilinear Local Kernel
Problem: Find map \((\nu,f) \longrightarrow u\)
Generate data with \( N = 128 \) point Fourier discretization
# Trajectories | 100 | 100 | 32 x 32 |
# Points | 12,800 | 12,800 | 131,072 |
Architectures
[x, nu, f] -> BatchNorm() -> Dense(3 , w, tanh) -> OpKernel(w, w, m, tanh)
-> OpKernel(w, w, m, tanh) -> Dense(w , 1) -> u
[x, nu, f] -> Dense(3 , w) -> OpKernel(w, w, m) -> Dense(w , 1) -> u
[x, nu] -> BatchNorm(c) -> Dense(c, w, tanh), OpKernel(w, w, m) ↘
OpConvBilinear(w, w, m, 1) -> u
[f] -> Dense(c, w) ↗
[x, nu] -> Dense(c, w), OpKernel(w, w, m) ↘
OpConvBilinear(w, w, m, 1) -> u
[f] -> Dense(c, w) ↗
Discussion
Working on
Next Steps
TODO
Train on \(N_\nu\cdot N_f\) trajectories
Train on \(N_\nu\)
trajectories
Target Problem
Find (surrogate) map \( (x, a, f, g, \dotsc) \longrightarrow u \)
So Far ( ~ 1/3 of the way there)
Next Steps
\(N_a\) samples
\(N_a\cdot N_g\) samples
\(N_a\cdot N_f\) samples
Convolution Kernel
\( W_\text{loc}x+b\)
\(\mathcal{F}^{-1}W_\text{conv} \widehat{x} \)
Local Kernel
@tullio Y_conv[co, m, b] := W_conv[co, ci, m] * X[ci, m, b]
@tullio Y_loc[co, n, b] := W_loc[co, ci] * X[ci, n, b]
Bilinear Convolution Kernel
\( x^TW_\text{loc}y\)
\(\mathcal{F}^{-1}(\widehat{x}^T W_\text{conv} \widehat{y}) \)
Bilinear Local Kernel
@einsum Z_conv[co, m, b] := X[c1, m, b] * W_conv[co, c1, c2, m] * Y[c2, m, b]
@einsum Z_loc[co, n, b] := X[c1, n, b] * W_loc[co, c1, c2] * Y[c2, n, b]
Operator Kernel
Operator Kernel
Bilinear
Operator Kernel
Affine
Linear
Modified FNO
Classic FNO
Hypothesis: Modified model should generalize better on out-of-distribution data
Experiment: Introduce random scales in the test set by multiplying by the envelope function \(e^{k\sin(\lambda x - \mu)} \)
Modified FNO
Classic FNO
Solid line -> training set, Dashed line -> test set
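A small sketch of how such a test-set perturbation could be generated (parameter ranges are assumptions, not the ones used in the experiment):

N = 128
x = range(0, 2pi; length = N)
k, λ, μ = 0.5 * rand(), rand(1:4), 2pi * rand()      # random envelope parameters per sample
envelope = exp.(k .* sin.(λ .* x .- μ))
u_test = envelope .* sin.(3 .* x)                    # sin(3x) stands in for a test field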
To run 2D problems
To involve boundary conditions
Target Problem
Find (surrogate) map \( (x, a, f, g, \dotsc) \longrightarrow u \)
So Far ( ~ 1/3 of the way there)
Next Steps
\(198k\) parameters, 6x Operator Layers \( 16 \times 16\), \(128\) modes
R² score: 0.9982961
MSE (mean SQR error): 0.00057452
RMSE (root mean SQR error): 0.02396922
MAE (mean ABS error): 0.01535009
R² score: 0.9795042
MSE (mean SQR error): 0.00701486
RMSE (root mean SQR error): 0.08375475
MAE (mean ABS error): 0.05496668
\(198k\) parameters, 6x Operator Layers \( 16 \times 16\), \(128\) modes
R² score: 0.9975712
MSE (mean SQR error): 0.00079163
RMSE (root mean SQR error): 0.02813594
MAE (mean ABS error): 0.01866633
R² score: 0.9755317
MSE (mean SQR error): 0.00802541
RMSE (root mean SQR error): 0.08958465
MAE (mean ABS error): 0.06163863
Why
TO DO
\(N_a\) samples
\(N_a\cdot N_g\) samples
\(N_a\cdot N_f\) samples
24-703: Numerical Methods in Engineering
24-787: Machine Learning and AI for Engineers
15-860: Monte Carlo Methods and Applications
15-513: Introduction to Computer Systems
Target Problem
Find surrogate map \(p = (x, a, f, g, \dotsc) \longrightarrow u \) for BVP
So Far
Earlier Proposal
Literature Review
For PDE \( \mathcal{L}(u) = f, \, x \in \mathbb{R}^d \), the Green's function \(G(x; y)\) solves \( \mathcal{L}\, G(x; y) = \delta(x - y) \)
Neural Operator models approximate the Green's function. Each layer performs the operation \( v_{l+1}(x) = \sigma\!\left( W v_l(x) + \int_\Omega \kappa_\theta(x, y)\, v_l(y)\, \mathrm{d}y \right) \)
E.g., the Fourier Neural Operator computes the integral term as \( \mathcal{F}^{-1}\!\big( R \cdot \mathcal{F}(v_l) \big)(x) \),
where \(\mathcal{F}\) is the Fourier transform
\(G^6 \leftarrow G^3 \) interpolation
Neural Operators parameterize the model in a latent space that can be interpolated to any grid
Spectral Neural Operator is an architecture that learns a model in spectral space
orthonormal basis
transform matrix
Orthogonal WRT inner product on \( C(\mathbb{R}) \)
(Classical Orthogonal Polynomials)
On \(S_1 = \{|z| =1, \, z \in \mathbb{C} \} \) (Fourier) and weight \( w(z) = 1\), we get \( \exp{(ikx)}, \, k\in\mathbb{N}\) on interval \( [0, 2\pi) \)
Many properties (Sturm-Liouville, convolution, ...)
Based on Green's function solution to BVP
Eg, Poisson equation, \(L(u) = \Delta u \)
Proposed model layers involve boundary information with convolutions
Model can train on multiple geometries (generated by deforming a fixed spectral patch)
Spectral expansions are defined on simple grids (boxes, triangles, circles, tensor products) and deformations. Challenge is to find a spectral expansion on general meshes.
One approach is to preselect a basis. For a meshed domain, \( \Omega\), precompute the first \(K\) eigenfunctions of the Laplacian, \( \Delta \phi = \lambda \phi \)
Then, employ the Spectral Neural Operator model (Laplace Neural Operator)
This model is resolution independent, as the eigenfunctions can be interpolated to any mesh, but does not generalize to new geometries.
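A minimal sketch of the precompute step (assumptions: 1D interval, uniform grid, Dirichlet BCs; N and K are placeholders): build a discrete Laplacian, take its first K eigenfunctions, and use them as the spectral basis.

using LinearAlgebra

N, K = 64, 8
h = 1 / (N + 1)
A = SymTridiagonal(fill(2.0 / h^2, N), fill(-1.0 / h^2, N - 1))   # discrete -Δ on the mesh

λ, Φ = eigen(Matrix(A))           # eigenpairs, λ ascending
Φk = Φ[:, 1:K]                    # first K eigenfunctions (columns)

u  = sin.(pi .* (1:N) .* h) .+ 0.1 .* randn(N)   # sample field on the mesh
û  = Φk' * u                      # "transform": coefficients in the Laplacian eigenbasis
ur = Φk * û                       # interpolate/reconstruct from K modes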
Target Problem:
Work Done till Now
Talking to
Mesh-based
PDE-Based
Neural Ansatz
Data-driven
FEM, FVM, IGA, Spectral
Fourier Neural Operator
Implicit Neural Representations
DeepONet
Physics Informed NNs
Convolution NNs
Graph NNs
Adapted from Núñez, CEMRACS 2023
Deep Galerkin
Neural ODEs
Universal Diff Eq
Hybrid Phys/ML
Motivation
Mix ML with PDE solvers:
Application
Closure Model
Eddy Viscosity Model
Vanilla Neural Network
The algorithmic backbone of this method is the interpolating adjoint method for backpropagating through the ODE solve
Mesh
\(A\underline{u} = M\underline{f} \)
Domain
Governing Equation
Boundary Constraint
\( NN_\theta \)
Discretization
\( \dfrac{d}{dt} \underline{u} = f(\underline{u}) \)
Solving
Discrete Problems
\(u(\underline{x},t) \)
\(u(\underline{x}) \)
Solution
Loss
Backpropagation
Data
\( NN_\theta \)
\( NN_\theta \)
Linear Solver
Time-Stepper
Mesh-based
PDE-Based
Neural Ansatz
Data-driven
FEM, FVM, IGA, Spectral
Fourier Neural Operator
Implicit Neural Representations
DeepONet
Physics Informed NNs
Convolution NNs
Graph NNs
Adapted from Núñez, CEMRACS 2023
Deep Galerkin
Neural ODEs
Universal Diff Eq
Hybrid Phys/ML
Advantages
Challenges
Many variants out there
However, key problems persist
Finite Basis PINNs, 2023
Jul 2023
Mesh-based
PDE-Based
Neural Ansatz
Data-driven
FEM, FVM, IGA, Spectral
Fourier Neural Operator
Implicit Neural Representations
DeepONet
Physics Informed NNs
Convolution NNs
Graph NNs
Adapted from Núñez, CEMRACS 2023
Deep Galerkin
Neural ODEs
Universal Diff Eq
Hybrid Phys/ML
Mesh-based
PDE-Based
Neural Ansatz
Data-driven
FEM, FVM, IGA, Spectral
Fourier Neural Operator
Implicit Neural Representations
DeepONet
Physics Informed NNs
Convolution NNs
Graph NNs
Adapted from Núñez, CEMRACS 2023
Neural ODEs
Universal Diff Eq
Hybrid Phys/ML
Generative Models??
| Model | Target Problem | Methodology / Limitation | Input / Output |
|---|---|---|---|
| UDE | Learn unknown physics from data | Differentiate through the PDE solve to learn unknown params. FWD pass: solve PDE with FEM/spectral plus an added NN term. BWD pass: (adjoint method) run the simulation in reverse to get sensitivities. | Learn to match with data by solving |
| PINN | Replace solver with a black-box NN | (Non-parametric model) learn the PDE solution for fixed parameters p by overfitting a NN with PDE residual + IC/BC loss + data. Very hard to train. | Evaluate at sampled points in the domain |
| CNN/GNN | Learn a parametric map from mesh data | Utilize domain mesh/grid structure to learn sparse convolutions. Grid-to-grid model. Limited to predicting on the training grid. | Evaluate on the training grid |
| FNO | Learn a parametric solution operator for the PDE from data + PDE loss | Learn params in Fourier space (resolution independent). Architecture: pointwise evals + global convolutions. Function-to-function model, but limited to uniform grids. | Evaluate on a uniform grid of any resolution |
| DeepONet | Learn a parametric operator from fixed sensor locations. Loss: PDE + data | Mimics the PDE solve in a NN: a linear combination of learned basis functions spanning from fixed sensor points, with learned scaling coefficients for the basis. Function-to-function model. | For fixed sensor points, evaluate at arbitrary query points |
Operator learning: continuous analog of supervised learning
Both are grid-to-grid models
The Fourier transform and convolution are equivalent (convolution theorem): for signal \(f\) and filter \(g\), \( \mathcal{F}(f * g) = \mathcal{F}(f) \cdot \mathcal{F}(g) \)
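A quick numerical check of this equivalence on a periodic grid (assuming FFTW; circular convolution):

using FFTW

N = 64
f = randn(N); g = randn(N)

conv_direct = [sum(f[mod1(n - m + 1, N)] * g[m] for m in 1:N) for n in 1:N]
conv_fft    = real(ifft(fft(f) .* fft(g)))

maximum(abs.(conv_direct .- conv_fft))    # ~ machine precision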
Input
\( C_{i} \times Nx \times Ny\)
Model
Output
\( C_{o} \times Nx \times Ny\)
| | Fourier Neural Operator | Convolution Neural Network |
|---|---|---|
| Target problem | Function-to-function mapping | Grid-to-grid mapping |
| Methodology | Pointwise ops (MLP / 1x1 conv) capture local features; global convolutions capture global info (learned in Fourier space) | Learn local convolution filters |
| Continuity | Input/output are continuous functions (can be evaluated anywhere); in practice, sampled on uniform grids | Input/output are data on uniform grids |
| Resolution dependence | Performance comparable when evaluated on finer grids | Performance degrades on finer/coarser grids than the training grid |
| Evaluate on finer grids | Can evaluate on any grid; high-frequency features (not present on the training grid) can be captured | Downsample → evaluate → interpolate; loses high-frequency features |
| | Fourier Neural Operator | DeepONet |
|---|---|---|
| Target problem | Function-to-function mapping | Function-to-function mapping |
| Methodology | | |
| Continuity | | |
| Interpolation | | |
| Resolution dependence | | |
Differences
Lifting Op
Linear transform on first \(K\) Fourier modes
Local linear transform
Projection Op
#==============================#
# Lifting
v [Cv, Nx, Ny] <- P [Cp, Ca] * a [Ca, Nx, Ny]
#==============================#
# Fourier layer
## local
w1 [Cw, Nx, Ny] <- W_loc [Cw, Cv] * v [Cv, Nx, Ny]
## modal
vh [Cv, Kx, Ky] <- FFT * v [Cv, Nx, Ny] # Transform
vh [Cv, Mx, My] <- Trunc * vh[Cv, Kx, Ky] # Truncation
wh [Cw, Mx, My] <- R [Cw, Cv, Mx, My] * vh [Cv, Mx, My]
wh [Cw, Kx, Ky] <- ZeroPad * wh [Cw, Mx, My] # Zero Pad
w2 [Cw, Nx, Ny] <- iFFT * wh [Cw, Kx, Ky] # iTransform
w = w1 + w2
#==============================#
# Projection
u [Cu, Nx, Ny, B] <- Q [Cu, Cw] * w [Cw, Nx, Ny, B]
Lifting Op
Linear transform on first \(K\) Fourier modes
Local linear transform
Projection Op
# Modal operation in Fourier Layer
wh [Cw, Mx, My] <- R [Cw, Cv, Mx, My] * vh [Cv, Mx, My]
## in index notation:
wh[cw, mx, my] <- R[cw, cv, mx, my] * vh[cv, mx, my]
# contraction in cv
# elementwise mult in mx, my
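Putting the pseudocode above into a runnable form, a minimal sketch of one 1D Fourier layer (assumptions: FFTW and Tullio available, real input, plain-array weights rather than a framework layer; the toy sizes at the bottom are placeholders):

using FFTW, Tullio

function fourier_layer(v, W_loc, R, M)
    # v: [Cv, Nx, B]  input channels on the grid
    Cv, Nx, B = size(v)

    # local (pointwise) path: w1[cw, n, b] = Σ_cv W_loc[cw, cv] * v[cv, n, b]
    @tullio w1[cw, n, b] := W_loc[cw, cv] * v[cv, n, b]

    # modal path: FFT along the grid dimension, keep the first M modes
    vh = rfft(v, 2)                               # [Cv, Nx÷2+1, B], complex
    vh = vh[:, 1:M, :]                            # truncation
    @tullio wh[cw, m, b] := R[cw, cv, m] * vh[cv, m, b]   # per-mode linear map

    # zero-pad back to Nx÷2+1 modes and inverse transform
    Cw = size(R, 1)
    wh_full = zeros(eltype(wh), Cw, Nx ÷ 2 + 1, B)
    wh_full[:, 1:M, :] .= wh
    w2 = irfft(wh_full, Nx, 2)                    # [Cw, Nx, B], real

    return tanh.(w1 .+ w2)                        # nonlinearity
end

# toy usage
Cv, Cw, Nx, B, M = 4, 4, 32, 2, 8
v     = randn(Float32, Cv, Nx, B)
W_loc = randn(Float32, Cw, Cv) ./ Cv
R     = randn(ComplexF32, Cw, Cv, M) ./ Cv
u     = fourier_layer(v, W_loc, R, M)             # [Cw, Nx, B]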