Dynamic Mode Decomposition

Fully data-driven ROM approach


Presentation by Pavel Temirchev

Motivation

Assume we have a dynamical system:

\frac{\partial x}{\partial t} = f(x), \qquad x \in \mathbb{R}^n

  • the dynamics is either unknown or given as a black box
  • the dimensionality n is huge
  • we can obtain data from the process

We are interested in:

  • system identification (for computer simulations)
  • decreasing the computational complexity
  • analysis of the system behavior

Example of the dynamics

Fluid flow

Linearity assumption

Approximate the continuous dynamics by a discrete-time linear process:

x_{k+1} = Ax_k

We call x_k a snapshot.

Multidimensional data can be unraveled into a vector of size n.

Linearity assumption

Let's collect the snapshot matrices as the data from the observed process:

X = \begin{bmatrix} | & | & & | \\ x_0 & x_1 & \cdots & x_{m-1} \\ | & | & & | \end{bmatrix}
X' = \begin{bmatrix} | & | & & | \\ x_1 & x_2 & \cdots & x_m \\ | & | & & | \end{bmatrix}


So, the dynamics becomes:

X' = AX
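Assembling the snapshot pair can be sketched in NumPy (a minimal sketch; the dimensions and the random data are made up for illustration):

```python
import numpy as np

# Stack m+1 snapshots column-wise and form the time-shifted pair.
n, m = 100, 50
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((n, m + 1))   # columns x_0, ..., x_m

X = snapshots[:, :-1]    # [x_0 ... x_{m-1}]
Xp = snapshots[:, 1:]    # [x_1 ... x_m], "X prime"

assert X.shape == (n, m) and Xp.shape == (n, m)
```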

Linearity assumption

One can compute A as follows:

A = X'X^\dagger

where X^\dagger is the pseudo-inverse of X.

However, A \in \mathbb{R}^{n \times n}, which is VERY large!
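As a sanity check, the pseudo-inverse fit is a one-liner in NumPy (a toy example with made-up dimensions; with enough snapshots the true operator is recovered exactly):

```python
import numpy as np

# Toy check of A = X' X^+ on synthetic linear dynamics (all names are ours).
rng = np.random.default_rng(1)
n, m = 6, 20
A_true = 0.9 * np.eye(n)                 # a known, stable linear operator
X = rng.standard_normal((n, m))          # snapshots x_0 ... x_{m-1}
Xp = A_true @ X                          # each column advanced one step

A_est = Xp @ np.linalg.pinv(X)           # least-squares solution of X' = A X
assert np.allclose(A_est, A_true)
```

For a fluid field on a 1000 x 1000 grid, n = 10^6 and A would have 10^12 entries, which is why forming it explicitly is infeasible.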

Reduced Linear Model

Let us compute the Singular Value Decomposition of X
(also known as Proper Orthogonal Decomposition in this snapshot setting):

X = U \Sigma V^T

and truncate it, keeping only the r largest singular values:

X \approx \tilde{U} \tilde\Sigma \tilde V^T
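In NumPy the truncated SVD is just a slice of the thin one (a sketch; the matrix and the rank r are arbitrary here):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 30))

# Thin SVD: s holds the singular values in decreasing order.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

r = 10                                       # truncation rank
U_r, s_r, Vt_r = U[:, :r], s[:r], Vt[:r, :]
X_r = U_r @ np.diag(s_r) @ Vt_r              # best rank-r approximation of X

assert np.linalg.matrix_rank(X_r) == r
```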

Reduced Linear Model

Instead of computing A, we can compute its projection onto the low-rank POD basis:

\tilde A = \tilde U^T A \tilde U
= \tilde U^T X'X^\dagger \tilde U
= \tilde U^T X' \tilde V \tilde\Sigma^{-1} \tilde U^T \tilde U
= \tilde U^T X' \tilde V \tilde\Sigma^{-1}

\tilde A \in \mathbb{R}^{r \times r}
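The projected operator never requires forming the n x n matrix A (a shape-level sketch; the snapshot data here is made up):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, r = 100, 40, 5
X = rng.standard_normal((n, m))
Xp = rng.standard_normal((n, m))       # stand-in for the shifted snapshots

U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_r, s_r, V_r = U[:, :r], s[:r], Vt[:r, :].T

# A_tilde = U_r^T X' V_r Sigma_r^{-1}: an r x r matrix; A is never formed.
A_tilde = U_r.T @ Xp @ V_r @ np.diag(1.0 / s_r)
assert A_tilde.shape == (r, r)
```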

Making Predictions

So, now we are interested in predictions of future states:

x_{k+1} = Ax_k

But we can't do it with the truncated \tilde A directly!

Luckily, we can find a way to avoid computing the full A.

Reminder: eigenvalue decomposition

A \Phi = \Phi \Lambda
A = \Phi \Lambda \Phi^\dag
x_{k+1} = \Phi \Lambda \Phi^\dag x_k

Both \Phi and \Lambda can be computed relatively easily.

Making Predictions

  • Full A and truncated \tilde A share the same eigenvalues \Lambda:

\tilde A W = W \Lambda

  • Full eigenvectors \Phi can be computed as follows:

\Phi = X' \tilde V \tilde\Sigma^{-1} W

We can make predictions as follows:

x_{1} = \Phi \Lambda \Phi^\dag x_0
x_{2} = \Phi \Lambda \Phi^\dag \Phi \Lambda \Phi^\dag x_0 = \Phi \Lambda^2 \Phi^\dag x_0
\vdots
x_{k} = \Phi \Lambda^k \Phi^\dag x_0

Since \Lambda is diagonal, \Lambda^k is just element-wise exponentiation!
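Putting the pieces together, the whole procedure (truncated SVD, reduced operator, eigendecomposition, modes, prediction) fits in a short script. This is a sketch on synthetic low-rank linear dynamics; every name and dimension is ours:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, r = 50, 60, 4

# Synthetic truth: a rank-r stable linear system x_{k+1} = A x_k.
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))
A_true = Q @ np.diag([0.95, 0.8, 0.6, 0.4]) @ Q.T
x = Q @ rng.standard_normal(r)
snaps = [x]
for _ in range(m):
    x = A_true @ x
    snaps.append(x)
S = np.stack(snaps, axis=1)
X, Xp = S[:, :-1], S[:, 1:]

# DMD: SVD -> reduced operator -> eigenpairs -> dynamic modes.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_r, s_r, V_r = U[:, :r], s[:r], Vt[:r, :].T
A_tilde = U_r.T @ Xp @ V_r @ np.diag(1.0 / s_r)
lam, W = np.linalg.eig(A_tilde)             # eigenvalues Lambda in vector form
Phi = Xp @ V_r @ np.diag(1.0 / s_r) @ W     # dynamic modes

# Predict x_k = Phi Lambda^k Phi^+ x_0; Lambda^k is element-wise on lam.
b = np.linalg.pinv(Phi) @ S[:, 0]
k = 10
x_k = (Phi * lam**k) @ b
assert np.allclose(np.real(x_k), S[:, k], atol=1e-8)
```

On real data one would choose r by inspecting the singular value spectrum rather than knowing it in advance.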

Dynamic Modes and
Stability Analysis

Eigenvectors \Phi \in \mathbb{C}^{n \times r} are called Dynamic Modes.

Dynamic modes (columns of \Phi) represent spatio-temporal patterns of your data.

You can plot them:

Dynamic Modes and
Stability Analysis

Eigenvalues \Lambda \in \mathbb{C}^{r \times r} can be used for the
stability analysis of the corresponding dynamic mode.

We can plot them too, on the complex plane (Re(\lambda) vs Im(\lambda)):

  • if |\lambda_i| < 1, the mode is decaying
  • if |\lambda_i| = 1, the mode is stable
  • if |\lambda_i| > 1, the mode is growing
  • if Im(\lambda_i) \neq 0, the mode oscillates
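These rules are easy to encode (a small helper of our own; the tolerance `tol` is an arbitrary numerical threshold):

```python
# Classify a DMD eigenvalue by the stability rules above (helper name is ours).
def classify_mode(lam: complex, tol: float = 1e-9) -> str:
    mag = abs(lam)
    if mag < 1 - tol:
        kind = "decaying"
    elif mag > 1 + tol:
        kind = "growing"
    else:
        kind = "stable"
    if abs(lam.imag) > tol:
        kind += ", oscillating"
    return kind

print(classify_mode(0.9 + 0.0j))   # decaying
print(classify_mode(1.0 + 0.0j))   # stable
print(classify_mode(1.1 + 0.0j))   # growing
print(classify_mode(0.7 + 0.5j))   # decaying, oscillating
```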

No way I can represent wells in this model...

But you can

You need DMD with control:

x_{k+1} = Ax_k + Bu_k
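With inputs u_k recorded alongside the snapshots, both A and B can be fitted in one least-squares step by stacking the data (a toy sketch of the DMDc regression; dimensions and names are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
n, l, m = 4, 2, 50                   # state dim, input dim, # transitions
A_true = 0.5 * np.eye(n)
B_true = rng.standard_normal((n, l))

X = rng.standard_normal((n, m))      # states x_0 ... x_{m-1}
U = rng.standard_normal((l, m))      # inputs u_0 ... u_{m-1}
Xp = A_true @ X + B_true @ U         # next states x_1 ... x_m

Omega = np.vstack([X, U])            # stack states and inputs
G = Xp @ np.linalg.pinv(Omega)       # least-squares fit of [A  B]
A_est, B_est = G[:, :n], G[:, n:]

assert np.allclose(A_est, A_true) and np.allclose(B_est, B_true)
```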

Comparison to other ROMs

Property                               | Neural Network ROM | Dynamic Mode Decomposition | POD-Galerkin projection
---------------------------------------|--------------------|----------------------------|------------------------
Parametric dynamical systems           | yes                | not really                 | not really
Can work with unknown dynamics         | yes                | yes                        | no
Easy to implement                      | no                 | yes                        | not really
Training speed                         | slow               | moderate                   | moderate
Scalability within spatial dimension   | yes                | no                         | no
Interpretability                       | not really         | yes                        | not really