CS6015: Linear Algebra and Random Processes
Lecture 22: Singular Value Decomposition (SVD)
Learning Objectives
What is SVD?
Why do we care about SVD?
Recap
Computing $A^k\mathbf{v}$ directly requires $k$ matrix-matrix multiplications: $O(kn^3)$
With diagonalisation, we instead use
A^{k}\mathbf{v} = S\Lambda^{k}S^{-1}\mathbf{v}
Translate $\mathbf{v}$ to the eigenbasis: $[\mathbf{v}]_S = S^{-1}\mathbf{v}$, which costs $O(n^2)$
Apply the powers of the diagonal matrix: $[A^k\mathbf{v}]_S = \Lambda^k[\mathbf{v}]_S$, which costs $O(nk)$
Translate back to the standard basis using $S$, which costs $O(n^2)$
Total: $O(n^2 + nk + n^2)$ + the cost of computing the eigenvectors
(this one-time cost is then justified in the long run)
EVD/Diagonalisation/Eigenbasis is useful when the same matrix $A$ operates on many vectors repeatedly (i.e., if we want to apply $A^k$ to many vectors)
(diagonalisation leads to computational efficiency)
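To make the recap concrete, here is a minimal NumPy sketch (not from the slides; the sizes $n$, $k$ and the symmetric test matrix are illustrative assumptions, chosen so that a real eigenbasis is guaranteed to exist) comparing the naive computation of $A^k\mathbf{v}$ with the diagonalisation route.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 10
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # symmetric, so a real eigenbasis is guaranteed
v = rng.standard_normal(n)

# Naive route: form A^k by repeated matrix-matrix products, O(k n^3)
Ak = np.eye(n)
for _ in range(k):
    Ak = Ak @ A
x = Ak @ v

# Diagonalisation route: A = Q diag(lam) Q^T, so A^k v = Q diag(lam^k) Q^T v
lam, Q = np.linalg.eigh(A)             # one-time cost of computing the eigenbasis
y = Q @ (lam**k * (Q.T @ v))           # O(n^2) + O(nk) + O(n^2) per vector

print(np.allclose(x, y))               # True (up to floating-point error)
```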
But this is only for square matrices!
What about rectangular matrices?
Even better for symmetric matrices
A = Q\Lambda Q^\top
(orthonormal basis)
Wishlist
Can we diagonalise rectangular matrices?
\underbrace{A}_{m\times n}\underbrace{\mathbf{x}}_{n \times 1} = \underbrace{U}_{m\times m}~\underbrace{\Sigma}_{m\times n}~\underbrace{V^\top}_{n\times n}\underbrace{\mathbf{x}}_{n \times 1}
$V^\top\mathbf{x}$: translating from the standard basis to this new basis ($V$ is orthonormal)
$\Sigma$: the transformation becomes very simple in this basis (all off-diagonal elements are 0)
$U$: translate back to the standard basis ($U$ is orthonormal)
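Before deriving such a factorisation, here is a minimal NumPy sketch (the $4\times 3$ matrix is an arbitrary illustrative assumption) showing that a decomposition with exactly these shapes exists for a rectangular matrix.

```python
import numpy as np

A = np.random.default_rng(0).standard_normal((4, 3))   # a rectangular m x n matrix (m=4, n=3)

# full_matrices=True returns the "full" factors: U is m x m and V^T is n x n
U, s, Vt = np.linalg.svd(A, full_matrices=True)
Sigma = np.zeros((4, 3))                               # Sigma is m x n, zero off the diagonal
Sigma[:3, :3] = np.diag(s)

print(U.shape, Sigma.shape, Vt.shape)                  # (4, 4) (4, 3) (3, 3)
print(np.allclose(A, U @ Sigma @ Vt))                  # True: A = U Sigma V^T
```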
Recap: square matrices
A = S\Lambda S^{-1}
A = Q\Lambda Q^\top
(symmetric)
Yes, we can!
(true for all matrices)
The 4 fundamental subspaces: basis
Let $A_{m\times n}$ be a rank $r$ matrix
Let $\mathbf{u}_1,\mathbf{u}_2,\dots,\mathbf{u}_r$ be an orthonormal basis for $C(A)$
Let $\mathbf{u}_{r+1},\mathbf{u}_{r+2},\dots,\mathbf{u}_m$ be an orthonormal basis for $N(A^\top)$
Let $\mathbf{v}_1,\mathbf{v}_2,\dots,\mathbf{v}_r$ be an orthonormal basis for $C(A^\top)$
Let $\mathbf{v}_{r+1},\mathbf{v}_{r+2},\dots,\mathbf{v}_n$ be an orthonormal basis for $N(A)$
Fact 1: Such bases always exist

Start with any basis for the subspace
Use Gram–Schmidt to convert it to an orthonormal basis (a minimal sketch follows below)
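A minimal classical Gram–Schmidt sketch in NumPy (the matrix `B` and the function name are illustrative assumptions; this is the textbook procedure, not optimised for numerical stability):

```python
import numpy as np

def gram_schmidt(B):
    """Orthonormalise the (linearly independent) columns of B."""
    Q = []
    for b in B.T:                          # walk over the columns of B
        for q in Q:                        # subtract projections onto earlier directions
            b = b - (q @ b) * q
        Q.append(b / np.linalg.norm(b))    # normalise what is left
    return np.column_stack(Q)

B = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [0., 1., 1.]])
Q = gram_schmidt(B)
print(np.allclose(Q.T @ Q, np.eye(3)))     # True: the columns are orthonormal
```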
Fact 2: $\mathbf{u}_1,\mathbf{u}_2,\dots,\mathbf{u}_r,\mathbf{u}_{r+1},\dots,\mathbf{u}_m$ are orthonormal
$\mathbf{u}_1,\dots,\mathbf{u}_r$ are orthonormal (given: they are a basis for $C(A)$)
$\mathbf{u}_{r+1},\dots,\mathbf{u}_m$ are orthonormal (given: they are a basis for $N(A^\top)$)
Every vector in the second set is orthogonal to every vector in the first set, $\because N(A^\top)\perp C(A)$
$\mathbf{v}_1,\mathbf{v}_2,\dots,\mathbf{v}_r,\mathbf{v}_{r+1},\dots,\mathbf{v}_n$ are orthonormal
(same argument)
In addition, we want
A\mathbf{v_i} = \sigma_i\mathbf{u_i}~~\forall i\leq r
\therefore A
\begin{bmatrix}
\uparrow&\uparrow&\uparrow \\
\mathbf{v}_1&\dots&\mathbf{v}_r \\
\downarrow&\downarrow&\downarrow \\
\end{bmatrix}
=
\begin{bmatrix}
\uparrow&\uparrow&\uparrow \\
\mathbf{u}_1&\dots&\mathbf{u}_r \\
\downarrow&\downarrow&\downarrow \\
\end{bmatrix}
\begin{bmatrix}
\sigma_1&\dots&0 \\
\vdots&\ddots&\vdots \\
0&\dots&\sigma_r \\
\end{bmatrix}
Finding U and V
\underbrace{A}_{m\times n}~\underbrace{V_r}_{n \times r} = \underbrace{U_r}_{m \times r}~\underbrace{\Sigma}_{r \times r}
(we don't know what such V and U are - we are just hoping that they exist)
\therefore A
\begin{bmatrix}
\uparrow&\uparrow&\uparrow&\uparrow&\uparrow&\uparrow \\
\mathbf{v}_1&\dots&\mathbf{v}_{r}&\mathbf{v}_{r+1}&\dots&\mathbf{v}_n \\
\downarrow&\downarrow&\downarrow&\downarrow&\downarrow&\downarrow \\
\end{bmatrix}=
\begin{bmatrix}
\uparrow&\uparrow&\uparrow&\uparrow&\uparrow&\uparrow \\
\mathbf{u}_1&\dots&\mathbf{u}_r&\mathbf{u}_{r+1}&\dots&\mathbf{u}_m \\
\downarrow&\downarrow&\downarrow&\downarrow&\downarrow&\downarrow \\
\end{bmatrix}
\begin{bmatrix}
\sigma_1&\dots&0&0&\dots&0 \\
\vdots&\ddots&\vdots&\vdots& &\vdots \\
0&\dots&\sigma_r&0&\dots&0 \\
0&\dots&0&0&\dots&0 \\
\vdots& &\vdots&\vdots&\ddots&\vdots \\
0&\dots&0&0&\dots&0 \\
\end{bmatrix}

If $V_r$ and $U_r$ exist, then $V$ and $U$ also exist (extend them with the orthonormal bases of $N(A)$ and $N(A^\top)$)
\underbrace{A}_{m\times n}~\underbrace{V}_{n \times n} = \underbrace{U}_{m \times m}~\underbrace{\Sigma}_{m \times n}
The first $r$ columns of $AV$ will be $AV_r$ and the last $n-r$ columns will be $0$ (because $\mathbf{v}_{r+1},\dots,\mathbf{v}_n$ lie in the null space of $A$)
$\Sigma$ has $n-r$ zero columns and $m-r$ zero rows, so the last $m-r$ columns of $U$ will not contribute: the first $r$ columns of $U\Sigma$ will be the same as $U_r\Sigma$ and the last $n-r$ columns will be $0$
Finding U and V
AV=U\Sigma
A=U\Sigma V^\top
A^\top A=(U\Sigma V^\top)^\top U\Sigma V^\top
A^\top A=V\Sigma^\top U^\top U\Sigma V^\top
A^\top A=V\Sigma^\top\Sigma V^\top
($U$ and $V$ are orthogonal, so $U^\top U=I$; $\Sigma^\top\Sigma$ is diagonal)
$V$ is thus the matrix of the $n$ eigenvectors of $A^\top A$
We know that this always exists because $A^\top A$ is a symmetric matrix
AV=U\Sigma
A=U\Sigma V^\top
AA^\top =U\Sigma V^\top(U\Sigma V^\top)^\top
AA^\top =U\Sigma V^\top V\Sigma^\top U^\top
AA^\top=U\Sigma\Sigma^\top U^\top
($U$ and $V$ are orthogonal, so $V^\top V=I$; $\Sigma\Sigma^\top$ is diagonal)
$U$ is thus the matrix of the $m$ eigenvectors of $AA^\top$
We know that this always exists because $AA^\top$ is a symmetric matrix
$\Sigma^\top\Sigma$ contains the eigenvalues of $A^\top A$
HW5: Prove that the non-zero eigenvalues of $AA^\top$ and $A^\top A$ are always equal
Finding U and V
\underbrace{A}_{m\times n} = \underbrace{U}_{m\times m}~\underbrace{\Sigma}_{m\times n}~\underbrace{V^\top}_{n\times n}
$U$: the eigenvectors of $AA^\top$
$V^\top$: the transpose of the eigenvectors of $A^\top A$
$\Sigma$: the square roots of the eigenvalues of $A^\top A$ (or $AA^\top$)
This is called the Singular Value Decomposition of $A$
$\because U$ and $V$ always exist (since they are eigenvectors of symmetric matrices), the SVD of any matrix $A$ is always possible
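A minimal NumPy sketch of this construction (the $5\times 3$ matrix is an illustrative assumption). One practical detail not on the slides: computing $U$ from $AA^\top$ separately would leave each eigenvector's sign ambiguous, so the sketch instead obtains the $\mathbf{u}_i$ from $\mathbf{u}_i = A\mathbf{v}_i/\sigma_i$, which is equivalent for the non-zero singular values and fixes the signs consistently.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))

# V: eigenvectors of the symmetric matrix A^T A, sorted by decreasing eigenvalue
eigvals, V = np.linalg.eigh(A.T @ A)
order = np.argsort(eigvals)[::-1]
eigvals, V = eigvals[order], V[:, order]
sigma = np.sqrt(np.clip(eigvals, 0, None))      # singular values = sqrt of eigenvalues

# U (reduced): u_i = A v_i / sigma_i for the non-zero singular values
r = int(np.sum(sigma > 1e-12 * sigma[0]))
U_r = A @ V[:, :r] / sigma[:r]

# Check the (reduced) SVD: A = U_r Sigma_r V_r^T
print(np.allclose(A, U_r @ np.diag(sigma[:r]) @ V[:, :r].T))   # True
```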
Some questions
\therefore A
\begin{bmatrix}
\uparrow&\uparrow&\uparrow&\uparrow&\uparrow&\uparrow \\
\mathbf{v}_1&\dots&\mathbf{v}_{r}&\mathbf{v}_{r+1}&\dots&\mathbf{v}_n \\
\downarrow&\downarrow&\downarrow&\downarrow&\downarrow&\downarrow \\
\end{bmatrix}=
\begin{bmatrix}
\uparrow&\uparrow&\uparrow&\uparrow&\uparrow&\uparrow \\
\mathbf{u}_1&\dots&\mathbf{u}_r&\mathbf{u}_{r+1}&\dots&\mathbf{u}_m \\
\downarrow&\downarrow&\downarrow&\downarrow&\downarrow&\downarrow \\
\end{bmatrix}
\begin{bmatrix}
\sigma_1&\dots&0&0&\dots&0 \\
\vdots&\ddots&\vdots&\vdots& &\vdots \\
0&\dots&\sigma_r&0&\dots&0 \\
0&\dots&0&0&\dots&0 \\
\vdots& &\vdots&\vdots&\ddots&\vdots \\
0&\dots&0&0&\dots&0 \\
\end{bmatrix}
How do we know for sure that these $\sigma$'s (the ones beyond $\sigma_r$) will be 0?
Recall: $\mathrm{rank}(A)=\mathrm{rank}(A^\top A)=r$

If $\mathrm{rank}(A)<n$ then $\mathrm{rank}(A^\top A)<n \implies A^\top A$ is singular
$\implies A^\top A$ has 0 as an eigenvalue
How many 0 eigenvalues? As many as the dimension of the null space: $n-r$
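A quick numerical illustration of this count (the sizes and the rank-2 construction are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 6, 5, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank r = 2 by construction

eigvals = np.linalg.eigvalsh(A.T @ A)        # eigenvalues of the symmetric matrix A^T A
n_zero = np.sum(np.isclose(eigvals, 0.0, atol=1e-10))
print(n_zero)                                # 3, i.e. n - r zero eigenvalues
```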
How do we know that $\mathbf{u}_1,\dots,\mathbf{u}_r$ form a basis for the column space of $A$?
How do we know that $\mathbf{v}_1,\dots,\mathbf{v}_r$ form a basis for the row space of $A$?
So far we only know that the $\mathbf{u}_i$'s are eigenvectors of $AA^\top$ and the $\mathbf{v}_i$'s are eigenvectors of $A^\top A$
Please work this out! You really need to see this on your own! (HW5)
Why do we care about SVD?
A=U\Sigma V^\top
\therefore A=
\begin{bmatrix}
\uparrow&\uparrow&\uparrow&\uparrow&\uparrow \\
\mathbf{u}_1&\dots&\mathbf{u}_r&\dots&\mathbf{u}_m \\
&\\
&\\
\downarrow&\downarrow&\downarrow&\downarrow&\downarrow\\
\end{bmatrix}
\begin{bmatrix}
\sigma_1&\dots&0&0&0&0 \\
0&\dots&0&0&0&0 \\
0&\dots&\sigma_r&0&0&0 \\
0&\dots&0&0&0&0 \\
0&\dots&0&0&0&0 \\
\end{bmatrix}
\begin{bmatrix}
\leftarrow&\dots&\mathbf{v}_{1}^\top&\cdots&\dots&\rightarrow \\
&\\
\leftarrow&\dots&\mathbf{v}_{r}^\top&\cdots&\dots&\rightarrow \\
&\\
&\\
\leftarrow&\dots&\mathbf{v}_{n}^\top&\cdots&\dots&\rightarrow \\
\end{bmatrix}
\therefore A=
\begin{bmatrix}
\uparrow&\uparrow&\uparrow&\uparrow&\uparrow \\
\sigma_1\mathbf{u}_1&\dots&\sigma_r\mathbf{u}_r&\dots&0 \\
&\\
&\\
\downarrow&\downarrow&\downarrow&\downarrow&\downarrow\\
\end{bmatrix}
\begin{bmatrix}
\leftarrow&\dots&\mathbf{v}_{1}^\top&\cdots&\dots&\rightarrow \\
&\\
\leftarrow&\dots&\mathbf{v}_{r}^\top&\cdots&\dots&\rightarrow \\
&\\
&\\
\leftarrow&\dots&\mathbf{v}_{n}^\top&\cdots&\dots&\rightarrow \\
\end{bmatrix}
\therefore A=\sigma_1\mathbf{u_1}\mathbf{v_1}^\top+\sigma_2\mathbf{u_2}\mathbf{v_2}^\top+\cdots+\sigma_r\mathbf{u_r}\mathbf{v_r}^\top
(the matrix $[\sigma_1\mathbf{u}_1~\dots~\sigma_r\mathbf{u}_r~\dots~0]$ above has $n-r$ zero columns, so only $r$ rank-one terms survive)
We can sort these terms according to the $\sigma$'s, from the largest $\sigma$ (first term) to the smallest
$A$ has $m\times n$ elements
Each $\mathbf{u}_i$ has $m$ elements
Each $\mathbf{v}_i$ has $n$ elements
After SVD you can represent $A$ using $r(m+n+1)$ elements ($r$ vectors $\mathbf{u}_i$, $r$ vectors $\mathbf{v}_i$, and $r$ values $\sigma_i$)
If the rank is very small then this would lead to significant compression
Even further compression can be obtained by throwing away terms corresponding to very small σs
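A small sketch of this storage argument (the sizes, the rank-10 construction, and the tolerance are illustrative assumptions): keep only the terms with non-negligible $\sigma$ and compare element counts.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 200, 150, 10
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # a rank-10 matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-10 * s[0]))            # number of non-negligible singular values (10 here)

A_hat = (U[:, :k] * s[:k]) @ Vt[:k, :]       # sum of the first k rank-one terms
print(np.allclose(A, A_hat))                 # True: nothing is lost for an exactly low-rank A
print(m * n, k * (m + n + 1))                # 30000 stored numbers vs 3510 after SVD
```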
Fun with flags :-)

Original Image: 1200 x 800
Lots of redundancy: rank $\ll$ 800

Puzzle: What is the rank of this flag?
Best rank-k approximation
||A||_F = \sqrt{\sum_{i=1}^m\sum_{j=1}^n |A_{ij}|^2}
(Frobenius norm)
A=\sigma_1\mathbf{u_1}\mathbf{v_1}^\top+\sigma_2\mathbf{u_2}\mathbf{v_2}^\top+\cdots+\sigma_k\mathbf{u_k}\mathbf{v_k}^\top+\cdots+\sigma_r\mathbf{u_r}\mathbf{v_r}^\top
\hat{A}_k=\sigma_1\mathbf{u_1}\mathbf{v_1}^\top+\sigma_2\mathbf{u_2}\mathbf{v_2}^\top+\cdots+\sigma_k\mathbf{u_k}\mathbf{v_k}^\top
(the rank-$k$ approximation of $A$: drop the last $r-k$ terms)
Theorem: SVD gives the best rank-k approximation of the matrix A
i.e., $||A-\hat{A}_k||_F$ is minimised when
\hat{A}_k=U_k\Sigma_kV_k^\top
we will not prove this
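The theorem is stated here without proof, but it is easy to probe numerically. The sketch below (the sizes and the random competitor are illustrative assumptions) checks that the truncation error equals the square root of the sum of the dropped $\sigma^2$, and that one alternative rank-$k$ approximation (projection onto a random $k$-dimensional column space) does no better.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 30, 5
A = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]                      # rank-k truncation from the SVD

err_svd = np.linalg.norm(A - A_k, 'fro')
print(np.isclose(err_svd, np.sqrt(np.sum(s[k:] ** 2))))   # True: error = sqrt(sum of dropped sigma^2)

# A competing rank-k approximation: project the columns of A onto a random k-dim subspace
Q, _ = np.linalg.qr(rng.standard_normal((m, k)))          # random orthonormal m x k basis
err_other = np.linalg.norm(A - Q @ (Q.T @ A), 'fro')
print(err_svd <= err_other)                               # True, as the theorem promises
```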
Still hate eigenvalues?

Do you remember this guy?

Do you remember how we beat him?
Time Travel!

Do you remember who figured out time travel?
Do you remember how?
And with that, I rest my case!
Summary of the course
(in 3 pictures)



Summary of the course
(in 6 great theorems)
Source: Introduction to Linear Algebra, Prof. Gilbert Strang

By Mitesh Khapra