CS6015: Linear Algebra and Random Processes
Lecture 4: Gaussian Elimination, LU factorisation
Learning Objectives
Why did elimination work so well in the last lecture?
What is LU factorisation?
How to represent Gaussian Elimination as matrix multiplication?
What is the intuition behind 0 and infinite solutions?
(for today's lecture)
The bigger picture
\begin{bmatrix}
~~~&~~~&~~~\\
~~~&~~~&~~~\\
~~~&~~~&~~~\\
\end{bmatrix}
m < n
a peek into the future
m=n
\begin{bmatrix}
~~~&~~~&~~~&~~~&~~~\\
~~~&~~~&~~~&~~~&~~~\\
~~~&~~~&~~~&~~~&~~~\\
\end{bmatrix}
\begin{bmatrix}
~~~&~~~&\\
~~~&~~~&\\
~~~&~~~&\\
~~~&~~~&\\
~~~&~~~&\\
~~~&~~~&
\end{bmatrix}
m > n
rank =
A
A
A
(we are focusing on this nice well-behaved case for now)
(These two are the more interesting cases that we will come to a bit later in the course)
Recap
\begin{bmatrix}
1&2&-1\\
-1&2&-3\\
2&1&2
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-5\\
6
\end{bmatrix}
\begin{bmatrix}
1&2&-1\\
0&4&-4\\
0&0&1
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-4\\
1
\end{bmatrix}
(row 2 + row 1)
(row 3 - 2*row 1)
(row 3 + 3/4*row 2)
A
\mathbf{x}
\mathbf{b}
U
\mathbf{x}
\mathbf{c}
x_3 = 1
4x_2 -4(1) = -4
x_2 = 0
x_1 + 2(0) - (1) = 1
x_1 = 2
back-substitution
Gaussian Elimination
pivots
(along the diagonal)
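The elimination and back-substitution steps above can be sketched in code. This is an illustrative NumPy sketch (the slides themselves use no code), run on the same 3x3 system:

```python
import numpy as np

# The system from the slides: A x = b
A = np.array([[1., 2., -1.],
              [-1., 2., -3.],
              [2., 1., 2.]])
b = np.array([1., -5., 6.])

# Forward elimination: reduce A to upper-triangular U, carrying b along as c
U, c, n = A.copy(), b.copy(), len(b)
for k in range(n):                 # pivot column k
    for i in range(k + 1, n):      # rows below the pivot
        m = U[i, k] / U[k, k]      # multiplier (assumes a non-zero pivot)
        U[i, k:] -= m * U[k, k:]
        c[i] -= m * c[k]

# Back-substitution: solve U x = c from the last equation upwards
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]

print(x)  # [2. 0. 1.], matching x_1 = 2, x_2 = 0, x_3 = 1
```

The intermediate `U` and `c` here reproduce the eliminated system on the slide: pivots 1, 4, 1 along the diagonal.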
The good case
\begin{bmatrix}
1&2&-1\\
-1&2&-3\\
2&1&2
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-5\\
6
\end{bmatrix}
\begin{bmatrix}
1&2&-1\\
0&4&-4\\
0&0&1
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-4\\
1
\end{bmatrix}
A
\mathbf{x}
\mathbf{b}
U
\mathbf{x}
\mathbf{c}
m = n
Gaussian Elimination
non-zero pivots
n
unique solution
1
The not-so-good cases
\begin{bmatrix}
1&2&-1\\
-1&2&-3\\
2&1&1
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-5\\
6
\end{bmatrix}
(equation 2 + equation 1)
(equation 3 - 2*equation 1)
\begin{bmatrix}
1&2&-1\\
0&4&-4\\
0&-3&3
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-4\\
4
\end{bmatrix}
(equation 3 + 3/4*equation 2)
\begin{bmatrix}
1&2&-1\\
0&4&-4\\
0&0&0
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-4\\
1
\end{bmatrix}
The not-so-good cases
\begin{bmatrix}
1&2&-1\\
0&4&-4\\
0&0&0
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-4\\
1
\end{bmatrix}
0 = 1
4x_2 -4x_3 = -4
x_1 + 2x_2 - x_3 = 1
We have a zero pivot
The last equation cannot be satisfied
0 solutions
(switch to geogebra)
Notes: plot these 3 equations
The not-so-good cases
\begin{bmatrix}
1&2&-1\\
-1&2&-3\\
2&1&1
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-5\\
5
\end{bmatrix}
(equation 2 + equation 1)
(equation 3 - 2*equation 1)
\begin{bmatrix}
1&2&-1\\
0&4&-4\\
0&-3&3
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-4\\
3
\end{bmatrix}
(equation 3 + 3/4*equation 2)
\begin{bmatrix}
1&2&-1\\
0&4&-4\\
0&0&0
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-4\\
0
\end{bmatrix}
The not-so-good cases
\begin{bmatrix}
1&2&-1\\
0&4&-4\\
0&0&0
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-4\\
0
\end{bmatrix}
0 = 0
4x_2 -4x_3 = -4
x_1 + 2x_2 - x_3 = 1
We have a zero pivot
only two equations need to be satisfied
infinite solutions (intersection of two planes)
(switch to geogebra)
Notes: plot these 3 equations
The not-so-good cases
Will we have a solution for every b?
(some more intuition)
or
Which are the b's for which we will not have 0 solutions?
\begin{bmatrix}
1&2&-1\\
-1&2&-3\\
2&1&2
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
b_1\\
b_2\\
b_3
\end{bmatrix}
x_1\begin{bmatrix}
1\\
-1\\
2
\end{bmatrix}
+ x_2\begin{bmatrix}
2\\
2\\
1
\end{bmatrix}
+ x_3\begin{bmatrix}
-1\\
-3\\
2
\end{bmatrix}
=\begin{bmatrix}
b_1\\
b_2\\
b_3
\end{bmatrix}
(those b's which are linear combinations of the columns of A)
The not-so-good cases
(some more intuition)
Which are the b's for which we will have infinite solutions?
\begin{bmatrix}
1&-1&-1\\
-1&1&-3\\
2&-2&2
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
b_1\\
b_2\\
b_3
\end{bmatrix}
x_1\begin{bmatrix}
1\\
-1\\
2
\end{bmatrix}
+ x_2\begin{bmatrix}
-1\\
1\\
-2
\end{bmatrix}
+ x_3\begin{bmatrix}
-1\\
-3\\
2
\end{bmatrix}
=\begin{bmatrix}
b_1\\
b_2\\
b_3
\end{bmatrix}
(those b's which can be expressed as linear combinations of the columns of A in multiple ways)
\mathbf{b}=\begin{bmatrix}
-2\\
-6\\
4
\end{bmatrix}
\mathbf{x}=\begin{bmatrix}
0\\
0\\
2
\end{bmatrix}
,\begin{bmatrix}
1\\
1\\
2
\end{bmatrix}
\dots,\begin{bmatrix}
c\\
c\\
2
\end{bmatrix}
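This family of solutions is easy to check numerically. A hedged NumPy sketch (not part of the original slides), using the singular matrix above:

```python
import numpy as np

# Singular matrix from the slides: column 2 is -1 times column 1
A = np.array([[1., -1., -1.],
              [-1., 1., -3.],
              [2., -2., 2.]])
b = np.array([-2., -6., 4.])

# Every x = [c, c, 2] solves A x = b: the contributions of the first
# two columns cancel, so only x_3 = 2 is pinned down
for c in [0., 1., -3.5, 100.]:
    x = np.array([c, c, 2.])
    assert np.allclose(A @ x, b)

print(np.linalg.matrix_rank(A))  # 2, not 3: the columns are dependent
```

Because one column is redundant, any b in the plane spanned by the remaining two columns can be reached in infinitely many ways.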
Our current focus is on the good case
\begin{bmatrix}
~~~&~~~&~~~\\
~~~&~~~&~~~\\
~~~&~~~&~~~\\
\end{bmatrix}
m < n
m=n
\begin{bmatrix}
~~~&~~~&~~~&~~~&~~~\\
~~~&~~~&~~~&~~~&~~~\\
~~~&~~~&~~~&~~~&~~~\\
\end{bmatrix}
\begin{bmatrix}
~~~&~~~&\\
~~~&~~~&\\
~~~&~~~&\\
~~~&~~~&\\
~~~&~~~&\\
~~~&~~~&
\end{bmatrix}
m > n
rank =
A
A
A
n~non\_zero~pivots
Back to the good case
\begin{bmatrix}
1&2&-1\\
-1&2&-3\\
2&1&2
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-5\\
6
\end{bmatrix}
\begin{bmatrix}
1&2&-1\\
0&4&-4\\
0&0&1
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix}
=\begin{bmatrix}
1\\
-4\\
1
\end{bmatrix}
(row 2 + row 1)
(row 3 - 2*row 1)
(row 3 + 3/4*row 2)
A
\mathbf{x}
\mathbf{b}
U
\mathbf{x}
\mathbf{c}
Gaussian Elimination
pivots
(along the diagonal)
Step 2,1: get 0 in position 2,1
Step 3,1: get 0 in position 3,1
Step 3,2: get 0 in position 3,2
How do we represent the above steps as matrix operations?
Recap
(Fun with matrix multiplication)
B =\begin{bmatrix}
1 & 2 & 2 & 0\\
2 & 1 & 2 & 1\\
0 & 2 & 1 & 2\\
\end{bmatrix}
E =\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & -2 & 1\\
\end{bmatrix}
EB =
\begin{bmatrix}
1 & 2 & 2 & 0\\
2 & 1 & 2 & 1\\
-4 & 0 & -3 & 0\\
\end{bmatrix}
(subtracting 2 times row 2 from row 3)
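This row operation can be verified directly. An illustrative NumPy sketch (the matrices are those on the slide):

```python
import numpy as np

B = np.array([[1, 2, 2, 0],
              [2, 1, 2, 1],
              [0, 2, 1, 2]])

# E is the identity with -2 in position (3, 2): multiplying on the
# left subtracts 2 times row 2 from row 3 and leaves rows 1-2 alone
E = np.array([[1, 0,  0],
              [0, 1,  0],
              [0, -2, 1]])

print(E @ B)  # last row becomes [-4, 0, -3, 0]
```

Left-multiplication by E acts on rows; this is why a whole elimination pass can be written as a product of such matrices.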
Gaussian Elimination as matrix operations
\begin{bmatrix}
1&2&-1\\
-1&2&-3\\
2&1&2
\end{bmatrix}
\begin{bmatrix}
1&2&-1\\
0&4&-4\\
0&0&1
\end{bmatrix}
A
U
\begin{bmatrix}
1 & 0 & 0\\
1 & 1 & 0\\
0 & 0 & 1\\
\end{bmatrix}
\underbrace{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}_{Elementary~Matrices}
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
-2 & 0 & 1\\
\end{bmatrix}
E_{31}
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & \frac{3}{4} & 1\\
\end{bmatrix}
E_{32}
(row 2 + row 1)
(row 3 - 2*row 1)
(row 3 + 3/4*row 2)
Step 2,1: get 0 in position 2,1
Step 3,1: get 0 in position 3,1
Step 3,2: get 0 in position 3,2
E_{21}
=
Gaussian Elimination as matrix operations
\begin{bmatrix}
1&2&-1&1\\
-1&2&-3&-5\\
2&1&2&6
\end{bmatrix}
\begin{bmatrix}
1&2&-1&1\\
0&4&-4&-4\\
0&0&1&1
\end{bmatrix}
\underbrace{A~\mathbf{b}}_{Augmented~Matrix}
U~\mathbf{c}
\begin{bmatrix}
1 & 0 & 0\\
1 & 1 & 0\\
0 & 0 & 1\\
\end{bmatrix}
\underbrace{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}_{Elementary~Matrices}
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
-2 & 0 & 1\\
\end{bmatrix}
E_{31}
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & \frac{3}{4} & 1\\
\end{bmatrix}
E_{32}
(row 2 + row 1)
(row 3 - 2*row 1)
(row 3 + 3/4*row 2)
Step 2,1: get 0 in position 2,1
Step 3,1: get 0 in position 3,1
Step 3,2: get 0 in position 3,2
E_{21}
=
\mathbf{b}=\begin{bmatrix}
1\\-5\\6
\end{bmatrix}
\quad
\mathbf{c}=\begin{bmatrix}
1\\-4\\1
\end{bmatrix}
Gaussian Elimination as matrix operations
E_{32}E_{31}E_{21}A = U
(E_{32}E_{31}E_{21})A = U
associativity law
EA = U
(E = E_{32}E_{31}E_{21})
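Collapsing the three elementary matrices into one E can be checked numerically. A hedged NumPy sketch of the slide's computation:

```python
import numpy as np

A = np.array([[1., 2., -1.],
              [-1., 2., -3.],
              [2., 1., 2.]])

# One elementary matrix per elimination step
E21 = np.array([[1., 0., 0.], [1., 1., 0.], [0., 0., 1.]])    # row 2 + row 1
E31 = np.array([[1., 0., 0.], [0., 1., 0.], [-2., 0., 1.]])   # row 3 - 2*row 1
E32 = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0.75, 1.]])  # row 3 + 3/4*row 2

E = E32 @ E31 @ E21  # associativity: collapse the three steps into one matrix
U = E @ A

print(E)  # note the "messy" entry -5/4 in position (3, 1)
print(U)  # upper triangular
```

By associativity it does not matter whether E is formed first and applied to A, or the three steps are applied one at a time.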
Gaussian Elimination as matrix operations
\begin{bmatrix}
1 & 0 & 0\\
1 & 1 & 0\\
0 & 0 & 1\\
\end{bmatrix}
\underbrace{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}_{Elementary~Matrices}
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
-2 & 0 & 1\\
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & \frac{3}{4} & 1\\
\end{bmatrix}
= \begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & \frac{3}{4} & 1\\
\end{bmatrix}
E_{31}
E_{32}
E_{21}
E
\begin{bmatrix}
1 & 0 & 0\\
1 & 1 & 0\\
-2 & 0 & 1\\
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0\\
1 & 1 & 0\\
\frac{-5}{4} & \frac{3}{4} & 1\\
\end{bmatrix}
\begin{bmatrix}
1&2&-1\\
-1&2&-3\\
2&1&2
\end{bmatrix}
=\begin{bmatrix}
1&2&-1\\
0&4&-4\\
0&0&1
\end{bmatrix}
A
U
LU factorisation
\begin{bmatrix}
1&1\\
1&2
\end{bmatrix}
A
\begin{bmatrix}
1&0\\
-1&1
\end{bmatrix}
=\begin{bmatrix}
1&1\\
0&1
\end{bmatrix}
E
U
We know that an elementary matrix is invertible
(in fact we even know how to compute the inverse!)
EA = U
A = E^{-1}U
\begin{bmatrix}
1&1\\
1&2
\end{bmatrix} =
\begin{bmatrix}
1&0\\
1&1
\end{bmatrix}
\begin{bmatrix}
1&1\\
0&1
\end{bmatrix}
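The 2x2 case above can be sketched in a few lines of NumPy (an illustration, not part of the slides):

```python
import numpy as np

A = np.array([[1., 1.],
              [1., 2.]])
E = np.array([[1., 0.],
              [-1., 1.]])   # row 2 - row 1

U = E @ A                   # [[1, 1], [0, 1]]
L = np.linalg.inv(E)        # [[1, 0], [1, 1]]: just flip the multiplier's sign

print(np.allclose(L @ U, A))  # True: A = E^{-1} U
```

Inverting E only flips the sign of the multiplier: undoing "subtract row 1" is "add row 1 back".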
LU factorisation
E_{32}E_{31}E_{21}A = U
A = (E_{32}E_{31}E_{21})^{-1}U
\begin{bmatrix}
1 & 0 & 0\\
1 & 1 & 0\\
0 & 0 & 1\\
\end{bmatrix}
\underbrace{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}_{Elementary~Matrices}
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
-2 & 0 & 1\\
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & \frac{3}{4} & 1\\
\end{bmatrix}
E_{31}
E_{32}
E_{21}
\begin{bmatrix}
1&2&-1\\
-1&2&-3\\
2&1&2
\end{bmatrix}
\begin{bmatrix}
1&2&-1\\
0&4&-4\\
0&0&1
\end{bmatrix}
\therefore A = E_{21}^{-1}E_{31}^{-1}E_{32}^{-1}U
\begin{bmatrix}
1&2&-1\\
-1&2&-3\\
2&1&2
\end{bmatrix}=
(Proof in HW1)
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & -\frac{3}{4} & 1\\
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
2 & 0 & 1\\
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0\\
-1 & 1 & 0\\
0 & 0 & 1\\
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0\\
-1 & 1 & 0\\
2 & -\frac{3}{4} & 1\\
\end{bmatrix}
A
L
U
(Unlike E, the multipliers sit nicely in the right positions in L)
E
\begin{bmatrix}
1 & 0 & 0\\
1 & 1 & 0\\
\frac{-5}{4} & \frac{3}{4} & 1\\
\end{bmatrix}
LU factorisation
EA = U
A = LU
lower
upper
triangular
L and U contain all the information about:
the matrix A and
the Gaussian Elimination of A
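Putting the whole lecture together, a hedged NumPy sketch: L holds the elimination multipliers (with flipped signs), A = LU holds exactly, and solving Ax = b splits into two triangular solves.

```python
import numpy as np

A = np.array([[1., 2., -1.],
              [-1., 2., -3.],
              [2., 1., 2.]])

# L stores each elimination multiplier, sign flipped, in its own slot:
# row 2 + row 1      -> l21 = -1
# row 3 - 2*row 1    -> l31 =  2
# row 3 + 3/4*row 2  -> l32 = -3/4
L = np.array([[1.,  0.,   0.],
              [-1., 1.,   0.],
              [2., -0.75, 1.]])
U = np.array([[1., 2., -1.],
              [0., 4., -4.],
              [0., 0.,  1.]])
assert np.allclose(L @ U, A)

# With A = LU, solving A x = b is two cheap triangular solves
b = np.array([1., -5., 6.])
c = np.linalg.solve(L, b)  # forward substitution: L c = b
x = np.linalg.solve(U, c)  # back substitution:    U x = c
print(x)  # [2. 0. 1.]
```

This is why LU is computed once and reused: for each new b, only the two triangular solves are repeated.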
Learning Objectives Achieved
Why did elimination work so well in the last lecture?
What is LU factorisation?
How to represent Gaussian Elimination as matrix multiplication?
What is the intuition behind 0 and infinite solutions?
(for today's lecture)
CS6015: Lecture 4
By Mitesh Khapra