Linear Systems

Numerical Methods

David Mayerich

Scalable Tissue Imaging and Modeling (STIM) Laboratory

Department of Electrical and Computer Engineering

Cullen College of Engineering

University of Houston


Linear Equations


Loop Currents with Kirchhoff's Laws


(7\Omega + 2\Omega + 6\Omega)x_1 - 2\Omega x_2 - 6\Omega x_3 = 300V
-2\Omega x_1 + (2\Omega + 5\Omega + 4\Omega + 1\Omega) x_2 - 4\Omega x_3 - 1\Omega x_4= 0
-6\Omega x_1 -4\Omega x_2 + (4\Omega + 9\Omega + 6\Omega) x_3 - 9\Omega x_4= 0
-1\Omega x_2 - 9\Omega x_3 + (9\Omega + 1\Omega + 2\Omega) x_4= 0
\begin{alignat*}{5} 15\Omega x_1 & {} - {} & 2\Omega x_2 & {} -{} & 6\Omega x_3 & {} +{} & 0\Omega x_4 & {}={} & 300V\\ -2\Omega x_1 & {} + {} & 12\Omega x_2 & {}-{} & 4\Omega x_3 & {} -{} & 1\Omega x_4 & {}={} & 0V\\ -6\Omega x_1 & {} - {} & 4\Omega x_2 & {} +{} & 19\Omega x_3 & {}-{} & 9\Omega x_4 & {}={} & 0V\\ 0\Omega x_1 & {} - {} & 1\Omega x_2 & {} -{} & 9\Omega x_3 & {} +{} & 12\Omega x_4 & {}={} & 0V \end{alignat*}
\begin{bmatrix} 15 & -2 & -6 & 0\\ -2 & 12 & -4 & -1\\ -6 & -4 & 19 & -9\\ 0 & -1 & -9 & 12 \end{bmatrix} \begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix} = \begin{bmatrix} 300\\ 0\\ 0\\ 0 \end{bmatrix}
\mathbf{A}\mathbf{x}=\mathbf{b}

Solutions to Linear Systems

  • Linear systems take the form: \(\mathbf{Ax}=\mathbf{b}\)

  • The purely analytical solution is: \(\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}\)
                                   where \(\mathbf{A}^{-1}\) is the inverse of \(\mathbf{A}\)

  • Options for solving for \(\mathbf{x}\) include:

    1. Calculate \(\mathbf{x}\) using a direct method (e.g., Gaussian elimination)

    2. Calculate \(\mathbf{x}\) using an indirect (often iterative) method

    3. Calculate \(\mathbf{A}^{-1}\) and multiply: \(\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}\)

  • We will cover each of these options, starting with direct methods


Gaussian Elimination

Row Reduction

Algorithm

Back Substitution


Gaussian Elimination

  • Gaussian elimination is a 2-step algorithm for directly computing \(\mathbf{x}\)

  1. Convert the augmented matrix \(\left[\mathbf{A}|\mathbf{b}\right]\) to row-echelon form

  2. Perform back-substitution to calculate the scalar values of \(\mathbf{x}\) from bottom to top


  • Row-echelon form:

    • Rows with non-zero elements are above rows with all zeros

    • The leading coefficient of row \(k\) is to the right of the leading coefficient of row \(k-1\)

    • All values in a column below a leading coefficient are zero

\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14}\\ 0 & a_{22} & a_{23} & a_{24}\\ 0 & 0 & a_{33} & a_{34}\\ 0 & 0 & 0 & a_{44} \end{bmatrix}
\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14}\\ 0 & a_{22} & a_{23} & a_{24}\\ 0 & 0 & a_{33} & a_{34} \end{bmatrix}
\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ 0 & a_{22} & a_{23}\\ 0 & 0 & a_{33}\\ 0 & 0 & 0 \end{bmatrix}

Row Reduction

  • Assuming \(\mathbf{A}\in\mathbb{R}^{n\times n}\) and \(\mathbf{b}\in \mathbb{R}^{n}\), build the augmented matrix \(\mathbf{C} = \left[\mathbf{A}|\mathbf{b}\right] \in \mathbb{R}^{n \times (n+1)}\)

  • For each pivot row \(i = 1, \ldots, n-1\) in the matrix

    • For each row \(k = i+1, \ldots, n\) below row \(i\)
      • Calculate a scale \(s = \frac{a_{k, i}}{a_{i, i}}\) (used to eliminate the coefficient below the pivot \(a_{i, i}\))
      • Subtract the scaled row \(i\) from row \(k\):  \(r_k \leftarrow r_k - s r_i\)


\left[ \begin{array}{ccc|c} 3 & 6 & 9 & 39 \\ 2 & 5 & -2 & 3 \\ 1 & 3 & -1 & 2 \end{array} \right]
\left[ \begin{array}{ccc|c} 3 & 6 & 9 & 39 \\ 0 & 1 & -8 & -23 \\ 0 & 1 & -4 & -11 \end{array} \right]
r_2 \leftarrow r_2 - \frac{2}{3}r_1
r_3 \leftarrow r_3 - \frac{1}{3}r_1
\left[ \begin{array}{ccc|c} 3 & 6 & 9 & 39 \\ 0 & 1 & -8 & -23 \\ 0 & 1 & -4 & -11 \end{array} \right]
\left[ \begin{array}{ccc|c} 3 & 6 & 9 & 39 \\ 0 & 1 & -8 & -23 \\ 0 & 0 & 4 & 12 \end{array} \right]
r_3 \leftarrow r_3 - 1r_2

Loop Currents (Row 1)


\left[ \begin{array}{cccc|c} 15.00 & -2.000 & -6.000 & 0.000 & 300.0\\ -2.000 & 12.00 & -4.000 & -1.000 & 0.000\\ -6.000 & -4.000 & 19.00 & -9.000 & 0.000\\ 0.000 & -1.000 & -9.000 & 12.00 & 0.000 \end{array} \right]
\left[ \begin{array}{cccc|c} 15.00 & -2.000 & -6.000 & 0.000 & 300.0\\ 0.000 & 11.73 & -4.800 & -1.000 & 40.00\\ 0.000 & -4.800 & 16.60 & -9.000 & 120.0\\ 0.000 & -1.000 & -9.000 & 12.00 & 0.000 \end{array} \right]
\begin{bmatrix} 15 & -2 & -6 & 0\\ -2 & 12 & -4 & -1\\ -6 & -4 & 19 & -9\\ 0 & -1 & -9 & 12 \end{bmatrix}
=\begin{bmatrix} 300\\ 0\\ 0\\ 0 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
r_2 \leftarrow r_2 - \left(-\frac{2}{15}\right)r_1
r_3 \leftarrow r_3 - \left(-\frac{6}{15}\right)r_1
r_4 \leftarrow r_4- \left(-\frac{0}{15}\right)r_1

Loop Currents (Row 2)


\left[ \begin{array}{cccc|c} 15.00 & -2.000 & -6.000 & 0.000 & 300.0\\ 0.000 & 11.73 & -4.800 & -1.000 & 40.00\\ 0.000 & 0.000 & 14.64 & -9.409 & 136.4\\ 0.000 & 0.000 & -9.409 & 11.91 & 3.410 \end{array} \right]
\left[ \begin{array}{cccc|c} 15.00 & -2.000 & -6.000 & 0.000 & 300.0\\ 0.000 & 11.73 & -4.800 & -1.000 & 40.00\\ 0.000 & -4.800 & 16.60 & -9.000 & 120.0\\ 0.000 & -1.000 & -9.000 & 12.00 & 0.000 \end{array} \right]
\begin{bmatrix} 15 & -2 & -6 & 0\\ -2 & 12 & -4 & -1\\ -6 & -4 & 19 & -9\\ 0 & -1 & -9 & 12 \end{bmatrix}
=\begin{bmatrix} 300\\ 0\\ 0\\ 0 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
r_3 \leftarrow r_3 - \left(-\frac{4.8}{11.73}\right)r_2
r_4 \leftarrow r_4- \left(-\frac{1}{11.73}\right)r_2

Loop Currents (Row 3)


\left[ \begin{array}{cccc|c} 15.00 & -2.000 & -6.000 & 0.000 & 300.0\\ 0.000 & 11.73 & -4.800 & -1.000 & 40.00\\ 0.000 & 0.000 & 14.64 & -9.409 & 136.4\\ 0.000 & 0.000 & 0.000 & 5.863 & 91.07 \end{array} \right]
r_4 \leftarrow r_4- \left(-\frac{9.409}{14.64}\right)r_3
\left[ \begin{array}{cccc|c} 15.00 & -2.000 & -6.000 & 0.000 & 300.0\\ 0.000 & 11.73 & -4.800 & -1.000 & 40.00\\ 0.000 & 0.000 & 14.64 & -9.409 & 136.4\\ 0.000 & 0.000 & -9.409 & 11.91 & 3.410 \end{array} \right]
\begin{bmatrix} 15 & -2 & -6 & 0\\ -2 & 12 & -4 & -1\\ -6 & -4 & 19 & -9\\ 0 & -1 & -9 & 12 \end{bmatrix}
=\begin{bmatrix} 300\\ 0\\ 0\\ 0 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}

Gaussian Elimination Algorithm


void gaussian(double* A, double* b, int N){
  for(int k=0; k<N-1; k++){ 		// for each pivot row
    for(int i=k+1; i<N; i++){ 		// for each row below the pivot
      double m = A[i*N+k] / A[k*N+k]; 	// calculate the scale factor
      for(int j=k; j<N; j++){ 		// for each coefficient
        A[i*N+j] -= m * A[k*N+j]; 	// subtract the scaled pivot row
      }
      b[i] -= m * b[k]; 			// forward elimination on b
    }
  }
}
  • What is the computational complexity?

Back Substitution


\left[ \begin{array}{cccc|c} 15.00 & -2.000 & -6.000 & 0.000 & 300.0\\ 0.000 & 11.73 & -4.800 & -1.000 & 40.00\\ 0.000 & 0.000 & 14.64 & -9.409 & 136.4\\ 0.000 & 0.000 & 0.000 & 5.863 & 91.07 \end{array} \right]
5.863 x_4=91.07
x_4=15.53
14.64 x_3-9.409(15.53)=136.4
x_3=19.30
11.73 x_2 -4.800(19.30) -1.000(15.53)=40.00
11.73 x_2 -92.64 -15.53=40.00
11.73 x_2 -108.2=40.00
11.73 x_2=148.2
x_2=12.63
15.00 x_1 -2.00(12.63) -6.000(19.30)=300.0
15.00 x_1 -25.26 -115.8=300.0
15.00 x_1 -141.1=300.0
15.00 x_1 = 441.1
x_1 = 29.41

Back Substitution - Algorithm


void backsub(double* A, double* b, double* x, int N){
  x[N-1] = b[N-1] / A[(N-1)*N+(N-1)];	// calculate x_n
  for(int i=N-2; i>=0; i--){ 		// for each equation, bottom to top
    double sum = b[i]; 				// start with the known value
    for(int j=i+1; j<N; j++){ 		// for each already-solved unknown
      sum -= A[i*N+j] * x[j]; 		// subtract the known terms
    }
    x[i]=sum/A[i*N+i]; 				// calculate x_i
  }
}
  • What is the computational complexity?

  • Total complexity: \(O\left(n^3\right) + O\left(n^2\right) = O\left(n^3\right)\)

Loss of Precision

Testing Gaussian Elimination

Geometric Series

Compounding Errors


Testing Gaussian Elimination

  • We can test robustness by creating an artificial problem

  • Consider the geometric series:


p(t)=a+at+at^2+at^3+at^4+\cdots+at^{n-1}
=\sum_{j=1}^n at^{j-1}
  • A geometric series has an analytical solution:

p(t)=\sum_{j=1}^n at^{j-1}=a \frac{t^n - 1}{t-1}

where the closed form holds for any \(t \neq 1\) (the tests below use \(2\leq t \leq n+1\))

  • Try to recover the coefficient \(a\) for an \(n\)th order geometric series given \(n\) equations

  • For each equation:

\(a=1\) and \(t_i=1+i\)    where \(i=1\) to \(n\)

Linearized Geometric Series


\begin{align*} a_1 + (1+1)a_2 + \cdots + (1+1)^{n-1} a_n &= \frac{(1+1)^n-1}{(1+1)-1} = \frac{(1+1)^n-1}{1}\\ a_1 + (1+2)a_2 + \cdots + (1+2)^{n-1} a_n &= \frac{(1+2)^n-1}{(1+2)-1} = \frac{(1+2)^n-1}{2}\\ a_1 + (1+3)a_2 + \cdots + (1+3)^{n-1} a_n &= \frac{(1+3)^n-1}{(1+3)-1} = \frac{(1+3)^n-1}{3}\\ &\;\;\vdots\\ a_1 + (1+n)a_2 + \cdots + (1+n)^{n-1} a_n &= \frac{(1+n)^n-1}{(1+n)-1} = \frac{(1+n)^n-1}{n} \end{align*}

Geometric Series Example

  • For \(3\) equations \((n=3)\) and coefficient \(a=1\):


a + 2a + 2^2 a = \frac{2^3-1}{2-1}=7
a + 3a + 3^2 a = \frac{3^3-1}{3-1}=13
a + 4a + 4^2 a = \frac{4^3-1}{4-1}=21
\begin{bmatrix} 1 & 2 & 4\\ 1 & 3 & 9\\ 1 & 4 & 16 \end{bmatrix} \begin{bmatrix} a\\ a\\ a \end{bmatrix} = \begin{bmatrix} 7\\ 13\\ 21 \end{bmatrix}
  • For \(4\) equations \((n=4)\) and coefficient \(a=1\):

\begin{bmatrix} 1 & 2 & 4 & 8\\ 1 & 3 & 9 & 27\\ 1 & 4 & 16 & 64\\ 1 & 5 & 25 & 125 \end{bmatrix} \begin{bmatrix} a\\ a\\ a\\ a \end{bmatrix} = \begin{bmatrix} 15\\ 40\\ 85\\ 156 \end{bmatrix}

Testing Gaussian Elimination

  • Now we can test the robustness of the algorithm by adjusting \(n = 4, 5, 6, 7, \cdots\)

  • Compare the result vector \(x\) to the known values \(\mathbf{x}=[1, 1, \cdots, 1]^T\)

  • Using Gaussian Elimination to solve this system:

    • Precise results up to \(n=8\)

    • At \(n=9\), one result has a relative error \(\epsilon_r = 16{,}000\%\)

  • Next, we will discuss why this happens


Errors

  • Consider the following linear system:


0 x_1 + x_2=1
x_1 + x_2 = 2
  • The first step of Gaussian elimination results in division by zero (the pivot is \(a_{11}=0\))

  • If a numerical method fails for some value, it might be unstable near that value

Rounding Errors

  • Take a linear system involving some small number \(\epsilon\):


\epsilon x_1 + x_2 = 1
x_1 + x_2 = 2
  • What is the correct answer?

x_1 \approx 1
x_2 \approx 1
  • What will you actually get?

Rounding Errors

  • Row reduction:


\epsilon x_1 + x_2 = 1
x_1 + x_2 = 2
\begin{bmatrix} \epsilon & 1\\ 1 & 1 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} 1\\ 2 \end{bmatrix}
\begin{bmatrix} \epsilon & 1\\ 0 & 1 - \frac{1}{\epsilon} \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} 1\\ 2 - \frac{1}{\epsilon} \end{bmatrix}
  • Backsubstitution:

\left(1 - \frac{1}{\epsilon}\right)x_2 = 2 - \frac{1}{\epsilon}
x_2=\frac{2-\frac{1}{\epsilon}}{1 - \frac{1}{\epsilon}} \approx 1
  • If \(\frac{1}{\epsilon}\) is large, then both \(\left(2 - \frac{1}{\epsilon}\right)\) and \(\left(1 - \frac{1}{\epsilon}\right)\) approach \(-\frac{1}{\epsilon}\)

x_1 = \frac{1-x_2}{\epsilon} = \frac{0}{\epsilon}=0
\epsilon x_1 + x_2 = 1

Numerical Demonstration

  • Solve the following linear system keeping \(4\) significant figures


\begin{bmatrix} 0.1036 & 0.2122\\ 0.2081 & 0.4247 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} 0.7381\\ 0.9327 \end{bmatrix}
\begin{bmatrix} 0.1036 & 0.2122\\ 0.0000 & -0.001600 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} 0.7381\\ -0.5503 \end{bmatrix}
-0.001600x_2=-0.5503
x_2=343.9
0.1036x_1+0.2122(343.9)=0.7381
0.1036x_1+72.97=0.7381
0.1036x_1=-72.23
x_1=-697.2
\epsilon_1=\frac{|-722.7+697.2|}{|-722.7|} = 3.5\%
\epsilon_2=\frac{|356.3-343.9|}{|356.3|} = 3.5\%

correct values

x_1=-722.7
x_2=356.3

Rounding Errors - 2nd Example

  • This issue doesn't just happen when one coefficient is small

  • The problem is when one coefficient is small relative to other coefficients in the row


\begin{bmatrix} 1 & \frac{1}{\epsilon}\\ 1 & 1 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} \frac{1}{\epsilon}\\ 2 \end{bmatrix}
  • Row reduction:

\begin{bmatrix} 1 & \frac{1}{\epsilon}\\ 0 & \left(1-\frac{1}{\epsilon}\right) \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} \frac{1}{\epsilon}\\ 2 - \frac{1}{\epsilon} \end{bmatrix}
x_1 + \frac{1}{\epsilon}x_2 = \frac{1}{\epsilon}
x_1 + x_2 = 2
  • Back substitution:

x_2=\frac{2-\frac{1}{\epsilon}}{1-\frac{1}{\epsilon}} \approx 1
x_1=\frac{1}{\epsilon} - \frac{1}{\epsilon}x_2 \approx 0

correct values

x_2 \approx 1
x_1 \approx 1