Day 10:

More rank theorems

Theorem 1. \(\text{rank}(AB)\leq \min\{\text{rank}(A),\text{rank}(B)\}\).

Rank theorems

Proof.  

  • Note that the columns of \(AB\) are all in \(C(A)\), hence the dimension of \(C(AB)\) is at most the dimension of \(C(A)\), that is, \[\text{rank}(AB)\leq \text{rank}(A).\]
  • \(\text{rank}(AB)=\text{rank}((AB)^{\top}) = \text{rank}(B^{\top}A^{\top})\)
  • By the first bullet \(\text{rank}(B^{\top}A^{\top})\leq \text{rank}(B^{\top}).\)
  • Since \(\text{rank}(B^{\top})=\text{rank}(B)\) we have \[\text{rank}(AB) = \text{rank}(B^{\top}A^{\top})\leq \text{rank}(B^{\top}) = \text{rank}(B).\]
  • Hence, \[\text{rank}(AB)\leq \text{rank}(A)\quad\text{ and }\quad\text{rank}(AB)\leq \text{rank}(B).\]
  • This is the same as \[\text{rank}(AB)\leq \min\{\text{rank}(A),\text{rank}(B)\}.\]
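As a quick sanity check, the bound can be verified numerically. A minimal sketch in SymPy, with hypothetical matrices chosen for illustration:

```python
from sympy import Matrix

# Hypothetical matrices chosen for illustration
A = Matrix([[1, 2], [2, 4], [0, 1]])   # rank 2
B = Matrix([[1, 1], [1, 1]])           # rank 1

# rank(AB) <= min(rank(A), rank(B))
assert (A * B).rank() <= min(A.rank(), B.rank())
```

Here \(\text{rank}(AB)=1=\min\{2,1\}\), so the bound is attained with equality.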

Theorem 2. \(\text{rank}(A+B)\leq \text{rank}(A) + \text{rank}(B)\).

Proof.  

  • Let \(v_{1},\ldots,v_{k}\) be a basis for \(C(A)\), where \(k=\text{rank}(A)\)
  • Let \(w_{1},\ldots,w_{\ell}\) be a basis for \(C(B)\), where \(\ell=\text{rank}(B)\)
  • The columns of \(A+B\) are in \(\text{span}\{v_{1},\ldots,v_{k},w_{1},\ldots,w_{\ell}\}\)
  • So, \(C(A+B)\subset\text{span}\{v_{1},\ldots,v_{k},w_{1},\ldots,w_{\ell}\}\) 
  • Hence \[\text{rank}(A+B)\leq k+\ell = \text{rank}(A)+\text{rank}(B)\]
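This bound can also be spot-checked. A sketch in SymPy, with hypothetical rank-one matrices:

```python
from sympy import Matrix

# Hypothetical rank-one matrices chosen for illustration
A = Matrix([[1, 0], [0, 0]])
B = Matrix([[0, 0], [0, 1]])

# rank(A + B) <= rank(A) + rank(B); here equality holds: 2 = 1 + 1
assert (A + B).rank() <= A.rank() + B.rank()
```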

Lemma. \(N(A)=N(A^{\top}A).\)

Proof.  First we show that \(N(A)\subset N(A^{\top}A)\).

  • \(x\in N(A)\) \(\Rightarrow\) \(Ax=0\) \(\Rightarrow\) \(A^{\top}Ax=0\) \(\Rightarrow\) \(x\in N(A^{\top}A)\).

Next, we show the reverse inclusion \(N(A)\supset N(A^{\top}A).\)

  • Let \(x\in N(A^{\top}A)\)
  • Then \(A^{\top}Ax=0\).
  • Since \(A^{\top}(Ax)=0\), we have \(Ax\in N(A^{\top})\).
  • But \(Ax\in C(A)\).
  • We have already shown that \(C(A)\cap N(A^{\top}) = \{0\}\). 
  • Since \(Ax\in N(A^{\top})\) and \(Ax\in C(A)\), this implies \(Ax=0\), and hence \(x\in N(A)\)
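The lemma can likewise be checked numerically. A sketch in SymPy with a hypothetical rank-deficient matrix, so that the null spaces are nontrivial:

```python
from sympy import Matrix, zeros

# A hypothetical rank-deficient matrix (second row is twice the first)
A = Matrix([[1, 2, 3], [2, 4, 6]])

basis_A   = A.nullspace()          # basis of N(A)
basis_AtA = (A.T * A).nullspace()  # basis of N(A^T A)

# The two null spaces have the same dimension...
assert len(basis_A) == len(basis_AtA)
# ...and every basis vector of N(A^T A) is killed by A
assert all(A * v == zeros(2, 1) for v in basis_AtA)
```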

Theorem 3. \(\text{rank}(A^{\top}A)=\text{rank}(AA^{\top}) = \text{rank}(A) = \text{rank}(A^{\top})\).

Proof.  

  • By the lemma \(N(A) = N(A^{\top}A)\).
  • Let \(v_{1},\ldots,v_{k}\) be a basis for \(C(A)\).
  • Then \(A^{\top}v_{1},\ldots,A^{\top}v_{k}\) span \(C(A^{\top}A)\), since each column of \(A^{\top}A\) is \(A^{\top}\) applied to a column of \(A\), and the columns of \(A\) lie in \(\text{span}\{v_{1},\ldots,v_{k}\}\).
  • We claim that this set of vectors is a basis.
  • Assume we have scalars \(\alpha_{1},\ldots,\alpha_{k}\) so that \[0 = \alpha_{1}A^{\top}v_{1} + \cdots + \alpha_{k}A^{\top}v_{k} = A^{\top}(\alpha_{1}v_{1} + \cdots + \alpha_{k}v_{k})\]
  • Set \(x = \alpha_{1}v_{1} + \cdots + \alpha_{k}v_{k}\) and note that \(x\in C(A)\).
  • There is some vector \(y\) so that \(x=Ay\).
  • Since \(A^{\top}x=0\) and \(x=Ay\), we get \(A^{\top}Ay=0\).
  • By the lemma, \(A^{\top}Ay=0\) \(\Rightarrow\) \(Ay=0\)
  • \(Ay=0\) \(\Rightarrow\) \(x=0\)
  • \(v_{1},\ldots,v_{k}\) is independent \(\Rightarrow\) the \(\alpha_{i}\)'s are all zero, so \(A^{\top}v_{1},\ldots,A^{\top}v_{k}\) is a basis for \(C(A^{\top}A)\).
  • Hence \(\text{rank}(A^{\top}A)=k=\text{rank}(A)\). Replacing \(A\) by \(A^{\top}\) gives \(\text{rank}(AA^{\top})=\text{rank}(A^{\top})\), and \(\text{rank}(A^{\top})=\text{rank}(A)\). \(\Box\)
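A quick numerical check of the theorem in SymPy, using the \(2\times 3\) matrix that reappears in a worked example below:

```python
from sympy import Matrix

# The 2x3 matrix from the rref example later in the notes
A = Matrix([[2, 3, 1], [2, 3, -2]])

# All four ranks agree
assert A.rank() == A.T.rank() == (A.T * A).rank() == (A * A.T).rank() == 2
```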

Computing \(B\) such that \(BA=\text{rref}(A)\).

Theorem. Let \(A\) be an \(m\times n\) matrix. Let \(I\) be an \(m\times m\) identity matrix. If

\[\text{rref}([A\ |\ I]) = [D\ |\ B]\]

then \(BA=\text{rref}(A)\).

Proof. Since \([D\ \vert\ B]\) is in reduced row echelon form, so is its left block \(D\), and \(D\) is row equivalent to \(A\); by uniqueness of the rref, \[\text{rref}([A\ |\ I]) = [\text{rref}(A)\ |\ B].\] Let \(C\) be the product of elementary matrices such that

\[C\cdot [A\ \vert\ I] = \text{rref}([A\ \vert\ I])= [\text{rref}(A)\ \vert\ B].\]

Finally, note that \[C\cdot [A\ \vert\ I] = [CA\ \vert\ CI] = [CA\ \vert\ C],\] and hence \(C=B\) and \(BA=\text{rref}(A)\). \(\Box\)
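The theorem gives a concrete recipe, which is easy to carry out with exact arithmetic. A sketch in SymPy, using the \(2\times 3\) matrix from the example that follows:

```python
from sympy import Matrix, eye

A = Matrix([[2, 3, 1], [2, 3, -2]])

aug = A.row_join(eye(A.rows))   # form [A | I]
R, _ = aug.rref()               # rref([A | I]) = [rref(A) | B]
B = R[:, A.cols:]               # the right block is B
assert B * A == A.rref()[0]     # B*A equals rref(A)
```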

Example.

\[\text{rref}\left(\left[\begin{array}{rrr|rr} 2 & 3 & 1 & 1 & 0\\ 2 & 3 & -2 & 0 & 1 \end{array}\right]\right) = \left[\begin{array}{rrr|rr} 1 & 3/2 & 0 & 1/3 & 1/6\\ 0 & 0 & 1 & 1/3 & -1/3 \end{array}\right]\]

 

\[\left[\begin{array}{rrr} 1/3 & 1/6\\ 1/3 & -1/3 \end{array}\right]\left[\begin{array}{rrr} 2 & 3 & 1\\ 2 & 3 & -2 \end{array}\right] = \left[\begin{array}{rrr} 1 & 3/2 & 0\\ 0 & 0 & 1\end{array}\right]\]

The Inverse of a matrix

Definition. Given a square matrix \(A\), a square matrix \(B\) such that \(AB=BA=I\) is called the inverse of \(A\). The inverse of \(A\) is denoted \(A^{-1}\). If a matrix \(A\) has an inverse, then we say that \(A\) is invertible.

Examples.

  • If \(A=\begin{bmatrix} 1 & 1\\ 0 & 1\end{bmatrix}\), then \(A^{-1} = \begin{bmatrix} 1 & -1\\ 0 & 1\end{bmatrix}\)

 

  • If \(A = \begin{bmatrix} 1 & 0 & -2\\ 3  & 1 & 0\\ 0 & 0 & 1\end{bmatrix}\), then \(A^{-1} = \begin{bmatrix} 1 & 0 & 2\\ -3  & 1 & -6\\ 0 & 0 & 1\end{bmatrix}\)

Theorem. A matrix \(A\) is invertible if and only if \(A\) is square and \(\text{rref}(A)=I\).

Proof. Suppose \(A\) is square and \(\text{rref}(A)=I\). By the previous theorem there exists a matrix \(B\) so that \(BA=\text{rref}(A)\). From this we deduce that \(BA=I\). For each elementary matrix \(E\) there is an elementary matrix \(F\) such that \(FE=I\). Since \(B\) is a product of elementary matrices, \[B= E_{k}\cdot E_{k-1}\cdots E_{1}\]

We take \(F_{i}\) such that \(F_{i}E_{i}=I\) for each \(i\), and set  \[C = F_{1}\cdot F_{2}\cdots F_{k}\]

and we see that \(CB=I\). Finally, we have \[AB = (CB)(AB) = C(BA)B = C\,I\,B = CB = I.\] Hence \(AB=BA=I\), so \(A\) is invertible.

Now, suppose \(A\) is invertible. By definition \(A\) is square. If \(x\) is a vector such that \(Ax=0\), then \(x = A^{-1}Ax=A^{-1}0 = 0\). Thus, \(x=0\) is the only solution to \(Ax=0\). Hence, \(\{0\}=N(A) = N(\operatorname{rref}(A))\). This implies that every column of \(\operatorname{rref}(A)\) must have a pivot (there are no free variables). Since \(A\) is square, every row of \(\operatorname{rref}(A)\) then contains a pivot as well. This implies \(\operatorname{rref}(A)=I\). \(\Box\)
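The criterion translates directly into a small test. A sketch in SymPy; the function name is ours:

```python
from sympy import Matrix, eye

def is_invertible(A):
    """A matrix is invertible iff it is square and rref(A) = I."""
    return A.rows == A.cols and A.rref()[0] == eye(A.rows)

assert is_invertible(Matrix([[1, 1], [0, 1]]))      # example from above
assert not is_invertible(Matrix([[1, 2], [2, 4]]))  # hypothetical singular matrix
```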

Example. Let \[A = \begin{bmatrix} 2 & 3 & 4\\ 3 & 4 & 0\\ 5 & 7 & 4\end{bmatrix}\]

Is \(A\) invertible? If it is, find \(A^{-1}\).

Note that

\[\text{rref}\left(\left[\begin{array}{ccc|ccc} 2 & 3 & 4 & 1 & 0 & 0\\ 3 & 4 & 0 & 0 & 1 & 0\\ 5 & 7 & 4 & 0 & 0 & 1\end{array}\right]\right) = \left[\begin{array}{ccc|ccc} 1 & 0 &-16 & 0 & 7 & -4\\ 0 & 0 & 12 & 0 & -5 & 3\\ 0 & 0 & 0 & 1 & 1 & -1\end{array}\right]\]
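This computation can be confirmed in SymPy; the last row of the right block also exhibits the row dependence that makes the matrix singular:

```python
from sympy import Matrix, eye

A = Matrix([[2, 3, 4], [3, 4, 0], [5, 7, 4]])

assert A.rref()[0] != eye(3)   # rref(A) is not the identity, so A is not invertible
# The row [1, 1, -1] from the right block shows row1 + row2 - row3 = 0:
assert Matrix([[1, 1, -1]]) * A == Matrix([[0, 0, 0]])
```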

Example.  Let \[A = \begin{bmatrix} 0 & -1 & -2\\ 1 & 1 & 1\\ -1 & -1 & 0\end{bmatrix}\]

Is \(A\) invertible? If it is, find \(A^{-1}\).

Note that

\[\text{rref}\left(\left[\begin{array}{ccc|ccc} 0 & -1 & -2 & 1 & 0 & 0\\ 1 & 1 & 1 & 0 & 1 & 0\\ -1 & -1 & 0& 0 & 0 & 1\end{array}\right]\right) = \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 2 & 1\\ 0 & 1 & 0 & -1 & -2 & -2\\ 0 & 0 & 1 & 0 & 1 & 1\end{array}\right]\]

Since \(\text{rref}(A)= I\) we see that \(A\) is invertible. Moreover, \[A^{-1} = \begin{bmatrix} 1 & 2 & 1\\ -1 & -2 & -2\\ 0 & 1 & 1\end{bmatrix}\]
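A quick check of this example in SymPy:

```python
from sympy import Matrix, eye

A = Matrix([[0, -1, -2], [1, 1, 1], [-1, -1, 0]])
A_inv = Matrix([[1, 2, 1], [-1, -2, -2], [0, 1, 1]])

# Both products give the identity, so A_inv really is the inverse
assert A * A_inv == eye(3) and A_inv * A == eye(3)
assert A.inv() == A_inv
```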

End Day 10

Linear Algebra Day 10

By John Jasper
