Day 14: Linear inverse problems and solving \(Ax=b\)

Linear inverse problems

Given a linear map \(L:V\to W\) and a vector \(w\in W\), finding all \(v\in V\) such that

\[L(v) = w\]

is a linear inverse problem.

Examples.

  • If \(A\in\mathbb{R}^{m\times n}\) and \(b\in\mathbb{R}^{m}\) then finding the solutions to \(Ax=b\) is a linear inverse problem.
  • Finding all polynomials \(f(x)\in\mathbb{P}_{2}\) such that \(f'(x) = 2x+1\) is a linear inverse problem.
  • Solving the system of equations \[\begin{cases} x+y = 3\\ 2x-y=0\end{cases}\] is a linear inverse problem (see the sketch after this list).
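Here is a minimal sketch of the last bullet in SymPy (my choice of tool, not something the notes prescribe); the point is only that the inverse problem becomes a matrix equation.

```python
# Solve the system x + y = 3, 2x - y = 0 as a matrix equation Ax = b.
from sympy import Matrix, linsolve, symbols

x, y = symbols("x y")
A = Matrix([[1, 1], [2, -1]])   # coefficient matrix
b = Matrix([3, 0])              # right-hand side

print(linsolve((A, b), x, y))   # {(1, 2)}, i.e. x = 1, y = 2
```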

Linear inverse problems

Examples.

  • Given a set of vectors \(\{v_{1},v_{2},\ldots,v_{n}\}\) and a vector \(v\), finding coefficients \(a_{1},a_{2},\ldots,a_{n}\) such that \[\sum_{i=1}^{n}a_{i}v_{i}=v\] is a linear inverse problem.
  • Find all \(f(x)\in\mathbb{P}_{2}\) such that \(f(0)=0\) and \(f(1) = 1\).
  • Given sets of vectors \(\{x_{1},x_{2},\ldots,x_{k}\}\subset\mathbb{R}^{n}\) and \(\{y_{1},y_{2},\ldots,y_{k}\}\subset\mathbb{R}^{m}\). Find all matrices \(A\in\mathbb{R}^{m\times n}\) such that \(Ax_{i} = y_{i}\) for all \(i\in\{1,2,\ldots,k\}\).
  • Find all possible missing entries in a partially filled magic square.

Linear inverse problems

Suppose we have a linear map \(L:V\to W\) and a vector \(w\in W\) and we wish to solve the linear inverse problem:

\[L(v) = w.\]

If we choose bases \(S=\{v_{i}\}_{i=1}^{n}\) for \(V\) and \(T=\{w_{i}\}_{i=1}^{m}\) for \(W\), then we can find the matrix representation of \(L\) with respect to \(S\) and \(T\), call it \(A\). We can also find the coordinate vector of \(w\) with respect to \(T\). 

 

A vector \(x\in\mathbb{R}^{n}\) satisfies \(Ax=b\) if and only if \(x\) is the coordinate vector of \(v\in V\) with respect to the basis \(S\), where \(v\) satisfies \(L(v) = w.\)

Takeaway: Any linear inverse problem can be transformed into the problem of solving a matrix equation.

Example.  Find all \(f(x)\in\mathbb{P}_{2}\) such that \(f(0)=0\) and \(f(1) = 1\).

How is this a linear inverse problem?

Define the map \(L:\mathbb{P}_{2}\to\mathbb{R}^{2}\) by \[L(f(x)) = \begin{bmatrix} f(0)\\ f(1)\end{bmatrix}.\]

(You should verify that this function is linear.) Then, for \(w:=[0\ \ 1]^{\top}\) we are looking for all \(v\in\mathbb{P}_{2}\) such that \(L(v) = w\).

Convert it to a matrix equation:

Pick bases for \(\mathbb{P}_{2}\) and \(\mathbb{R}^{2}\): there are many choices that would work; I will choose

\[S = \left\{1,x,x^{2}\right\}\subset \mathbb{P}_{2}\quad\text{and}\quad T=\{e_{1},e_{2}\}\subset\mathbb{R}^{2}.\]

The matrix representation of \(L\) with respect to \(S\) and \(T\) is

\[\begin{bmatrix} 1 & 0 & 0\\ 1 & 1 & 1\end{bmatrix}.\]

Example continued.  Since the coordinate vector of \([0\ \ 1]^{\top}\) with respect to \(T\) is \([0\ \ 1]^{\top}\), we see that our matrix equation is

\[\begin{bmatrix} 1 & 0 & 0\\ 1 & 1 & 1\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\\ x_{3}\end{bmatrix} = \begin{bmatrix}0\\ 1\end{bmatrix}.\]

Note that \([0\ \ 1\ \ 0]^{\top}\) and \([0\ \ 0\ \ 1]^{\top}\) are both solutions to the matrix equation. Moreover, these are the coordinate vectors of \(x\) and \(x^{2}\), both of which are solutions to the original linear inverse problem!

What's the full set of solutions?

Finding a nice description of the set of solutions to a matrix equation is the topic of today's lecture! In this case, the full set of solutions to the matrix equation above is \[\left\{\begin{bmatrix} 0\\ 1\\ 0\end{bmatrix} + a\begin{bmatrix}0\\ -1\\ 1\end{bmatrix} : a\in\mathbb{R}\right\}.\] Thus, the set of solutions to the original linear inverse problem is \[\{x+a(x^2-x) : a\in\mathbb{R}\}.\]
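If you want to check this example in code, here is a minimal sketch using SymPy (an assumption on my part; any computer algebra system would do). It solves the matrix equation and maps the answer back to polynomials.

```python
# Redo the P_2 example: solve the matrix equation and read off the polynomials.
from sympy import Matrix

A = Matrix([[1, 0, 0],
            [1, 1, 1]])        # matrix of L with respect to S = {1, x, x^2} and T = {e1, e2}
b = Matrix([0, 1])             # coordinate vector of w = [0 1]^T

x0 = Matrix([0, 1, 0])         # coordinate vector of f(x) = x, a particular solution
print(A * x0 == b)             # True

print(A.nullspace())           # [Matrix([[0], [-1], [1]])], the coordinates of x^2 - x

# Every solution is x0 + a*(null space vector), i.e. f(x) = x + a(x^2 - x).
```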

Solving \(Ax=b\)

Let \(A\) be an \(m\times n\) matrix, and let \(b\in\mathbb{R}^{m}\). The set

\[\{x\in\mathbb{R}^{n} : Ax=b\}\]

is the set of solutions to \(Ax=b\).

There are two basic questions that one can ask about the equation \(Ax=b\).

  1. Does \(Ax=b\) have a solution? (Is the set of solutions nonempty?)
  2. What are all of the solutions to \(Ax=b\)? (Describe the set of solutions.)

Solving \(Ax=b\)

Example. Consider the matrix and vector

\[A = \begin{bmatrix} 2 & 4 & 0 & -2\\ 2 & 4 & 1 & 3\\ 0 & 0 & 1 & 1\end{bmatrix}\quad\text{and}\quad b=\begin{bmatrix} 0\\ 0\\ -4\end{bmatrix}\]

 

Does \(Ax=b\) have a solution?

 

Equivalently, is \(b\) an element of \(C(A)\)?

 

Equivalently, are there numbers \(x_{1},x_{2},x_{3},x_{4}\) such that

\[\left\{\begin{array}{rl} 2x_{1}+4x_{2}-2x_{4} & = 0\\ 2x_{1}+4x_{2}+x_{3}+3x_{4} & = 0\\ x_{3}+x_{4} & = -4\end{array}\right.?\]
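One way to answer the existence question in code: \(b\in C(A)\) exactly when appending \(b\) to the columns of \(A\) does not increase the rank. A minimal SymPy sketch (the tool is my assumption, not part of the lecture):

```python
# Is b in C(A)?  Compare rank(A) with rank([A | b]).
from sympy import Matrix

A = Matrix([[2, 4, 0, -2],
            [2, 4, 1,  3],
            [0, 0, 1,  1]])
b = Matrix([0, 0, -4])

print(A.rank() == Matrix.hstack(A, b).rank())   # True, so Ax = b has a solution
```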

Solving \(Ax=b\)

Example continued.

 

Take the matrix \(B\) so that \(BA=\text{rref}(A)\).

\[B = \frac{1}{4}\begin{bmatrix} 1 & 1 & -1\\ 1 & -1 & 5\\ -1 & 1 & -1\end{bmatrix}\]

 

And take the matrix \(C\) so that \(C\,\text{rref}(A) = A\).

\[C=B^{-1}=\begin{bmatrix} 2 & 0 & -2\\ 2 & 1 & 3\\ 0 & 1 & 1\end{bmatrix}\]
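Where do \(B\) and \(C\) come from? Row-reducing the block matrix \([A\mid I]\) applies to \(I\) the same row operations that turn \(A\) into \(\text{rref}(A)\), so the right-hand block ends up being \(B\). A minimal SymPy sketch (the tool is my assumption):

```python
# Recover B (with BA = rref(A)) by row-reducing [A | I], then C = B^{-1}.
from sympy import Matrix, eye

A = Matrix([[2, 4, 0, -2],
            [2, 4, 1,  3],
            [0, 0, 1,  1]])

R, _ = Matrix.hstack(A, eye(3)).rref()
rrefA, B = R[:, :4], R[:, 4:]

print(rrefA)            # [[1, 2, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(B)                # (1/4) * [[1, 1, -1], [1, -1, 5], [-1, 1, -1]], as above
print(B.inv())          # C = [[2, 0, -2], [2, 1, 3], [0, 1, 1]]
print(B * A == rrefA)   # True
```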

Solving \(Ax=b\)

Example continued.

 

Consider the matrix equation \(Ax=b\):

\[\begin{bmatrix} 2 & 4 & 0 & -2\\ 2 & 4 & 1 & 3\\ 0 & 0 & 1 & 1\end{bmatrix}\begin{bmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end{bmatrix}=\begin{bmatrix} 0\\ 0\\ -4\end{bmatrix}\]

Multiply both sides by \(B\) on the left:

\[\frac{1}{4}\begin{bmatrix} 1 & 1 & -1\\ 1 & -1 & 5\\ -1 & 1 & -1\end{bmatrix}\begin{bmatrix} 2 & 4 & 0 & -2\\ 2 & 4 & 1 & 3\\ 0 & 0 & 1 & 1\end{bmatrix}\begin{bmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end{bmatrix}=\frac{1}{4}\begin{bmatrix} 1 & 1 & -1\\ 1 & -1 & 5\\ -1 & 1 & -1\end{bmatrix}\begin{bmatrix} 0\\ 0\\ -4\end{bmatrix}\]

This simplifies to

\[\begin{bmatrix} 1 & 2 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end{bmatrix}=\begin{bmatrix} 1\\ -5\\ 1\end{bmatrix}\]

Solving \(Ax=b\)

Example continued.

 

\[\begin{bmatrix} 1 & 2 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end{bmatrix}=\begin{bmatrix} 1\\ -5\\ 1\end{bmatrix}\] 

\[\Leftrightarrow\ \left\{\begin{array}{rl}x_{1}+2x_{2} & = 1\\ x_{3} & = -5\\ x_{4} & = 1\end{array}\right.\]

From this, we see that \(\mathbf{x}_{0} = \begin{bmatrix} 1\\ 0\\ -5\\ 1\end{bmatrix}\) is a solution to \(\text{rref}(A)x = Bb\).

We got this by picking \(x_{2} = 0\); then \(x_{1}\), \(x_{3}\), and \(x_{4}\) were determined. But we could pick any number for \(x_{2}\) and we would get a different solution.
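A short SymPy sketch of this back-substitution step (again my choice of tool); the free variable \(x_{2}\) appears as the parameter in the answer.

```python
# Solve rref(A) x = Bb; x2 is free, and x2 = 0 gives x0 = (1, 0, -5, 1).
from sympy import Matrix, linsolve, symbols

x1, x2, x3, x4 = symbols("x1 x2 x3 x4")
rrefA = Matrix([[1, 2, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])
Bb = Matrix([1, -5, 1])

print(linsolve((rrefA, Bb), x1, x2, x3, x4))   # {(1 - 2*x2, x2, -5, 1)}
```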

Solving \(Ax=b\)

Example continued.

 

Thus, we see that \(\text{rref}(A)\mathbf{x}_{0} = Bb\), that is,

\[\begin{bmatrix} 1 & 2 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix} 1\\ 0\\ -5\\ 1\end{bmatrix}=\begin{bmatrix} 1\\ -5\\ 1\end{bmatrix}\]

Now, multiply both sides by \(C=B^{-1}\) on the left:

\[\begin{bmatrix} 2 & 0 & -2\\ 2 & 1 & 3\\ 0 & 1 & 1\end{bmatrix}\begin{bmatrix} 1 & 2 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix} 1\\ 0\\ -5\\ 1\end{bmatrix}=\begin{bmatrix} 2 & 0 & -2\\ 2 & 1 & 3\\ 0 & 1 & 1\end{bmatrix}\begin{bmatrix} 1\\ -5\\ 1\end{bmatrix}\]

Since \(C\,\text{rref}(A) = A\) and \(CB = I\), this is exactly

\[\begin{bmatrix} 2 & 4 & 0 & -2\\ 2 & 4 & 1 & 3\\ 0 & 0 & 1 & 1\end{bmatrix}\begin{bmatrix} 1\\ 0\\ -5\\ 1\end{bmatrix}=\begin{bmatrix} 0\\ 0\\ -4\end{bmatrix}\]

that is, \(A\mathbf{x}_{0} = b\).

Solving \(Ax=b\)

Finding a solution to \(Ax=b\).

Assume \(A\) is an \(m\times n\) matrix and \(b\in\mathbb{R}^{m}\).

  1. Find the matrix \(B\) so that \(BA = \text{rref}(A)\).
  2. Find a solution \(\mathbf{x}_{0}\) to \(\text{rref}(A)x = Bb\).
  3. The vector \(\mathbf{x}_{0}\) is also a solution to \(Ax=b\).

Thus, we need to know how to find a solution to \(\text{rref}(A)x=\tilde{b}\), where \(\tilde{b}=Bb\).
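Here is a minimal SymPy sketch of the three steps, packaged as a helper function (the name `particular_solution` is mine, not a library routine):

```python
from sympy import Matrix, eye, zeros

def particular_solution(A, b):
    """Return one solution x0 of Ax = b, or None if the system has no solution."""
    m, n = A.shape
    R, pivots = Matrix.hstack(A, eye(m)).rref()
    rrefA, B = R[:, :n], R[:, n:]          # step 1: B with B A = rref(A)
    b_tilde = B * b
    x0 = zeros(n, 1)                       # step 2: set every free variable to 0...
    for row, col in enumerate(pivots):
        if col < n:                        # ...and read the pivot variables off b_tilde
            x0[col] = b_tilde[row]
    if A * x0 != b:                        # no solution (b is not in C(A))
        return None
    return x0                              # step 3: x0 also solves Ax = b

A = Matrix([[2, 4, 0, -2], [2, 4, 1, 3], [0, 0, 1, 1]])
b = Matrix([0, 0, -4])
print(particular_solution(A, b))           # Matrix([[1], [0], [-5], [1]])
```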

Solving \(Ax=\mathbf{0}\)

Recall that \(N(A)\) is the set of solutions to \(Ax=\mathbf{0}\).

We also know that \(N(A) = N(\text{rref}(A))\).

We can find a basis \(v_{1},v_{2},\ldots,v_{k}\) for \(N(A) = N(\text{rref}(A))\)!

Thus, we know all the vectors in \(N(A) = \text{span}\{v_{1},v_{2},\ldots,v_{k}\}\).

This is a very nice description of the set of solutions to \(Ax=\mathbf{0}\).
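In SymPy (my assumption of tooling), such a basis is one call away; it is read off from the free columns of \(\text{rref}(A)\).

```python
# A basis for N(A): one vector for each free column of rref(A).
from sympy import Matrix

A = Matrix([[2, 4, 0, -2], [2, 4, 1, 3], [0, 0, 1, 1]])

for v in A.nullspace():
    print(v.T)               # Matrix([[-2, 1, 0, 0]])
```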

Theorem. Let \(A\) be an \(m\times n\) matrix, and let \(b\in\mathbb{R}^{m}\).

If \(\mathbf{x}_{0}\in\mathbb{R}^{n}\) is a solution to \(Ax=b\), then

\[\{x\in\mathbb{R}^{n} : Ax=b\} = \{\mathbf{x}_{0} + z : z\in N(A)\}.\]

Moreover, if \(\{v_{1},v_{2},\ldots,v_{k}\}\) is a basis for \(N(A)\), then

\[\{x\in\mathbb{R}^{n} : Ax=b\} = \{\mathbf{x}_{0} +a_{1}v_{1}+a_{2}v_{2}+\cdots+a_{k}v_{k} : a_{1},a_{2},\ldots,a_{k}\in\mathbb{R}\}.\]

Proof. Let \(x\) be any vector such that \(Ax=b\). Note that \[A(x-\mathbf{x}_{0}) = Ax-A\mathbf{x}_{0} = b-b = \mathbf{0}.\]

This shows that \(x-\mathbf{x}_{0}\in N(A)\). Set \(z = x-\mathbf{x}_{0}\). Since \[x=\mathbf{x}_{0} + (x-\mathbf{x}_{0}),\] we see that \(x\in \{\mathbf{x}_{0} + z : z\in N(A)\}\).

Next, assume \(y\in \{\mathbf{x}_{0} + z : z\in N(A)\}\), that is, \(y=\mathbf{x}_{0}+z\) for some \(z\in N(A)\). Then we see that

\[Ay = A(\mathbf{x}_{0}+z) = A\mathbf{x}_{0} + Az = b+\mathbf{0} = b.\]

Therefore, \(y\) is a solution to \(Ax=b\). \(\Box\)

This is a very nice description of the set of solutions to \(Ax=b\).
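A quick symbolic sanity check of the theorem on the running example, sketched in SymPy (my choice of tool): for every scalar \(a\), the vector \(\mathbf{x}_{0}+av\) still solves \(Ax=b\).

```python
# Check that x0 + a*v solves Ax = b for a symbolic scalar a.
from sympy import Matrix, symbols

A  = Matrix([[2, 4, 0, -2], [2, 4, 1, 3], [0, 0, 1, 1]])
b  = Matrix([0, 0, -4])
x0 = Matrix([1, 0, -5, 1])      # particular solution
v  = Matrix([-2, 1, 0, 0])      # spans N(A)

a = symbols("a")
print((A * (x0 + a * v)).expand())   # Matrix([[0], [0], [-4]]) -- equals b for every a
```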

Solving \(Ax=b\)

Example continued. Consider again the matrix equation \(Ax=b\):

\[\begin{bmatrix} 2 & 4 & 0 & -2\\ 2 & 4 & 1 & 3\\ 0 & 0 & 1 & 1\end{bmatrix}\begin{bmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end{bmatrix}=\begin{bmatrix} 0\\ 0\\ -4\end{bmatrix}\]

\[\text{rref}(A) = \begin{bmatrix} 1 & 2 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}\]

 

\(\Rightarrow\ \left\{\begin{bmatrix} -2\\ 1\\ 0\\ 0\end{bmatrix}\right\}\) is a basis for \(N(A)\).

We already found that \(\mathbf{x}_{0} = \begin{bmatrix} 1\\ 0\\ -5\\ 1\end{bmatrix}\) is a solution.

Hence, the set of solutions to \(Ax=b\) is \(\left\{\begin{bmatrix} 1\\ 0\\ -5\\ 1\end{bmatrix} + a\begin{bmatrix} -2\\ 1\\ 0\\ 0\end{bmatrix}: a\in\mathbb{R}\right\}\).
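For comparison, SymPy (again, my tooling assumption) produces the same one-parameter family directly from \(A\) and \(b\), with the free variable \(x_{2}\) playing the role of \(a\):

```python
# The full solution set in one call: {(1 - 2*x2, x2, -5, 1)}.
from sympy import Matrix, linsolve, symbols

x1, x2, x3, x4 = symbols("x1 x2 x3 x4")
A = Matrix([[2, 4, 0, -2], [2, 4, 1, 3], [0, 0, 1, 1]])
b = Matrix([0, 0, -4])

print(linsolve((A, b), x1, x2, x3, x4))
```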

Linear Algebra Day 14

By John Jasper
