# Day 12: Linear Maps and Their Matrix Representations

### Linear maps

**Definition.** Suppose \(V\) and \(W\) are vector spaces. A function \(L:V\to W\) is called **linear** if for every \(x,y\in V\) and \(a\in\mathbb{R}\) the following two properties hold:

- \(L(x+y) = L(x)+L(y)\)
- \(L(ax)=aL(x)\).

**Example 1.** Let \[A = \begin{bmatrix} 1 & 2 & 3\\ 0 & 1 & 0\end{bmatrix}.\] Define the function \(L:\mathbb{R}^{3}\to\mathbb{R}^{2}\) by \(L(x) = Ax.\) Using the properties of matrix multiplication we proved in Week 1, for any \(x,y\in\mathbb{R}^{3}\) and \(a\in\mathbb{R}\) we have \[L(x+y) = A(x+y) = Ax+Ay = L(x) + L(y)\] and \[L(ax) = A(ax) = aAx = aL(x).\] Thus, \(L\) is a linear function.
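Example 1 can be spot-checked numerically. The NumPy sketch below verifies both linearity properties for the matrix \(A\) above; the sample vectors and scalar are arbitrary choices, not from the text, and a few samples illustrate (but do not prove) linearity.

```python
import numpy as np

# The matrix A from Example 1.
A = np.array([[1, 2, 3],
              [0, 1, 0]])

def L(x):
    """The map L(x) = Ax from R^3 to R^2."""
    return A @ x

# Arbitrary sample inputs for the spot check.
x = np.array([1.0, -2.0, 4.0])
y = np.array([0.5, 3.0, -1.0])
a = 7.0

additive = np.allclose(L(x + y), L(x) + L(y))      # L(x+y) = L(x) + L(y)?
homogeneous = np.allclose(L(a * x), a * L(x))      # L(ax) = aL(x)?
print(additive, homogeneous)  # True True
```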


**Example 2.** Define the function \(D:\mathbb{P}_{2}\to\mathbb{P}_{1}\) by \(Df(x) = f'(x)\) for \(f(x)\in\mathbb{P}_{2}\). For example,

\[D(2x^2+x-1) = 4x+1.\]

Using some properties of the derivative from calculus, if \(f(x),g(x)\in\mathbb{P}_{2}\) and \(c\in\mathbb{R}\), then

\[D(f(x) + g(x)) = f'(x) + g'(x) = D(f(x)) + D(g(x))\]

and

\[D(cf(x)) = cf'(x) = cD(f(x)).\]

Therefore, \(D\) is linear.
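The derivative map on \(\mathbb{P}_{2}\) is easy to model in code. In the sketch below a polynomial \(a_0 + a_1x + a_2x^2\) is represented by its coefficient list `[a0, a1, a2]` (a representation chosen here for illustration), and the two linearity properties are spot-checked on sample polynomials.

```python
def deriv(coeffs):
    """Derivative of a polynomial given by coefficients [a0, a1, ..., an],
    where coeffs[k] is the coefficient of x^k; returns coefficients of f'."""
    return [k * coeffs[k] for k in range(1, len(coeffs))]

# D(2x^2 + x - 1) = 4x + 1, i.e. [-1, 1, 2] |-> [1, 4].
print(deriv([-1, 1, 2]))  # [1, 4]

# Spot-check linearity on sample polynomials in P2.
f, g, c = [-1, 1, 2], [3, 0, 5], 4
sum_rule = deriv([a + b for a, b in zip(f, g)]) == \
    [a + b for a, b in zip(deriv(f), deriv(g))]          # D(f+g) = Df + Dg?
scalar_rule = deriv([c * a for a in f]) == [c * a for a in deriv(f)]  # D(cf) = cDf?
print(sum_rule, scalar_rule)  # True True
```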

**Example 3.** The function \(F:\mathbb{R}\to\mathbb{R}\) given by \(F(x) = 2x+1\) is not linear, since

\[F(1+1)=F(2) = 2(2)+1=5\neq 6 = F(1)+F(1).\]
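The failure of additivity in Example 3 takes one line to confirm:

```python
def F(x):
    return 2 * x + 1

# F(1 + 1) = 5 but F(1) + F(1) = 6, so F is not additive, hence not linear.
print(F(1 + 1), F(1) + F(1))  # 5 6
```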

### Image and Kernel

**Definition.** Suppose \(V\) and \(W\) are vector spaces, and \(L:V\to W\) is linear. The **image** of \(L\) is the set

\[\operatorname{im}(L) := \{L(v) : v\in V\}.\]

The **kernel** of \(L\) is the set

\[\operatorname{ker}(L):= \{v\in V : L(v) = 0\}.\]

The **rank** of \(L\), denoted \(\operatorname{rank}(L)\), is the dimension of the subspace \(\operatorname{im}(L)\).

The **nullity** of \(L\), denoted \(\operatorname{nullity}(L)\), is the dimension of the subspace \(\operatorname{ker}(L)\).

If \(A\in\mathbb{R}^{m\times n}\) and we set \(L(x) = Ax\) for \(x\in\mathbb{R}^{n}\), then \(L:\mathbb{R}^{n}\to\mathbb{R}^{m}\) is linear. Moreover,

\[\operatorname{im}(L) = C(A)\quad\text{and}\quad \operatorname{ker}(L) = N(A).\]
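These dimensions are easy to compute for a concrete matrix. Using the matrix \(A\) from Example 1, the NumPy sketch below finds \(\operatorname{rank}(L)=\dim C(A)\) directly, and obtains the nullity from the rank-nullity relation \(\operatorname{rank}(L)+\operatorname{nullity}(L)=n\):

```python
import numpy as np

# The matrix A from Example 1; L(x) = Ax maps R^3 -> R^2.
A = np.array([[1, 2, 3],
              [0, 1, 0]])

rank = np.linalg.matrix_rank(A)   # dim im(L) = dim C(A)
nullity = A.shape[1] - rank       # dim ker(L) = dim N(A), by rank-nullity
print(rank, nullity)  # 2 1
```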

### Surjective and Injective

**Definition.** Suppose \(V\) and \(W\) are vector spaces, and \(L:V\to W\) is linear.

- The map \(L\) is called **injective (or one-to-one)** if for every \(x,y\in V\) such that \(x\neq y\) it holds that \(L(x)\neq L(y)\).
- The map \(L\) is called **surjective (or onto)** if for every \(w\in W\) there exists \(v\in V\) such that \(L(v)=w\).
- If \(L\) is both injective and surjective, then we say that \(L\) is **bijective**.
- An injective map is called an **injection**, a surjective map is called a **surjection**, and a bijective map is called a **bijection**.

Notes:

- A map \(L\) is surjective if and only if \(\operatorname{im}(L)=W\).
- A map \(L\) is injective if and only if \(L(x)=L(y)\) implies \(x=y\).

### Injections

**Proposition.** A linear map \(L:V\to W\) is an injection if and only if \(\operatorname{ker}(L)=\{0\}\).

*Proof.* Suppose \(L\) is injective. Assume toward a contradiction that there are two elements \(x,y\in\operatorname{ker}(L)\) with \(x\neq y\). This implies \(L(x)\neq L(y)\), but \(L(x)=0=L(y)\). This contradiction shows that \(\operatorname{ker}(L)\) does not contain two distinct elements. Since \(0\in\operatorname{ker}(L)\), we conclude that \(\operatorname{ker}(L)=\{0\}\).

Next, suppose that \(\operatorname{ker}(L)=\{0\}\). Let \(x,y\in V\) such that \(L(x)=L(y)\). By the linearity of \(L\) we have

\[0=L(x)-L(y) = L(x-y).\]

This implies \(x-y\in\operatorname{ker}(L)\), and hence \(x-y=0\), or \(x=y\). This shows that \(L\) is injective. \(\Box\)
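The proposition gives a practical test: to show a map is not injective, exhibit one nonzero kernel vector. For the matrix \(A\) of Example 1, the vector \(v=(-3,0,1)\) (found by back substitution; this particular vector is not from the text) does the job:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [0, 1, 0]])

# v = (-3, 0, 1) is nonzero and satisfies Av = 0, so ker(L) != {0};
# by the proposition, L(x) = Ax is not injective.
v = np.array([-3, 0, 1])
print(A @ v)  # [0 0]
```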

### Isomorphisms

**Corollary.** A linear map \(L:V\to W\) is a bijection if and only if \(\operatorname{ker}(L)=\{0\}\) and \(\operatorname{im}(L)=W\).

**Definition.** A linear map \(L:V\to W\) is called an **isomorphism** if \(\operatorname{ker}(L)=\{0\}\) and \(\operatorname{im}(L)=W\). If there exists an isomorphism from \(V\) to \(W\), then we say that \(V\) and \(W\) are **isomorphic**.

**Example.** Consider the map \(L:\mathbb{P}_{2}\to\mathbb{R}^{3}\) given by

\[L(ax^2+bx+c) = \begin{bmatrix}a\\ b\\ c\end{bmatrix}.\]

We claim that this map is an isomorphism. First, note that if

\[L(f(x)) = \begin{bmatrix} 0\\ 0\\ 0\end{bmatrix},\]

then \(a=b=c=0\), so \(f(x) = 0\). Thus \(\operatorname{ker}(L)=\{0\}\), and by the proposition above \(L\) is injective.


**Example continued.** Next, let \([a\ \ b\ \ c]^{\top}\) be an arbitrary element of \(\mathbb{R}^{3}\). Then \(a,b,c\in\mathbb{R}\) and we see that

\[L(ax^2+bx+c) = [a\ \ b\ \ c]^{\top}.\]

Thus, for any \(v\in\mathbb{R}^{3}\) we see that there is some \(f(x)\in \mathbb{P}_{2}\) such that \(L(f(x)) = v\), that is, \(\operatorname{im}(L) = \mathbb{R}^{3}.\) Hence, we see that \(\mathbb{P}_{2}\) and \(\mathbb{R}^{3}\) are isomorphic.
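The isomorphism \(\mathbb{P}_{2}\cong\mathbb{R}^{3}\) can be sketched in code. Below, a polynomial is stored as its ascending coefficient list `[c, b, a]` (constant term first; this storage convention is an illustrative choice) and \(L\) reads off \((a, b, c)\). The checks confirm that \(L\) turns polynomial addition into vector addition and that \(L\) is invertible.

```python
def L(poly):
    """Map P2 -> R^3: a polynomial stored in ascending order [c, b, a]
    (constant term first) is sent to the vector (a, b, c)."""
    c, b, a = poly
    return (a, b, c)

def poly_add(p, q):
    return [x + y for x, y in zip(p, q)]

def vec_add(v, w):
    return tuple(x + y for x, y in zip(v, w))

f = [-1, 1, 2]   # 2x^2 + x - 1
g = [4, 0, 3]    # 3x^2 + 4

# L is additive: it sends polynomial addition to vector addition...
print(L(poly_add(f, g)) == vec_add(L(f), L(g)))  # True
# ...and it is a bijection: (a, b, c) |-> [c, b, a] inverts it.
a, b, c = L(f)
print([c, b, a] == f)  # True
```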

**Theorem.** For any \(n\in\mathbb{N}\) the spaces \(\mathbb{P}_{n}\) and \(\mathbb{R}^{n+1}\) are isomorphic.

*Proof.* The map \(L:\mathbb{P}_{n}\to\mathbb{R}^{n+1}\) given by

\[L(a_{0} + a_{1}x+a_{2}x^2+\cdots+a_{n}x^{n}) = [a_0\ \ a_{1}\ \ a_{2}\ \ \cdots\ \ a_{n}]^{\top}\]

is an isomorphism. (You should verify this.) \(\Box\)

### Linear maps and bases

Suppose \(L:V\to W\) is a linear map, and \(\{v_{i}\}_{i=1}^{n}\) is a basis for \(V\).

If you know only the outputs \(L(v_{1}),L(v_{2}),\ldots,L(v_{n})\), then you can determine \(L(v)\) for every \(v\in V\).

Indeed, given \(v\in V\), there exist unique scalars \(a_{1},a_{2},\ldots,a_{n}\) such that

\[v = a_{1}v_{1} + a_{2}v_{2} + \cdots + a_{n}v_{n}.\]

By linearity we have

\[L(v) = L(a_{1}v_{1} + a_{2}v_{2} + \cdots + a_{n}v_{n}) = a_{1}L(v_{1}) + a_{2}L(v_{2}) + \cdots + a_{n}L(v_{n}).\]

We know all the vectors \(L(v_{1}),L(v_{2}),\ldots,L(v_{n}),\) so we can compute \(L(v)\).
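The computation above can be sketched directly: store the basis images \(L(v_1),\ldots,L(v_n)\), then evaluate \(L(v)\) from the coordinates of \(v\). Here the map is assumed to go from a 3-dimensional space to \(\mathbb{R}^{2}\), and the stored values of \(L(v_i)\) are made up for illustration.

```python
import numpy as np

# The recorded values L(v1), L(v2), L(v3) in R^2 (illustrative values).
L_of_basis = [np.array([1.0, 0.0]),
              np.array([2.0, 1.0]),
              np.array([3.0, 0.0])]

def L_from_coords(coords):
    """If v = a1*v1 + a2*v2 + a3*v3, linearity gives
    L(v) = a1*L(v1) + a2*L(v2) + a3*L(v3)."""
    return sum(a * Lv for a, Lv in zip(coords, L_of_basis))

# For v with coordinates (1, -1, 2): L(v) = L(v1) - L(v2) + 2*L(v3) = (5, -1).
print(L_from_coords([1.0, -1.0, 2.0]))
```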


Suppose \(L:V\to W\) is a linear map, \(\{v_{i}\}_{i=1}^{n}\) is a basis for \(V\), and \(\{w_{i}\}_{i=1}^{m}\) is a basis for \(W\).

To know \(L\) we need to know the vectors \(L(v_{1}),L(v_{2}),\ldots,L(v_{n})\). For each \(j\in\{1,2,\ldots,n\}\) there are unique scalars \(a_{1j},a_{2j},a_{3j},\ldots,a_{mj}\) such that

\[L(v_{j}) = \sum_{i=1}^{m}a_{ij}w_{i}.\]

Therefore, in order to know the map \(L\), we only need to know the numbers \(\{a_{ij}\}_{i=1,j=1}^{m,n}\). It is convenient to organize them into a grid:

\[\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn}\end{bmatrix}\]

This matrix is called the **matrix representation of \(L\)** with respect to the bases \(\{v_{i}\}_{i=1}^{n}\) and \(\{w_{i}\}_{i=1}^{m}\).

### Matrix representation

**Example.** Let \(D:\mathbb{P}_{2}\to\mathbb{P}_{1}\) be given by \[D(f(x)) = f'(x).\]

Fix the bases \(S = \{1,x,x^2\}\) for \(\mathbb{P}_{2}\) and \(T = \{1,x\}\) for \(\mathbb{P}_{1}\).

We compute \(D(v)\) for each \(v\in S\), and write the output as a linear combination of the vectors in \(T\):

\[D(1) = 0 = 0\cdot 1 + 0\cdot x\]

\[D(x) = 1 = 1\cdot 1 + 0\cdot x\]

\[D(x^{2}) = 2x = 0\cdot 1 + 2\cdot x\]

Then, the matrix representation of \(D\) with respect to \(S\) and \(T\) is

\[\begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 2\end{bmatrix}\]
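The point of the matrix representation is that differentiation becomes matrix multiplication on coordinate vectors. The sketch below checks this for \(f(x)=2x^2+x-1\) from Example 2: its coordinate vector with respect to \(S\) is \([-1\ \ 1\ \ 2]^{\top}\), and multiplying by the matrix above yields the coordinates of \(f'(x)=4x+1\) with respect to \(T\).

```python
import numpy as np

# Matrix representation of D with respect to S = {1, x, x^2} and T = {1, x}.
M = np.array([[0, 1, 0],
              [0, 0, 2]])

# f(x) = 2x^2 + x - 1 has coordinate vector [-1, 1, 2] with respect to S.
f_coords = np.array([-1, 1, 2])

# M @ f_coords gives the T-coordinates [1, 2] -> wait: of f'(x) = 4x + 1,
# namely [1, 4] (coefficient of 1, then coefficient of x).
print(M @ f_coords)  # [1 4]
```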

# End Day 12

#### Linear Algebra Day 12

By John Jasper
