Day 35:

Complex Matrices and the Fourier Transform

Complex Vectors

\(\mathbb{C}^{2} = \left\{\left[\begin{matrix}a\\ b\end{matrix}\right] : a\in\mathbb{C} \text{ and } b\in\mathbb{C}\right\}\)

 

\(\mathbb{C}^{3} = \left\{\left[\begin{matrix}a\\ b\\c\end{matrix}\right] : a\in\mathbb{C} \text{ and } b\in\mathbb{C} \text{ and } c\in\mathbb{C}\right\}\)

 

\(\mathbb{C}^{n} =  \left\{\left[\begin{matrix}a_{1}\\ a_{2}\\\vdots\\a_{n}\end{matrix}\right] : a_{1}\in\mathbb{C} \text{ and } a_{2}\in\mathbb{C}\text{ and }\ldots \text{ and } a_{n}\in\mathbb{C}\right\}\)

(Note that the textbook uses \(\mathbf{C}\) for the set of complex numbers instead of \(\mathbb{C}\).)

Assume we have a matrix \(A\) with complex entries:

\[A=\left[\begin{matrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & & a_{2n}\\ \vdots & & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn}\end{matrix}\right]\]

The adjoint

The adjoint of \(A\), denoted \(A^{\ast}\), is the matrix

\[A^{\ast}=\left[\begin{matrix} \overline{a_{11}} & \overline{a_{21}} & \cdots & \overline{a_{m1}}\\ \overline{a_{12}} & \overline{a_{22}} & & \overline{a_{m2}}\\ \vdots & & \ddots & \vdots \\ \overline{a_{1n}} & \overline{a_{2n}} & \cdots & \overline{a_{mn}}\end{matrix}\right]\]

Note that we obtain the adjoint by taking the transpose of \(A\) and conjugating each entry.

In the homework we will show that our favorite property of the transpose also holds for the adjoint, namely, \[(AB)^{\ast} = B^{\ast}A^{\ast}.\]
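As a quick numerical sketch (using made-up matrices, not ones from the notes), the adjoint in NumPy is the conjugate transpose `.conj().T`, and the product rule can be checked directly:

```python
import numpy as np

# Two small matrices with complex entries (illustrative values only).
A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])
B = np.array([[2 - 1j, 1 + 1j],
              [1 + 0j, 0 - 2j]])

def adjoint(M):
    """Conjugate transpose: transpose M, then conjugate each entry."""
    return M.conj().T

# Check the product rule (AB)* = B* A*.
lhs = adjoint(A @ B)
rhs = adjoint(B) @ adjoint(A)
print(np.allclose(lhs, rhs))  # True
```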

Dot Product and Norm

Given two vectors

\[u = \begin{bmatrix}u_{1}\\ u_{2}\\ \vdots\\ u_{d}\end{bmatrix}\quad\text{and}\quad v = \begin{bmatrix}v_{1}\\ v_{2}\\ \vdots\\ v_{d}\end{bmatrix}\]

in \(\mathbb{C}^{d}\) (or \(\mathbb{R}^{d}\)), their dot product is given by

\[u\cdot v=u^{\ast}v= \begin{bmatrix}\overline{u}_{1} & \overline{u}_{2} & \cdots & \overline{u}_{d}\end{bmatrix}\begin{bmatrix}v_{1}\\ v_{2}\\ \vdots\\ v_{d}\end{bmatrix} = \sum_{i=1}^{d}\overline{u}_{i}v_{i}.\]

The norm of \(u\) is given by

\[\|u\| = \sqrt{u\cdot u} = \sqrt{u^{\ast}u} = \sqrt{\sum_{i=1}^{d}\overline{u}_{i}u_{i}} = \sqrt{\sum_{i=1}^{d}|u_{i}|^{2}}.\]
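NumPy's `vdot` conjugates its first argument, which matches the definition \(u\cdot v = u^{\ast}v\) above. A quick check with hypothetical vectors:

```python
import numpy as np

u = np.array([1 + 1j, 2 - 1j])
v = np.array([3 + 0j, 0 + 4j])

# np.vdot conjugates its FIRST argument, so np.vdot(u, v) = u* v.
dot = np.vdot(u, v)
by_hand = sum(np.conj(ui) * vi for ui, vi in zip(u, v))
print(np.isclose(dot, by_hand))  # True

# ||u|| = sqrt(u* u) = sqrt(sum |u_i|^2); vdot(u, u) is real and nonnegative.
norm_u = np.sqrt(np.vdot(u, u).real)
print(np.isclose(norm_u, np.linalg.norm(u)))  # True
```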


Example. Consider the vectors

\[v_{1} = \frac{1}{2}\left[\begin{array}{r}1\\ 1\\ 1\\ 1\end{array}\right],\ v_{2}=\frac{1}{2}\left[\begin{array}{r}1\\ -i\\ -1\\ i\end{array}\right],\ v_{3}=\frac{1}{2}\left[\begin{array}{r}1\\ -1\\ 1\\ -1\end{array}\right],\ v_{4}=\frac{1}{2}\left[\begin{array}{r}1\\ i\\ -1\\ -i\end{array}\right]\]

\[v_{2}\cdot v_{3} = \left(\frac{1}{2}\begin{bmatrix}1 & i & -1 & -i\end{bmatrix}\right)\left(\frac{1}{2}\left[\begin{array}{r}1\\ -1\\ 1\\ -1\end{array}\right]\right) \]

\[= \frac{1}{4}\big((1)(1)+(i)(-1)+(-1)(1)+(-i)(-1)\big) = \frac{1}{4}(1-i-1+i)=0\]

Hence, \(v_{2}\) and \(v_{3}\) are orthogonal. One can similarly check that each pair of these vectors is orthogonal and that each has norm \(1\); thus \(\{v_{1},v_{2},v_{3},v_{4}\}\) is an orthonormal basis for \(\mathbb{C}^{4}\).
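All the orthonormality claims can be verified at once by stacking the four vectors as columns and computing the matrix of pairwise dot products (a sketch):

```python
import numpy as np

# v1, v2, v3, v4 from the example, as the columns of one matrix.
V = 0.5 * np.array([[1,   1,  1,   1],
                    [1, -1j, -1,  1j],
                    [1,  -1,  1,  -1],
                    [1,  1j, -1, -1j]])

# Entry (j, k) of V* V is the dot product v_j . v_k.
G = V.conj().T @ V
print(np.allclose(G, np.eye(4)))  # True: the columns are orthonormal
```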

Unitary matrices

Definition. Let \(U\) be an \(n\times n\) matrix with complex entries. The matrix \(U\) is called unitary if  \(U^{\ast}U = UU^{\ast} = I\).

Theorem. An \(n\times n\) matrix \(U\) is unitary if and only if the columns of \(U\) form an orthonormal basis for \(\mathbb{C}^{n}\).
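One way to see the theorem in action: orthonormalizing the columns of an invertible complex matrix (for instance via a QR factorization) produces orthonormal columns, and the resulting square matrix is unitary. A sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
# A random complex 4x4 matrix; np.linalg.qr returns Q with orthonormal columns.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, R = np.linalg.qr(M)

# Orthonormal columns of a square matrix <=> Q* Q = Q Q* = I, i.e. Q is unitary.
print(np.allclose(Q.conj().T @ Q, np.eye(4)))  # True
print(np.allclose(Q @ Q.conj().T, np.eye(4)))  # True
```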

Example. Consider the matrix

\[U = \frac{1}{2}\left[\begin{array}{rrrr} 1 & 1 & 1 & 1\\ 1 & -i & -1 & i\\ 1 & -1 & 1 & -1\\ 1 & i & -1 & -i\end{array}\right]\]

Using the notation from the previous example we have

\[U^{\ast}U = \begin{bmatrix} v_{1}\cdot v_{1} & v_{1}\cdot v_{2} & v_{1}\cdot v_{3} & v_{1}\cdot v_{4}\\  v_{2}\cdot v_{1} & v_{2}\cdot v_{2} & v_{2}\cdot v_{3} & v_{2}\cdot v_{4}\\  v_{3}\cdot v_{1} & v_{3}\cdot v_{2} & v_{3}\cdot v_{3} & v_{3}\cdot v_{4}\\  v_{4}\cdot v_{1} & v_{4}\cdot v_{2} & v_{4}\cdot v_{3} & v_{4}\cdot v_{4}\\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}\]

The Fourier Transform Matrix

Definition. Given a positive whole number \(N\) the Fourier transform matrix is the matrix

\[F_{N} = \frac{1}{\sqrt{N}}\begin{bmatrix} 1 & 1 & 1 & 1 & \cdots & 1\\ 1 & \omega & \omega^2 & \omega^3 & \cdots & \omega^{N-1}\\ 1 & \omega^2 & \omega^4 & \omega^6 & \cdots & \omega^{2(N-1)}\\ 1 & \omega^3 & \omega^6 & \omega^9 & \cdots & \omega^{3(N-1)}\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & \omega^{N-1} & \omega^{2(N-1)} & \omega^{3(N-1)} & \cdots & \omega^{(N-1)(N-1)}\end{bmatrix}\]

where \(\omega=e^{-2\pi i/N}\).

Example. For \(N=4\) we have \(\omega=e^{-2\pi i/4} = e^{-\pi i/2} = -i\), and hence

\[F_{4} = \frac{1}{\sqrt{4}}\begin{bmatrix} 1 & 1 & 1 & 1\\ 1 & (-i) & (-i)^2 & (-i)^3\\ 1 & (-i)^2 & (-i)^4 & (-i)^6\\ 1 & (-i)^3 & (-i)^6 & (-i)^9\end{bmatrix} = \frac{1}{2}\left[\begin{array}{rrrr} 1 & 1 & 1 & 1\\ 1 & -i & -1 & i\\ 1 & -1 & 1 & -1\\ 1 & i & -1 & -i\end{array}\right]\]
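A sketch of building \(F_{N}\) directly from the definition. Note that NumPy's `fft` uses the unnormalized convention \(\sqrt{N}F_{N}\), so a factor of \(\sqrt{N}\) appears in the comparison:

```python
import numpy as np

def fourier_matrix(N):
    """Unitary DFT matrix F_N with omega = exp(-2*pi*i/N)."""
    k = np.arange(N)
    omega = np.exp(-2j * np.pi / N)
    return omega ** np.outer(k, k) / np.sqrt(N)

F4 = fourier_matrix(4)

# The second row should be (1, -i, -1, i) / 2, as computed above.
print(np.allclose(F4[1], np.array([1, -1j, -1, 1j]) / 2))  # True

# np.fft.fft applies sqrt(N) * F_N (the unnormalized convention):
print(np.allclose(np.fft.fft(np.eye(4)), 2 * F4))  # True
```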

Lemma (The geometric sum formula). If \(r\in\mathbb{C}\setminus\{0,1\}\) and \(N\in\mathbb{N}\), then

\[\sum_{m=0}^{N-1}r^{m} = \frac{1-r^{N}}{1-r}.\]

Proof. Set

\[S = \sum_{m=0}^{N-1}r^{m} = 1+r+r^2+\cdots + r^{N-1}.\]

Then, 

\[rS = r+r^2+r^3+\cdots +r^{N}.\]

Subtracting, we have

\[(1-r)S = S-rS = \big(1+r+r^2+\cdots + r^{N-1}\big) - \big(r+r^2+r^3+\cdots +r^{N}\big)\]

\(=1-r^N\)

Since \(r\neq 1\) we see that \(1-r\neq 0\). Thus we can divide both sides by \(1-r\) and we obtain the desired equality. \(\Box\)
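A quick numerical spot check of the lemma, with an arbitrary sample ratio:

```python
# Spot check of the geometric sum formula for a sample complex ratio.
r = 0.5 + 0.5j
N = 6
lhs = sum(r ** m for m in range(N))
rhs = (1 - r ** N) / (1 - r)
print(abs(lhs - rhs) < 1e-12)  # True
```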

Theorem. For each \(N\in\mathbb{N}\) the Fourier transform matrix \(F_{N}\) is a unitary matrix.

Proof. For \(k=0,1,2,\ldots,N-1\) set

\[f_{k} = \frac{1}{\sqrt{N}}\begin{bmatrix} 1\\ \omega^k\\ \omega^{2k}\\ \omega^{3k}\\ \vdots\\ \omega^{(N-1)k}\end{bmatrix}.\]

Since \(f_{0},f_{1},f_{2},\ldots,f_{N-1}\) are the columns of \(F_{N}\), it is enough to show that they form an orthonormal basis for \(\mathbb{C}^{N}\).

Next note that if \(\theta\in\mathbb{R}\), then
\[\overline{e^{i\theta}} = \overline{\cos(\theta)+i\sin(\theta)} = \cos(\theta)-i\sin(\theta) = \cos(-\theta) + i\sin(-\theta) = e^{-i\theta}.\]

Since \(\omega = e^{-2\pi i/N}\) we see that

\[\overline{\omega^{n}} = \overline{\left(e^{-2n\pi i/N}\right)} = \left(e^{2n\pi i/N}\right) = \omega^{-n}.\]

Proof continued. Note that

\[f_{k}^{\ast} = \frac{1}{\sqrt{N}}\begin{bmatrix} 1 & \overline{\omega^k} & \overline{\omega^{2k}} & \overline{\omega^{3k}} & \cdots & \overline{\omega^{(N-1)k}}\end{bmatrix}\]

\[= \frac{1}{\sqrt{N}}\begin{bmatrix} 1 & \omega^{-k} & \omega^{-2k} & \omega^{-3k} & \cdots & \omega^{-(N-1)k}\end{bmatrix}\]

Now if \(j\neq k\), then \(0<|j-k|<N\), so \(\omega^{k-j}\neq 1\) and we can use the geometric sum formula to compute

\[f_{j}\cdot f_{k} = f_{j}^{\ast}f_{k} = \frac{1}{N}\begin{bmatrix} 1 & \omega^{-j} & \omega^{-2j} & \omega^{-3j} & \cdots & \omega^{-(N-1)j}\end{bmatrix}\begin{bmatrix} 1\\ \omega^k\\ \omega^{2k}\\ \omega^{3k}\\ \vdots\\ \omega^{(N-1)k}\end{bmatrix}\]

\[= \frac{1}{N}\sum_{m=0}^{N-1}\omega^{-jm}\omega^{km} = \frac{1}{N}\sum_{m=0}^{N-1}(\omega^{k-j})^{m} = \frac{1}{N}\cdot\frac{1-(\omega^{k-j})^{N}}{1-\omega^{k-j}}.\]

Note that \((\omega^{k-j})^{N} = (\omega^{N})^{k-j} = 1^{k-j}=1\), and hence

\[f_{j}\cdot f_{k} = 0.\]

Proof continued. On the other hand

\[f_{j}\cdot f_{j}= \frac{1}{N}\sum_{m=0}^{N-1}\omega^{-jm}\omega^{jm} = \frac{1}{N}\sum_{m=0}^{N-1}\omega^{0} = \frac{1}{N}\sum_{m=0}^{N-1}1 = \frac{1}{N}\cdot N = 1.\]

In summary,

\[f_{j}\cdot f_{k} = \begin{cases} 1 & \text{if } j=k,\\ 0 & \text{if } j\neq k,\end{cases}\]

that is, \(\{f_{0},f_{1},\ldots,f_{N-1}\}\) is orthonormal. Since these are \(N\) orthonormal vectors in \(\mathbb{C}^{N}\), they form an orthonormal basis. \(\Box\)
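The theorem can also be checked numerically for several sizes (a sketch; `fourier_matrix` builds \(F_{N}\) straight from the definition):

```python
import numpy as np

def fourier_matrix(N):
    """Unitary DFT matrix F_N with omega = exp(-2*pi*i/N)."""
    k = np.arange(N)
    return np.exp(-2j * np.pi / N) ** np.outer(k, k) / np.sqrt(N)

# F_N* F_N should be the identity for every N.
ok = all(np.allclose(fourier_matrix(N).conj().T @ fourier_matrix(N), np.eye(N))
         for N in (2, 3, 5, 8))
print(ok)  # True
```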

The Fourier Transform

Definition. Given a vector \(v\in\mathbb{C}^{N}\), the Fourier transform of \(v\) is the vector \(F_{N}v\).

(Important note: in some places, including MATLAB, the Fourier transform of \(v\) is defined to be \(\sqrt{N}F_{N}v\).)

Example. If \(v = [1\ \ 1\ \ 1\ \ 1]^{\top}\), then the Fourier transform of \(v\) is the vector

\[F_{4}v = \frac{1}{2}\left[\begin{array}{rrrr} 1 & 1 & 1 & 1\\ 1 & -i & -1 & i\\ 1 & -1 & 1 & -1\\ 1 & i & -1 & -i\end{array}\right]\begin{bmatrix}1\\ 1\\ 1\\ 1\end{bmatrix} = \begin{bmatrix}2\\ 0\\ 0\\ 0\end{bmatrix}.\]

Note that \(\|F_{4}v\| = \|v\|.\) This always happens!
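Checking this example with NumPy (recall that `np.fft.fft` computes \(\sqrt{N}F_{N}v\), so we divide by \(\sqrt{4}=2\)):

```python
import numpy as np

v = np.array([1.0, 1.0, 1.0, 1.0])
Fv = np.fft.fft(v) / 2  # np.fft.fft(v) is sqrt(N) * F_N v; here sqrt(4) = 2

print(np.allclose(Fv, [2, 0, 0, 0]))                      # True
print(np.isclose(np.linalg.norm(Fv), np.linalg.norm(v)))  # True: both equal 2
```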


Theorem. If \(v\in\mathbb{C}^{N}\), and \(U\) is an \(N\times N\) unitary matrix, then

\[\|v\| = \|Uv\|.\]

Proof. \(\|Uv\|^{2} = (Uv)^{\ast}(Uv) = v^{\ast}U^{\ast}Uv = v^{\ast}Iv = v^{\ast}v = \|v\|^{2}.\ \Box\)

Corollary. If \(v\in\mathbb{C}^{N}\), then

\[\|v\| = \|F_{N}v\|.\]

Convolution

Motivation. Given two vectors

\[x=\begin{bmatrix} a_{0} & a_{1} & a_{2} & \cdots & a_{N-1}\end{bmatrix}^{\top}\]

\[y = \begin{bmatrix} b_{0} & b_{1} & b_{2} & \cdots & b_{N-1}\end{bmatrix}^{\top}\]

Define the polynomials

\[f(x) = a_{0} + a_{1}x + a_{2}x^2+\cdots + a_{N-1}x^{N-1}\]

\[g(x) = b_{0} + b_{1}x + b_{2}x^2+\cdots + b_{N-1}x^{N-1}\]

Then,

\[f(x)g(x) = c_{0} + c_{1}x+c_{2}x^2+\cdots + c_{2N-2}x^{2N-2}\]

where

\[c_{0} = a_{0}b_{0},\quad c_{1} = a_{0}b_{1} + a_{1}b_{0},\quad c_{2} = a_{0}b_{2} + a_{1}b_{1} + a_{2}b_{0},\ldots\]

Following this pattern:

\[c_{N} = a_{0}b_{N} + a_{1}b_{N-1} + a_{2}b_{N-2} + \cdots + a_{N-1}b_{1} + a_{N}b_{0}\]

If we take \(a_{i}=b_{i}=0\) for \(i\geq N\), then we have \[c_{k} = \sum_{i=0}^{k}a_{i}b_{k-i}\]


Definition. Given two vectors

\[x=\begin{bmatrix} a_{0}\\ a_{1}\\ a_{2}\\ \vdots\\ a_{N-1}\end{bmatrix}\quad\text{and}\quad y = \begin{bmatrix} b_{0}\\ b_{1}\\ b_{2}\\ \vdots\\ b_{N-1}\end{bmatrix}\]

The convolution of \(x\) and \(y\), denoted \(x\ast y\), is the vector

\[x\ast y = \begin{bmatrix} c_{0}\\ c_{1}\\ c_{2}\\ \vdots\\ c_{2N-2}\end{bmatrix}\quad\text{where }c_{k} = \sum_{i=0}^{k}a_{i}b_{k-i}\]

Note that if \(x,y\in\mathbb{C}^{N}\), then \(x\ast y\in\mathbb{C}^{2N-1}\).


Example. Let

\[x=\begin{bmatrix} 1\\ 0\\ 2\end{bmatrix}\quad\text{and}\quad y = \begin{bmatrix} 1\\ 5\\ 3\end{bmatrix}\]

Since

\[(1+2x^2)(1+5x+3x^2) = (1+2x^2)(1) + (1+2x^2)(5x)+(1+2x^2)(3x^2)\]

\[= (1+0x+2x^2) + (5x+0x^2+10x^3) + (3x^2+0x^3+6x^4)\]

\[= 1+5x+5x^2+10x^3+6x^4,\]

we see that

\[x\ast y = \begin{bmatrix} 1 & 5 & 5 & 10 & 6\end{bmatrix}^{\top}\]

To quickly compute \(x\ast y\), multiply the entries as in long multiplication, without carrying:

\[\begin{array}{ccccc} & & 1 & 0 & 2\\ & & 1 & 5 & 3\\ \hline & & 3 & 0 & 6\\ & 5 & 0 & 10 & \\ 1 & 0 & 2 & & \\ \hline 1 & 5 & 5 & 10 & 6\end{array}\]
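The worked example matches NumPy's `convolve`, which implements exactly the sum \(c_{k}=\sum_{i}a_{i}b_{k-i}\):

```python
import numpy as np

x = np.array([1, 0, 2])
y = np.array([1, 5, 3])

c = np.convolve(x, y)  # length 2N - 1 = 5
print(c.tolist())      # [1, 5, 5, 10, 6]
```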

Cyclic Convolution

Example. With the same vectors

\[x=\begin{bmatrix} 1\\ 0\\ 2\end{bmatrix}\quad\text{and}\quad y = \begin{bmatrix} 1\\ 5\\ 3\end{bmatrix}\]

as above, the ordinary convolution is

\[x\ast y = \begin{bmatrix} 1 & 5 & 5 & 10 & 6\end{bmatrix}^{\top}\]

The cyclic convolution, denoted \(x\circledast y\), is the vector in \(\mathbb{C}^{N}\) whose \(k\)th entry is \(\sum_{i=0}^{N-1}a_{i}b_{(k-i)\bmod N}\); equivalently, wrap the overflow entries \(c_{N},\ldots,c_{2N-2}\) of the ordinary convolution back around and add them to \(c_{0},\ldots,c_{N-2}\). In the multiplication tableau, each partial product is shifted cyclically instead of overflowing:

\[\begin{array}{ccc} 1 &  0 &   2\\ 1 &  5 &   3\\ \hline 0 & 6 & 3\\ 10 & 5 & 0\\ 1 & 0 & 2\\\hline 11 & 11 & 5\end{array}\]

\[x\circledast y = \begin{bmatrix} 11 & 11 & 5\end{bmatrix}^{\top}\]
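Cyclic convolution can be sketched by wrapping the tail of the ordinary convolution back onto the front, following the tableau above (the helper name is ours, not NumPy's):

```python
import numpy as np

def cyclic_convolve(x, y):
    """Cyclic convolution of two length-N vectors: wrap the ordinary
    convolution (length 2N - 1) back around modulo N."""
    N = len(x)
    c = np.convolve(x, y)   # c_0, ..., c_{2N-2}
    out = c[:N].copy()
    out[:N - 1] += c[N:]    # add c_{k+N} onto c_k
    return out

print(cyclic_convolve([1, 0, 2], [1, 5, 3]).tolist())  # [11, 11, 5]
```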

Cyclic Convolution

Example. Let

\[x=\begin{bmatrix} \phantom{-}1\\ \phantom{-}0\\ \phantom{-}2\\ -1\end{bmatrix}\quad\text{and}\quad y = \begin{bmatrix} 1\\ 2\\ 3\\ 0\end{bmatrix}\]

Compute \(x\ast y\) and \(x\circledast y\).

Answer: \[x\ast y = \left[\begin{array}{r} 1\\ 2\\ 5\\ 3\\ 4\\ -3\\ 0\end{array}\right]\quad\text{and}\quad x\circledast y = \left[\begin{array}{r} 5\\ -1\\ 5\\ 3\end{array}\right]\]
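Both answers can be checked numerically with the same wrapping idea:

```python
import numpy as np

x = np.array([1, 0, 2, -1])
y = np.array([1, 2, 3, 0])

c = np.convolve(x, y)   # ordinary convolution, length 2*4 - 1 = 7
print(c.tolist())       # [1, 2, 5, 3, 4, -3, 0]

cyc = c[:4].copy()
cyc[:3] += c[4:]        # wrap c_4, c_5, c_6 onto c_0, c_1, c_2
print(cyc.tolist())     # [5, -1, 5, 3]
```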

Linear Algebra Day 35

By John Jasper