
Section 5.1 A1

Example 5.1.1

Let \(T\) be a linear transformation from \(\mathbb{R}^2\) to \(\mathbb{R}^2\) which is a projection, and suppose that \begin{equation*}T \left(\begin{array}{r} 3 \\ 1 \end{array}\right) = \left(\begin{array}{r} 1 \\ 2 \end{array}\right)\end{equation*} Find the standard matrix of \(T\) (The Projection Matrix).

[6 marks]

We will use the techniques detailed by Professor Strang in his lecture series 18.06 Linear Algebra, Lecture 15: Projections onto subspaces.

Figure 5.1.2 Diagram Of The Problem
Figure 5.1.3 Professor Strang In Action
Figure 5.1.4 Projection Matrix Formula

In these screenshots we can see that vector \(\vec{p}\) is the projection of vector \(\vec{b}\) onto \(\vec{a}\).

\(\vec{p}\) is a multiple, \(x\), of \(\vec{a}\): \begin{equation*}\vec{p} = x \vec{a}\end{equation*} The error vector \(\vec{e}\) is given by: \begin{equation*}\vec{e}=\vec{b}-\vec{p}\end{equation*} The other piece of information we require is that \(\vec{e}\) is perpendicular to \(\vec{a}\) (and hence to \(\vec{p}\)). In other words, the dot product of these two vectors is zero. Recall that the dot product of two vectors can be written as the transpose of one multiplied by the other.

Combining the above gives us:

\begin{equation*}\vec{a}^T (\vec{b} - x \vec{a}) = 0\end{equation*}\begin{equation*}\implies x = \frac{\vec{a}^T \vec{b}}{\vec{a}^T \vec{a}}\end{equation*}

Since \(\vec{p} = x \vec{a}\) and \(x\) is a scalar, substituting for \(x\) gives:

\begin{equation}\vec{p} = \frac{\vec{a} \vec{a}^T}{\vec{a}^T \vec{a}} \vec{b}\label{men-34}\tag{5.1.1}\end{equation}

From this we can see that the Projection Matrix, \(\textbf{P}\), is given by:

\begin{equation}\textbf{P} = \frac{\vec{a} \vec{a}^T}{\vec{a}^T \vec{a}}\label{men-35}\tag{5.1.2}\end{equation}

This matches the formula in the screenshot shown in Figure 5.1.4 above. In our exercise, \(\textbf{P}\) becomes \(\textbf{T}\).
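Before turning to SageMath, note how \(\vec{a}\) can be read off from the given data: since \(T\) is a projection, the image \(T(3,1)^T = (1,2)^T\) must lie on the line onto which we project, so we may take \(\vec{a} = (1,2)^T\). Substituting into (5.1.2) gives:

\begin{equation*}\textbf{T} = \frac{\vec{a} \vec{a}^T}{\vec{a}^T \vec{a}} = \frac{1}{5}\left(\begin{array}{rr} 1 & 2 \\ 2 & 4 \end{array}\right) = \left(\begin{array}{rr} 1/5 & 2/5 \\ 2/5 & 4/5 \end{array}\right)\end{equation*}

and indeed \(\textbf{T}(3,1)^T = (1,2)^T\), as required.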

Now let's see how we can solve this example using SageMath.

One slightly annoying point is that Sage displays vectors as rows. Usually we prefer to write them in column form; however, we can switch the view by using the column() method on the vector.
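For example (a minimal sketch using \(\vec{a} = (1,2)^T\) as found above; the original Sage cell is not shown here):

a = vector(QQ, [1, 2])   # Sage displays this as a row: (1, 2)
a.column()               # the same vector viewed as a 2x1 column matrix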

The vector \(\vec{a}\) times its own transpose, \(\vec{a}\vec{a}^T\), can be found using the outer_product() method:
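A sketch of that computation, with the same vector \(\vec{a}\) as above:

a = vector(QQ, [1, 2])
a.outer_product(a)       # a times a^T, the 2x2 matrix with rows (1, 2) and (2, 4)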

The dot product (inner product) \(\vec{a}^T \vec{a}\) is easily found in either of two ways:
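For instance (the original cell may use a different pair of methods), both of the following return the same scalar:

a = vector(QQ, [1, 2])
a.dot_product(a)         # 5
a * a                    # multiplying a vector by itself also gives the dot product: 5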

Combining these, we get the complete solution for \(\textbf{T}\) (what Strang calls \(\textbf{P}\)):
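A sketch of the full computation, followed by a check against the data given in the exercise:

a = vector(QQ, [1, 2])
b = vector(QQ, [3, 1])
T = a.outer_product(a) / a.dot_product(a)   # the projection matrix a a^T / a^T a
T                                           # rows (1/5, 2/5) and (2/5, 4/5)
T * b                                       # (1, 2), as required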

These concepts are combined with some fancy plotting methods to give us a nice visualisation of the problem:
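A minimal sketch of such a plot (the original uses fancier plotting options; here we simply draw \(2\vec{a}\), \(\vec{b}\), \(\vec{p}\) and the error \(\vec{e}\)):

a = vector(QQ, [1, 2])
b = vector(QQ, [3, 1])
T = a.outer_product(a) / a.dot_product(a)
p = T * b                              # the projection of b onto the line through a
pic = plot(2*a, color='blue')          # a, scaled by 2 (see the remark below)
pic += plot(b, color='green')
pic += plot(p, color='red')
pic += arrow(p, b, color='black', linestyle='dashed')   # the error vector e = b - p
pic.show(aspect_ratio=1)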

Notice that the diagram is upside down compared with Strang's and that we multiplied \(\vec{a}\) by a factor of 2 so that we could more clearly see that \(\vec{p}\) lies on the same line as \(\vec{a}\).

Let us now check some other properties of the projection matrix \(\textbf{T}\):
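For instance (a sketch; the original may verify a different set of properties), a projection matrix is symmetric and idempotent, and this one has rank one:

a = vector(QQ, [1, 2])
T = a.outer_product(a) / a.dot_product(a)
T.is_symmetric()    # True: T^T == T
T^2 == T            # True: projecting twice changes nothing
T.rank()            # 1: T projects onto a line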