# Matrix and Linear Mappings

With matrix multiplication, we can extend the ways that we manipulate matrices. One method of doing this is matrix mappings and linear mappings. The idea of a matrix mapping is a function that takes a vector $\vec{x} \in \mathbb{R}^n$ as input and outputs a vector in $\mathbb{R}^m$. For an $m \times n$ matrix $A$, we traditionally define the matrix mapping as $f_A(\vec{x}) = A \vec{x}$.

**Example:** Let $A = \left[\begin{array}{cc} 2 & 3 \\ -1 & 4 \\ 0 & 1 \end{array}\right]$. Find $f_A(1,2)$ and $f_A(-1,4)$.

If $A = \left[\begin{array}{cc} 2 & 3 \\ -1 & 4 \\ 0 & 1 \end{array}\right]$, then $f_A(\vec{x}) = \left[\begin{array}{cc} 2 & 3 \\ -1 & 4 \\ 0 & 1 \end{array}\right] \vec{x}$

From here, we just need to do matrix multiplication with our vector inputs, and we have our answer.

$f_A(1,2) = \left[\begin{array}{cc} 2 & 3 \\ -1 & 4 \\ 0 & 1 \end{array}\right] \left[\begin{array}{cc} 1 \\ 2 \end{array}\right] = \left[\begin{array}{cc} 8 \\ 7 \\ 2 \end{array}\right]$

$f_A(-1,4) = \left[\begin{array}{cc} 2 & 3 \\ -1 & 4 \\ 0 & 1 \end{array}\right] \left[\begin{array}{cc} -1 \\ 4 \end{array}\right] = \left[\begin{array}{cc} 10 \\ 17 \\ 4 \end{array}\right]$
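The two computations above can be reproduced numerically. Below is a minimal sketch using NumPy; the function name `f_A` simply mirrors the notation in the text.

```python
import numpy as np

# The matrix A from the example above.
A = np.array([[2, 3],
              [-1, 4],
              [0, 1]])

def f_A(x):
    """The matrix mapping f_A(x) = A x."""
    return A @ x

print(f_A(np.array([1, 2])))   # -> [8 7 2]
print(f_A(np.array([-1, 4])))  # -> [10 17  4]
```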

The reason we care about matrix mappings is that many applications of matrices involve matrix–vector products. Any property we establish about matrix mappings therefore carries over to every concept built on them. One such concept that we will see is transformations, which we can use to manipulate objects in space.

**Theorem:** Let $A$ be an $m \times n$ matrix, let $\vec{x}, \vec{y} \in \mathbb{R}^n$, and let $t \in \mathbb{R}$. The following statements are true:

- $f_A(\vec{x} + \vec{y}) = f_A(\vec{x}) + f_A(\vec{y})$
- $f_A(t \vec{x}) = t f_A(\vec{x})$

If a function satisfies the first property, we say that it preserves addition. Similarly, if a function satisfies the second property, we say that it preserves scalar multiplication. If a function has both properties, we call it a linear mapping, since it preserves linear combinations. This means that the following statement holds:

$f_A(t_1 \vec{x_1} + t_2 \vec{x_2} + \dots + t_k \vec{x_k}) = t_1 f_A(\vec{x_1}) + t_2 f_A(\vec{x_2}) + \dots + t_k f_A(\vec{x_k})$
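We can spot-check that a matrix mapping preserves linear combinations with a quick numerical experiment. This is a sketch, not a proof: it tests the identity for one randomly generated matrix and pair of vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 2))  # a random 3x2 matrix

def f_A(x):
    # The matrix mapping f_A(x) = A x.
    return A @ x

x1 = rng.standard_normal(2)
x2 = rng.standard_normal(2)
t1, t2 = 2.0, -3.0

# f_A(t1 x1 + t2 x2) should equal t1 f_A(x1) + t2 f_A(x2).
lhs = f_A(t1 * x1 + t2 * x2)
rhs = t1 * f_A(x1) + t2 * f_A(x2)
assert np.allclose(lhs, rhs)
```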

**Example:** Show that the mapping $f(x_1,x_2) = \left[\begin{array}{cc} 2x_1+x_2 \\ -3x_1 + 5x_2 \end{array}\right]$ is a linear mapping.

To prove that this mapping is linear, we need to show that the function preserves addition and scalar multiplication.

$f(\vec{y} + \vec{z}) = f(\left[\begin{array}{cc} y_1 + z_1 \\ y_2 + z_2 \end{array}\right]) = \left[\begin{array}{cc} 2(y_1 + z_1) + (y_2+z_2) \\ -3(y_1 + z_1) + 5(y_2 + z_2) \end{array}\right]$

$= \left[\begin{array}{cc} 2y_1 + y_2 \\ -3y_1 + 5y_2 \end{array}\right] + \left[\begin{array}{cc} 2z_1 + z_2 \\ -3z_1 + 5z_2 \end{array}\right] = f(\vec{y}) + f(\vec{z})$

Therefore, we can conclude that the function preserves addition.

$f(t \vec{y}) = f(ty_1, ty_2) = \left[\begin{array}{cc} 2(ty_1) + ty_2 \\ -3(ty_1) + 5(ty_2) \end{array}\right] = t \left[\begin{array}{cc} 2y_1 + y_2 \\ -3y_1 + 5y_2 \end{array}\right] = tf(\vec{y})$

Therefore, the function also preserves scalar multiplication, and we can conclude that $f$ is a linear mapping.
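The same two properties can be verified symbolically. Below is a sketch using the SymPy library, with the mapping from the example written component-wise; the symbol names mirror the proof above.

```python
import sympy as sp

y1, y2, z1, z2, t = sp.symbols('y1 y2 z1 z2 t')

def f(x1, x2):
    # The mapping from the example: f(x1, x2) = (2x1 + x2, -3x1 + 5x2).
    return sp.Matrix([2*x1 + x2, -3*x1 + 5*x2])

# Preserves addition: f(y + z) - (f(y) + f(z)) expands to the zero vector.
assert (f(y1 + z1, y2 + z2) - (f(y1, y2) + f(z1, z2))).expand() == sp.zeros(2, 1)

# Preserves scalar multiplication: f(t y) - t f(y) expands to the zero vector.
assert (f(t*y1, t*y2) - t*f(y1, y2)).expand() == sp.zeros(2, 1)
```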

There are a number of operations we can perform with linear mappings, such as compositions and linear combinations of mappings. If $L$ and $M$ are linear mappings, then $L + M$ and $tL$ are also linear mappings. In addition, the composition $L \circ M$ is a linear mapping.

Since these operations are linear mappings, we can simplify them in the following ways:

- $[L+M] = [L] + [M]$
- $[tL] = t[L]$
- $[L \circ M] = [L][M]$
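These three identities can be checked numerically by applying both sides to the same vector. The sketch below uses NumPy, with random matrices standing in for the standard matrices $[L]$ and $[M]$.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.integers(-3, 4, size=(2, 2))  # standard matrix [L]
M = rng.integers(-3, 4, size=(2, 2))  # standard matrix [M]
t = 5
x = rng.standard_normal(2)

# [L + M] = [L] + [M]: summing the mappings sums the matrices.
assert np.allclose(L @ x + M @ x, (L + M) @ x)

# [tL] = t[L]: scaling the mapping scales the matrix.
assert np.allclose(t * (L @ x), (t * L) @ x)

# [L o M] = [L][M]: composing the mappings multiplies the matrices.
assert np.allclose(L @ (M @ x), (L @ M) @ x)
```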

These identities give us tools for combining linear mappings and manipulating them, so that problems become as clean as possible to solve.