Matrix Operations

Now that we have introduced how matrices are used in linear systems, we can look at the operations and properties of them. In general, matrices pop up in all sorts of situations, including linear systems, so being able to work with them will prove beneficial.

The general form of a matrix is: A = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix}

We say that A is an m x n matrix, where A has m rows and n columns. Two matrices are equal if and only if they have the same size and every entry is the same. We often refer to an individual entry in A as A_{ij}, where i is the row of the entry and j is the column of the entry.
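
For example, if A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, then A_{12} = 2 (the entry in row 1, column 2) and A_{21} = 3 (the entry in row 2, column 1).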

When a matrix is n x n in size (the number of rows equals the number of columns), we call the matrix a square matrix. The diagonal of a matrix consists of the entries whose row and column numbers are equal, for example, A_{11}, A_{22}, A_{33}, \dots We call a matrix upper triangular if the entries beneath the diagonal are 0, and we call a matrix lower triangular if the entries above the diagonal are 0.

For example, \begin{bmatrix} 3 & 1 & 2 \\ 0 & 0 & 2 \\ 0 & 0 & 1 \end{bmatrix} is upper triangular. The diagonal of this matrix consists of the values at A_{11}, A_{22}, A_{33}. As you can see, every entry below these three elements is 0.

If a matrix is both upper triangular and lower triangular (meaning the only non-zero entries are on the diagonal), we call the matrix a diagonal matrix. The reason we categorize matrices this way is that certain theorems and techniques only apply to specific types of matrices. Because of this, it is valuable to recognize what type of matrix we are working with, so we know which rules we can apply to it.
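
For example, \begin{bmatrix} 5 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 7 \end{bmatrix} is a diagonal matrix: every entry below the diagonal and every entry above the diagonal is 0.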

We can define addition and scalar multiplication for matrices just as we did for vectors. If we want to add two matrices A and B, we first need to make sure A and B are the same size. If they are, we simply add the entries in the same row and column to each other: the entries of A + B are A_{11} + B_{11}, A_{12} + B_{12}, \dots
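
For example, \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 0 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 1+5 & 2+0 \\ 3+(-1) & 4+2 \end{bmatrix} = \begin{bmatrix} 6 & 2 \\ 2 & 6 \end{bmatrix}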

Similarly, scalar multiplication is applied to every entry. If we have a matrix A and want to compute tA, where t is a real scalar, we just need to compute tA_{11}, tA_{12}, \dots
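
For example, 3 \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 \cdot 1 & 3 \cdot 2 \\ 3 \cdot 3 & 3 \cdot 4 \end{bmatrix} = \begin{bmatrix} 3 & 6 \\ 9 & 12 \end{bmatrix}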

Having addition and scalar multiplication defined the same way as for vectors also allows us to easily extend our definition of linear independence. A set of matrices A_1, \dots, A_k is linearly independent if and only if the only solution to t_1A_1 + \dots + t_kA_k = 0 is t_1 = t_2 = \dots = t_k = 0. Just to note here, the zero matrix is the matrix where all entries are 0.
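
For example, the set \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} is linearly independent, since t_1 \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + t_2 \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} t_1 & t_2 \\ 0 & 0 \end{bmatrix}, which equals the zero matrix only when t_1 = t_2 = 0.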

To summarize matrix operations, we will look at a few properties that apply to matrix addition and scalar multiplication. Let A, B, C be m x n matrices, and let s, t be real scalars. The following statements are true:

  1. A + B is an m x n matrix (closed under addition)
  2. A + B = B + A (addition is commutative)
  3. (A + B) + C = A + (B + C) (addition is associative)
  4. There exists a matrix 0, such that A + 0 = A (zero matrix)
  5. For each matrix A, there exists a matrix (-A) with a property that A + (-A) = 0 (additive inverse)
  6. sA is an m x n matrix (closed under scalar multiplication)
  7. s(tA) = (st)A (scalar multiplication is associative)
  8. (s+t)A = sA + tA (distributive law)
  9. s(A + B) = sA + sB (distributive law)
  10. 1A = A (scalar multiplicative identity)

From this work, it becomes clear that vectors are just special cases of matrices: an n-dimensional vector is simply an n x 1 matrix. You will see that many of the initial operations and properties we discussed for vectors carry over comfortably to matrices. Matrices, however, give us a bit more power in terms of what they can represent. Moving forward, we will take a look at some further properties that are helpful for working with matrices.
