Now that we have introduced how matrices are used in linear systems, we can look at their operations and properties. Matrices pop up in all sorts of situations beyond linear systems, so being able to work with them will prove beneficial.
The general form of a matrix is:
$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{m1} & A_{m2} & \cdots & A_{mn} \end{bmatrix}$$
We say that A is an m x n matrix, where A has m rows and n columns. Two matrices are equal if and only if they have the same size and every entry is the same. We often refer to an individual entry of A as $A_{ij}$, where i is the row of the entry and j is the column of the entry.
When a matrix is n x n in size (the number of rows equals the number of columns), we call the matrix a square matrix. The diagonal of a square matrix consists of the entries whose row and column index are the same, that is, $A_{11}, A_{22}, \ldots, A_{nn}$. We call a matrix upper triangular if the entries beneath the diagonal are 0, and lower triangular if the entries above the diagonal are 0. For example,
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{bmatrix}$$
is upper triangular. The diagonal of this matrix consists of the values at $A_{11}$, $A_{22}$, and $A_{33}$. As you can see, everything below these three elements is 0.
If a matrix is both upper triangular and lower triangular (meaning that the only non-zero entries are on the diagonal), we call the matrix a diagonal matrix. The reason we categorize these matrices is that certain theorems and techniques apply only to specific types of matrices. Because of this, it is valuable to be able to identify what type of matrix we are working with, so we know what rules we can apply to it.
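As a quick illustration, these categories can be tested by comparing a matrix against its upper- and lower-triangular parts. This is a sketch using NumPy; the helper names here are our own, not from the text.

```python
import numpy as np

def is_upper_triangular(A):
    # A is upper triangular when every entry below the diagonal is 0,
    # i.e. zeroing out the strictly-lower part changes nothing.
    return np.allclose(A, np.triu(A))

def is_lower_triangular(A):
    # Same idea, with the strictly-upper part zeroed out.
    return np.allclose(A, np.tril(A))

def is_diagonal(A):
    # Diagonal = both upper triangular and lower triangular.
    return is_upper_triangular(A) and is_lower_triangular(A)

U = np.array([[1, 2, 3],
              [0, 4, 5],
              [0, 0, 6]])
D = np.diag([1, 4, 6])

print(is_upper_triangular(U))  # True
print(is_diagonal(U))          # False
print(is_diagonal(D))          # True
```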
We can define addition and scalar multiplication for matrices just as we did for vectors. To add two matrices A and B, we first need to make sure A and B are the same size. If they are, we simply add the entries in the same row and column to each other. So,
$$(A + B)_{ij} = A_{ij} + B_{ij}$$
Similarly, scalar multiplication is applied to every entry. If we have a matrix A and want to compute tA, where t is a real scalar, we just need to compute
$$(tA)_{ij} = t\,A_{ij}$$
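Both operations can be sketched in a few lines, assuming NumPy arrays as the matrix representation (the specific matrices here are just illustrative):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

assert A.shape == B.shape  # addition requires matching sizes
S = A + B                  # entrywise: (A + B)_ij = A_ij + B_ij
T = 3 * A                  # entrywise: (tA)_ij = t * A_ij

print(S)  # [[ 6  8]
          #  [10 12]]
print(T)  # [[ 3  6]
          #  [ 9 12]]
```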
Having addition and scalar multiplication defined the same way as for vectors also allows us to easily extend our definition of linear independence. A set of matrices $\{A_1, A_2, \ldots, A_k\}$ is linearly independent if and only if the only solution to $t_1 A_1 + t_2 A_2 + \cdots + t_k A_k = O$ is $t_1 = t_2 = \cdots = t_k = 0$. Just to note here, the zero matrix O is a matrix where all entries are 0.
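One way to test this condition in practice is to flatten each m x n matrix into a vector of length mn and check whether those vectors have full rank; this is a sketch assuming NumPy, and the helper name `are_independent` is our own.

```python
import numpy as np

def are_independent(matrices):
    # Stack each flattened matrix as a column; the only solution to
    # t1*A1 + ... + tk*Ak = O is the trivial one exactly when the
    # columns are linearly independent, i.e. the rank equals k.
    M = np.column_stack([A.flatten() for A in matrices])
    return np.linalg.matrix_rank(M) == len(matrices)

A1 = np.array([[1, 0], [0, 0]])
A2 = np.array([[0, 1], [0, 0]])
A3 = A1 + 2 * A2  # a linear combination, so dependence is forced

print(are_independent([A1, A2]))      # True
print(are_independent([A1, A2, A3]))  # False
```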
To summarize matrix operations, we will look at a few properties that apply to matrix addition and scalar multiplication. Let A, B, C be m x n matrices, and let s, t be real scalars. The following statements are true:
- A + B is an m x n matrix (closed under addition)
- A + B = B + A (addition is commutative)
- (A + B) + C = A + (B + C) (addition is associative)
- There exists a matrix 0, such that A + 0 = A (zero matrix)
- For each matrix A, there exists a matrix (-A) with a property that A + (-A) = 0 (additive inverse)
- sA is an m x n matrix (closed under scalar multiplication)
- s(tA) = (st)A (scalar multiplication is associative)
- (s+t)A = sA + tA (distributive law)
- s(A + B) = sA + sB (distributive law)
- 1A = A (scalar multiplicative identity)
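The properties above can be spot-checked numerically. This sketch (using NumPy, with arbitrary random matrices) is a numerical illustration on one example, not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((2, 3))
C = rng.standard_normal((2, 3))
s, t = 2.0, -1.5
Z = np.zeros((2, 3))  # the zero matrix

assert np.allclose(A + B, B + A)                # addition is commutative
assert np.allclose((A + B) + C, A + (B + C))    # addition is associative
assert np.allclose(A + Z, A)                    # zero matrix
assert np.allclose(A + (-A), Z)                 # additive inverse
assert np.allclose(s * (t * A), (s * t) * A)    # scalar mult. is associative
assert np.allclose((s + t) * A, s * A + t * A)  # distributive law
assert np.allclose(s * (A + B), s * A + s * B)  # distributive law
assert np.allclose(1 * A, A)                    # scalar multiplicative identity
print("all properties hold on this example")
```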
From this work, it becomes clear that vectors are just special cases of matrices. A lot of the initial operations and properties we discussed for vectors carry over naturally to matrices. Matrices, however, give us more power in terms of what they can represent. Moving forward, we will take a look at some further properties that are helpful for working with matrices.