We discussed that scalar multiplication applies to matrices by multiplying the scalar by each component in the matrix. In order to compute transformations, we also need to be able to multiply two matrices together. This operation is slightly more complex than the others we have seen so far, but after a few examples, it will seem more natural.
Matrix multiplication involves taking the dot product of the rows of one matrix with the columns of another. Suppose we wanted to multiply two matrices, A and B. Let's define A = [[1, 2], [3, 4]] and B = [[5, 2], [2, 1]]. To multiply A by B, we are going to take the dot product of each row in matrix A with each column in matrix B.
To start, let's take the first row in A, (1, 2), and dot it with the first column in B, (5, 2). Doing this gives us (1)(5) + (2)(2) = 5 + 4 = 9. This means that the first component in our answer is 9. We repeat this process with the remaining row-column pairs:

(1)(2) + (2)(1) = 2 + 2 = 4
(3)(5) + (4)(2) = 15 + 8 = 23
(3)(2) + (4)(1) = 6 + 4 = 10
This tells us that AB = [[9, 4], [23, 10]]. To discuss this more formally, let's call the resulting matrix R. The value of R₁₁ is the dot product of the first row of A with the first column of B. The value of R₁₂ is the dot product of the first row of A with the second column of B. This pattern continues until every row has been dotted with every column: in general, Rᵢⱼ is the dot product of row i of A with column j of B.
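The row-by-column procedure can be sketched in code. This is a minimal illustration, not a library implementation; the function name mat_mul and the example matrices are illustrative choices.

```python
# A sketch of matrix multiplication via row-column dot products.

def mat_mul(A, B):
    """Multiply A (m x n) by B (n x p) by dotting rows of A with columns of B."""
    m, n, p = len(A), len(B), len(B[0])
    R = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            # R[i][j] is the dot product of row i of A and column j of B.
            R[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
    return R

A = [[1, 2], [3, 4]]
B = [[5, 2], [2, 1]]
print(mat_mul(A, B))  # [[9, 4], [23, 10]]
```

The triple loop mirrors the hand computation exactly: the outer two loops pick a row and a column, and the inner sum is the dot product.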
One thing to keep in mind is that this operation only works if the number of columns in the first matrix is equal to the number of rows in the second matrix. If this isn't true, we wouldn't be able to take the dot products, because the rows and columns would have mismatched numbers of elements. Knowing this, we can explore some additional properties of matrix multiplication.
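The shape requirement can be checked before multiplying. The helper name can_multiply and the example matrices below are illustrative, not from any library.

```python
# A (m x n) times B (n x p) is defined only when A has as many
# columns as B has rows.

def can_multiply(A, B):
    return len(A[0]) == len(B)  # columns of A vs. rows of B

A = [[1, 2, 3], [4, 5, 6]]    # 2 x 3
B = [[1, 2], [3, 4], [5, 6]]  # 3 x 2
print(can_multiply(A, B))  # True: A has 3 columns, B has 3 rows
print(can_multiply(A, A))  # False: 3 columns vs. 2 rows
```

Note that the result of a defined product A (m × n) times B (n × p) is an m × p matrix: one dot product per row-column pair.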
Theorem: Suppose that A, B, and C are matrices that can be multiplied together, and t is a real number scalar. The following statements are true.
- A(B+C) = AB + AC
- (A+B)C = AC + BC
- t(AB) = (tA)B = A(tB)
- A(BC) = (AB)C
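These four identities can be spot-checked on concrete matrices. The sketch below uses small hand-rolled helpers (mat_mul, mat_add, scale are illustrative names) and arbitrary 2×2 examples; passing checks on examples do not prove the theorem, but they make it concrete.

```python
# Spot-check the distributive, scalar, and associative properties
# of matrix multiplication on example 2x2 matrices.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def scale(t, X):
    return [[t * x for x in row] for row in X]

A = [[1, 2], [3, 4]]
B = [[5, 2], [2, 1]]
C = [[0, 1], [1, 0]]
t = 3

assert mat_mul(A, mat_add(B, C)) == mat_add(mat_mul(A, B), mat_mul(A, C))  # A(B+C) = AB + AC
assert mat_mul(mat_add(A, B), C) == mat_add(mat_mul(A, C), mat_mul(B, C))  # (A+B)C = AC + BC
assert scale(t, mat_mul(A, B)) == mat_mul(scale(t, A), B) == mat_mul(A, scale(t, B))  # t(AB) = (tA)B = A(tB)
assert mat_mul(A, mat_mul(B, C)) == mat_mul(mat_mul(A, B), C)  # A(BC) = (AB)C
print("all four properties hold for these matrices")
```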
One important note about matrix multiplication is that it is not commutative: if we reverse the order of the matrices, the result is not guaranteed to be the same. In fact, BA may have different dimensions than AB, or may not be defined at all.
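A small example makes the non-commutativity concrete. The matrices below are illustrative choices; B is a permutation matrix, which swaps the columns of A when multiplied on the right and the rows of A when multiplied on the left.

```python
# AB and BA generally differ, even when both products are defined.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]] -- columns of A swapped
print(mat_mul(B, A))  # [[3, 4], [1, 2]] -- rows of A swapped
```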