Identity and Inverse Matrices

Suppose that we have a matrix A, and we wish to find a matrix I with the property that AI = A. We refer to I as a special type of matrix called the identity matrix. When we multiply any matrix by I, we get back the matrix we started with.

Let’s derive the identity matrix for a 2 by 2 matrix. First, let’s define an arbitrary matrix A = \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}. Now, let’s multiply it by an unknown matrix, and solve for the values that give us the matrix A back as the result.

\begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix} \begin{bmatrix} y_1 & y_2 \\ y_3 & y_4 \end{bmatrix} = \begin{bmatrix} x_1y_1 + x_2y_3 & x_1y_2 + x_2y_4 \\ x_3y_1 + x_4y_3 & x_3y_2 + x_4y_4 \end{bmatrix}

This gives us a system of equations to solve, which look like this:

x_1y_1 + x_2y_3 = x_1
x_1y_2 + x_2y_4 = x_2
x_3y_1 + x_4y_3 = x_3
x_3y_2 + x_4y_4 = x_4

Now, you could use some of the system-solving techniques we learned in previous sections to solve this; however, the solution can be found rather easily by inspection. For the first equation to hold for any choice of x_1 and x_2, we need y_1 = 1 and y_3 = 0. Likewise, for the last equation to hold, we need y_2 = 0 and y_4 = 1. From this, we can conclude that the identity matrix is: \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
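This is easy to verify numerically. Below is a minimal sketch in plain Python (the helper name matmul is our own), multiplying a sample 2 by 2 matrix by the identity and confirming that we get the original matrix back:

```python
def matmul(A, B):
    """Multiply two square matrices, each given as a list of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[3, 7],
     [2, 5]]
I = [[1, 0],
     [0, 1]]

print(matmul(A, I))  # [[3, 7], [2, 5]] -- the matrix we started with
```

Multiplying in the other order, matmul(I, A), returns the same matrix, since the identity works on both sides.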

We can derive higher-dimensional identity matrices in a similar way: we simply define a general matrix, multiply it by an unknown matrix, and solve for the correct values. Identity matrices are worth knowing because they give us the matrix analogue of multiplying by 1. In addition, we can use them to define an inverse operation for matrix multiplication.
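The pattern generalizes: the n by n identity has 1s on the diagonal and 0s everywhere else. A short sketch of this construction (the function names here are our own, not from the text):

```python
def identity(n):
    """Return the n x n identity matrix: 1s on the diagonal, 0s elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1, 2],
     [1, 2, 2],
     [2, 4, 3]]

assert matmul(A, identity(3)) == A
assert matmul(identity(3), A) == A
print(identity(3))  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```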

Suppose we have a matrix A and a matrix B. If AB = I, then B is the inverse of the matrix A, written A^{-1}. We also call A an invertible matrix, since it has an inverse. Inverse matrices are a powerful tool in linear algebra, and by combining the concept of inverses with results from previous sections, we will be able to draw many conclusions.

Theorem: Let A be a square matrix, and suppose that BA = AB = I and CA = AC = I. Then B = C.

We know that B = BI, since multiplying by the identity gives back the matrix we started with. From here, we can substitute AC for I, since AC = I, giving us B = B(AC). Regrouping the brackets, we get B(AC) = (BA)C = IC = C, therefore B = C.

Theorem: Suppose that A and B are n x n matrices such that AB = I and BA = I. Then B = A^{-1}, and both A and B have rank n.

Suppose that B did not have rank n. Then the system Bx = 0 would have a non-trivial solution x ≠ 0. But then x = Ix = (AB)x = A(Bx) = A0 = 0, a contradiction. The same argument, using BA = I, shows that A must also have rank n. So both A and B must have rank n for B to be the inverse matrix.
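To see the rank condition in action, here is a small illustration with a rank-deficient matrix of our own choosing: it sends a non-zero vector to the zero vector, which rules out any inverse.

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# B has rank 1: its second row is twice its first.
B = [[1, 2],
     [2, 4]]
x = [2, -1]           # a non-trivial solution of Bx = 0

print(matvec(B, x))   # [0, 0]
# If some A satisfied AB = I, then x = Ix = A(Bx) = A0 = 0,
# contradicting x != 0.  So B cannot be invertible.
```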

Theorem: Suppose that A and B are invertible matrices and that t is some non-zero real number. The following statements are true.

  1. (tA)^{-1} = \frac{1}{t}A^{-1}
  2. (AB)^{-1} = B^{-1}A^{-1}
  3. (A^T)^{-1} = (A^{-1})^T
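We can spot-check all three identities on small matrices, using the closed-form 2 by 2 inverse \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix} and exact rational arithmetic (the helper names below are our own):

```python
from fractions import Fraction

def inv2(M):
    """Closed-form inverse of a 2x2 matrix, using exact rationals."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 5]]
B = [[2, 1], [1, 1]]
t = Fraction(3)

# 1. (tA)^-1 == (1/t) A^-1
tA = [[t * x for x in row] for row in A]
assert inv2(tA) == [[x / t for x in row] for row in inv2(A)]

# 2. (AB)^-1 == B^-1 A^-1
assert inv2(matmul(A, B)) == matmul(inv2(B), inv2(A))

# 3. (A^T)^-1 == (A^-1)^T
assert inv2(transpose(A)) == transpose(inv2(A))
print("all three identities hold")
```

These checks pass for any invertible 2 by 2 matrices, not just the sample values chosen here; the assertions are a numerical confirmation rather than a proof.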

These theorems give us a good idea of how we can manipulate inverses, but we still haven’t looked at how to find an inverse in the first place. To find the inverse of a matrix A, we need to solve the equation AB = I. This looks much like our derivation of the identity matrix earlier; however, the right-hand side is now the identity matrix instead of the original matrix A. An effective way to solve such a system is row reduction to reduced row echelon form. We set up an augmented matrix with A on the left-hand side and I on the right-hand side, then reduce A until it equals I; the resulting right-hand side will be the inverse matrix. Let’s look at an example to see how this can be done.
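The procedure just described can be sketched in code. Below is a minimal Gauss-Jordan inversion over exact rationals, assuming the input matrix is invertible (the function name is our own):

```python
from fractions import Fraction

def invert(A):
    """Invert a square matrix by row-reducing [A | I] to [I | A^-1]."""
    n = len(A)
    # Build the augmented matrix [A | I] with exact rational entries.
    M = [[Fraction(x) for x in row] + [Fraction(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a row with a non-zero pivot in this column and swap it up.
        # (Assumes A is invertible, so a pivot always exists.)
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        piv = M[col][col]
        M[col] = [x / piv for x in M[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # The right-hand half of the augmented matrix is now the inverse.
    return [row[n:] for row in M]

inv = invert([[2, 1],
              [1, 1]])
print([[str(x) for x in row] for row in inv])  # [['1', '-1'], ['-1', '2']]
```

Exact rationals are used so that the row operations introduce no floating-point error; a production implementation would also detect singular matrices instead of assuming invertibility.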

Example: Find the inverse of \begin{bmatrix} 1 & 1 & 2 \\ 1 & 2 & 2 \\ 2 & 4 & 3 \end{bmatrix}

First, we are going to create an augmented matrix, with our matrix on the left-hand side, and the identity on the right-hand side.

\begin{bmatrix} 1 & 1 & 2 & | & 1 & 0 & 0 \\ 1 & 2 & 2 & | & 0 & 1 & 0 \\ 2 & 4 & 3 & | & 0 & 0 & 1 \end{bmatrix}

From here, we are going to apply row operations until the left-hand side is equal to the identity. Once this is done, we will have the identity on the left-hand side, and the inverse matrix on the right-hand side.

We start by subtracting the second row from the first, storing the result in row 2.

\begin{bmatrix} 1 & 1 & 2 & | & 1 & 0 & 0 \\ 0 & -1 & 0 & | & 1 & -1 & 0 \\ 2 & 4 & 3 & | & 0 & 0 & 1 \end{bmatrix}

Next, we multiply the first row by 2 and subtract row 3 from it, storing the result in row 3.

\begin{bmatrix} 1 & 1 & 2 & | & 1 & 0 & 0 \\ 0 & -1 & 0 & | & 1 & -1 & 0 \\ 0 & -2 & 1 & | & 2 & 0 & -1 \end{bmatrix}

We multiply the second row by 2 and subtract row 3 from it, storing the result in row 3.

\begin{bmatrix} 1 & 1 & 2 & | & 1 & 0 & 0 \\ 0 & -1 & 0 & | & 1 & -1 & 0 \\ 0 & 0 & -1 & | & 0 & -2 & 1 \end{bmatrix}

Multiply the second and third rows by -1.

\begin{bmatrix} 1 & 1 & 2 & | & 1 & 0 & 0 \\ 0 & 1 & 0 & | & -1 & 1 & 0 \\ 0 & 0 & 1 & | & 0 & 2 & -1 \end{bmatrix}

Subtract the second row from the first row.

\begin{bmatrix} 1 & 0 & 2 & | & 2 & -1 & 0 \\ 0 & 1 & 0 & | & -1 & 1 & 0 \\ 0 & 0 & 1 & | & 0 & 2 & -1 \end{bmatrix}

Subtract 2 times the third row from the first row.

\begin{bmatrix} 1 & 0 & 0 & | & 2 & -5 & 2 \\ 0 & 1 & 0 & | & -1 & 1 & 0 \\ 0 & 0 & 1 & | & 0 & 2 & -1 \end{bmatrix}

We have now finished reducing the left-hand side, giving us the inverse matrix, \begin{bmatrix} 2 & -5 & 2 \\ -1 & 1 & 0 \\ 0 & 2 & -1 \end{bmatrix}, on the right-hand side.
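A good habit after any row reduction is to multiply the original matrix by the result and confirm we get the identity back; this catches arithmetic slips immediately. A minimal check in plain Python:

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1, 2],
     [1, 2, 2],
     [2, 4, 3]]
A_inv = [[ 2, -5,  2],
         [-1,  1,  0],
         [ 0,  2, -1]]

print(matmul(A, A_inv))  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```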
