A linear combination is an expression built using only the operations of addition and multiplication. In the case of vectors, we use vector addition and scalar multiplication. Linear combinations are of interest because they can expose information about vectors, specifically about subsets and subspaces of vectors.
Theorem: If $\{v_1, v_2, \dots, v_k\}$ is a set of vectors in $\mathbb{R}^n$ and S is the set of all possible linear combinations of these vectors, $S = \{t_1 v_1 + t_2 v_2 + \dots + t_k v_k : t_1, \dots, t_k \in \mathbb{R}\}$, then S is a subspace of $\mathbb{R}^n$.
We are saying here that if we take any subset of vectors from $\mathbb{R}^n$, the set of linear combinations of that subset is a subspace of $\mathbb{R}^n$. Let’s take a look at why this is true.
Proof: To prove that this is true, we need to show that the three properties of subspaces hold for the set of linear combinations we are constructing, S.
Given that these vectors are taken from $\mathbb{R}^n$, every linear combination of them also lies in $\mathbb{R}^n$. Moreover, the sum of two linear combinations of $v_1, \dots, v_k$ is again a linear combination of $v_1, \dots, v_k$, and so is any scalar multiple of one, so S is closed under vector addition and scalar multiplication. Setting every $t_i = 0$ gives the zero vector, so it must be contained in S. Therefore, we can conclude that S is a subspace of $\mathbb{R}^n$.
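To spell out the closure step of the proof, take two elements of S and a scalar $c$; in both cases the result is again a linear combination of $v_1, \dots, v_k$, and so is again an element of S:

$$(a_1 v_1 + \cdots + a_k v_k) + (b_1 v_1 + \cdots + b_k v_k) = (a_1 + b_1) v_1 + \cdots + (a_k + b_k) v_k$$

$$c \, (a_1 v_1 + \cdots + a_k v_k) = (c\,a_1) v_1 + \cdots + (c\,a_k) v_k$$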
To introduce some new terminology, the subset $\{v_1, \dots, v_k\}$ is a special subset called a spanning set. It is important because, given such a subset, we can use it to construct any other vector in the set it spans using only vector addition and scalar multiplication.
As an example, consider the set $\mathbb{R}^2$. We can construct any vector in $\mathbb{R}^2$ using just two vectors that are in $\mathbb{R}^2$: the vectors $(1, 0)$ and $(0, 1)$. We call these vectors a basis of $\mathbb{R}^2$, and using just these two vectors, you can build any other vector in $\mathbb{R}^2$. This is particularly useful if we ever want to do translations, scaling, or rotations. A basis gives us a place where we can always start to construct any other vector.
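As a quick sketch of this idea (using NumPy, with the standard basis of $\mathbb{R}^2$ assumed as above), any vector $(x, y)$ is the linear combination $x \cdot (1, 0) + y \cdot (0, 1)$:

```python
import numpy as np

# Standard basis of R^2
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# Any vector (x, y) is a linear combination of the basis vectors
x, y = 3.0, -2.0
v = x * e1 + y * e2
print(v)  # [ 3. -2.]
```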
Often, we may also want to use linear combinations to determine whether a set of vectors are related to each other or not. The idea of linear independence lets us detect situations like this. If the only solution to $t_1 v_1 + t_2 v_2 + \dots + t_k v_k = 0$ is $t_1 = t_2 = \dots = t_k = 0$, then we say that $v_1, \dots, v_k$ are linearly independent. If there exists a non-zero solution, then we say that the vectors are linearly dependent.
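One way to test this in practice (a sketch using NumPy; the helper name and example vectors are mine, not from the text) is to stack the vectors as columns of a matrix: the vectors are linearly independent exactly when the matrix's rank equals the number of vectors.

```python
import numpy as np

def linearly_independent(vectors):
    """Return True if the given vectors are linearly independent."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

# (1,0,0) and (0,1,0) are independent; adding (1,1,0) = (1,0,0) + (0,1,0)
# makes the set dependent.
print(linearly_independent([np.array([1, 0, 0]), np.array([0, 1, 0])]))
print(linearly_independent([np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([1, 1, 0])]))
```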
Revisiting our idea of a basis of a set of vectors will help us understand why we care about this idea. If we want to find a basis for a set of vectors, we want to find the smallest one possible. Otherwise, we will have an additional vector that is not useful, since it can already be constructed from the others. Being able to determine whether vectors are linearly dependent allows us to construct basis subsets that are as small as possible.
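This suggests a simple procedure, sketched below with NumPy (the greedy approach and helper name are my own illustration, not from the text): walk through the vectors and keep only those that raise the rank, discarding any vector that is already a linear combination of the ones kept so far.

```python
import numpy as np

def extract_basis(vectors):
    """Keep only vectors that are not linear combinations of those already kept."""
    basis = []
    for v in vectors:
        candidate = basis + [v]
        # The candidate set is independent iff its rank equals its size
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            basis.append(v)
    return basis

# (1,1,0) = (1,0,0) + (0,1,0), so it is discarded
vs = [np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([1, 1, 0])]
print(len(extract_basis(vs)))  # 2
```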
Let’s take a look at an example to show how we can determine whether the vectors in a set are linearly independent of each other.
Example: Show that a given set of three vectors $\{v_1, v_2, v_3\}$ is linearly dependent.
To show that this set is linearly dependent, we need to show that $t_1 v_1 + t_2 v_2 + t_3 v_3 = 0$ has a solution in which not all of $t_1, t_2, t_3$ are zero.
The easiest way to do this is to construct a system of linear equations, one for each component of the vectors, and solve it.
If we can determine values of $t_1, t_2, t_3$ that satisfy this system, we can conclude that the set of vectors is linearly dependent. You can solve this system however you’d like; I will show one example of a solution.
First, I will rearrange equation 2 to express one of the unknowns in terms of the others.
Next, I’ll substitute that expression into equation 1 to obtain an expression for a second unknown.
From here, I can substitute both expressions into equation 3 to get a value for the remaining unknown.
This gives us a concrete value for that unknown. We can now substitute it back into the other equations to solve the rest of the system, giving values for the other two unknowns as well.
Therefore, there is a non-zero solution for this set of vectors, meaning it is linearly dependent. If you tried solving this problem on your own, you may have gotten different numbers than I did. There are actually infinitely many correct solutions to this system, so as long as your combination of the three vectors adds up to the zero vector (with not all coefficients zero), you have a correct solution.
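To see the whole procedure end to end, here is a sketch with NumPy on three vectors I have chosen for illustration (they are not the vectors from the example above): $v_1 = (1, 0, 1)$, $v_2 = (0, 1, 1)$, $v_3 = (1, 1, 2)$. Since $v_3 = v_1 + v_2$, the coefficients $t_1 = 1$, $t_2 = 1$, $t_3 = -1$ give a non-zero solution.

```python
import numpy as np

# Hypothetical example vectors (not from the text): v3 = v1 + v2
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 2.0])

# A non-zero solution to t1*v1 + t2*v2 + t3*v3 = 0
t1, t2, t3 = 1.0, 1.0, -1.0
print(t1 * v1 + t2 * v2 + t3 * v3)  # [0. 0. 0.]

# Any scalar multiple of (t1, t2, t3) also works, which is why different
# people can find different (but equally valid) non-zero solutions.
print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))  # 2 (< 3, so dependent)
```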