Part 6: Eigenvectors and Eigenvalues

The final part

Albert Ming
5 min read · Apr 12, 2022

Now that my presentation has concluded, I’d like to share a few remarks about my experience, as well as go over the final topic in my journey diving into linear algebra. I have to say, I enjoyed learning about this new branch of mathematics, especially knowing that it has uses in other disciplines such as computer science. Linear algebra is a special kind of math that made me think in a more visual way, and it gave me an even deeper insight into what college math will look like. From calculus last year, to a little bit of multivariable calculus this year, to linear algebra, I’ve gained a better appreciation for higher-level math, and I’m excited for whatever encounters with numbers await me next year! I’m fully expecting the work to be difficult. Just as my first experience with calculus opened me up to a new form of math, my first experience with linear algebra back in late 2021/early 2022 opened me up to more ways of thinking about math. Because linear algebra is so useful, I plan to keep learning more about it throughout the summer.

Eigenvectors and Eigenvalues

Now for the main part of the article. I’ll be going over the final topic from my presentation: eigenvectors and eigenvalues. I found this to be the most mechanically challenging topic in linear algebra that I got to experience.

Definition: Remember linear transformations and how they can change vectors? Well, now imagine a vector that does not get veered off its original line, even after a linear transformation is applied to it. These vectors are eigenvectors. While an eigenvector stays on its original line, its length can still change, and the factor by which it is scaled is its eigenvalue.

Now we’ll get into the mechanics. From what I’ve seen, there are a few ways to tackle these eigenvector problems. One method makes more sense to me intuitively, while the other uses a lot more fancy math. But before we get to all that, let’s first set up some of the basic premises.

This is our initial formula. A represents our matrix (the linear transformation), v represents the eigenvector that we’re trying to solve for, and lambda represents the eigenvalue. If we think about it a little bit, this equation makes perfect sense: the left-hand side represents a linear transformation being applied to an eigenvector, and the right-hand side represents that same eigenvector simply being scaled. The fact that they’re equal tells us that the eigenvector doesn’t change direction under the transformation; it only gets scaled.
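Written out in symbols, the formula is:

$$A\vec{v} = \lambda\vec{v}$$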

We can then manipulate the equation. Note that I represents the identity matrix, which we can think of as the matrix version of multiplying by 1: it only exists for square matrices, and it has 1s running down its diagonal with 0s everywhere else. Moving everything to one side and factoring gives (A − λI)v = 0, and for that equation to have a nonzero solution v, the matrix A − λI must squish space down into a lower dimension. Remember that the determinant represents how much area is scaled, and a transformation that squishes space scales area to 0, so we take the determinant of the non-eigenvector part and set it equal to 0. Now we can start with an example problem.
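Step by step, the manipulation looks like this:

$$A\vec{v} = \lambda\vec{v} \;\Rightarrow\; A\vec{v} - \lambda I\vec{v} = \vec{0} \;\Rightarrow\; (A - \lambda I)\vec{v} = \vec{0} \;\Rightarrow\; \det(A - \lambda I) = 0$$

For the worked example that follows, I’ll use the matrix below. It isn’t the only possible choice; it’s simply one whose numbers come out to an eigenvalue of 5 with eigenvector [1, 2], matching the results later in this post:

$$A = \begin{bmatrix} 1 & 2 \\ 4 & 3 \end{bmatrix}$$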

If we remember the determinant equation for a 2×2 matrix, we simply take the product of the main diagonal minus the product of the other diagonal, and then set that expression equal to 0. We can then solve for lambda, which gives us our eigenvalues.
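With the example matrix above, that works out to:

$$\det(A - \lambda I) = \det\begin{bmatrix} 1-\lambda & 2 \\ 4 & 3-\lambda \end{bmatrix} = (1-\lambda)(3-\lambda) - (2)(4) = \lambda^2 - 4\lambda - 5 = (\lambda - 5)(\lambda + 1) = 0$$

so the eigenvalues are λ = 5 and λ = −1.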

We now go back to our original equation at the very beginning. Let’s start with lambda = 5.

Using what we know about simple matrix multiplication and linear systems, we can now go back and solve for x and y, which will give us our eigenvector.
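Plugging λ = 5 into (A − λI)v = 0 with the example matrix gives the system:

$$(A - 5I)\vec{v} = \begin{bmatrix} -4 & 2 \\ 4 & -2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

Both rows encode the same equation, −4x + 2y = 0, so y = 2x. Choosing x = 1 gives y = 2.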

Follow the work above. We have come to the conclusion that an eigenvector with eigenvalue 5 is [1, 2].

Note, however, that there are technically an infinite number of eigenvectors with this eigenvalue: any nonzero scalar multiple of [1, 2] lies along the same line, so it also just gets scaled by a factor of 5.

Now let’s go over the fancier method. We’ve already gone over how to solve for lambda, so there’s no need to rewrite all of that. This method requires us to take the “null space” of lambda times the identity matrix minus our matrix, that is, of λI − A. Note that the null space of a matrix A consists of all vectors v such that Av = 0. We can get this null space by computing the RREF form of the matrix (convenient, as we covered this last time).
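Sticking with the example matrix and λ = 5:

$$\lambda I - A = \begin{bmatrix} 5 & 0 \\ 0 & 5 \end{bmatrix} - \begin{bmatrix} 1 & 2 \\ 4 & 3 \end{bmatrix} = \begin{bmatrix} 4 & -2 \\ -4 & 2 \end{bmatrix}$$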

Using row operations, we can compute the RREF form of this matrix, and then, by reading off the free variable, we can arrive at our conclusion.
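With the example numbers, the row reduction goes:

$$\begin{bmatrix} 4 & -2 \\ -4 & 2 \end{bmatrix} \rightarrow \begin{bmatrix} 4 & -2 \\ 0 & 0 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & -\frac{1}{2} \\ 0 & 0 \end{bmatrix}$$

The RREF tells us x − (1/2)y = 0, with y free. Choosing y = 2 gives x = 1, which recovers the same eigenvector [1, 2] as the first method. As a sanity check on the arithmetic, here is a short NumPy snippet; it uses the same example matrix, which is my own illustrative choice rather than a matrix pulled from the presentation:

```python
import numpy as np

# Example matrix from this walkthrough (an illustrative choice).
A = np.array([[1.0, 2.0],
              [4.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are
# the corresponding eigenvectors, normalized to unit length.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # expect 5 and -1, in whatever order NumPy picks

# Take the eigenvector paired with eigenvalue 5 and rescale it so the
# first entry is 1; it matches the hand-computed [1, 2].
v = eigenvectors[:, np.argmax(eigenvalues)]
print(v / v[0])  # [1. 2.]
```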

Final Thoughts

So this concludes the final topic in my presentation, which I gave around two weeks ago. I’ve had time to reflect on what went well and what didn’t, both in my learning and in my fluency while presenting. I think this experience was extremely beneficial to my math mind, as I was able to learn the basics of linear algebra by myself. Additionally, presenting my work allowed me to explain my thoughts in the classroom and elaborate on the questions my classmates posed to me. I hope this prepares me for college math next year!
