Today, I’m going to talk about linear operators on vector spaces. To start, we’ll let $V$ be a finite-dimensional (say dimension $n$) vector space over a field $F$, and consider a linear operator $T : V \to V$. We will also write $T \in \mathcal{L}(V)$, since $\mathcal{L}(V)$ is the $F$-algebra of linear operators on $V$.

The first thing I’ll talk about is invariant subspaces. A subspace $W \subseteq V$ is called *invariant* or (more specifically) $T$-*invariant* if $T(W) \subseteq W$. Note that certainly $\{0\}$ and $V$ are both invariant subspaces (easy to verify).

We know that if we fix an ordered basis for $V$ then our operator has an associated matrix with respect to this choice. What is interesting is that if $W$ is $T$-invariant of dimension $k$, we can choose a basis whose first $k$ elements make up a basis for $W$. In this manner, using the $T$-invariance, we actually obtain a matrix for $T$ of the form

$$\begin{pmatrix} A & B \\ 0 & C \end{pmatrix}$$

where, as you might have guessed, $A$ is a $k \times k$ square matrix, and $0$ is the $(n-k) \times k$ zero matrix. In other words, the last $n-k$ rows of the first $k$ columns are zero, precisely because $T$ transforms those basis vectors lying in $W$ to vectors that again lie within $W$.
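To make the block shape concrete, here is a small numerical sketch (the matrix is my own example, not from the discussion above): an operator on $\mathbb{R}^3$ for which $W = \operatorname{span}(e_1, e_2)$ is invariant, written in a basis whose first two vectors span $W$.

```python
import numpy as np

# Columns are the images of the basis vectors:
# e1 -> e1 + e2 and e2 -> 2*e2 both land back in W; e3 -> e1 + e3 may leave span(e3).
M = np.array([[1.0, 0.0, 1.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

k = 2  # dim W
# Block-triangular shape: the lower-left (n-k) x k block vanishes.
assert np.allclose(M[k:, :k], 0)
```

The assertion checks exactly the claim in the text: the last $n-k$ rows of the first $k$ columns are zero.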

We say that $\lambda \in F$ is an eigenvalue of $T$ if $T(v) = \lambda v$ for some nonzero vector $v \in V$. In this case, $\lambda$ is called the eigenvalue associated to the eigenvector $v$. So in other words, an eigenvector is a vector which is simply rescaled by the transformation $T$! This is a very nice property, and so it is in our interest to analyze this phenomenon.
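As a quick numerical illustration (the matrix here is my own toy example): the vector $(1,1)$ is simply rescaled by a factor of $3$.

```python
import numpy as np

# T maps (1, 1) to (3, 3) = 3 * (1, 1), so (1, 1) is an eigenvector
# with associated eigenvalue 3.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
v = np.array([1.0, 1.0])

print(T @ v)  # [3. 3.]
assert np.allclose(T @ v, 3 * v)
```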

Firstly, we note that if $v$ is an eigenvector with eigenvalue $\lambda$, then any nonzero vector in $\operatorname{span}(v)$ is also an eigenvector with that same eigenvalue: $T(cv) = cT(v) = c\lambda v = \lambda(cv)$. So if two eigenvectors have different associated eigenvalues, they are linearly independent (if they were dependent, each would be a nonzero scalar multiple of the other, forcing them to share an eigenvalue); more generally, eigenvectors with pairwise distinct eigenvalues form a linearly independent set. As a corollary, we see already that $T$ can have at most $n$ distinct eigenvalues, for otherwise $V$ contains a linearly independent set of more than $n$ vectors, which is absurd.

Next I’ll introduce something called the characteristic polynomial of $T$. This is the polynomial given by $p_T(\lambda) = \det(\lambda I - A)$, where $I$ is the $n \times n$ identity matrix, and $A$ is the matrix of the operator $T$ under some basis for $V$. It turns out that the characteristic polynomial of the operator does not depend on which basis we choose. So instead of talking about matrices, we can just talk about the determinant of an operator. Hence we can just define $p_T(\lambda) = \det(\lambda\,\mathrm{id} - T)$.
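We can check the basis-independence numerically (the matrices below are my own example; the underlying reason is $\det(\lambda I - P^{-1}AP) = \det(P^{-1}(\lambda I - A)P) = \det(\lambda I - A)$):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # an invertible change-of-basis matrix
B = np.linalg.inv(P) @ A @ P        # the same operator, expressed in the new basis

# np.poly(M) returns the coefficients of det(lambda*I - M).
# Both give the coefficients of lambda^2 - 4*lambda + 3.
assert np.allclose(np.poly(A), np.poly(B))
```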

**Theorem**. The roots of $p_T$ are the eigenvalues of $T$.

**Proof**. If $\lambda_0$ is a root of $p_T$, then $\det(\lambda_0\,\mathrm{id} - T) = 0$, hence $\lambda_0\,\mathrm{id} - T$ is not an invertible linear operator. Invoking the rank-nullity theorem, we see that $\lambda_0\,\mathrm{id} - T$ has a nontrivial kernel, and hence there is a nonzero $v \in V$ so that $(\lambda_0\,\mathrm{id} - T)(v) = 0$, and hence $T(v) = \lambda_0 v$. The existence of such a $v$ means that, by definition, $\lambda_0$ is an eigenvalue. The proof that every eigenvalue is a root of $p_T$ is just a reversal of this argument. $\blacksquare$
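A numerical sanity check of the theorem (with a matrix of my own choosing): the roots of the characteristic polynomial agree with the eigenvalues computed directly.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])

roots = np.sort(np.roots(np.poly(A)))   # roots of det(lambda*I - A) = lambda^2 - 3*lambda + 2
eigs = np.sort(np.linalg.eigvals(A))    # eigenvalues computed directly

assert np.allclose(roots, eigs)         # both are {1, 2}
```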

We should be careful to note that while it seems we have given an “intrinsic” (basis-independent) definition of the characteristic polynomial, we are still relying on the fact that our linear operator *does* have a matrix representation. Thus, we are still relying on the finite-dimensionality of $V$. The reason why the characteristic polynomial is important is because its roots are precisely the *finite set of scalars* $\lambda$ such that $\lambda\,\mathrm{id} - T$ is not an invertible linear operator. We will denote this set $\sigma(T)$, the spectrum of the linear operator $T$.

In the infinite-dimensional case, we can no longer talk about determinants or characteristic polynomials, so the spectrum takes their place. Consider the vector space $C^\infty(\mathbb{R})$ of smooth real-valued functions defined on the real line. Note that if $D = \frac{d}{dx}$ is the differentiation operator, then for each $\lambda \in \mathbb{R}$, the function $f_\lambda(x) = e^{\lambda x}$ is an eigenfunction with eigenvalue $\lambda$, since $D(e^{\lambda x}) = \lambda e^{\lambda x}$. Since there are infinitely many choices for $\lambda$, the differentiation operator has infinitely many distinct eigenvalues. It follows that our vector space must surely be infinite-dimensional.
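This eigenfunction identity can be verified symbolically (a sketch assuming the SymPy library is available):

```python
import sympy as sp

x, lam = sp.symbols('x lam')
f = sp.exp(lam * x)                               # f_lam(x) = e^(lam * x)

# D f = lam * f: differentiating in x just rescales f by lam.
assert sp.simplify(sp.diff(f, x) - lam * f) == 0
```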

Interesting remark: If a linear operator on a vector space has a non-trivial kernel, then it is certainly not invertible. However, if a linear operator is non-invertible, its kernel might still in general be trivial (!!!); for instance, the right-shift operator $(a_1, a_2, \dots) \mapsto (0, a_1, a_2, \dots)$ on the space of sequences is injective but not surjective. But luckily, when $V$ is finite-dimensional, the rank-nullity theorem tells us that this cannot happen (i.e., for an operator on a finite-dimensional space, injectivity, surjectivity, and invertibility are all equivalent).
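Here is the finite-dimensional phenomenon in a $2 \times 2$ example of my own: a non-invertible matrix is forced by rank-nullity to have a nontrivial kernel.

```python
import numpy as np

# Rank 1 < 2, so by rank-nullity the kernel has dimension 2 - 1 = 1:
# non-invertible forces a nonzero kernel vector.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.linalg.matrix_rank(A) == 1

v = np.array([2.0, -1.0])    # a nonzero kernel vector
assert np.allclose(A @ v, 0)  # A is not injective, hence not invertible
```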