I feel like poking around with linear algebra right now but I’m too lazy to go to campus so I’m just going to post stuff here.
So it turns out that matrices actually encode a hell of a lot of information about the transformations they represent. And by this I mean the left-multiplication transformation $L_A$ we associate to a given matrix $A$ by putting $L_A(x) = Ax$, where $x$ is some column vector. (Here we assume $A$ has coefficients from some field $F$.)
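To make the left-multiplication picture concrete, here's a tiny sketch in plain Python (the `mat_vec` helper is just an illustration, not anything standard):

```python
# A minimal sketch of left-multiplication: T(x) = Ax.
def mat_vec(A, x):
    """(Ax)_i = sum_j A[i][j] * x[j] -- each output entry is a row dotted with x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[1, 2],
     [3, 4],
     [5, 6]]          # a 3x2 matrix, so T maps F^2 into F^3
x = [1, -1]
print(mat_vec(A, x))  # -> [-1, -1, -1]
```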
Post #1: Injectivity, surjectivity, etc.
In this post I’ll examine linear maps, their matrix representations and several key relationships between a few of their properties. I’m sure there is much more to be said, for example about invertibility and isomorphism, but I’ll play around with that stuff in another post.
In the following, $V$ and $W$ are finite-dimensional vector spaces with dimensions $n$ and $m$ respectively. Let's examine $A = [T]_\beta^\gamma$, the matrix representation of a linear transformation $T\colon V \to W$ with respect to a basis $\beta$ for $V$ and a basis $\gamma$ for $W$, and see what some of its properties imply. Note $T(0) = 0$ for any linear map. We have the following fact, which does not rely on finite-dimensionality:
Theorem. $T$ is injective if and only if the kernel of $T$ is $\{0\}$.
Proof. Suppose $\ker T = \{0\}$ and $T(x) = T(y)$. Then $T(x - y) = 0$ by linearity. Then we see immediately that $x - y \in \ker T$ and hence $x = y$, proving injectivity of $T$. The other direction is equally straightforward: if $T$ is injective and $T(x) = 0 = T(0)$, then $x = 0$.
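For a concrete failure of injectivity, here's a quick sketch (plain Python, hypothetical `mat_vec` helper): a matrix with a nonzero kernel vector necessarily sends two distinct inputs to the same output, exactly as the theorem predicts.

```python
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2],
     [2, 4]]                  # second row is twice the first
k = [2, -1]                   # nonzero kernel vector: Ak = 0
print(mat_vec(A, k))          # -> [0, 0]

u = [1, 1]
v = [3, 0]                    # v = u + k, so T(u) = T(v): not injective
print(mat_vec(A, u), mat_vec(A, v))  # both [3, 6]
```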
Noting that if $\operatorname{rank}(T) = \dim W$ then $T$ is surjective, we have:
Theorem. If $\dim V = \dim W$, then $T$ is injective if and only if it is surjective.
Proof. This is a direct corollary of the rank–nullity theorem, $\operatorname{rank}(T) + \operatorname{nullity}(T) = \dim V$, and the above.
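The square case is easy to poke at numerically. Below is a sketch of a rank computation by row reduction over the rationals (the `rank` helper is mine, written with `fractions.Fraction` to avoid floating-point issues): for a $2 \times 2$ matrix, full rank means nullity $0$ (injective) and image all of $F^2$ (surjective), while any rank deficit kills both at once.

```python
from fractions import Fraction

def rank(M):
    """Row-reduce a rational copy of M and count the pivots."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

print(rank([[1, 1], [0, 1]]))  # 2: injective AND surjective
print(rank([[1, 2], [2, 4]]))  # 1: neither injective nor surjective
```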
How does the linear independence of the matrix's columns affect all this?
Theorem. The columns of $A$ are linearly independent if and only if $\ker T = \{0\}$.
Proof. Suppose the columns of $A$ are linearly independent, and denote them by $a_1, \dots, a_n$ (each one of these is an $m \times 1$ column matrix). Then $Ax = x_1 a_1 + \cdots + x_n a_n$. If $Ax = 0$, independence gives $x_1 = \cdots = x_n = 0$. Therefore $x = 0$, proving $\ker T = \{0\}$. For the other direction, suppose $\ker T = \{0\}$ and, for the sake of contradiction, that $a_1, \dots, a_n$ are linearly dependent. Then the coefficients of a nontrivial dependence form a nonzero vector lying inside the kernel, a contradiction.
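The identity the proof leans on, $Ax = x_1 a_1 + \cdots + x_n a_n$, is worth seeing once by hand. A quick check in plain Python (the `mat_vec` helper is just for illustration):

```python
# Sketch: Ax really is the combination x_1*a_1 + ... + x_n*a_n of columns.
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2],
     [0, 1],
     [1, 0]]
x = [3, -2]
cols = list(zip(*A))           # a_1 = (1, 0, 1), a_2 = (2, 1, 0)
combo = [x[0] * a + x[1] * b for a, b in zip(cols[0], cols[1])]
print(mat_vec(A, x), combo)    # both [-1, -2, 3]
```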
Now, let’s consider whether the map sends linearly independent sets to linearly independent sets…
Theorem. $T(S)$ is linearly independent for any linearly independent $S \subseteq V$ if and only if $\ker T = \{0\}$.
Proof. First, suppose for contradiction that $\ker T = \{0\}$ and there exists a linearly independent set $S$ such that $T(S)$ is dependent. Then we obtain scalars $c_1, \dots, c_k$, not all zero, and vectors $v_1, \dots, v_k$ from $S$ so that $c_1 T(v_1) + \cdots + c_k T(v_k) = 0$. Then $T(c_1 v_1 + \cdots + c_k v_k) = 0$ by linearity, whereby $c_1 v_1 + \cdots + c_k v_k = 0$ due to the triviality of the kernel. Immediately it follows that $S$ is dependent, a contradiction. For the other direction, suppose $T$ preserves independence. We seek to show that $T$'s kernel is trivial. To see this, suppose there is nonzero $x \in \ker T$. Then $\{x\}$ is linearly independent, but $T(\{x\}) = \{0\}$, which is dependent, a contradiction. So $\ker T = \{0\}$.
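And here's the contrapositive in action, sketched in plain Python: a map with a nontrivial kernel (the same `A` from before, with `mat_vec` again just an illustrative helper) takes the independent set $\{e_1, e_2\}$ to a dependent pair of images.

```python
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2],
     [2, 4]]                    # kernel contains (2, -1): not injective
e1, e2 = [1, 0], [0, 1]         # a linearly independent set
print(mat_vec(A, e1))           # -> [1, 2]
print(mat_vec(A, e2))           # -> [2, 4] = 2 * [1, 2]: images are dependent
```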
After all this, we can conclude that the following are equivalent for a linear map $T$:

- Injectivity
- Trivial kernel
- Preservation of independence
- Independent columns in the matrix representation (choice of bases $\beta, \gamma$ does not matter)