## Linear algebra #1

I feel like poking around with linear algebra right now, but I’m too lazy to go to campus, so I’m just going to post stuff here.

So it turns out that matrices actually encode a hell of a lot of information about the transformations they represent. And by this I mean the left-multiplication transformation $\mathsf{L}_A : \mathsf{F}^n \to \mathsf{F}^m$ we associate to a given $m \times n$ matrix $A$ by putting $\mathsf{L}_A(x) = Ax$ where $x$ is some $n \times 1$ column vector. (Here we assume $A$ has coefficients from some field $\mathsf{F}$.)
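To make this concrete, here’s a quick numerical sketch of $\mathsf{L}_A$ using NumPy (the $2 \times 3$ matrix over $\mathbb{R}$ is made up for illustration), checking linearity on a couple of vectors:

```python
import numpy as np

# A made-up 2x3 matrix A over the reals (m = 2, n = 3),
# so L_A maps R^3 -> R^2.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

def L_A(x):
    """The left-multiplication transformation x |-> Ax."""
    return A @ x

x = np.array([1.0, 1.0, 1.0])
y = np.array([2.0, 0.0, -1.0])

# Linearity: L_A(3x + y) == 3 L_A(x) + L_A(y)
lhs = L_A(3 * x + y)
rhs = 3 * L_A(x) + L_A(y)
assert np.allclose(lhs, rhs)
```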

### Post #1: Injectivity, surjectivity, etc.
In this post I’ll examine linear maps, their matrix representations, and several key relationships among their properties. I’m sure there is much more to be said, for example about invertibility and isomorphism, but I’ll play around with that stuff in another post.

In the following, $U$ and $V$ are finite-dimensional vector spaces with dimensions $n$ and $m$ respectively. Let’s examine $[L]_\alpha^\beta$, the matrix representation of a linear transformation $L : U \to V$ with respect to a basis $\alpha = \{ a_1, \ldots, a_n \}$ for $U$ and a basis $\beta = \{ b_1, \ldots, b_m \}$ for $V$, and see what some of its properties imply. Note $L(\mathbf{0}) = \mathbf{0}$ for any linear map. We have the following fact, which does not rely on finite-dimensionality:

Theorem. $L$ is injective if and only if the kernel of $L$ is $\{ \mathbf{0} \}$.
Proof. Suppose $\ker(L) = \{ \mathbf{0} \}$ and $L(x) = L(y)$. Then $L(x-y) = \mathbf{0}$ by linearity, so $x-y = \mathbf{0}$ and hence $x = y$, proving injectivity of $L$. Conversely, if $L$ is injective and $x \in \ker(L)$, then $L(x) = \mathbf{0} = L(\mathbf{0})$, so $x = \mathbf{0}$.
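Here’s the contrapositive in action, as a toy NumPy example (matrix made up for illustration): a map with a nontrivial kernel fails to be injective, because adding a kernel vector to the input leaves the output unchanged.

```python
import numpy as np

# A matrix with a nontrivial kernel: the third column equals
# the sum of the first two, so L_A is not injective.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# k lies in ker(L_A): A @ k == 0
k = np.array([1.0, 1.0, -1.0])
assert np.allclose(A @ k, 0)

# Two distinct inputs with the same image: x and x + k.
x = np.array([2.0, 3.0, 5.0])
assert np.allclose(A @ x, A @ (x + k))   # injectivity fails
```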

Noting that if $\mathrm{rank}(L) = \mathrm{dim}(V)$ then $L$ is surjective (the range is a subspace of $V$ of dimension $\mathrm{rank}(L)$, so it must be all of $V$), we have:

Theorem. If $n=m$, then $L$ is injective if and only if it is surjective.
Proof. This is a direct corollary of the rank-nullity theorem $\mathrm{rank}(L) + \mathrm{nullity}(L) = n$ and the above: injectivity is equivalent to $\mathrm{nullity}(L) = 0$, which by rank-nullity is equivalent to $\mathrm{rank}(L) = n = m = \mathrm{dim}(V)$, i.e. to surjectivity.
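Rank-nullity can be watched happening numerically. In this sketch (NumPy; the matrix is cooked up by me so that the kernel is known in advance), a $3 \times 4$ matrix built from two independent columns has rank $2$, and two independent kernel vectors can be read off from the column relations, so $\mathrm{rank} + \mathrm{nullity} = 2 + 2 = 4 = n$:

```python
import numpy as np

# A 3x4 matrix built from two independent columns c1, c2;
# columns 3 and 4 are c1 + c2 and 2*c1.
c1 = np.array([1.0, 0.0, 1.0])
c2 = np.array([0.0, 1.0, 1.0])
A = np.column_stack([c1, c2, c1 + c2, 2 * c1])

n = A.shape[1]                       # n = 4
rank = np.linalg.matrix_rank(A)      # the columns span span{c1, c2}

# Two independent kernel vectors, read off from the column relations:
# col3 = col1 + col2  ->  (1, 1, -1, 0) in ker
# col4 = 2*col1       ->  (2, 0, 0, -1) in ker
k1 = np.array([1.0, 1.0, -1.0, 0.0])
k2 = np.array([2.0, 0.0, 0.0, -1.0])
assert np.allclose(A @ k1, 0) and np.allclose(A @ k2, 0)
assert np.linalg.matrix_rank(np.column_stack([k1, k2])) == 2

# Rank-nullity: rank(L) + nullity(L) = n
assert rank + 2 == n
```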

How does the linear independence of the matrix’s columns affect all this?

Theorem. The columns of $[L]_\alpha^\beta$ are linearly independent if and only if $\ker(L) = \{ \mathbf{0} \}$.
Proof. Denote the columns of $[L]_\alpha^\beta$ by $C_1, \ldots, C_n$ (each an $m \times 1$ column matrix), and write $[x]_\alpha = (x_1, \ldots, x_n)^t$ for the coordinate vector of $x$ relative to $\alpha$. Then $[L(x)]_\beta = [L]_\alpha^\beta [x]_\alpha = x_1 C_1 + \ldots + x_n C_n$, so $L(x) = \mathbf{0}$ exactly when this combination vanishes. If the columns are independent, this forces $x_1 = \ldots = x_n = 0$, hence $x = \mathbf{0}$, proving $\ker(L) = \{ \mathbf{0} \}$. For the other direction, suppose $\ker(L) = \{ \mathbf{0} \}$ and, for the sake of contradiction, that $\{ C_1, \ldots, C_n \}$ is linearly dependent. Then some nontrivial combination $x_1 C_1 + \ldots + x_n C_n = \mathbf{0}$, so the nonzero vector $x = x_1 a_1 + \ldots + x_n a_n$ lies in the kernel, a contradiction.
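Numerically, "independent columns" is the same as "full column rank", so the trivial-kernel test boils down to a rank check. A small sketch (NumPy; both matrices are arbitrary examples of mine):

```python
import numpy as np

# Columns independent  <->  rank(A) == n  <->  ker(A) = {0}.
A_indep = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0]])     # 3x2, independent columns
A_dep = np.array([[1.0, 2.0],
                  [2.0, 4.0],
                  [3.0, 6.0]])       # second column = 2 * first

def has_trivial_kernel(A):
    # Full column rank means the only solution of Ax = 0 is x = 0.
    return np.linalg.matrix_rank(A) == A.shape[1]

assert has_trivial_kernel(A_indep)
assert not has_trivial_kernel(A_dep)

# A witness in the kernel of A_dep: x = (2, -1)
assert np.allclose(A_dep @ np.array([2.0, -1.0]), 0)
```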

Now, let’s consider whether the map sends linearly independent sets to linearly independent sets…

Theorem. $L(S)$ is linearly independent for any linearly independent $S \subseteq U$ if and only if $\ker(L) = \{ \mathbf{0} \}$.
Proof. First, suppose $\ker(L) = \{ \mathbf{0} \}$, and suppose for contradiction that there exists a linearly independent set $S \subseteq U$ such that $L(S) \subseteq V$ is dependent. Then we obtain scalars $\lambda_1, \ldots, \lambda_k$, not all zero, and distinct vectors $s_1, \ldots, s_k \in S$ so that $\lambda_1 L(s_1) + \ldots + \lambda_k L(s_k) = \mathbf{0}$. Then $L(\lambda_1 s_1 + \ldots + \lambda_k s_k) = \mathbf{0}$ by linearity, whereby $\lambda_1 s_1 + \ldots + \lambda_k s_k = \mathbf{0}$ due to the triviality of the kernel. Immediately it follows that $S$ is dependent, a contradiction. For the other direction, suppose $L$ preserves independence, and suppose there were a nonzero $x \in \ker(L)$. Then $\{ x \}$ is linearly independent, but $L(\{ x \}) = \{ \mathbf{0} \}$ is dependent, a contradiction. So $\ker(L) = \{ \mathbf{0} \}$.
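As a sanity check on the forward direction, here is a trivial-kernel map carrying an independent set to an independent set (NumPy; the matrix and the set $S$ are made-up examples):

```python
import numpy as np

# A map with trivial kernel: the columns are independent,
# so rank = n = 2 and ker = {0}.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
assert np.linalg.matrix_rank(A) == 2

# An independent set S in R^2...
S = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
assert np.linalg.matrix_rank(np.column_stack(S)) == 2

# ...whose image L(S) is again independent.
LS = [A @ s for s in S]
assert np.linalg.matrix_rank(np.column_stack(LS)) == 2
```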

After all this, we can conclude that the following are equivalent for $L$:

• Trivial kernel
• Injectivity
• Preservation of independence
• Independent columns in matrix representation (for any choice of bases $\alpha, \beta$)
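To tie this together numerically: for a concrete matrix, both "independent columns" and "trivial kernel" can be tested, the first via the rank and the second via the singular values, and the two tests agree. A sketch (NumPy; the shapes and the random seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(42)

def columns_independent(A):
    # Independent columns <-> full column rank.
    return np.linalg.matrix_rank(A) == A.shape[1]

def kernel_trivial(A):
    # ker(A) = {0} iff n <= m and no singular value (numerically) vanishes.
    m, n = A.shape
    if n > m:
        return False            # more columns than rows forces a kernel
    s = np.linalg.svd(A, compute_uv=False)
    return bool(s.min() > 1e-10)

for m, n in [(3, 2), (4, 4), (2, 3), (5, 3)]:
    A = rng.standard_normal((m, n))   # generic matrices: full rank
    assert columns_independent(A) == kernel_trivial(A)
    # Force dependence by duplicating the first column:
    B = np.column_stack([A, A[:, 0]])
    assert columns_independent(B) == kernel_trivial(B) == False
```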