I now pause from the usual high-voltage saga to give a bit of commentary on something more elementary: linear algebra. Today, while slacking off in the PMC, I was asked whether a proof scrawled on the board was from a linear algebra assignment. In fact, it was a problem in algebraic geometry, involving smooth varieties, that I had (almost) solved the previous night. A lot of mathematics looks like linear algebra, and in retrospect this is no surprise, because (let me go out on a limb here) a lot of mathematics essentially *is* linear algebra.

One seldom encounters such bold statements in which verity is not to some degree betrayed for the sake of eloquence, and this is no exception. Nonetheless, looking at a few examples leads me to suspect that it is a reasonably accurate approximation.

Functional analysis is far too broad to succinctly define, but I feel that defining it to be the study of certain kinds of topological *vector spaces* would do it decent justice. All of the classical Lie groups are *matrix* groups, not to mention the wildly successful technique of studying a Lie group locally, namely by studying an *inherently linear* structure: its Lie *algebra*. Large parts of statistics and probability seem to rely crucially on the techniques of linear algebra: least-squares problems, which are an application of the theory of orthogonal projections, come to mind immediately, although this seems minor.

Module theory, which admittedly may generalise what most people are willing to call “linear algebra”, is still inherently *linear* in nature, except now your Euclidean intuition has been completely flushed down the drain. I will try to convince you later that it should in fact have been flushed a long time ago. The category of $R$-modules is a (indeed, in some sufficiently broad sense, *the!*) homological utopia.

Finally, any fool attempting to dodge every linear-algebraic tree on the slopes of algebraic number theory, representation theory, or geometry is as good as dead, representation theory in particular being an example of an entire branch of mathematics *founded* on the idea of studying more mysterious algebraic objects (groups) by passing to their *actions* on — you guessed it — *vector spaces*.

Honestly, it takes a while to understand *what* exactly a vector space is, and high school physics teachers going on and on about directed line segments, angles, and parallelogram laws really do little to help the matter. The geometric data carried by a vector space is far less than the pictures that appear in most students’ minds.

Vector spaces are analogous to people who are, say, deaf and have lost an eye, and are therefore unable to perceive depth properly. They’re obnoxiously ignorant of volume, angle, or even orthogonality. They can’t make heads or tails of orientation. The only thing these poor souls understand is how to add two elements, and scale an element by a certain factor. *Geometry*?! There practically isn’t any here.

One of the key points I want to drive home is that *a vector space doesn’t prefer any particular basis*. As an example, we, as humans, have some perverse tendency to represent linear mappings as matrices, by using the so-called “standard” (or even worse, “natural”) basis. Since I’m a human, and I do enough math, I can kind of guess why we find this standard, but if I were a vector space, this would likely stump me for the rest of my life. What an embarrassing epitaph they would put on my tombstone:

TRIED TO UNDERSTAND WHY THE NATURAL BASIS WAS NATURAL

UNFORTUNATELY FAILED

RIP GG

Let me reiterate. If you’re considering $\mathbb{R}^n$ and $\mathbb{C}^n$ as vector spaces, then you are living in the theory of linear algebra, where there’s really nothing *natural* at all about the “natural” basis!
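To make this concrete, here is a small numerical sketch (using numpy; the operator and the second basis are arbitrary choices of mine for illustration). The same linear operator gets visibly different matrices in different bases, yet every basis-independent quantity agrees:

```python
import numpy as np

# One fixed linear operator on R^2: reflection across the line y = x.
# This array is merely its matrix *in the standard basis*.
A_std = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

# An arbitrary other basis B = {(1, 0), (1, 1)}, stored as columns of P.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The matrix of the *same* operator, written in the basis B.
A_B = np.linalg.inv(P) @ A_std @ P
print(A_B)  # entrywise, looks nothing like A_std

# But basis-independent data (here, the eigenvalues) agree:
assert np.allclose(sorted(np.linalg.eigvals(A_B)),
                   sorted(np.linalg.eigvals(A_std)))
```

Neither matrix is more “correct” than the other; the operator itself is indifferent to how we chose to write it down.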

This should shed some light on diagonalisation. It seems like some just get lost in all the machinery, muttering to themselves under their breath while pouring their morning coffee, “Find the roots of the characteristic polynomial… find a basis for the nullspace of $A - \lambda I$…”. The whole *point* of diagonalisation is that the way we write matrices down (that is, in the standard basis) is idiotically narcissistic, and most linear operators get their face in a knot when we subject them to this cruelty. When we speak to an operator, when we attempt to understand its geometry, the fruit of that endeavour is precisely what people refer to as an *eigenbasis*: a basis which, for this particular operator, is “natural” in the sense that it is at harmony with the essence of the operator.
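As a sketch (numpy again; the matrix is one I picked for illustration), diagonalisation is literally just rewriting the operator in the basis it prefers:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric, so an eigenbasis certainly exists

# Columns of V are eigenvectors: the basis this operator is "at harmony" with.
eigenvalues, V = np.linalg.eig(A)

# Rewriting A in its eigenbasis yields a diagonal matrix: in these
# coordinates the operator simply scales each axis by an eigenvalue.
D = np.linalg.inv(V) @ A @ V
assert np.allclose(D, np.diag(eigenvalues))
print(eigenvalues)  # the two scaling factors, 3 and 1, in some order
```

All the coffee-mug machinery (characteristic polynomial, nullspaces) is just the bookkeeping needed to find the columns of `V`.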

I mentioned before that vector spaces can’t make sense of volume. This is true if I’m referring to the “volume” of, say, some parallelepiped spanned by a bunch of vectors. That simply doesn’t make sense: if you think about how you might define such a thing, you will no doubt find yourself either invoking some piece of nonexistent structure, like a norm or an inner product, or choosing a basis arbitrarily. However, vector spaces can, in a suitable sense, understand the “volume” of a *linear operator*, because to every linear operator we can naturally assign a “volume” without even choosing a basis: its determinant! Just like the characteristic polynomial, the determinant is an invariant of a matrix (i.e. does not depend on a choice of basis). Indeed, it is (up to a sign) the constant term of the characteristic polynomial.
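A quick numerical check of those two claims (numpy; the matrices are arbitrary). The determinant survives any change of basis, and `np.poly`, which returns the coefficients of the characteristic polynomial $\det(tI - A)$, shows that the constant term is $(-1)^n \det A$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
P = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # any invertible matrix, i.e. any change of basis

# The determinant does not notice the change of basis:
assert np.isclose(np.linalg.det(np.linalg.inv(P) @ A @ P),
                  np.linalg.det(A))

# np.poly(A) gives the characteristic polynomial det(tI - A),
# here t^2 - 4t + 3; its constant term is (-1)^n det(A).
coeffs = np.poly(A)
n = A.shape[0]
assert np.isclose(coeffs[-1], (-1) ** n * np.linalg.det(A))
```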

After enough tinkering, one comes to a realisation: in order to have some reasonably familiar notion of geometry in a vector space $V$, one must consider linear *functionals* defined on the vector space, namely, linear maps $f \colon V \to F$, where $F$ is the field of scalars (usually $F$ is $\mathbb{R}$ or $\mathbb{C}$). It turns out that these objects behave “dually” to the actual vectors in $V$. In particular, they don’t transform in the same way under a change of basis (figure out what I mean by this).
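If you want a concrete hint (a numpy sketch; the basis and the numbers are mine): when the new basis vectors are the columns of $P$, coordinates of vectors transform by $P^{-1}$, while the row of coefficients of a functional transforms by $P$. The pairing $f(v)$, being basis-independent, comes out the same either way:

```python
import numpy as np

P = np.array([[1.0, 1.0],     # columns of P = the new basis vectors
              [0.0, 1.0]])

v_old = np.array([2.0, 3.0])   # coordinates of a vector v (old basis)
f_old = np.array([5.0, -1.0])  # coefficients of a functional f (old basis)

# Vectors transform one way, functionals the other ("contra-" vs "co-"):
v_new = np.linalg.inv(P) @ v_old
f_new = f_old @ P

# The number f(v) does not depend on which basis we compute it in:
assert np.isclose(f_new @ v_new, f_old @ v_old)
print(f_old @ v_old)  # 7.0
```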

If anything, one can find philosophical justification for this statement by observing that $V$ is canonically isomorphic (at least when $V$ is finite-dimensional) to the vector space $\mathrm{Hom}(V^*, F)$, where this notation means “the set of all linear maps $V^* \to F$“. Thus it makes sense to call $V^*$ a “dual” to $V$. A lot of geometric notions spring out of the woodwork when we are given some way of identifying $V$ with $V^*$: one way of doing this is by a non-degenerate bilinear/sesquilinear form. For example, any inner product will do. I don’t have any more time to continue writing about this, so I will simply encourage you to think about it. Doing this really is worthwhile: this exact same boomerang will keep circling back and hitting you in the face as you wade through differential geometry.
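As a small sketch of that identification (numpy; the inner product is one I made up, encoded by a positive-definite Gram matrix $G$ so that $\langle u, w \rangle = u^\top G w$): given a functional $f$, the vector $w$ with $f(\cdot) = \langle w, \cdot \rangle$ is found by solving a linear system.

```python
import numpy as np

# A non-degenerate (here, positive-definite) bilinear form on R^2,
# encoded by its Gram matrix: <u, w> = u^T G w.
G = np.array([[2.0, 1.0],
              [1.0, 2.0]])

f = np.array([1.0, 0.0])  # a functional, as a row of coefficients

# The vector w representing f, i.e. f(v) = <w, v> for all v,
# satisfies G^T w = f (G is symmetric here, so simply G w = f).
w = np.linalg.solve(G, f)

# Spot-check the identity f(v) = <w, v> on some vector:
v = np.array([3.0, -4.0])
assert np.isclose(f @ v, w @ G @ v)
```

With the standard dot product ($G = I$) the vector $w$ is just the coefficient row of $f$ turned on its side, which is why the identification feels invisible until you change the form.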

Sounds like that was a really insightful question. Lol.

By the way, another example from statistics is dimensionality reduction, which often comes down to factoring your data matrix, or finding the “closest possible” factorization of a matrix with certain properties.

Suppose we consider the determinant as a linear transformation on matrices in graph theory (say adjacency matrices, for example). I firmly believe that linear transformations such as the determinant are fundamentally important to fields such as graph theory, and my research revolves around the same. What do you believe is the significance of altering the basis of a given invertible matrix in terms of graph theory?

Cheers,

Shashwat

Sorry for the horribly late reply, but the determinant is not a linear map from $GL_n(\mathbb{R})$; indeed, $GL_n(\mathbb{R})$ is a (non-abelian for $n \geq 2$) group, not a vector space. However, it is an alternating multilinear map $(\mathbb{R}^n)^n \to \mathbb{R}$, which is the same as a linear map $\Lambda^n(\mathbb{R}^n) \to \mathbb{R}$, by the universal property of exterior powers.
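A numerical footnote to that reply (numpy; the matrices are arbitrary): the determinant fails linearity on matrices, but is linear in each column separately and flips sign when two columns are swapped — which is exactly the “alternating multilinear” behaviour.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Not linear as a map on matrices: det(A + B) != det(A) + det(B).
assert not np.isclose(np.linalg.det(A + B),
                      np.linalg.det(A) + np.linalg.det(B))

# But linear in each column separately: scaling one column scales det.
A2 = A.copy()
A2[:, 0] *= 5.0
assert np.isclose(np.linalg.det(A2), 5.0 * np.linalg.det(A))

# And alternating: swapping the two columns flips the sign.
assert np.isclose(np.linalg.det(A[:, ::-1]), -np.linalg.det(A))
```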