In this article, I will examine various endowments and embellishments which can serve as sources of geometrical structure on a vector space. In a typical first course on linear algebra, even one aimed at students of pure mathematics, the intuition for the notion of a vector space is developed through the use of illustrations decidedly Euclidean in nature. I argue that this is a pedagogical faux pas, mainly for the reason that such pictorial representations give students the misconception that metric structure, or even worse, notions such as projection and angle, are intrinsic to a vector space.

It turns out, in fact, that even something as modest as metric structure is *not* intrinsic, but rather is induced by something external to the object, namely, a norm. So we often speak of *normed linear spaces.* Every norm induces a metric, and hence a topology. However, not every topology arises from a metric, of course, and therefore we sometimes concern ourselves with much more abstract objects known as *topological vector spaces.* Moving in the other direction now, if we have some distinguished non-degenerate bilinear form (in particular, an inner product), then we obtain relatively tame notions of projection, angle, orthogonality and so on. An inner product also gives rise to a norm, however not every norm arises from an inner product. Overall, the hierarchy of “niceness” looks something like this:

inner product ⇒ norm ⇒ metric ⇒ topology
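To make the first step of this hierarchy concrete: every norm gives a metric via $d(u,v) = \|u-v\|$, but a norm comes from an inner product only if it satisfies the parallelogram law. A minimal sketch (the specific vectors are made-up test data) comparing the 2-norm, which is induced by the standard inner product, against the 1-norm, which is not:

```python
import math

def norm1(v):
    # the 1-norm (taxicab norm)
    return sum(abs(x) for x in v)

def norm2(v):
    # the 2-norm, induced by the standard inner product
    return math.sqrt(sum(x * x for x in v))

def metric(norm, u, v):
    # every norm induces a metric: d(u, v) = ||u - v||
    return norm([a - b for a, b in zip(u, v)])

def parallelogram_law(norm, u, v):
    # a norm arises from an inner product iff
    # ||u+v||^2 + ||u-v||^2 == 2||u||^2 + 2||v||^2 for all u, v
    lhs = norm([a + b for a, b in zip(u, v)]) ** 2 \
        + norm([a - b for a, b in zip(u, v)]) ** 2
    rhs = 2 * norm(u) ** 2 + 2 * norm(v) ** 2
    return math.isclose(lhs, rhs)

u, v = [1.0, 0.0], [0.0, 1.0]
print(parallelogram_law(norm2, u, v))  # True: the 2-norm comes from an inner product
print(parallelogram_law(norm1, u, v))  # False: the 1-norm does not
```

So on these witnesses the 1-norm fails the law, confirming that "normed" is strictly weaker than "inner product".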

My hunch is that our quest to harvest these geometric delectations has a lot to do with the isomorphism $V \cong V^*$ (or perhaps even a monomorphism would suffice, with a view towards the infinite-dimensional case). Therefore I want to analyze, in detail, the set of all such isomorphisms. Another interesting point is that an automorphism of $V$ is an invertible linear operator. I am led to believe that the more isomorphisms $V \to V^*$ we have, the more invertible linear operators we can construct, merely by taking distinct isomorphisms $\phi, \psi : V \to V^*$ and considering maps like $\psi^{-1} \circ \phi$ and so on. I am just thinking out loud here. Of course, I don’t think every element of $\operatorname{GL}(V)$ arises in this way (think of the infinite-dimensional case). But, recalling that $V^* \otimes V \cong \operatorname{End}(V)$, perhaps there *is* something to be said here?

Let $V$ be a real $n$-dimensional vector space. A non-degenerate bilinear form $B : V \times V \to \mathbb{R}$ induces an isomorphism $\phi : V \to V^*$ given by $\phi(v) = B(v, \cdot)$. If we replace “bilinear form” with “sesquilinear form” and let $V$ be a complex vector space, we obtain something similar, but the isomorphism becomes an *anti-isomorphism* (we must pay the cost of complex conjugation). Is every isomorphism $V \to V^*$ of this form? Is there some kind of one-to-one correspondence between isomorphisms $V \to V^*$ and a special class of bilinear forms?
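A quick numerical sanity check of this construction (a sketch using numpy; the matrix $G$ and the vectors are made-up example data): representing the bilinear form by a matrix $G$ with $B(u,v) = u^\top G v$, the induced map sends $v$ to the functional whose dual-basis coordinates are $G^\top v$, and non-degeneracy of $B$ is exactly invertibility of $G$.

```python
import numpy as np

# A bilinear form B on R^2 represented by a matrix G: B(u, v) = u^T G v.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # det = 5 != 0, so B is non-degenerate

def B(u, v):
    return u @ G @ v

def phi(v):
    # the functional B(v, .), as a coordinate vector in the dual basis
    return G.T @ v

v = np.array([1.0, -1.0])
w = np.array([2.0, 5.0])

# Applying the functional phi(v) to w agrees with B(v, w):
assert np.isclose(phi(v) @ w, B(v, w))
# Non-degeneracy (det G != 0) makes v |-> phi(v) invertible, hence an isomorphism:
assert not np.isclose(np.linalg.det(G), 0.0)
```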

Note that given any nonzero linear functional (“covector”) $\phi \in V^*$, we obtain in a natural way an $(n-1)$-dimensional subspace — namely, its kernel. This means that if we have a way to associate vectors with covectors, we also have a way to associate $1$-dimensional subspaces with $(n-1)$-dimensional subspaces. This is a bijection! Can this be thought of as a crude notion of “orthogonal complement”?
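This kernel computation is easy to verify numerically (a sketch using numpy; the covector below is an arbitrary example): viewing a covector on $\mathbb{R}^3$ as a $1 \times 3$ matrix, its null space has dimension $n - 1 = 2$.

```python
import numpy as np

# A nonzero covector on R^3, viewed as a 1x3 matrix.
phi = np.array([[1.0, 2.0, 3.0]])

# Its kernel is the null space, read off from the SVD.
_, s, Vt = np.linalg.svd(phi)
rank = int(np.sum(s > 1e-12))
null_basis = Vt[rank:]  # rows spanning ker(phi)

print(rank)                  # 1: a nonzero functional has rank 1
print(null_basis.shape[0])   # 2 = n - 1, by rank-nullity
# Every basis vector of the kernel is annihilated by phi:
print(np.allclose(phi @ null_basis.T, 0.0))  # True
```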

I was originally not going to publish this post yet, but I feel like if I don’t publish it now, I never will. Some of what I said might be wrong, but oh well. I will think about it more and publish more later. This is just a small peek into my thoughts.

Just a few comments/questions:

1. In the second-to-last paragraph: What do you mean by associating 1-dimensional subspaces with (n-1)-dimensional subspaces? Aren’t you associating linear functionals on V to their kernels? Also, how do you know that the matrix representation of \phi in that paragraph is of rank 1?

2. What exactly is an anti-isomorphism? The wiki entry just confuses me.

3. V* \tensor V ~ End(V) looks interesting. Did you put up a proof on one of your tensor videos?

Wait… scratch the last question on #1. I forgot the definition of linear functional. xD

But to add to #1, how is the kernel (n-1) dimensional, again?

1. I am associating linear functionals (covectors) to their kernels, but assuming we *also* have a way to associate vectors to covectors (what I mean by this: a linear isomorphism $V \to V^*$), this gives us a link between $1$-dimensional and $(n-1)$-dimensional subspaces. The kernel of a non-trivial linear functional is $(n-1)$-dimensional by rank-nullity, since its rank is $1$. The next question, of course, is whether or not we can use this to give us a more general connection between $k$-dimensional and $(n-k)$-dimensional subspaces.

2. What I meant by that is “antilinear isomorphism”, i.e. an additive map $\phi$ which satisfies $\phi(\lambda v) = \bar{\lambda}\,\phi(v)$ rather than the usual $\phi(\lambda v) = \lambda\,\phi(v)$. We learned this in MATH 245: in the complex case, the isomorphism with the dual is

not a linear map, but rather conjugate linear (“antilinear”). This is why the adjoint is related to the conjugate transpose of a matrix — by following the canonical isomorphisms down to the duals you “pick up” a conjugation along the way, because your inner product is not bilinear but rather sesquilinear.

3. I haven’t talked about this in the videos yet. It is intuitive, though: if we imagine what things in $V^* \otimes V$ look like, they are in general of the form

$\sum_i \phi_i \otimes v_i$ for $\phi_i \in V^*$, $v_i \in V$. The general idea is to send this to the map

$w \mapsto \sum_i \phi_i(w)\, v_i$.

It turns out that any linear map $V \to V$ can be represented this way (we are assuming finite-dimensionality of $V$ here — otherwise $V^* \otimes V$ yields only the finite-rank operators).
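In coordinates, a simple tensor $\phi \otimes v$ is just the rank-1 outer product of $v$ with $\phi$, and any matrix decomposes into $n$ such terms. A small numpy sketch (the matrix $A$ is made-up example data):

```python
import numpy as np

def simple_tensor_as_matrix(phi, v):
    # phi (x) v acts on w as phi(w) * v, i.e. the rank-1 matrix v phi^T
    return np.outer(v, phi)

# Any matrix A decomposes as sum_i e^i (x) a_i, where e^i is the i-th
# coordinate-picker functional and a_i the i-th column of A.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
n = A.shape[0]
decomposition = [simple_tensor_as_matrix(np.eye(n)[i], A[:, i]) for i in range(n)]

assert np.allclose(sum(decomposition), A)
assert all(np.linalg.matrix_rank(t) == 1 for t in decomposition)
```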

In general, though, under appropriate assumptions we have $\operatorname{Hom}(V, W) \cong V^* \otimes W$.

For example, one often says that the image of a vector under a matrix is obtained by using the vector’s coordinates in the appropriate basis as coefficients of the columns of the matrix. In this case, the $\phi_i$ would merely be the appropriate “coordinate picker” functions, and the $v_i$ would be the columns of the matrix. Does this make it more obvious why there should be such a correspondence?

Sidenote: elements of $V^{\otimes m} \otimes (V^*)^{\otimes n}$ are usually known as $(m,n)$-bidegree

mixed tensors. So you may hear people say things like “linear operators are just (1,1) tensors”, etc. The first $m$ indices are said to be contravariant, and the latter $n$ covariant (it has to do with the way things transform under a change of basis). Question: a bilinear form on $V$ is also a mixed tensor; what’s its bidegree?

1-3: Got it. Interesting construction on that endomorphism in 3. =D

In the second reply, though, I’m not too sure what you mean in the first paragraph, specifically what “…is obtained by using the vector’s coordinates in the appropriate basis as coefficients of the columns of the matrix.” means and what is a “coordinate picker” function.

Is there a nice example for the $(m,n)$-bidegree mixed tensors? I’m just not entirely sure what the notation really means.

Specifically, in the second part, what is meant by V^{\tensor m}?

Sorry, I should have been more clear. $V^{\otimes m} = V \otimes \cdots \otimes V$ ($m$ times) is called the $m$th tensorial power of $V$.

What I mean by “coordinate picker” is the following: given a basis $e_1, \dots, e_n$ we may construct a dual basis $e^1, \dots, e^n$ by putting $e^i(e_j) = 1$ if $i = j$, and zero otherwise. This is what I mean:

$v = \sum_{i=1}^n e^i(v)\, e_i$.

You can imagine each coordinate $e^i(v)$ as the result of some functional being applied to the vector you’re feeding in. Hence, it starts to look a lot like what I was talking about regarding $V^* \otimes V$, right?
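The “coordinates as coefficients of the columns” picture is easy to check numerically (a sketch using numpy; the matrix and vector are made-up example data): $Av = \sum_i e^i(v)\, a_i$, where $a_i$ is the $i$th column of $A$.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([5.0, 6.0])

# With the standard basis, the coordinate-picker functionals are e^i(v) = v[i].
# Recombine the columns of A using v's coordinates as coefficients:
recombined = sum(v[i] * A[:, i] for i in range(A.shape[1]))

assert np.allclose(recombined, A @ v)
```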

More on tensorial powers: actually, this is how we construct the tensor algebra on $V$. We define $T^0(V)$ to be the ground field (usually $\mathbb{R}$ or $\mathbb{C}$), and $T^m(V) = V^{\otimes m}$. Then we get a bunch of vector spaces $T^0(V), T^1(V), T^2(V), \dots$ of which we can just take a big direct sum…

$T(V) = \bigoplus_{m=0}^{\infty} T^m(V)$.

This gives us a vector space. Now we define a multiplication on it by mapping $T^m(V) \times T^n(V) \to T^{m+n}(V)$ by simply “concatenating” the tensors, i.e. $(v_1 \otimes \cdots \otimes v_m,\ w_1 \otimes \cdots \otimes w_n) \mapsto v_1 \otimes \cdots \otimes v_m \otimes w_1 \otimes \cdots \otimes w_n$ for indecomposable tensors. Extend this bilinearly to all of $T(V)$; this turns $T(V)$ into a (graded) algebra called the tensor algebra on $V$. It’s also called the “free algebra on $V$“, because it’s like the “most general” algebra containing $V$, all while remaining as small as possible. “Most general” in the sense that there are no relations between the elements of $T(V)$ other than the ones forced by bilinearity and so on. In fact, the whole notion behind the tensor product was that we treated the products $v \otimes w$ as formally as possible while ensuring that $\otimes$ remained bilinear. The tensor algebra kind of “continues” this idea. It therefore has a pretty interesting universal property concerned with factorization of maps, but quotients of the tensor algebra are probably more interesting to study, e.g. the exterior algebra $\Lambda(V)$.
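A toy model of this “concatenation” product, assuming a basis of formal symbols (a sketch, not a full tensor-algebra implementation): represent an element of $T(V)$ as a dict mapping words (tuples of basis symbols, with the empty word spanning $T^0(V)$) to coefficients, and multiply by concatenating words and extending bilinearly.

```python
def tensor_mul(a, b):
    """Multiply two elements of the toy tensor algebra: concatenate
    basis words and extend bilinearly over the coefficients."""
    out = {}
    for u, c in a.items():
        for w, d in b.items():
            out[u + w] = out.get(u + w, 0) + c * d
    return out

# Basis elements of T^1(V) for a 2-dimensional V with basis {'x', 'y'}:
x = {('x',): 1}
y = {('y',): 1}

xy = tensor_mul(x, y)
yx = tensor_mul(y, x)
print(xy)        # {('x', 'y'): 1}, an element of T^2(V)
print(xy == yx)  # False: no relations, so x(x)y != y(x)x in the free algebra
```

The grading is visible as the word length, and the lack of relations (beyond bilinearity) shows up as non-commutativity.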

So much win to process… let’s see: let’s tensor the tensors via direct product to get a vector space where we construct a tensor for our sum of tensors. Now taking quotient spaces of that monster… I don’t want to even imagine. … Continuing, we take the dual of our underlying vector space and tensor all the tensors that come from it via a direct sum. Take those tensors of tensors of functionals and tensor that with our previous tensors of tensors of vector spaces. Call it a bidegree mixed tensor and… wait, can we create an isomorphism between that and \End(V^n)?

Hum, I think I’m lost. Where does “direct product” come in? “a vector space where we construct a tensor for our sum of tensors” — what do you mean by this? Also, they are just called “mixed tensors”, of bidegree $(m, n)$. I think I mentioned this to you before, but things actually aren’t that bad because tensor products are associative.

Your last remark, though, sounds potentially accurate. We know that (assuming finite dimensionality),

$V^* \otimes V \cong \operatorname{End}(V)$.

So, let’s see… consider bidegree $(1,1)$ mixed tensors, i.e. elements of $V \otimes V^*$:

Note that (again relying on a finite dimensionality assumption), $(V^*)^* \cong V$ (why?). So actually, we can write

$V \otimes V^* \cong (V^*)^* \otimes V^* \cong \operatorname{End}(V^*)$.

In general, if we assume finite dimensionality everywhere, we get the very powerful result, call it ($\ast$), that

$\operatorname{Hom}(V_1, W_1) \otimes \operatorname{Hom}(V_2, W_2) \cong \operatorname{Hom}(V_1 \otimes V_2,\ W_1 \otimes W_2)$.

In particular the above proves that $\operatorname{End}(V) \otimes \operatorname{End}(W)$ is isomorphic to $\operatorname{End}(V \otimes W)$. This isomorphism arises from taking the “tensor product of linear maps”. Without the finite dimensionality assumption, the left-hand side of ($\ast$) is isomorphic to a subspace of the right-hand side, but it won’t be the whole thing.

The whole thing reminds us of quantum information: if we have two one-qubit unitaries $U$ and $V$ given by some $2 \times 2$ matrices, we can take their Kronecker product $U \otimes V$ and obtain a unitary on a joint (2-qubit) system. Of course, not every unitary on this joint system can be described merely as such a Kronecker product, but, in view of the isomorphism above, it can certainly be written as a sum of Kronecker products.
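This is easy to see concretely with numpy’s `np.kron` (a sketch; Hadamard, Pauli-X, and CNOT are the standard gates): the Kronecker product of two 1-qubit unitaries is a 2-qubit unitary, and CNOT — which is not a single Kronecker product — is a sum of two.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard, a 1-qubit unitary
X = np.array([[0, 1], [1, 0]])                # Pauli X

# The Kronecker product realizes End(V) (x) End(W) = End(V (x) W):
U = np.kron(H, X)
assert np.allclose(U @ U.conj().T, np.eye(4))  # still unitary on 2 qubits

# CNOT is not a single Kronecker product, but it IS a sum of two:
P0 = np.array([[1, 0], [0, 0]])  # projector |0><0|
P1 = np.array([[0, 0], [0, 1]])  # projector |1><1|
CNOT = np.kron(P0, np.eye(2)) + np.kron(P1, X)
print(CNOT)
```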

Also, the primary source of my confusion with this stuff is that I often forget that things in $V \otimes W$ do not all have the form $v \otimes w$. This fact is what makes the tensor product support notions like entanglement, whereas in the Cartesian product $V \times W$, it is sufficient to describe a vector by describing its two parts…
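One way to see this numerically (a sketch using numpy; the Bell state is the standard example): a vector in $\mathbb{C}^2 \otimes \mathbb{C}^2$ is decomposable as $v \otimes w$ exactly when its $2 \times 2$ coefficient matrix has rank 1.

```python
import numpy as np

def reshape_rank(psi):
    # rank of the 2x2 coefficient matrix of a vector in C^2 (x) C^2
    return np.linalg.matrix_rank(psi.reshape(2, 2))

product_state = np.kron([1, 0], [0, 1])           # |0> (x) |1>
bell_state = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>) / sqrt(2)

print(reshape_rank(product_state))  # 1: decomposable as v (x) w
print(reshape_rank(bell_state))     # 2: entangled, not of the form v (x) w
```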

“we obtain in a natural way an (n-1)-dimensional subspace”

it might sound less clumsy to write “we obtain, in a natural way, a subspace of codimension one”