Duality and its effects on linear geometry

In this article, I will examine various endowments and embellishments that can serve as sources of geometric structure on a vector space. In a typical first course on linear algebra, even one aimed at students of pure mathematics, intuition for the notion of a vector space is developed through illustrations that are decidedly Euclidean in nature. I argue that this is a pedagogical faux pas, mainly because such pictorial representations leave students with the misconception that metric structure (or worse, notions such as projection and angle) is intrinsic to a vector space.

It turns out, in fact, that even something as modest as metric structure is not intrinsic, but rather is induced by something external to the object, namely a norm. So we often speak of normed linear spaces. Every norm induces a metric, and hence a topology. However, not every topology arises from a metric, of course, and therefore we sometimes concern ourselves with much more abstract objects known as topological vector spaces. Moving in the other direction, if we have some distinguished non-degenerate bilinear form (in particular, an inner product), then we obtain relatively tame notions of projection, angle, orthogonality and so on. An inner product also gives rise to a norm; however, not every norm arises from an inner product. Overall, the hierarchy of “niceness” looks something like this:

\text{inner product spaces} \subsetneq \text{normed linear spaces} \subsetneq \text{TVSs} \subsetneq \text{vector spaces}.
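
The strictness of the first inclusion is easy to witness numerically: a norm comes from an inner product if and only if it satisfies the parallelogram law, and the sup norm fails it. Here is a minimal sketch (assuming numpy; the function name is my own):

```python
import numpy as np

def parallelogram_defect(norm, x, y):
    """Deviation of (x, y) from the parallelogram law
    ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2, which holds for
    every pair iff the norm comes from an inner product."""
    return norm(x + y)**2 + norm(x - y)**2 - 2*norm(x)**2 - 2*norm(y)**2

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Euclidean norm: induced by the standard inner product, defect ~ 0.
print(parallelogram_defect(np.linalg.norm, x, y))

# Sup norm: defect is -2 on this pair, so no inner product induces it.
sup_norm = lambda v: np.abs(v).max()
print(parallelogram_defect(sup_norm, x, y))
```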

My hunch is that our quest to harvest these geometric delectations has a lot to do with the isomorphism V \cong V^* (or perhaps even a monomorphism would suffice, with a view towards the infinite-dimensional case). Therefore I want to analyze, in detail, the set of all such isomorphisms. Another interesting point is that an automorphism of V is an invertible linear operator. I am led to believe that the more isomorphisms V \to V^* we have, the more such invertible linear operators we can construct, merely by taking distinct isomorphisms A, B : V \to V^* and considering maps like A^{-1}B and so on. I am just thinking out loud here. In finite dimensions, in fact, every element of \mathrm{GL}(V) arises in this way: fix one isomorphism A and take B = A \circ T. In the infinite-dimensional case, on the other hand, V \ncong V^*, so no such isomorphisms exist at all. But, recalling that V^* \otimes V \cong \mathrm{End}(V), perhaps there is something to be said here?
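
To make the A^{-1}B construction concrete (a minimal numpy sketch; the matrices G and H are arbitrary choices of mine): two non-degenerate bilinear forms on \mathbb{R}^3, written as invertible symmetric matrices, give two isomorphisms x \mapsto Gx and x \mapsto Hx from V to V^*, and composing one with the inverse of the other lands us back in \mathrm{GL}(V).

```python
import numpy as np

# Two non-degenerate symmetric bilinear forms on R^3, i.e. two
# isomorphisms A, B : V -> V* written in coordinates.
G = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])      # positive-definite: an inner product
H = np.diag([1.0, 1.0, -1.0])        # indefinite, Minkowski-style

# A^{-1} B is then an honest invertible operator V -> V.
T = np.linalg.solve(G, H)            # solves G T = H, i.e. T = G^{-1} H
print(np.linalg.det(T))              # nonzero (= -1/3), so T lies in GL(V)
```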

Let V be a real n-dimensional vector space. A non-degenerate bilinear form induces an isomorphism V \cong V^* given by x \mapsto \langle x, - \rangle. If we replace “bilinear form” with “sesquilinear form” and let V be a complex vector space, we obtain something similar, but the isomorphism becomes an anti-isomorphism (we must pay the cost of complex conjugation). Is every isomorphism of this form? Is there some kind of one-to-one correspondence between isomorphisms V \cong V^* and a special class of bilinear forms?
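
For what it’s worth, in the finite-dimensional setting such a correspondence does exist, and it is immediate to write down: given any isomorphism \Phi : V \to V^*, define

B_\Phi(x, y) := \Phi(x)(y).

Then B_\Phi is bilinear, and it is non-degenerate precisely because \Phi is injective; conversely, the recipe x \mapsto B_\Phi(x, -) = \langle x, - \rangle recovers \Phi. So isomorphisms V \cong V^* correspond bijectively to non-degenerate bilinear forms on V.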

Note that given any non-zero linear functional (“covector”) \varphi \in V^*, we obtain in a natural way an (n-1)-dimensional subspace, namely its kernel. This means that if we have a way to associate vectors with covectors, we also have a way to associate 1-dimensional subspaces with (n-1)-dimensional subspaces. This is in fact a bijection, since two non-zero covectors have the same kernel precisely when they are proportional! Can this be thought of as a crude notion of “orthogonal complement”?
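
In coordinates this line-to-hyperplane pairing is easy to experiment with. A minimal sketch (assuming numpy and scipy; the choice \Phi = I, i.e. the standard inner product, is mine):

```python
import numpy as np
from scipy.linalg import null_space

n = 4
Phi = np.eye(n)                       # the isomorphism V -> V*; here the
                                      # standard inner product, Phi(x) = <x, ->

x = np.array([1.0, 2.0, 0.0, -1.0])  # a nonzero vector, spanning a line
phi = Phi @ x                         # the associated covector

# Its kernel is the associated hyperplane, of dimension n - 1 = 3.
K = null_space(phi.reshape(1, -1))
print(K.shape)                        # (4, 3): three basis vectors

# With Phi the standard inner product, this hyperplane is literally
# the orthogonal complement of the line spanned by x.
print(np.allclose(K.T @ x, 0))        # True
```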

I was originally not going to publish this post yet, but I feel like if I don’t publish it now, I never will. Some of what I said might be wrong, but oh well. I will think about it more and publish more later. This is just a small peek into my thoughts.


11 Responses to Duality and its effects on linear geometry

  1. Just a few comments/questions:

    1. In the second-to-last paragraph: What do you mean by associating 1-dimensional subspaces with (n-1)-dimensional subspaces? Aren’t you associating linear functionals on V to their kernels? Also, how do you know that the matrix representation of \varphi in that paragraph is of rank 1?

    2. What exactly is an anti-isomorphism? The wiki entry just confuses me.

    3. V^* \otimes V \cong \mathrm{End}(V) looks interesting. Did you put up a proof on one of your tensor videos?

    • Wait… scratch the last question on #1. I forgot the definition of linear functional. xD
      But to add to #1, how is the kernel (n-1)-dimensional, again?

    • mlbaker says:

      1. I am associating linear functionals (covectors) to their kernels, but assuming we *also* have a way to associate vectors to covectors (what I mean by this: a linear isomorphism \Phi : V \to V^*), this gives us a link between 1-dimensional and (n-1)-dimensional subspaces. The kernel of a non-trivial linear functional is (n-1)-dimensional by rank-nullity, since its rank is 1. The next question, of course, is whether or not we can use this to give us a more general connection between k-dimensional and (n-k)-dimensional subspaces.

      2. What I meant by that is “antilinear isomorphism”, i.e. an additive map which satisfies f(\lambda v) = \overline{\lambda} f(v) rather than the usual condition. We learned this in MATH 245: in the complex case, the isomorphism with the dual is not a linear map, but rather conjugate linear (“antilinear”). This is why the adjoint is related to conjugate transpose of a matrix — by following the canonical isomorphisms down to the duals you “pick up” a conjugation along the way, because your inner product is not bilinear but rather sesquilinear.

      3. I haven’t talked about this in the videos yet. It is intuitive, though: if we imagine what things in V^* \otimes V look like, they are in general of the form

      w^1 \otimes v_1 + w^2 \otimes v_2 + \ldots + w^n \otimes v_n

      for w^i \in V^*, v_j \in V. The general idea is to send this to the map

      x \mapsto w^1(x) \cdot v_1 + \ldots + w^n(x) \cdot v_n.

      It turns out that any linear map can be represented this way (we are assuming finite-dimensionality of V here — otherwise V^* \otimes V \ncong \mathrm{End}(V)).

      In general, though, under appropriate assumptions we have V^* \otimes W \cong \mathrm{Hom}(V,W).
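
      If it helps, here is a toy numpy sketch of that map (representing a tensor in V^* \otimes V as a list of (covector, vector) pairs is just my own ad hoc encoding):

      ```python
      import numpy as np

      # A tensor in V* ⊗ V, stored as (covector, vector) pairs, standing
      # for the sum  w^1 ⊗ v_1 + ... + w^k ⊗ v_k  with V = R^2.
      pairs = [(np.array([1.0, 0.0]), np.array([0.0, 3.0])),
               (np.array([0.0, 1.0]), np.array([2.0, 0.0]))]

      def as_operator(pairs):
          """The induced endomorphism x |-> sum_i w^i(x) v_i, assembled
          as a matrix: a sum of rank-one pieces."""
          return sum(np.outer(v, w) for w, v in pairs)

      M = as_operator(pairs)
      x = np.array([1.0, 1.0])
      print(M @ x)                               # [2. 3.]
      print(sum((w @ x) * v for w, v in pairs))  # same: [2. 3.]
      ```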

    • mlbaker says:

      For example, one often says that the image of a vector under a matrix is obtained by using the vector’s coordinates in the appropriate basis as coefficients of the columns of the matrix. In this case, the w^i would merely be the appropriate “coordinate picker” functions, and the v_j would be the columns of the matrix. Does this make it more obvious why there should be such a correspondence?

      Sidenote: elements of V^{\otimes m} \otimes (V^*)^{\otimes n} are usually known as mixed tensors of bidegree (m,n). So you may hear people say things like “linear operators are just (1,1) tensors”, etc. The first m indices are said to be contravariant, and the latter n covariant (it has to do with the way things transform under a change of basis). Question: a bilinear form on V is also a mixed tensor; what’s its bidegree?

  2. 1-3: Got it. Interesting construction on that endomorphism in 3. =D

    In the second reply, though, I’m not too sure what you mean in the first paragraph, specifically what “…is obtained by using the vector’s coordinates in the appropriate basis as coefficients of the columns of the matrix.” means and what is a “coordinate picker” function.

    Is there a nice example for the bidegree mixed tensors? I’m just not entirely sure what the notation really means.

    • Specifically, in the second part, what is meant by V^{\otimes m}?

    • mlbaker says:

      Sorry, I should have been more clear. V^{\otimes n} := \underbrace{V \otimes \ldots \otimes V}_{n \text{ times}} is called the nth tensorial power of V.

      What I mean by “coordinate picker” is the following: given a basis \{ e_\alpha \} we may construct a dual basis \{ e_\alpha^* \} by putting e_\alpha^*(e_\beta) = 1 if \alpha=\beta, and zero otherwise. This is what I mean:

      \displaystyle \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \ldots & a_{mn} \end{pmatrix} \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} = v_1 \begin{pmatrix} a_{11} \\ \vdots \\ a_{m1} \end{pmatrix} + \ldots + v_n \begin{pmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{pmatrix}

      You can imagine each v_i as the result of some functional being applied to the vector you’re feeding in. Hence, it starts to look a lot like what I was talking about regarding V^* \otimes V, right?
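
      Concretely (a small numpy sketch; the matrix and vector are arbitrary):

      ```python
      import numpy as np

      A = np.array([[1.0, 2.0],
                    [3.0, 4.0],
                    [5.0, 6.0]])
      v = np.array([10.0, 100.0])

      # The i-th "coordinate picker" e_i^* just reads off v[i]; the image
      # A v is the combination of the columns of A with those weights.
      combo = sum(v[i] * A[:, i] for i in range(A.shape[1]))

      print(A @ v)      # [210. 430. 650.]
      print(combo)      # identical
      ```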

      More on tensorial powers: actually, this is how we construct the tensor algebra on V. We define V^{\otimes 0} to be the ground field \Gamma (usually \mathbb{R} or \mathbb{C}), and V^{\otimes 1} := V. Then we get a bunch of vector spaces

      \Gamma, V, V \otimes V, V \otimes V \otimes V, \ldots

      of which we can just take one big direct sum…

      \displaystyle \otimes V := \bigoplus_{i=0}^\infty V^{\otimes i}.

      This gives us a vector space. Now we define a multiplication on it by mapping V^{\otimes i} \otimes V^{\otimes j} \to V^{\otimes (i+j)} by simply “concatenating” the tensors, i.e. (t_1, t_2) \mapsto t_1 \otimes t_2 for indecomposable tensors t_1, t_2. Extend this linearly to all of \otimes V; this turns \otimes V into a (graded) algebra called the tensor algebra on V. It’s also called the “free algebra on V”, because it’s like the “most general” algebra containing V, all while remaining as small as possible. “Most general” in the sense that there are no relations between the elements of \otimes V other than the ones forced by bilinearity and so on. In fact, the whole notion behind the tensor product was that we treated the products as formally as possible while ensuring that \otimes remained bilinear. The tensor algebra kind of “continues” this idea. It therefore has a pretty interesting universal property concerned with factorization of maps, but quotients of the tensor algebra are probably more interesting to study, e.g. the exterior algebra \bigwedge(V).
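
      Here is a rough computational sketch of that graded product (numpy; representing a homogeneous degree-k tensor over V = \mathbb{R}^2 as a k-dimensional array is my own convention):

      ```python
      import numpy as np

      # A homogeneous degree-k tensor over V = R^2, in coordinates, is a
      # k-dimensional array; the product V^{⊗i} x V^{⊗j} -> V^{⊗(i+j)}
      # is "concatenation", i.e. the outer product of arrays.
      def concat(t1, t2):
          return np.multiply.outer(t1, t2)   # (t1, t2) |-> t1 ⊗ t2

      u = np.array([1.0, 2.0])               # degree 1
      w = np.array([3.0, 4.0])               # degree 1
      print(concat(u, w).shape)              # (2, 2): degree 2
      print(concat(concat(u, w), u).shape)   # (2, 2, 2): degree 3

      # Associativity of ⊗, which makes this an honest graded algebra:
      print(np.allclose(concat(concat(u, w), u),
                        concat(u, concat(w, u))))   # True
      ```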

  3. So much win to process… let’s see: let’s tensor the tensors via direct product to get a vector space where we construct a tensor for our sum of tensors. Now taking quotient spaces of that monster… I don’t want to even imagine. o_O … Continuing, we take the dual of our underlying vector space and tensor all the tensors that come from it via a direct sum. Take those tensors of tensors of functionals and tensor that with our previous tensors of tensors of vector spaces. Call it a bidegree mixed tensor and… wait, can we create an isomorphism between that and \mathrm{End}(V^n)?

    • mlbaker says:

      Hum, I think I’m lost. Where does “direct product” come in? “a vector space where we construct a tensor for our sum of tensors” — what do you mean by this? Also they are just called “mixed tensors”, of bidegree (m,n). I think I mentioned this to you before but things actually aren’t that bad because tensor products are associative.

      Your last remark, though, sounds potentially accurate. We know that (assuming finite dimensionality),

      V^* \otimes V \cong \mathrm{Hom}(V,V) =: \mathrm{End}(V).

      So, let’s see… consider bidegree (2,2) mixed tensors:

      V^* \otimes V^* \otimes V \otimes V

      Note that (again relying on a finite dimensionality assumption), V^* \otimes V^* \cong (V \otimes V)^* (why?). So actually, we can write

      \begin{aligned} V^* \otimes V^* \otimes V \otimes V \cong (V \otimes V)^* \otimes (V \otimes V) & \cong \mathrm{End}(V \otimes V) \\ & \cong \mathrm{End}(V) \otimes \mathrm{End}(V) \end{aligned}.

      In general, if we assume finite dimensionality everywhere, we get the very powerful result, call it (\dagger), that

      \mathrm{Hom}(V,W) \otimes \mathrm{Hom}(V',W') \cong \mathrm{Hom}(V \otimes V', W \otimes W').

      In particular the above proves that V^* \otimes V^* = \mathrm{Hom}(V,\Gamma) \otimes \mathrm{Hom}(V,\Gamma) is isomorphic to \mathrm{Hom}(V \otimes V, \Gamma \otimes \Gamma) \cong \mathrm{Hom}(V \otimes V, \Gamma) =: (V \otimes V)^*. This isomorphism arises from taking the “tensor product of linear maps”. Without the finite dimensionality assumption, the left-hand side of (\dagger) is isomorphic to a subspace of the right-hand side, but it won’t be the whole thing.
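
      In coordinates, the tensor product of linear maps appearing in (\dagger) is just the Kronecker product, which is easy to check numerically (a numpy sketch with random matrices):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((2, 2))        # a map V -> W
      B = rng.standard_normal((3, 3))        # a map V' -> W'
      v = rng.standard_normal(2)
      w = rng.standard_normal(3)

      # The image of A ⊗ B under (†) acts on decomposable tensors by
      # (A ⊗ B)(v ⊗ w) = (A v) ⊗ (B w); in coordinates ⊗ is np.kron.
      lhs = np.kron(A, B) @ np.kron(v, w)
      rhs = np.kron(A @ v, B @ w)
      print(np.allclose(lhs, rhs))           # True
      ```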

    • mlbaker says:

      The whole \mathrm{End}(V) \otimes \mathrm{End}(V) \cong \mathrm{End}(V \otimes V) reminds us of quantum information: if we have two one-qubit unitaries L_1 and L_2 given by some matrices, we can take their Kronecker product and obtain a unitary on a joint (2-qubit) system. Of course, not every unitary on this joint system can be described merely as such a Kronecker product, but in general, in view of the isomorphism above, it can certainly be written as a sum of Kronecker products.

      Also, the primary source of my confusion with this stuff is that I often forget that things in V \otimes W do not all have the form v \otimes w. This fact is what makes the tensor product support notions like entanglement, whereas to describe a vector in the Cartesian product V \times W, it suffices to describe its two parts separately…
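
      A standard concrete example of this (sketched in numpy): the SWAP unitary on two qubits exchanges the factors, so it cannot be a single Kronecker product L_1 \otimes L_2 (that would force L_1 v \otimes L_2 w = w \otimes v for all v, w), yet it decomposes as a sum of four Kronecker products of Pauli matrices.

      ```python
      import numpy as np

      I = np.eye(2)
      X = np.array([[0, 1], [1, 0]])
      Y = np.array([[0, -1j], [1j, 0]])
      Z = np.array([[1, 0], [0, -1]])

      # SWAP is not any single Kronecker product, but it is a sum of
      # four of them: SWAP = (I⊗I + X⊗X + Y⊗Y + Z⊗Z) / 2.
      SWAP = 0.5 * (np.kron(I, I) + np.kron(X, X)
                    + np.kron(Y, Y) + np.kron(Z, Z))
      print(SWAP.real)
      # [[1. 0. 0. 0.]
      #  [0. 0. 1. 0.]
      #  [0. 1. 0. 0.]
      #  [0. 0. 0. 1.]]
      ```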

  4. yonderwindow says:

    “we obtain in a natural way an (n-1)-dimensional subspace”
    It might sound less clumsy to write “we obtain, in a natural way, a subspace of codimension one”.
