## Tensors #1

Lately, I’ve been reading a lot about tensors: tensor products, the tensor algebra, the exterior algebra, and so on, and trying to synthesize the different conventions in use. All of this falls under a branch of mathematics known as multilinear algebra, which in my experience so far is significantly more abstract than plain old linear algebra. The tools of tensor analysis do seem to find use in physics and engineering, though: they’re apparently used heavily in general relativity, and based on discussions I’ve had with a friend in engineering, they’re used there to analyze the phenomena of stress and strain.

I admittedly don’t know much about how they’re used in practice, but there seem to be several different approaches to the subject. Since I intend to focus on the formal/theoretical development of these things rather than their applications, I’m interested in the abstract/coordinate-free formulation.

The first step is to define the tensor product of two vector spaces, which can be done in many ways. The things I’m going to talk about can (from what I understand) be carried out in a much more general setting; however, I will stick to vector spaces over fields, since this is what I’m comfortable with.

§1. Cartesian products and multilinear maps

Before I do this, let us fix a field $\mathbb{F}$. Recall that given two vector spaces $\mathsf{V}$ and $\mathsf{W}$ over $\mathbb{F}$, we can endow their Cartesian product $\mathsf{V} \times \mathsf{W}$ (which is apparently sometimes called their “direct sum” or “direct product”?) with a vector space structure in an obvious way: simply define the operations coordinate-wise, that is,

$(v,w) + (v',w') := (v+v',w+w')$ for all $v, v' \in \mathsf{V}$, $w, w' \in \mathsf{W}$
$\lambda (v,w) := (\lambda v, \lambda w)$  for all $v \in \mathsf{V}$, $w \in \mathsf{W}$, $\lambda \in \mathbb{F}$.

Note of course that we’re abusing notation here: on the left-hand sides of the above equations, the addition/scalar multiplication we’re referring to is that of $\mathsf{V} \times \mathsf{W}$ (because we’re defining it), while on the right-hand sides, the operations are those from $\mathsf{V}$ and $\mathsf{W}$ themselves. You can check that this turns $\mathsf{V} \times \mathsf{W}$ into a vector space.
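To make the coordinate-wise definitions concrete, here’s a minimal sketch in Python, modelling vectors as tuples of floats (my own toy encoding, just for illustration; the post itself stays coordinate-free):

```python
# Coordinate-wise operations on V x W, with vectors modelled as
# Python tuples of floats. This is a toy encoding of my own, not
# anything canonical.

def pair_add(p, q):
    """(v, w) + (v', w') := (v + v', w + w'), coordinate-wise."""
    (v, w), (v2, w2) = p, q
    return (tuple(a + b for a, b in zip(v, v2)),
            tuple(a + b for a, b in zip(w, w2)))

def pair_scale(lam, p):
    """lam * (v, w) := (lam * v, lam * w), coordinate-wise."""
    v, w = p
    return (tuple(lam * a for a in v), tuple(lam * b for b in w))

p = ((1.0, 2.0), (3.0,))    # an element of R^2 x R^1
q = ((0.5, 0.5), (1.0,))
print(pair_add(p, q))       # ((1.5, 2.5), (4.0,))
print(pair_scale(2.0, p))   # ((2.0, 4.0), (6.0,))
```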

Now that we have this thing $\mathsf{V} \times \mathsf{W}$, we want to examine mappings from it into some other vector space $\mathsf{Z}$. However, it is not linear maps we wish to examine. The object $\mathsf{V} \times \mathsf{W}$ (visibly) has two components: $\mathsf{V}$ and $\mathsf{W}$. Therefore, the maps we want to talk about are defined as follows. We call a map $f : \mathsf{V} \times \mathsf{W} \to \mathsf{Z}$ a bilinear map if

• for all $v \in \mathsf{V}$, the function $f_v : \mathsf{W} \to \mathsf{Z}$ given by $w \mapsto f(v,w)$ is linear;
• for all $w \in \mathsf{W}$, the function $f_w : \mathsf{V} \to \mathsf{Z}$ given by $v \mapsto f(v,w)$ is linear.

That is, it’s called bilinear if it’s linear “in each of its components”. You can define, more generally, multilinear (“$p$-linear”) maps on vector spaces of the form $\mathsf{V}_1 \times \ldots \times \mathsf{V}_p$, but we won’t need to do that in this post (plus, the definition is an obvious extension of the above).
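A familiar example to play with is the standard dot product on $\mathbb{R}^3 \times \mathbb{R}^3$, which is bilinear into $\mathbb{R}$. Here’s a quick numerical check of linearity in the first slot (my own example; the vectors chosen are arbitrary):

```python
# The dot product f(v, w) = sum_i v_i * w_i is a bilinear map
# R^3 x R^3 -> R. We verify linearity in the first slot on sample
# vectors (an illustration of mine, not from the post).

def f(v, w):
    return sum(a * b for a, b in zip(v, w))

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def scale(lam, v):
    return [lam * a for a in v]

v, v2, w = [1.0, 2.0, -1.0], [0.5, -1.0, 2.0], [2.0, 0.0, 1.0]
lam = 0.7

# Linearity in the first slot, with the second slot frozen at w:
assert abs(f(add(v, v2), w) - (f(v, w) + f(v2, w))) < 1e-12
assert abs(f(scale(lam, v), w) - lam * f(v, w)) < 1e-12
# f is symmetric, so linearity in the second slot follows the same way.
```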

§2. Tensor product of two vector spaces

We now define the tensor product of two vector spaces $\mathsf{V}$ and $\mathsf{W}$ (remember we’ve fixed a field $\mathbb{F}$). If $(\mathsf{T}, \otimes)$ is a pair consisting of a vector space $\mathsf{T}$ and a bilinear map $\otimes : \mathsf{V} \times \mathsf{W} \to \mathsf{T}$, we say that the pair $(\mathsf{T}, \otimes)$ has the universal property if every bilinear map $f : \mathsf{V} \times \mathsf{W} \to \mathsf{Z}$ decomposes uniquely as $f = \varphi \circ \otimes$ where $\varphi \in \mathrm{Hom}(\mathsf{T}, \mathsf{Z})$.

It turns out that there exists such a pair $(\mathsf{T}, \otimes)$ satisfying the universal property, and moreover this space $\mathsf{T}$ is unique up to isomorphism. It can be constructed from the free vector space on $\mathsf{V} \times \mathsf{W}$ by imposing suitable relations and quotienting by the subspace they generate. We therefore denote it by $\mathsf{V} \otimes \mathsf{W}$ and call it the tensor product of $\mathsf{V}$ with $\mathsf{W}$. The bilinear map $\otimes$ (confusingly) goes by the same name.

In the next post, I’ll probably go into more detail about how we construct $\mathsf{V} \otimes \mathsf{W}$ and why it’s unique. For now, my question (which I believe has an affirmative answer) is the following:

If we completely remove the restriction that the mapping $\otimes$ be bilinear, do we suddenly see other freaky candidates for $\mathsf{V} \otimes \mathsf{W}$ appearing (and hence lose the uniqueness)?

just another guy trying to make the diagrams commute.

### 18 Responses to Tensors #1

1. Is Hom(T,Z) the set of linear maps from T to Z?

Also, I believe that the term “direct product” is used for the cartesian product of two vector spaces.

So, back to the topic at hand: it might just be that my terminology is a bit rusty, but I have a few questions. What isomorphism are you talking about in the last part, an isomorphism between T and V x W? To quotient out V x W, what is the subspace of V x W that we are looking at to achieve this?

I assume that if there exists some isomorphism between V x W and T, then V x W should at least be countable with some finite subspace chosen such that its quotient space, T is also countable.
Would this be correct or did you mean something else?

2. dx7hymcxnpq says:

Yes, it’s notation used in a fair amount of linear algebra texts. It’s borrowed from category theory: $\mathrm{Hom}(\mathsf{T}, \mathsf{Z})$ is the set of morphisms from $\mathsf{T}$ to $\mathsf{Z}$ (and since we’re working in the category of vector spaces, the morphisms are understood to be linear maps). See this wiki article (the terminology sounds very odd, but if you read for a while, you’ll see that the stuff about dual spaces, transpose maps etc. he talked about in MATH 245 was just a special case of this).

I learned today that “direct sum” and “direct product” coincide as long as you’re performing the product/sum over a finite collection. For infinite collections of vector spaces, the constructions differ, for example:

$\displaystyle \bigoplus_{i=1}^\infty \mathbb{R}$ is the vector space of real sequences with only finitely many nonzero terms, whereas

$\displaystyle \prod_{i=1}^\infty \mathbb{R}$ is the vector space of real sequences, period.
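As a toy illustration of the difference (an encoding of my own): an element of the direct sum can be stored with finite data, say a dict of its finitely many nonzero coefficients, while a general element of the direct product cannot.

```python
# Elements of the direct SUM of countably many copies of R, modelled
# as dicts mapping index -> nonzero coefficient (finite support).
# A toy encoding of mine, just to illustrate the distinction above.

def seq_add(s, t):
    """Add two finitely supported sequences, dropping zero entries."""
    out = dict(s)
    for i, x in t.items():
        out[i] = out.get(i, 0.0) + x
        if out[i] == 0.0:
            del out[i]
    return out

s = {0: 1.0, 5: 2.0}    # the sequence (1, 0, 0, 0, 0, 2, 0, ...)
t = {5: -2.0, 7: 3.0}
print(seq_add(s, t))    # the entries at index 5 cancel: {0: 1.0, 7: 3.0}
```

A general element of the direct product, such as the constant sequence (1, 1, 1, …), has no such finite representation.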

• dx7hymcxnpq says:

In particular, the correspondence which sends every vector space to its dual space $\mathsf{V}^*$, which can alternatively be denoted $\mathrm{Hom}(\mathsf{V}, \mathbb{F})$, and sends every linear map to its dual (transpose) map, is nothing more than the “contravariant endofunctor” $\mathrm{Hom}(-, \mathbb{F})$ acting on the category of vector spaces. (Contravariant means that it “flips” the direction of morphisms between objects, and endofunctor means it’s a functor whose codomain and domain are the same).

• I think I remember reading about that definition in a wiki article, this one I believe:

http://en.wikipedia.org/wiki/Direct_product

In Math146, I remember that we could take the direct sum of two non-trivial vector spaces that had a trivial intersection to generate some “larger” non-trivial vector space. If we were given that these two spaces were of finite dimension, how are the direct sum and direct product equivalent in this sense?

• dx7hymcxnpq says:

I’m kind of searching for a precise definition of what the direct sum vs. direct product are. From the sounds of Wiki, it seems like I might have to read up on some category theory in order to understand it though…

Also, in MATH 146, I don’t remember using direct sums for “merging” two unrelated vector spaces, but rather the direct sum being involved with decompositions of a vector space into its constituent subspaces, i.e. if $U \subseteq W$ and $V \subseteq W$ with $U \cap V$ trivial and $U + V = W$, you would call $W$ the direct sum of those two subspaces. Or in general, if you had a family $\{ V_\alpha \}$ of subspaces, you would write $W = \bigoplus_\alpha V_\alpha$.

• dx7hymcxnpq says:

OK, so the situation with direct sums and direct products is actually interesting enough to warrant a blog post. It’s stated precisely (as I thought) in the language of category theory, but I’ll make a post soon giving the definitions and “interpreting” them in a linear-algebraic setting. 😀

3. dx7hymcxnpq says:

When you talk about something like a vector space being “unique”, you often additionally throw in the phrase “up to an isomorphism”, because remember that we can relabel the items in a vector space however we want, and end up with a vector space which is technically “not equal” to the original (because its underlying set is different) even though it’s structurally identical. By saying “up to an isomorphism”, we’re taking care of this. Does that make sense?

• Yep, no confusion here.

4. dx7hymcxnpq says:

We’re actually starting with the free vector space on $\mathsf{V} \times \mathsf{W}$, which is denoted $C(\mathsf{V} \times \mathsf{W})$. I wrote a post explaining what that thing is a while back; one way to see it is as a space of formal finite sums, and another is as a space of functions $f : \mathsf{V} \times \mathsf{W} \to \mathbb{F}$ which are nonzero at only finitely many points (the value of $f$ at a point $(v,w) \in \mathsf{V} \times \mathsf{W}$ then corresponds to the coefficient of $(v,w)$ appearing in the corresponding formal sum).

I’ll talk more about how the tensor product is formed by a quotient of the free vector space $C(\mathsf{V} \times \mathsf{W})$ in the next post. Of course, though, the construction that Dr. New gave of the tensor product in Assignment 6 Question 5 of MATH 245 is equivalent.

One thing you can remark, though, is that usually $\mathsf{V} \times \mathsf{W}$ will be uncountable as a set. Since the free vector space over a set is precisely the vector space which has that set as a basis, $C(\mathsf{V} \times \mathsf{W})$ is uncountable-dimensional as a result. Therefore, you can probably guess that if we’re going to quotient something out to obtain the tensor product $\mathsf{V} \otimes \mathsf{W}$, it’s going to be something pretty big.

Here’s a picture of what’s going on in the universal property definition I gave, although the names are different from what I used.

• dx7hymcxnpq says:

About the Assignment 6 construction: Actually, I tried to show that $\mathrm{Bilin}(\mathsf{V}^* \times \mathsf{W}^*, \mathbb{F})$ satisfied the universal property, and I ran into problems, because of the dependence of dual bases on finite dimensionality. However, if you assume finite dimensionality of $\mathsf{V}$ and $\mathsf{W}$, it’s not too hard to prove that it works. The infinite case, I’m not sure though. I have a feeling it fails, and you have to resort to taking a quotient of $C(\mathsf{V} \times \mathsf{W})$ as I said before (which IMO is a little messier).

• Hmm, that seems quite interesting actually. I’ll take a try at it myself, but first, if you’re taking $\mathrm{Bilin}(V^{*}\times W^{*}, \mathbb F)$ to be your vector space, what is the bilinear operator that you’re using to check the universal property?

• dx7hymcxnpq says:

It’s the map $\otimes$ which sends every pair $(v,w) \in \mathsf{V} \times \mathsf{W}$ to the element $f_{(v,w)}$ of $\mathrm{Bilin}(\mathsf{V}^* \times \mathsf{W}^*, \mathbb{F})$ defined to be $f_{(v,w)}(\phi, \psi) = \phi(v) \psi(w)$ for all $\phi \in \mathsf{V}^*$ and $\psi \in \mathsf{W}^*$.
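In coordinates, this looks like the following sketch (my own, and it assumes finite dimensions so that a functional can be identified with its coefficient vector, with evaluation becoming a dot product):

```python
# Sketch of the map from the comment above, in coordinates: identify
# a functional phi in V* with a coefficient vector, so phi(v) is a
# dot product. Then (v, w) maps to the bilinear form
# (phi, psi) -> phi(v) * psi(w). Names here are my own choices.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def tensor(v, w):
    """Send (v, w) to the element f_{(v,w)} of Bilin(V* x W*, F)."""
    return lambda phi, psi: dot(phi, v) * dot(psi, w)

v, w = [1.0, 2.0], [3.0]         # v in R^2, w in R^1
f = tensor(v, w)
phi, psi = [1.0, 0.0], [2.0]     # sample functionals, as vectors
print(f(phi, psi))               # phi(v) * psi(w) = 1.0 * 6.0 = 6.0
```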

• I’m not sure I’m understanding your definition of $C(V \times W)$ right, but here is what I think the free vector space over $V \times W$ intuitively looks like (borrowing from the wiki article):

Suppose that we have a basis $\mathcal{W}=\{w_{1},w_{2},...\}$ for $W$ and a basis $\mathcal{V}=\{v_{1},v_{2},...\}$ for $V$. Let $\{e_{(v_i,w_i)}\}$ denote the set of distinct $(v_i,w_i) \in V \times W$.

Then, $C(V \times W) = \left\{ \sum\limits _{i=1}^{n}\lambda_{i}e_{(v_{i},w_{i})}\biggr|\lambda_{i}\in\mathbb{F},n\in\mathbb{N},(v_{i},w_{i})\in V\times W\right\}$

• dx7hymcxnpq says:

Did you read my blog post on free vector spaces? Like I said before, the free vector space $C(S)$ on a set $S$ is just “the vector space that has the elements of $S$ as a basis”. Maybe I can give you a concrete example, with a finite set.

Let $S = \{ A, B, C, D \}$. Then the free $\mathbb{R}$-vector space over $S$ consists of finite “formal sums” such as $\pi A + \sqrt{2} B + 2D$, $1B + 2C + \sqrt{3}D$, and so on. Hence $S$ is clearly a basis for $C(S)$. Scalar multiplication works like this: $\lambda(aA + bB + cC + dD) = (\lambda a)A + (\lambda b)B + (\lambda c)C + (\lambda d)D$, and addition works like this: $(a_1A + b_1B + c_1C + d_1D) + (a_2A + b_2B + c_2C + d_2D) = (a_1+a_2)A + (b_1+b_2)B + (c_1+c_2)C + (d_1+d_2)D$.

In this sense, you can see how the space $C(S)$ can be viewed as the space of functions $f : S \to \mathbb{F}$ which are nonzero for only finitely many $s \in S$. The value $f(s)$ merely corresponds to the coefficient of the element $s$ in the “formal sum” I described above, and this is precisely how you draw the isomorphism: formal sums and functions-nonzero-at-only-finitely-many-points are just two different ways of seeing the space $C(S)$. Hope this helps.
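To make the formal-sum picture concrete in code, here’s a toy encoding (my own, nothing canonical) of $C(S)$ as dicts of finitely many nonzero coefficients:

```python
# The free R-vector space C(S) over S = {A, B, C, D}, realized as
# finitely supported functions S -> R, stored as dicts. A toy
# encoding of mine, following the "formal sums" described above.

def fv_add(f, g):
    """Add two formal sums by adding coefficients symbol-wise."""
    out = dict(f)
    for s, c in g.items():
        out[s] = out.get(s, 0.0) + c
    return {s: c for s, c in out.items() if c != 0.0}

def fv_scale(lam, f):
    """Scale every coefficient of a formal sum by lam."""
    return {s: lam * c for s, c in f.items() if lam * c != 0.0}

x = {"A": 3.0, "D": 2.0}     # the formal sum 3A + 2D
y = {"B": 1.0, "D": -2.0}    # the formal sum 1B - 2D
print(fv_add(x, y))          # {'A': 3.0, 'B': 1.0}
print(fv_scale(2.0, x))      # {'A': 6.0, 'D': 4.0}
```

Each dict is exactly one of the functions-nonzero-at-only-finitely-many-points from the isomorphism described above.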

5. Oh, as an open and COMPLETELY UNRELATED question, I remember Dr. New posing this question to me and me forgetting to try and solve it:

Is there a way to count the number of k-dimensional real vector spaces over some field $\mathbb F$ ? What can we say about its cardinality?

Of course, I may have heard this question wrong, but I’ll revise the question again if I can remember it.

• dx7hymcxnpq says:

Hmm… “real vector spaces over some field $\mathbb{F}$“? I’m not quite sure what you mean. Doesn’t “real vector space” tend to mean “vector space over $\mathbb{R}$“?

Also, you have to be careful with that kind of question. Intuitively, it seems like if you consider such a collection, it will probably end up being a proper class (i.e. it’ll fail to be a set under ZFC, due to things like Russell’s Paradox).

If you can revise the question, it does sound interesting though.

• Ah sorry about that, take out the word “real”. I’ll try to recall the exact wording later.

I’m not too sure why Russell’s Paradox would be involved here, though, since it doesn’t seem intuitively obvious that such a set containing all k-dimensional vector spaces could be k-dimensional itself (that is, if we could even define addition and scalar multiplication).

• dx7hymcxnpq says:

I just found this again. The “number” of $k$-dimensional vector spaces over $\mathbb{F}$? Well, they will all be isomorphic to $\mathbb{F}^k$ haha, so if you are only talking about “up to an isomorphism” then there is one…

On the other hand if you’re considering them all as distinct objects then I don’t think it has a cardinality… that’s kind of philosophical, like asking “what is the cardinality of the set of all possible symbols a human could write”, no?