NOTE: I was going to post this on my other blog, since as a quick jerk of your scrollbar will tell you, it is quite rambling. However it is not completely incoherent, hence I decided to post it here.
So, today, on Canada’s birthday, while everyone is out partying and getting drunk, us *ahem* more refined and sophisticated individuals will be doing some linear-algebraic graph theory. OK, I’m kidding, there is no lucrative innovative intellectual socially-dynamic group-synergy occurring, it’s just me sitting on a couch bashing out my thoughts. I am starting to realize that (with respect to time) this is asymptotic to a metaphor for my life.
You know, I can’t even count the number of abstract algebra/group theory books I’ve read that mentioned the fact that for any set $X$, the power set $\mathcal{P}(X)$ forms a group under the symmetric difference operation $\triangle$, which is the set-theoretic manifestation of logical XOR (this really reminds me of an embarrassing truth-table proof I did for a quantum information assignment), just like the union $\cup$ corresponds to logical OR. To wit, for two subsets $A, B \subseteq X$ we define

$$A \mathbin{\triangle} B = (A \cup B) \setminus (A \cap B).$$
The identity is obviously the empty set $\varnothing$, the operation is associative and commutative, and every element has an inverse, namely itself (since $A \mathbin{\triangle} A = \varnothing$). So it’s an abelian group; in fact we might term it a Boolean group, or something. After reading such examples, I always thought to myself, “Man, symmetric difference? What a lame and contrived example. I bet that would never be useful.” Well, it turns out that (as is almost always the case) I was wrong. This term, I am taking an introductory course in graph theory, and this very example has been the foundation for the past 2 or 3 weeks’ worth of lecture material… which brings me to the point of this post.
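If you want to see the group axioms hold with your own eyes, here is a throwaway brute-force check over the power set of a small set (Python’s `^` operator on sets is exactly the symmetric difference):

```python
from itertools import chain, combinations, product

# All subsets of {0, 1, 2}, as frozensets.
X = {0, 1, 2}
power_set = [frozenset(s) for s in chain.from_iterable(
    combinations(X, r) for r in range(len(X) + 1))]

empty = frozenset()

assert all(A ^ empty == A for A in power_set)            # identity element
assert all(A ^ A == empty for A in power_set)            # every element is its own inverse
assert all((A ^ B) ^ C == A ^ (B ^ C)
           for A, B, C in product(power_set, repeat=3))  # associativity
assert all(A ^ B == B ^ A
           for A, B in product(power_set, repeat=2))     # commutativity
print("P({0,1,2}) is an abelian group under symmetric difference")
```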
Why would this seemingly trivial textbook example of a group find any use in a graph-theoretic context? To be honest, I have no idea, but intuitively it seems like we want a way of decomposing edge sets in graphs; in particular, we want to investigate cycles. One thing we note is that in the complete graph $K_4$, if we choose a 4-cycle, it can be obtained by taking the symmetric difference of two 3-cycles which share an edge. Any abelian group (the likes of which we usually write additively) automatically carries the structure of a $\mathbb{Z}$–module (view the scalar multiplication as iterated addition), so that’s nice; however, it’s not really what we want here. Taking our scalars from $\mathbb{Z}$ is overkill, because the entire ideal $2\mathbb{Z}$ of the ring annihilates everything. That is, $nA = \varnothing$ whenever $n$ is even. So, with the minimalist laziness so characteristic of mathematicians, we quotient out this inconvenient ideal, therefore working instead over $\mathbb{Z}/2\mathbb{Z}$, the field with 2 elements (otherwise known as $\mathbb{F}_2$). We now have a module over a field, otherwise known as (surprise, surprise) a vector space.
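To make that 4-cycle observation concrete, here is a tiny illustration (edges written as frozensets of endpoints; the vertex labels are of course arbitrary):

```python
def cycle_edges(*vertices):
    """Edge set of the cycle visiting the given vertices in order."""
    return {frozenset({vertices[i], vertices[(i + 1) % len(vertices)]})
            for i in range(len(vertices))}

t1 = cycle_edges(1, 2, 3)         # triangle 1-2-3
t2 = cycle_edges(1, 3, 4)         # triangle 1-3-4, sharing the edge {1, 3}
square = cycle_edges(1, 2, 3, 4)  # 4-cycle 1-2-3-4

# The shared edge {1, 3} cancels under symmetric difference,
# leaving exactly the 4-cycle.
assert t1 ^ t2 == square
```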
Let’s see what kind of sinister avatars the above generalities can animate in the graph-theoretic realm. I won’t waste my breath developing the trivialities of basic linear algebra over $\mathbb{F}_2$; we’ll proceed instead to discussing the so-called cycle space. Let $G$ be a graph (finite, undirected, lacking parallel edges, loops, and other analogous hodgepodge). We denote the vertex and edge sets of $G$ by $V(G)$ and $E(G)$, respectively. We’ll be toiling away in the space $\mathcal{P}(E(G))$, since we care about cycles. One property we note about edge sets of cycles is as follows: if $C$ is a cycle in $G$, then every vertex of $G$ is incident to an even number of edges in $E(C)$ (namely, 0 or 2). So we are led to believe that we should care about subsets $S \subseteq E(G)$ satisfying this property. This motivates the following.
Definition. The cycle space of $G$, written $\mathcal{Z}(G)$ because Zyklus is German for cycle, is the set of all subsets $S \subseteq E(G)$ such that for all $v \in V(G)$, the vertex $v$ is incident to an even number of edges within the set $S$. Equivalently, it consists of all subsets which induce so-called Eulerian subgraphs.
This is naturally (as can be seen by definition) a subset of $\mathcal{P}(E(G))$; it is also (by a small argument) a subspace of $\mathcal{P}(E(G))$ viewed as a vector space over $\mathbb{F}_2$. The elements of the cycle space can, in a sense, be thought of as formal combinations of cycles in $G$.
Recently, on an assignment, we were asked to describe the structure of $\mathcal{Z}(G)$ for a few typical choices of $G$, namely, the complete graph $K_n$, the complete bipartite graph $K_{m,n}$ and the 3-dimensional cube $Q_3$. You may have seen that I posted an awfully-written Python script to compute the sizes of these cycle spaces. Given the above definition, it is not so clear what the most efficient way to carry out this task is. Since it is germane to this discussion, we will now examine a different way of defining the cycle space — one which clearly suggests an efficient way to compute its size (and even find a basis for it).
Given an (undirected) graph, we can form a matrix known as its incidence matrix, as follows. If we have $n$ vertices and $m$ edges, the incidence matrix will have size $n \times m$. In the $(i, j)$ entry of the matrix, we place a 1 if vertex $i$ is incident to edge $j$, and a 0 otherwise. Of course, the same graph can have many different incidence matrices (simply choose a different ordering on the vertex and/or edge sets), but an incidence matrix uniquely determines the graph up to isomorphism. Let us now think about an incidence matrix, say $B$, as a transformation from some $m$-dimensional space to some $n$-dimensional space. If we suppose the spaces are over the real numbers, then what exactly is the interpretation of, say, $Bx$, where $x$ is a 01-column vector? Recall that given any subset $S$ of a finite set $X$, we can form a binary $|X|$-tuple, denoted $\chi_S$ (actually it’s a function, but functions with finite domains are conveniently written as tuples after we agree on an ordering), the $i$th coordinate of which is 1 if the $i$th element of $X$ lies in $S$ and 0 otherwise. I have seen this termed the characteristic vector of $S$, although apparently this term is also used for eigenvectors.
So, in keeping with the above, we view the 01-column vector $x$ not as a vector, but as a subset, denoted $S$, of the edge set of $G$ (so that $x = \chi_S$), in agreement with the way we ordered $E(G)$ to form the incidence matrix. The image vector $Bx$ is then an $n$-tuple, the $i$th element of which is the usual dot product between the $i$th row of $B$ and $x$. In other words, the $i$th coordinate of $Bx$ tells us how many edges in $S$ are incident to the $i$th vertex of $G$. Modulo 2, this will be either 0 or 1 depending on whether the $i$th vertex is incident to an even or odd number of the edges in $S$, respectively.
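A small numerical illustration of this reading of $Bx$ (the graph, the orderings, and the edge subset are all just arbitrary choices for the sake of the example):

```python
# Take the 4-cycle on vertices 0,1,2,3 plus the chord {0, 2}.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# n x m incidence matrix: B[i][j] = 1 iff vertex i lies on edge j.
B = [[1 if i in e else 0 for e in edges] for i in vertices]

# Characteristic vector of the edge subset S = {(0,1), (1,2), (0,2)},
# i.e. the triangle on vertices 0, 1, 2.
x = [1, 1, 0, 0, 1]

# Over the integers, the i-th coordinate of Bx counts how many edges
# of S touch vertex i.
Bx = [sum(B[i][j] * x[j] for j in range(len(edges)))
      for i in range(len(vertices))]
print(Bx)                   # [2, 2, 2, 0]: each vertex meets an even number of edges of S
print([c % 2 for c in Bx])  # [0, 0, 0, 0]: so S belongs to the cycle space mod 2
```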
So now let us view the domain and codomain of an incidence matrix as $\mathbb{F}_2$-vector spaces. Then once we pick an ordering for $V(G)$ and $E(G)$, we can form an incidence matrix $B$ of $G$. At this point, due to my above remarks, it is easily seen that the cycle space as we have defined it is precisely the nullspace (kernel) of the matrix $B$ (the previous definition reads “the set of all subsets $S \subseteq E(G)$ so that for all vertices $v \in V(G)$, the corresponding coordinate of the image vector $B\chi_S$ is 0 mod 2”). However, Gaussian elimination makes quick work of any matrix over a field. Therefore, this equivalent definition of the cycle space gives us a simple way to compute a basis for it, and hence understand it completely.
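Here is a rough sketch of what this buys us computationally, in pure Python with no claims to efficiency: reduce the incidence matrix mod 2 and read off the rank. By rank–nullity, $\dim \mathcal{Z}(G) = |E| - \operatorname{rank}_{\mathbb{F}_2}(B)$, so the cycle space has exactly $2^{|E| - \operatorname{rank}(B)}$ elements.

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over F_2 of a 0/1 matrix given as a list of row lists."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def cycle_space_size(vertices, edges):
    """|Z(G)| = 2^(|E| - rank of the incidence matrix over F_2)."""
    B = [[1 if v in e else 0 for e in edges] for v in vertices]
    return 2 ** (len(edges) - gf2_rank(B))

# K_4 has 6 edges and its incidence matrix has rank 3 over F_2,
# so its cycle space has 2^(6-3) = 8 elements.
k4_edges = list(combinations(range(4), 2))
print(cycle_space_size(range(4), k4_edges))  # 8
```

(Keeping track of which columns end up pivot-free during the elimination would also hand you an explicit basis, but the dimension alone already pins down the structure of the space.)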
I’ve been thinking lately about how (well, if would be more suitable a word) we can relate the cycle space of $G$ to the cycle space of the complement graph $\overline{G}$ (i.e., the graph on $V(G)$ where two vertices are adjacent if and only if they are not adjacent in $G$) and also the edge-dual of $G$. Of further curiosity is how the cycle space behaves with respect to the graph tensor product (speaking of tensor products, Hedetniemi’s conjecture is something I was really surprised to learn is an unsolved problem). Also, when we look at the sequence of numbers $\dim \mathcal{Z}(K_n)$ for $n = 3, 4, 5, \ldots$, we obtain (as far as I can tell) the sequence of triangular numbers ($1, 3, 6, 10, \ldots$), which is interesting, although I have not yet found the time to come up with a simple explanation for this.
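For what it’s worth, the triangular-number pattern is easy to verify numerically via the same nullspace idea (this checks the observation without explaining it; plain GF(2) elimination, nothing clever):

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over F_2, by straightforward Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def dim_cycle_space_kn(n):
    """dim Z(K_n) = |E| - rank of the incidence matrix over F_2."""
    edges = list(combinations(range(n), 2))
    B = [[1 if v in e else 0 for e in edges] for v in range(n)]
    return len(edges) - gf2_rank(B)

dims = [dim_cycle_space_kn(n) for n in range(3, 9)]
triangular = [k * (k + 1) // 2 for k in range(1, 7)]
print(dims)  # [1, 3, 6, 10, 15, 21]
assert dims == triangular
```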