*This post is a rather basic introduction to the essential foundations of abstract algebra. I will be following it up with a post talking about isomorphisms between commonly used structures, which was my original intention.*

The algebra of numbers was developed very long ago. Abstract algebra is a branch of mathematics that was born from the realization that in fact, algebra is useful for much more than manipulating sets of numbers. Thus, we began to study our number system and observe which *key properties* allowed us to proceed in our everyday manipulations.

The number system in everyday use is a set, together with two binary operations called addition and multiplication. A binary operation is nothing more than a function that sends a pair of elements in the set to a new element in the set. For example, $3 + 5 = 8$. We can see here that $+$ is a binary operation which takes the pair $(3, 5)$ and sends it to $8$.
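To make this concrete, here is a minimal Python sketch (the function name and sample values are my own choices) modelling a binary operation as a function that takes a pair of elements of a set and returns an element of the same set:

```python
# A binary operation on a set S is a function S x S -> S.
# Here S is the set of integers and the operation is ordinary addition.
def add(a, b):
    return a + b

pair = (3, 5)
result = add(*pair)   # the pair (3, 5) is sent to 8
print(result)
```

The key point is closure: the output lands back in the same set as the inputs.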

Let us focus exclusively on addition, ignoring multiplication. The first thing to note about this number system (with respect to addition) is that we have this element known as $0$ (zero) which leaves other numbers unchanged when we apply the operation with zero on the left or right: $a + 0 = 0 + a = a$. This is called an identity element for addition, since it does nothing.

Now note also that for every number $a$ there is another number, often denoted $-a$, which, when we pair it together with $a$ via the operation (in other words, when we *add* it to $a$), gives $0$ (in other words, it *gives back the identity*!). This element is known as the inverse of $a$.

We also have the following facts: for any numbers $a$, $b$, and $c$, it is true that $(a + b) + c = a + (b + c)$. This justifies our omission of parentheses when we write out sums: it doesn’t matter how expressions are parenthesized. This is called the associative law. Furthermore, it is true that $a + b = b + a$ (the commutative law). The majority of systems studied by algebraists are associative (although there is work going on in non-associative algebra), but not all are commutative.
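All four of these properties (identity, inverses, associativity, commutativity) can be spot-checked mechanically. A small sketch, testing integer addition on a sample range of my own choosing:

```python
import itertools

samples = range(-3, 4)

for a in samples:
    assert a + 0 == a and 0 + a == a        # 0 is an identity
    assert a + (-a) == 0 and (-a) + a == 0  # -a is the inverse of a

for a, b, c in itertools.product(samples, repeat=3):
    assert (a + b) + c == a + (b + c)       # associative law
    assert a + b == b + a                   # commutative law

print("all axioms hold on the sample")
```

Of course, a finite check is not a proof that the laws hold for *all* integers; it only illustrates what the laws assert.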

We will let $(S, *)$ and $(T, \star)$ be algebraic structures (by algebraic structure, I mean magma in these articles). An algebraic structure is merely any set together with a binary operation on that set. Above, the sets are $S$ and $T$ and the binary operations are $*$ and $\star$ respectively. We write them as pairs because, as I just said, they’re made up of two things. When you go on to study algebraic structures with 2 different binary operations, you’d notate it as a triple, $(S, *, \star)$.

I will first introduce an algebraic notion known as isomorphism. The reason we speak of isomorphism is because in mathematics, we don’t want to waste time studying something if it has no new properties or oddities to explore. At first glance, some algebraic structure might seem like something new; something we’ve never seen before. But upon further inspection, we might find out that it’s structurally identical to something we’re already familiar with.

This notion of being “structurally identical” is extremely important, since it saves us time, and more importantly, allows us to map bizarre, alien situations to things more familiar to us. Like everything else in mathematics, it has a very precise meaning. First, we will look at the weaker concept of homomorphism.

**Definition**. A *homomorphism* between $(S, *)$ and $(T, \star)$ is any map $\varphi : S \to T$ such that $\varphi(a * b) = \varphi(a) \star \varphi(b)$ for all $a, b \in S$.

**Definition**. Let $(S, *)$ and $(T, \star)$ be two algebraic structures. An *isomorphism* between them is nothing more than a bijective homomorphism $\varphi : S \to T$. If such a map exists, then we say that $(S, *)$ and $(T, \star)$ are *isomorphic*.

Reading these definitions, we see that a homomorphism is a special kind of map, namely, a map which in some sense draws a compatibility between the binary operation on $S$ and the one on $T$. But in order for a map to be an isomorphism, it is not only required to be a homomorphism, but also a bijection.
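Here is a concrete sketch of the homomorphism property (the map and modulus are my own choices): reduction mod 6 from the integers under addition to $\mathbb{Z}_6$ under addition mod 6.

```python
# phi : (Z, +) -> (Z_6, addition mod 6), phi(a) = a % 6.
def phi(a):
    return a % 6

def op_T(x, y):          # the binary operation on the target set {0,...,5}
    return (x + y) % 6

# homomorphism property: phi(a + b) == phi(a) (combined with) phi(b)
for a in range(-20, 21):
    for b in range(-20, 21):
        assert phi(a + b) == op_T(phi(a), phi(b))

print("phi respects the operations")
```

Note that $\varphi$ here is a homomorphism but not a bijection (for instance $\varphi(0) = \varphi(6)$), so it is not an isomorphism.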

Note that when we say two structures are homomorphic we must be a little careful, since the notion of a homomorphism is not independent of which direction the map is in, whereas isomorphism is a symmetric relation so it doesn’t matter. In fact, isomorphism is an equivalence relation. Reflexivity can be seen by means of the trivial automorphism (the map which sends every element to itself), symmetry by the fact that the inverse of an isomorphism is again an isomorphism, and transitivity by the fact that the composition of isomorphisms is in turn an isomorphism. (You should prove all of these things.)

There are many homomorphisms which are not bijective (and are hence not isomorphisms), and there are many bijections which are not homomorphisms. So to be able to find a map which satisfies both of these properties is something special, and if we can do it, I *claim* that such a thing will be pretty damn useful.
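One classic example of such a map (my own choice of structures): $\mathbb{Z}_4$ under addition mod 4 is isomorphic to the fourth roots of unity $\{1, i, -1, -i\}$ under multiplication, via $\varphi(k) = i^k$. A sketch verifying both halves of the definition:

```python
# phi : (Z_4, + mod 4) -> ({1, i, -1, -i}, *), with phi(k) = i^k.
FOURTH_ROOTS = {0: 1, 1: 1j, 2: -1, 3: -1j}

def phi(k):
    return FOURTH_ROOTS[k % 4]

# bijective: the four images are pairwise distinct
assert len(set(phi(k) for k in range(4))) == 4

# homomorphism: phi((a + b) mod 4) == phi(a) * phi(b)
for a in range(4):
    for b in range(4):
        assert phi((a + b) % 4) == phi(a) * phi(b)

print("phi is a bijective homomorphism, i.e. an isomorphism")
```

So addition mod 4 and multiplication of fourth roots of unity are “structurally identical”: anything we prove about one transfers to the other.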

To see why this is, I should first confess that when an algebraic structure is studied by mathematicians, it is rarely “just” a magma. If you run down the street and yell, “Look! I have a set, and I’ve defined a binary operation on it!” then people won’t be interested at all. A magma is too weak and general of a concept to yield a theory rich enough to interest us. For this reason, we need to proceed and talk about structures with (for lack of better phrase) a little more structure.

The system of real numbers is a certain kind of algebraic structure called a field. In fact, it is a very special kind of field, a complete ordered field, and it is the *only* complete ordered field, up to isomorphism (when I say “up to isomorphism”, I mean that any other complete ordered fields are isomorphic to the real numbers, which is kind of cool). A field is an algebraic structure endowed with 2 binary operations, commonly referred to as “addition” and “multiplication”.

But this is all too much. Today, we will only consider these things as sets with *one* binary operation. So it’s about time I introduce the notion of a *group*.

Intuitively, a group is an algebraic structure with one operation that satisfies most of the “nice” properties we’d expect from a number system, although I emphasize that algebraic structures are abstract entities, and the objects involved could be anything at all, not necessarily numbers. Now, I’ll give the formal definition.

**Definition**. A *group* is an algebraic structure $(G, *)$ satisfying the following:

- [associativity] $(a * b) * c = a * (b * c)$ for all $a, b, c \in G$
- [existence of identity] there exists an element $e \in G$ such that $e * a = a * e = a$ for all $a \in G$
- [existence of inverses] for all $a \in G$ there exists an element $a^{-1} \in G$ such that $a * a^{-1} = a^{-1} * a = e$
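For a finite structure, these three axioms can be checked mechanically. A brute-force sketch (the set and operation are my own choices), verifying that the integers mod 5 under addition form a group:

```python
import itertools

G = range(5)

def op(a, b):                 # addition modulo 5
    return (a + b) % 5

# associativity: (a * b) * c == a * (b * c) for all triples
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a, b, c in itertools.product(G, repeat=3))

# existence of identity: some e satisfies e * a == a * e == a for every a
identities = [e for e in G if all(op(e, a) == a == op(a, e) for a in G)]
assert identities == [0]

# existence of inverses: every a has some b with a * b == b * a == e
e = identities[0]
assert all(any(op(a, b) == e == op(b, a) for b in G) for a in G)

print("(Z_5, + mod 5) is a group")
```

Brute force like this only works for small finite structures, but it mirrors the definition axiom by axiom.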

If it also satisfies commutativity ($a * b = b * a$ for all $a, b \in G$), it is called a commutative (or Abelian) group. In abstract algebra jargon, one would say that a group is a monoid with inverses (monoids are semigroups with identity, and semigroups are associative magmas). Considering the definition above, if we suppose $*$ is addition, it would make sense to denote the identity element by $0$ instead of $e$, and the inverse of $a$ by $-a$ instead of $a^{-1}$. If on the other hand $*$ is multiplication, it would make sense to denote the identity by $1$, and the inverses by $a^{-1}$ or $1/a$, to be consistent with what we’re used to. (If either of these conventions is adopted, we say that the group is *additively written* or *multiplicatively written*, depending on which convention is used.)

But since our focus is on the abstract, we will use the notation described in the definition ($e$ for the identity, $a^{-1}$ for the inverse of $a$).

Oooh, I’ve been looking for motivation to study abstract algebra, hopefully this might serve as a push to get me to start studying =D. Thanks!

Thanks for the comment! I’m so glad to hear that someone might find what I write useful. I noticed your IP is from Waterloo; are you also a pure math major? Or just interested in abstract algebra? Either way, it’s a pretty cool field.

I’m a CS major, just interested in some topics in pure math. Hopefully I can stick with it =D

Nice; are you in the advanced math classes? Anyhow, if you’re into abstract algebra I’d definitely check out PMATH 345/346. Looking forward to doing 345 in the Fall.