## Algorithm writing

Processes are an everyday part of life. In order to influence the world around us, it is necessary to take some kind of action. These actions can be physical or purely mental. To find the roots of a quadratic equation, to get yourself to work or school in the morning, to get in touch with someone — all of these things rely on algorithms: some nonempty sequence of initiatives one takes in order to bring reality into a different state. I have not defined “initiative”, but every initiative a person takes, regardless of how small (even something as minor as having a momentary thought), has an immediate effect. One may at first ask whether initiatives are themselves algorithms, and indeed they are; but the idea of an “algorithm” is essentially a composition of initiatives.

In the first example, you’re seeking a sequence of operations that will ultimately invoke a state in which you’re aware of the roots of the given quadratic equation. This desired state is unlikely to be realized by the action of, say, removing your jacket. On the other hand, becoming aware of an alternative form of the equation (namely, the equation’s factored form, as a product of irreducible polynomials with coefficients from the complex field) does invoke such a state, because once you’ve reached this stage, you can read off the solutions. (In the complex polynomial ring, the fundamental theorem of algebra implies that every irreducible polynomial has degree exactly 1.)
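
As a small illustration of the “alternative form” idea, here is a sketch in Python of reaching a state from which the roots can be read off; the function name and interface are my own, not from any particular library:

```python
# Reach a state where the roots of a*x^2 + b*x + c can be read off,
# via the quadratic formula. cmath handles complex roots uniformly.
import cmath

def quadratic_roots(a, b, c):
    """Return the two (possibly complex) roots of a*x^2 + b*x + c, a != 0."""
    d = cmath.sqrt(b * b - 4 * a * c)  # square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)
```

Once the roots $r_1, r_2$ are in hand, the factored form $a(x - r_1)(x - r_2)$ is exactly the state from which the solutions are immediate.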

Some time ago, I recall reading that a shockingly small percentage of programmers were capable of implementing a binary search that functioned the first time around. This illustrates a trend I’ve observed for years: it seems to me that the average programmer begins writing an algorithm before truly understanding its underlying essence. Consider the insertion of $L_1$ into $L_2$, where these are singly linked lists of sizes $N$ and $M$ respectively, each with unique elements, in order with respect to some total order $\leq$; that is, $a_1 < a_2 < \ldots < a_N$ (and likewise for the elements $b_j$ of $L_2$). The algorithm that comes to mind immediately is one which runs in $O(N+M)$ time. We take the first element $a_1$ of $L_1$ and look at the first element $b_1$ of $L_2$. If $a_1 < b_1$, then we create a new list node containing $a_1$ and point it to the head of $L_2$; this node becomes the new head of $L_2$. If instead $a_1 > b_1$, we go on to the next element $b_2$ of $L_2$, and so on: we insert $a_1$ in front of the first element we find that is greater than it. (If we never find such an element, then $a_1$ belongs at the end.) During the entire traversal of $L_2$ in seeking an insertion spot, if we ever see an element equal to $a_i$, we save our position and continue to the next element of $L_1$, since we do not want duplicate elements.
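
The insertion just described can be sketched as follows. This is my own minimal Python rendering under the stated assumptions (both lists sorted, duplicate-free); the `Node` class and function name are illustrative, not a definitive implementation:

```python
# O(N+M) insertion of the sorted list L1 into the sorted list L2.
# The position (prev, b) in L2 only ever advances, since L1 is also
# sorted -- this is what makes the whole pass linear.

class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def insert_all(head1, head2):
    """Insert every element of L1 (head1) into L2 (head2), skipping
    duplicates; return the (possibly new) head of L2."""
    a = head1
    prev, b = None, head2          # saved position in L2
    while a is not None:
        # Advance through L2 until b is the first node >= a.value.
        while b is not None and b.value < a.value:
            prev, b = b, b.next
        if b is not None and b.value == a.value:
            pass                   # duplicate: skip, keep our position
        elif prev is None:
            head2 = Node(a.value, b)   # a.value precedes the head of L2
            prev = head2
        else:
            prev.next = Node(a.value, b)  # splice in front of b
            prev = prev.next
        a = a.next
    return head2
```

Note that `prev` and `b` are never reset between elements of $L_1$; resetting them to the head each time would still be correct but would degrade the running time to $O(N \cdot M)$.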

When writing such algorithms it is important to consider the states of your data structures before and after the application of said algorithm. In the case described above, since we are inserting $L_1$ into $L_2$ in order, the invariant we must pay attention to is that $L_2$ is sorted (another important invariant is that its elements are unique). The algorithm assumes $L_2$ is sorted, and if the algorithm is designed correctly, then $L_2$ will still be sorted once we are done. Rigorously proving that an algorithm preserves these invariants is the only sure-fire way a programmer can know that his code is sane. An insertion such as the one I described seems intuitively simple, but its implementation turns out to be rather nontrivial and subtle. (This is a recurring theme in mathematics and computer science.) Honestly, if I were ever asked on the spot to give an implementation of some similar algorithm, I’d probably just admit that I couldn’t do it. I could give an intuitive description of what it does, but there is so much going on in that kind of thing that I wouldn’t be able to keep track of it all.
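
Short of a full proof, the invariant-checking habit can at least be made mechanical. Here is a small sketch of my own (the names are hypothetical), using a plain Python list to stand in for the linked list, that asserts the invariant as a precondition and a postcondition:

```python
# Runtime invariant checking: the list must be strictly increasing,
# i.e. sorted with unique elements under the total order <.
import bisect

def is_strictly_increasing(values):
    return all(x < y for x, y in zip(values, values[1:]))

def insert_sorted_unique(target, value):
    """Insert value into the sorted, duplicate-free list `target`,
    checking the invariant before and after."""
    assert is_strictly_increasing(target)   # precondition
    if value not in target:
        bisect.insort(target, value)        # keeps target sorted
    assert is_strictly_increasing(target)   # postcondition
    return target
```

Asserting the invariant at the boundaries is of course weaker than proving it is preserved, but it catches the violation at the moment it happens rather than three calls later.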

My previous statement is one of the points I use to defend myself when people ask me why I study both pure mathematics and computer science. In reality, most phenomena are not as simple and shallow as they appear. They are subtle. This is why so much money is spent paying people to triple-check corporate infrastructures for security loopholes, to rigorously and mercilessly debug software, et cetera. Pure mathematics cannot be studied at the university level without developing a sharp skill for precise reasoning. There was a time when the same was true of computer science, but that time has passed. Many undergraduate CS degrees offered today dilute the true mathematical heart of the discipline, and I am cynical about what such degrees signify in a lot of professional contexts. So, good footing for future studies in theoretical computer science (should I choose to pursue them) is one of the many things I will gain from studying pure mathematics.

Since most programmers are not paid nearly enough to write out rigorous proofs of an algorithm’s correctness (not to mention that a lot of them probably aren’t capable of doing so), we have the wonderful thing known as debugging. At the end of the day, debugging is the “95% reliable” solution, whereas formal verification is truly “100% reliable”. However, due to the overhead incurred, applying formal methods to large, unwieldy systems is simply impractical. Hence, I think debugging is here to stay. It’s among the most important skills every software developer should have. And that’s all I’ve got to say for now.