14 Apr 2019

Introduction

Ultimately, mathematicians would like to know what exactly their work will amount to. A fundamental understanding of the universe? (Un)limited practical applications? Mere abstract knowledge? Or is it all in vain? That’s where Godel’s Incompleteness theorems fit in and bring this issue to the surface.

The same goes for logicians and computer scientists. Logicians have what is called the decidability problem (the Entscheidungsproblem): given a set of premises, is there an automatic, machine-like procedure for deciding whether a conclusion follows from them? A similar problem stands before computer scientists: given a program (or an algorithm, or a machine) and some input, is there a general way to tell whether the program will ever finish running on that input? This is known as the Halting Problem.

Recap: Godel’s Incompleteness Theorems

See my essay here, but in short: the first theorem says that any sufficiently expressive system is either incomplete or inconsistent, and the second that such a system, if consistent, cannot prove its own consistency. In other words, not all true statements in such a system have a proof within it.

To make that argument, Godel constructed a sentence that talks about its own provability within the very system it is written in, and showed, by proof by contradiction, that the system must be either incomplete or inconsistent.
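Very compressed, and skipping over the arithmetisation that makes the self-reference legitimate (the notation Prov_S for “provable in system S” is my own shorthand here), the sentence and the case analysis look roughly like this:

```latex
% G asserts its own unprovability in the system S.
G \;\leftrightarrow\; \neg \mathrm{Prov}_S(\ulcorner G \urcorner)
% If S proves G, then Prov_S(G) holds while G denies it: S is inconsistent.
% If S is consistent, it proves neither G nor its negation, so G is a true
% statement that S cannot prove: S is incomplete.
```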

The Halting Problem

Here’s a very neat video explaining the Halting Problem, but let me try to summarize it (I strongly recommend that 7-minute video for understanding the problem: a visual explanation is better than a written one).

Let’s suppose we can invent a machine H that takes two inputs: a program P and an input I. Machine H can tell us what would happen if we give input I to program P – does it finish running and stop, or does it get stuck and run forever? In other words, does it halt or does it not halt? So that’s the Halting Problem: can such a machine exist?
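To make the setup concrete, here is a minimal sketch of machine H as a Python function. The name `halts` and its signature are my own invention for illustration – the whole argument below is about why no real body for this function can ever be written.

```python
def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical machine H.

    Returns True if the program described by `program_source`, when run on
    `program_input`, eventually finishes, and False if it runs forever.
    """
    raise NotImplementedError("The argument below shows this cannot be built.")
```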

Let’s give a simple example. We feed machine H two inputs: a program P that multiplies two numbers, and an input I of (2,3). Because program P can happily multiply 2 and 3 and finish, machine H will tell us that program P halts on that input. However, if we feed the same multiplication program P an input I of a bird image, P has no idea what to do with it and (for the sake of the example) gets stuck forever, so machine H will tell us that program P does not halt. So it seems like machine H can solve the Halting Problem very well – i.e. it can always tell us whether a program P halts or not on a given input I. But does it?
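As a toy version of that example (my own construction, not from the video): a program P that multiplies a pair of numbers and, purely for the sake of the example, spins forever when handed something it cannot multiply.

```python
def multiply_program(program_input):
    """Toy program P: halts on a pair of numbers, runs forever on anything else."""
    try:
        a, b = program_input       # e.g. (2, 3)
        return a * b               # finishes and stops: it halts
    except (TypeError, ValueError):
        while True:                # "gets stuck": it never halts
            pass

# A working machine H would report, roughly:
#   halts(multiply_program, (2, 3))      -> True   (P returns 6 and stops)
#   halts(multiply_program, bird_image)  -> False  (P loops forever)
```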

Now, here’s where it gets complicated. Let’s create another machine of our own imagination, called X. This machine has two parts: the first one is the above-mentioned machine H, followed by another machine N. Machine N takes one input (machine H’s verdict, which is either “halts” or “does not halt”) and does the opposite of whatever that verdict says: if machine N receives “halts”, it deliberately loops forever (so it does not halt), but if machine N receives “does not halt”, it halts immediately.
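Sticking with the hypothetical `halts` function from the sketch above, machines N and X could look like this (again only an illustration of the construction – it can never actually run to completion, because `halts` cannot exist):

```python
def machine_n(h_verdict: bool) -> None:
    """Machine N: do the opposite of whatever H predicts."""
    if h_verdict:              # H said "halts"          -> loop forever
        while True:
            pass
    else:                      # H said "does not halt"  -> halt immediately
        return

def machine_x(program_source: str, program_input: str) -> None:
    """Machine X: run H on (P, I), then hand H's verdict to N."""
    machine_n(halts(program_source, program_input))
```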

Now, machine X starts with machine H, so it takes the same two inputs – a program P and an input I. Let’s feed machine X itself, both as the program and as the input. Since machine H comes first, it is the part that processes these two inputs.

Let’s assume machine H outputs “halts”, which means that running X on the input X should finish and stop. Then the second part of machine X, machine N, receives “halts” and therefore loops forever. What just happened is that we fed machine X itself as input and it did not halt. However, the first part, machine H, whose existence we assumed, said “halts”, which means we have a contradiction: machine H said that X run on X should halt, but it did not.

Given this contradiction, machine H must instead output “does not halt” when we feed machine X itself as both program and input. But once machine H outputs “does not halt”, machine N halts immediately (as we programmed it to do), and therefore the entire thing (machine X) halts. So, once again, machine H said that X run on X should not halt, but it did.

Therefore, there cannot exist a machine H that reliably tells us whether a program P will halt or not on a given input I.
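In terms of the sketch above, the whole contradiction is the single self-application below; `x_source` is just a placeholder for machine X’s own source code.

```python
x_source = "<the source code of machine_x>"   # placeholder, not real source

# Case 1: halts(x_source, x_source) returns True  ("X on X halts")
#   -> machine_n loops forever, so X run on X does NOT halt. H was wrong.
# Case 2: halts(x_source, x_source) returns False ("X on X does not halt")
#   -> machine_n returns at once, so X run on X DOES halt. H was wrong again.
# Either way H's verdict is contradicted, so no correct H can be written.
machine_x(x_source, x_source)
```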

Godel and the Halting Problem: Same Thing, Different Package

There is a reason why I bundle Godel and the Halting Problem together: they appeared quite similar to me, not only in their conclusions but in their form as well. On the one hand, both show the limitations of certain knowledge structures (Godel for mathematics, the Halting Problem for computer science). On the other hand, both use a self-referential line of reasoning, whereby a system refers to itself, to reach a proof by contradiction.

Apparently, I was right. When Turing showed that a machine H solving the Halting Problem cannot exist, he remarked that “what I shall prove is quite different from the well-known results of Godel”, but this does not seem to be the case. Godel showed that a system expressive enough to make statements about its own provability must be either incomplete or inconsistent. In the Halting Problem, the would-be verifier H is itself a program that can be encoded and fed back into the very kind of thing it is supposed to verify (the system contains its own verifier), and that self-reference is exactly what breaks it: such a system cannot be both complete and consistent.

Not only that, but it seems like the Halting Problem is only a piece of a larger puzzle. Specifically, it is a special case of Rice’s theorem, which says that every non-trivial behavioural (semantic) property of programs is undecidable: no general procedure can look at an arbitrary program and decide whether it has the property. Such behavioural properties include – you guessed it – whether a program terminates given certain inputs.
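A hedged sketch of why that is: if we had a decider for any single behavioural property – say, “does this program ever print hello?” – we could use it to solve the Halting Problem, which we already know is impossible. Everything below (the names, the `run` pseudocode inside the string) is my own illustration.

```python
def prints_hello(program_source: str) -> bool:
    """Hypothetical decider for one behavioural property."""
    raise NotImplementedError   # by Rice's theorem, this cannot exist either

def halts_via_rice(program_source: str, program_input: str) -> bool:
    """If `prints_hello` existed, halting would become decidable too."""
    # Wrap P so that it first runs P on I (ignoring the result) and only then
    # prints "hello". The wrapper prints "hello" exactly when P halts on I.
    wrapper_source = (
        f"run({program_source!r}, {program_input!r})  # pseudocode: execute P on I\n"
        'print("hello")\n'
    )
    return prints_hello(wrapper_source)
```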

The Halting Problem: Not a Problem?

There are more contentious arguments to be made about the Halting Problem.

As I said above, if a system encodes its own verifier, we run into problems. But what if the verifier sits outside the system? In the language of computer science: if we use two different languages (or systems), do we still run into problems? My first instinct was that we might not – it’s a lost cause to ask a program in Python to decide whether all programs written in Python halt, so why not write the decider in Java instead? Unfortunately, switching languages alone doesn’t help: Java and Python are both Turing-complete, each can simulate the other, so a Java decider for Python halting is just another program that the same self-referential construction defeats. Where the idea does have teeth is when the system being verified is strictly weaker than the verifier – halting is decidable, for instance, for programs guaranteed to use only a bounded amount of memory – or when the verifier is strictly stronger than any ordinary program, like a hypothetical machine with a halting oracle (which then cannot decide halting for machines of its own kind).
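One hedged illustration of that “strictly weaker system” escape hatch: for a deterministic machine with only finitely many possible states, halting really is decidable, because a simulator can simply watch for a repeated state. The helper below is my own toy construction.

```python
def halts_bounded(step, start_state, halt_states):
    """Decide halting for a deterministic system with finitely many states.

    `step` maps each state to the next one; the run halts when it enters
    `halt_states`. With finitely many states, the run must either halt or
    revisit a state (and then loop forever), so an answer always comes back.
    """
    seen = set()
    state = start_state
    while state not in halt_states:
        if state in seen:
            return False          # a state repeated: the run loops forever
        seen.add(state)
        state = step(state)
    return True                   # a halting state was reached

# Example: repeatedly add 7 modulo 12, halting if we ever reach 5.
print(halts_bounded(lambda n: (n + 7) % 12, 0, halt_states={5}))  # True
```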

There seems to be yet another argument to be made: is incomputability even a meaningful statement? One of the most fundamental principles in science, among other disciplines, is that a statement should be falsifiable. That is to say, a statement, if false, should have a way of being proven false; a test that cannot fail conveys no information. Then is there any point in saying that a machine halts? If it never does, how would we ever find out?
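The asymmetry can be made concrete: running a program with a step budget can sometimes confirm that it halts, but no finite budget can ever confirm that it does not. A rough sketch, with programs written as Python generators (one yield per step) – a convention of my own for the illustration:

```python
def halts_within(program, program_input, max_steps):
    """Semi-decision: run `program` on `program_input` for at most `max_steps`.

    Returns True if it finished in time, and None ("no verdict") otherwise.
    No budget is large enough to turn None into a definite "it never halts".
    """
    runner = program(program_input)
    for _ in range(max_steps):
        try:
            next(runner)
        except StopIteration:
            return True            # observed the halt: the claim is verified
    return None                    # out of patience: nothing has been settled

# Toy program: a countdown that yields once per step and then stops.
def countdown(n):
    while n > 0:
        yield
        n -= 1

print(halts_within(countdown, 3, max_steps=10))       # True
print(halts_within(countdown, 10**9, max_steps=10))   # None (yet it would halt)
```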

So What?

Apart from finding it extraordinarily stimulating to ponder such paradoxes, extreme cases and abstractions of all kinds, I also feel that it’s incredibly humbling and liberating to glimpse the possible limitations of human knowledge.

Mathematics, as Plato argued, was supposed to be the eternal, objective language of the universe, but Godel swept that away. The hopes for computer science are that computers will do everything for us, and maybe do everything better than us – but will they? What if there are no universal truths? What if, as Piaget theorized at great length, we construct most of our reality in incredibly complex ways that we can barely comprehend, let alone build algorithms from?

But then again, what if there are universal truths to be known? Physics and mathematics, among other disciplines, are deeply concerned with finding universals and explaining everything in neat equations. Are their efforts in vain? The position you take on the above two problems will give you an answer to that one too.

Yet again, there’s another option: we are simply not good enough to know whether there are universal truths, let alone what they are. This view, that humans are not evolved enough to comprehend certain phenomena (such as consciousness), is called Mysterianism. It seems about right to me. After all, we have only been doing this science thing for a couple of hundred years, and we don’t seem to be very well designed to tackle abstract problems in such a rigorous way. One only needs to consider the range of biases that trip us up in the practice of science to realize how astonishing it is that we have achieved what we have.