
2009-04-05

Halting "Problem" Redux

A while back I posted, as an April Fools' joke, a Java version of the Turing halting problem. I recently learned that for Turing, halting wasn't a "problem" at all; quite the opposite. The term was added later by a commentator (whose name I will post as soon as I look it up) who wanted to express the theorem in modern computing terms, where infinite loops are generally considered bad things. For Turing, whose purpose was to determine the limits of decidability, non-halting was no problem; in fact, programs that didn't halt were considered "good" programs. If an algorithm can keep chugging away at a problem indefinitely and would, given infinite time, produce the answer, then the problem is "decidable". Computing pi, the square root of 2, 1/3... those are all decidable problems. The reason we can't get an exact answer is simply that we don't have infinite time; it's not any fundamental problem with the calculation. The halting problem, on the other hand, is inherently "undecidable", since it embodies a paradox.
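
To make the distinction concrete, here's a toy Java sketch of a "good" program in Turing's sense (my example, not Turing's): it never halts, yet every digit it prints is a correct step toward the exact value of 1/3. Only the lack of infinite time keeps us from reading the complete answer.

    // Prints the decimal expansion of 1/3 forever, by long division.
    // A "good" non-halting program: given infinite time, it yields the exact answer.
    public class OneThird {
        public static void main(String[] args) {
            int remainder = 1;           // numerator of 1/3
            System.out.print("0.");
            while (true) {               // deliberately never halts
                remainder *= 10;         // one step of long division
                System.out.print(remainder / 3);
                remainder %= 3;
            }
        }
    }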

When Turing first posited the Universal Machine, it was in order to construct an example of an undecidable problem: no machine, not even a Universal Machine, can determine in general whether a given machine will halt. His purpose at the time was not to invent the computer. That use of the formulation came a little later, applied by many commentators, including Turing himself. So the invention of the Universal Turing Machine, of which all computers and programming languages are examples, was, like so many other great discoveries, an accident.
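
The shape of the paradox is easy to sketch in modern terms. Suppose, hypothetically, someone handed us a halts(program, input) method (the method and setup here are illustrative; the theorem says no such method can be written). We could then build a program that defeats it:

    // Hypothetical oracle -- the halting theorem says this method cannot exist.
    public class Paradox {
        static boolean halts(String program, String input) {
            throw new UnsupportedOperationException("no such method can exist");
        }
        public static void main(String[] args) throws Exception {
            // Feed this program its own source code as input.
            String self = new String(java.nio.file.Files.readAllBytes(
                    java.nio.file.Paths.get("Paradox.java")));
            if (halts(self, self)) {
                while (true) { }         // told we halt? then loop forever
            }
            // told we loop forever? then halt immediately -- a contradiction either way
        }
    }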

An accident that changed everything.

*Actually, Turing's presentation was even more esoteric than that. He postulated that every machine could be assigned a number that described it (not so far-fetched, since real computer programs are stored as binary numbers: computer programs are ultimately just numbers). So Turing was proving that no number can exist that determines whether any given number is "satisfactory", i.e. whether the machine it describes keeps computing forever, as a "good" machine should. Remember, this was a mathematical, not a practical, problem. As fate would have it, math problems have a tendency to become practical.
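
You can even see this number-ness in any modern program. As a rough illustration (it assumes a compiled Hello.class sits in the working directory), the bytes of a compiled Java class can be read as one enormous integer, a crude cousin of Turing's description numbers:

    import java.math.BigInteger;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Reads a compiled program's bytes and interprets them as a single integer.
    public class DescriptionNumber {
        public static void main(String[] args) throws Exception {
            byte[] bytes = Files.readAllBytes(Paths.get("Hello.class")); // assumed to exist
            BigInteger n = new BigInteger(1, bytes);  // the program, as a number
            System.out.println("Description number (" + n.toString().length()
                    + " decimal digits): " + n);
        }
    }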

2009-03-31

EDVAC, von Neumann, Turing

The machine on the right is EDVAC, the first machine ever designed on von Neumann's architecture.* I put this image here as a reference point for the rest of my blog. The speed with which that machine has evolved into what you're using right now, these gadgets that have infiltrated pretty much every aspect of our lives, is simply astounding. The most amazing thing is that the basic architecture hasn't changed much. You are, essentially, using a von Neumann machine right now. Sure, you might have quad cores, fancy pipelines and other newfangled stuff in your CPU (which, admittedly, are modifications of the von Neumann architecture, or VNA, possibly making your computer slightly non-von Neumann), but beyond those details it's mostly a von Neumann machine. It's that big hunk of metal on the right, just (much) faster and smaller.

The main concept behind a von Neumann architecture is the stored program: the idea that data and instructions can be treated equivalently. Programs can therefore read, write and execute other programs by treating them as data. This gives a remarkable generality: anything that can be made into instructions (really, anything that you could "instruct" somebody to do) can be done by such a machine, which reads in those instructions as data and runs them on its internal (von Neumann) architecture designed to do exactly that.
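
Here's a minimal Java sketch of that equivalence (an illustration of mine using the standard javax.tools compiler API; it assumes a JDK, not just a JRE, is available at runtime). A program starts life as plain character data, then gets compiled, loaded and executed:

    import javax.tools.JavaCompiler;
    import javax.tools.ToolProvider;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class StoredProgramDemo {
        public static void main(String[] args) throws Exception {
            // Instructions as data: just a String until we decide otherwise.
            String source = "public class Hello {\n"
                    + "  public static void main(String[] a) {\n"
                    + "    System.out.println(\"Hello from data!\");\n"
                    + "  }\n"
                    + "}\n";
            Path dir = Files.createTempDirectory("stored-program");
            Files.write(dir.resolve("Hello.java"), source.getBytes());

            // Treat the data as a program: compile it...
            JavaCompiler javac = ToolProvider.getSystemJavaCompiler(); // needs a JDK
            javac.run(null, null, null, dir.resolve("Hello.java").toString());

            // ...then load and execute the result.
            try (URLClassLoader loader = URLClassLoader.newInstance(
                    new URL[] { dir.toUri().toURL() })) {
                Class<?> hello = Class.forName("Hello", true, loader);
                hello.getMethod("main", String[].class)
                     .invoke(null, (Object) new String[0]);
            }
        }
    }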

In fact, von Neumann wasn't the first to come up with this idea. The credit for the invention really goes to Alan Turing, a man to whom we owe a debt of gratitude for many reasons (including playing a huge role in helping defeat the Nazis through his code-breaking work). His invention, purely a thought experiment until the 1940s, is now called the Universal Turing Machine. It introduced the concept of the stored program, that is, the data/instruction equivalence, in mathematical terms. So the von Neumann architecture can be considered the first architectural implementation of a Universal Turing Machine.

We modern programmers have become so accustomed to our routine that we don't even stop to marvel. We write coded instructions as simple character data. That data is read by another program, itself made with similar data constructs, which interprets our code or compiles it, ultimately reducing our instructions to a different set of (this time binary) instructions that a von Neumann machine can follow. This was Turing's idea in a nutshell, his purely mathematical concept, that millions of programmers now implement daily to do real work.

So what I'm trying to impress upon the reader is this: all this stuff we take for granted, these online social networks, the Wii, your iPod, whatever, all of it traces back to these guys and a few other intellectual giants who came up with these ideas over a short period a little more than 50 years ago. It's an amazing story, and they are some fascinating characters to boot.

The question still to be answered is: how far will this VNA/UTM stuff take us? These men thought it would lead to genuinely intelligent machines with all the capacities of human thought. Whether this will happen is still unknown (I have opinions on this that I'll share later). It has certainly taken longer than they thought it would. Then again, I don't think they envisioned ATMs, online shopping, Twittering, DVRs, iPods, WoW, netbooks, texting, blogs... yikes.


* Although it was not the first such machine to be built. Thanks to budgetary constraints that delayed EDVAC, that distinction goes to a British team, whose Manchester Mark I ran in 1949 [1]; EDVAC itself didn't become operational until 1951.