I’m taking Friedman’s Class (again), and it’s every bit as cool as they say.
The introductory problem this year is Ackermann’s Function, and I’ve really enjoyed muddling over it the last couple of days. To help get my head ’round its significance, I thought I’d spell it out a bit in a post.
Its significance is that it was the first function discovered to be recursive but not primitive recursive.
The basis goes something like this. Suppose we want to define addition, but we only have simple increments and decrements as operations. Back in the days when Turing Machines were new, this wouldn’t have seemed so far-fetched. We have two numbers, n and m, and we need to increment one of them, say n, m times. This can obviously be defined recursively: we simply add one to the result of calling the function again, but this time with a decremented m, until m is 0, when we return n.
add n, m := n if (m = 0)
:= increment (add n, (decrement m))
So if we want to add 3 and 2, first we check to see if 2 is zero. It isn’t, so we add one to the result of calling add again, this time with 3 and 1. Check again, m still isn’t 0, so we need to find the result of adding one to the result of calling add again, this time with 3 and 0. This time m is 0, so we return 3. Well, there was an increment waiting on that, so it’s now 4, and another increment waiting on that, so 5. The answer is 5.
Certainly not the most efficient way to do addition, but at least we know that this is a correct program that gets the right answer.
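For concreteness, here’s how that definition might be sketched in Python (the names increment and decrement are just my stand-ins for the primitive successor and predecessor operations, not anything from the class):

```python
def increment(n):
    return n + 1

def decrement(n):
    return n - 1

def add(n, m):
    # Base case: adding 0 to n is just n.
    if m == 0:
        return n
    # Otherwise: add(n, m) is 1 plus add(n, m - 1).
    return increment(add(n, decrement(m)))
```

Calling add(3, 2) unwinds exactly as in the trace above, returning 5.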
Ok, now suppose we try it with multiplication.
Well, this time, rather than adding 1 to the result of each recursive call (with the one decremented argument), what we’ll need to do is add n. Because really, when you think about it, multiplying two numbers n and m is the same as just adding n to itself m times. m is like a counter. Once m is 0, we return 0 (because by then we will have already added n as many times as we need to).
mult n, m := 0 if (m = 0)
:= add n, (mult n, (decrement m))
So again with 3 and 2. First we test whether m is 0, and it’s not, so we add 3 to the result of calling mult again on 3 and 1 – and again m isn’t 0, so we add 3 again to the result of calling mult with a decremented m. This time it’s 0, so we return zero, to which we add 3 and then 3 again, getting 6, which is the right answer. Again, not very efficient (especially since we’re calling our already-known-to-be-inefficient add function at each iteration), but it’s a provably correct program.
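The same Python sketch extends naturally to multiplication (add is restated here so the snippet stands on its own):

```python
def increment(n):
    return n + 1

def decrement(n):
    return n - 1

def add(n, m):
    # Adding 0 leaves n unchanged; otherwise recurse on m - 1.
    return n if m == 0 else increment(add(n, decrement(m)))

def mult(n, m):
    # Base case: n times 0 is 0.
    if m == 0:
        return 0
    # Otherwise, add n to the result of mult(n, m - 1).
    return add(n, mult(n, decrement(m)))
```

Here mult(3, 2) plays out exactly as in the trace above, returning 6.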
Well, it works for exponentiation too:
exp n, m := 1 if (m = 0)
:= mult n, (exp n, (decrement m))
Same principle, only this time we return 1 in the base case because we’re multiplying with each iteration. Note that exponentiation calls mult, which in turn calls add, so we’re knee-deep in inefficiency, but never mind. The point is that we’re building provably correct arithmetic functions on top of other more primitive provably correct arithmetic functions.
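And the whole tower so far, as one Python sketch (add and mult restated so the snippet stands on its own):

```python
def increment(n):
    return n + 1

def decrement(n):
    return n - 1

def add(n, m):
    return n if m == 0 else increment(add(n, decrement(m)))

def mult(n, m):
    return 0 if m == 0 else add(n, mult(n, decrement(m)))

def exp(n, m):
    # Base case: n to the 0 is 1.
    if m == 0:
        return 1
    # Otherwise, multiply n by the result of exp(n, m - 1).
    return mult(n, exp(n, decrement(m)))
```

So exp(3, 2) bottoms out at 1, then multiplies by 3 twice, giving 9 – with every multiplication itself bottoming out in repeated increments.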
Well, since everything here is ultimately defined in terms of addition, it raises the question of whether we could just have one function that does all of this upon request, based on an argument it’s passed. That is, could we have a function where if you pass it a 0, it returns a function that does addition; if you pass it a 1, it returns a function that does multiplication; and if you pass it a 2, it returns a function that performs exponentiation (and with a 3, iterated exponentiation – tetration – and so on)? Intuitively, it seems likely.
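One way to make that intuition concrete is the hyperoperation hierarchy, which is closely related to (though not identical with) Ackermann’s function. The sketch below is my own illustration rather than anything from the class; the name hyper and the exact base cases are assumptions:

```python
def hyper(k, n, m):
    # k selects the operation: 0 = addition, 1 = multiplication,
    # 2 = exponentiation, 3 = iterated exponentiation (tetration), ...
    if k == 0:
        return n + m  # addition is the ground level
    if m == 0:
        # Base cases: n * 0 = 0, while n ^ 0 = 1 (and likewise above).
        return 0 if k == 1 else 1
    # Each level is defined by iterating the level below it.
    return hyper(k - 1, n, hyper(k, n, m - 1))
```

Here hyper(0, 3, 2) gives 5, hyper(1, 3, 2) gives 6, hyper(2, 3, 2) gives 9, and hyper(3, 2, 3) gives 2^(2^2) = 16 – and the recursion depth explodes quickly beyond small inputs, a first hint at how fast this family of functions grows.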
[post under construction, to be modified later]