User:IssaRice/Computability and logic/Diagonalization lemma
The diagonalization lemma, also called the Gödel-Carnap fixed point theorem, is a fixed point theorem in logic.
Rogers's fixed point theorem
Let $f$ be a total computable function. Then there exists an index $n$ such that $\varphi_n = \varphi_{f(n)}$.
Define $d(x) = \varphi_x(x)$ (this is actually slightly wrong, but it brings out the analogy better).
Consider the function $f \circ d$. This is partial recursive, so $f \circ d = \varphi_e$ for some index $e$.
Now $\varphi_e(e) = f(d(e))$ since $\varphi_e = f \circ d$. This is equivalent to $d(e) = f(d(e))$ by definition of $d$. Thus, we may take $n = d(e)$ to complete the proof.
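Putting the pieces together (with $d(x) = \varphi_x(x)$ and $\varphi_e = f \circ d$ as above), the sloppy argument is the single chain

$$n = d(e) = \varphi_e(e) = (f \circ d)(e) = f(d(e)) = f(n),$$

so $n = f(n)$, and hence trivially $\varphi_n = \varphi_{f(n)}$.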
It looks like we have $d(e) = f(d(e))$, i.e. $n = f(n)$. Is this right?
Repeatedly using the facts that (1) $e$ is an index for $f \circ d$, and (2) $d(e) = \varphi_e(e)$, allows us to create an iteration effect: $$n = d(e) = f(d(e)) = f(f(d(e))) = f(f(f(d(e)))) = \cdots$$
(I'm wondering if there's some deeper meaning to this. So far it's just an interesting connection between diagonalization-based fixed points and iteration-based fixed points. I think there might be a connection between this and the fix function in Haskell.)
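On that hunch: here is a minimal sketch of the connection (in Python rather than Haskell, using the strict Z combinator in place of Haskell's lazy `fix`; all names here are illustrative, not part of the proof above). The point is that a fixed-point combinator produces exactly the kind of unbounded unfolding seen in the iteration chain.

```python
# Z combinator: a fixed-point combinator that works in a strict language.
# fix(F) returns a function g that is extensionally a fixed point of F:
# calling g unfolds F one step at a time, mirroring n = f(n) = f(f(n)) = ...
def fix(F):
    return (lambda x: F(lambda v: x(x)(v)))(lambda x: F(lambda v: x(x)(v)))

# Example: factorial defined without explicit recursion.
fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```

In Haskell the same thing is just `fix f = let x = f x in x`; laziness makes the self-application unnecessary.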
In the more rigorous/careful version of the proof, we use the s-m-n theorem to get an index of a function, $d$, which is basically like $x \mapsto \varphi_x(x)$. The difference is that $\varphi_x(x)$ might not be defined for all $x$ (actually it isn't, since some partial functions are always undefined), so $x \mapsto \varphi_x(x)$ is not total. On the other hand, $d$ is obtained via the s-m-n theorem, so it is total. When $\varphi_x(x)$ is undefined, $d(x)$ gives an index of the always-undefined partial function. So $d$ says "this is undefined" in a defined way. Thanks to this property, the expression $f(d(x))$ always makes sense, whereas $f(\varphi_x(x))$ sometimes doesn't make sense.
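Spelled out (a sketch of the standard s-m-n step): the function $(x, y) \mapsto \varphi_{\varphi_x(x)}(y)$ is partial recursive, so the s-m-n theorem gives a total recursive $d$ with

$$\varphi_{d(x)}(y) \simeq \varphi_{\varphi_x(x)}(y) \quad \text{for all } x, y,$$

where, when $\varphi_x(x)$ is undefined, both sides are undefined for every $y$, i.e. $d(x)$ is an index of the always-undefined function.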
Diagonalization lemma
Let $B(y)$ be a formula with one free variable. Then there exists a sentence $G$ such that $G$ holds iff $B(\ulcorner G \urcorner)$ holds.
Define $\mathrm{diag}$ to be the function where $\mathrm{diag}(\ulcorner A \urcorner) = \ulcorner A(\ulcorner A \urcorner) \urcorner$. In other words, given a number $n$, the function finds the formula $A$ with that Gödel number, then diagonalizes it (i.e. substitutes the Gödel number of the formula into the formula itself), then returns the Gödel number of the resulting sentence.
Let $C(x)$ be $B(\mathrm{diag}(x))$, and let $G$ be $C(\ulcorner C \urcorner)$.
Then $G$ is $B(\mathrm{diag}(\ulcorner C \urcorner))$, by substituting in the definition of $C$.
We also have $\mathrm{diag}(\ulcorner C \urcorner) = \ulcorner C(\ulcorner C \urcorner) \urcorner$ by definition of $\mathrm{diag}$. By definition of $G$, the right-hand side is $\ulcorner G \urcorner$, so we have $\mathrm{diag}(\ulcorner C \urcorner) = \ulcorner G \urcorner$.
To complete the proof, apply $B$ to both sides of the final equality to obtain $B(\mathrm{diag}(\ulcorner C \urcorner))$ iff $B(\ulcorner G \urcorner)$; since the left-hand side is $G$, this simplifies to $G$ iff $B(\ulcorner G \urcorner)$.
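Here is a toy illustration of the construction (purely a sketch: strings stand in for Gödel numbers, Python's `repr` stands in for the quotation $\ulcorner \cdot \urcorner$, `X` is the free variable, and `B` is left uninterpreted; none of these names come from the proof itself):

```python
def diag(formula):
    # Diagonalize: substitute the quotation of `formula` for its free variable X.
    return formula.replace("X", repr(formula))

# C(X) is B(diag(X)): the given predicate B composed with diag.
C = "B(diag(X))"

# G is C applied to its own quotation, i.e. diag applied to C.
G = diag(C)
print(G)  # B(diag('B(diag(X))'))

# The key identity diag("C") = "G": the quoted argument appearing inside G
# is C itself, and diagonalizing C yields G.
assert G == "B(diag(%r))" % C
```

So $G$ is a sentence that mentions (the quotation of) $C$, yet the term $\mathrm{diag}(\ulcorner C \urcorner)$ inside it denotes $\ulcorner G \urcorner$, which is how $G$ manages to "talk about itself" without literally containing itself.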
Trying to discover the lemma
See the Owings paper (referenced below).
In the framework of this paper, we have a matrix where each entry is of a certain type. Then we apply the function $\alpha$ to the diagonal. If the diagonal turns into one of the rows, $\alpha$ has a fixed point.
So now the trick is to figure out what our $\alpha$ should be, and also what our matrix should look like.
Picking the $\alpha$ doesn't seem hard: we want a fixed point for the operation $\varphi_n \mapsto \varphi_{f(n)}$, so we can pick $\alpha(\varphi_n) = \varphi_{f(n)}$. One problem is that this might not be well-defined, but we can just go with this for now (it ends up not mattering, for reasons I don't really understand, but the Owings paper has another workaround, which is to use relations; I find that more confusing).
The matrix that works turns out to have entries $\varphi_{\varphi_m(n)}$ (in row $m$, column $n$). I'm not sure how one would have figured this out. One might also think entries $\varphi_m(n)$ would work, but notice that then we fail the type checking with $\alpha$ (which takes a function, not a natural number).
So now we take the diagonal, which has entries $\varphi_{\varphi_n(n)}$ for $n = 0, 1, 2, \ldots$, and apply $\alpha$. We get $\varphi_{f(\varphi_n(n))}$. But $g$ defined by $g(n) = f(\varphi_n(n))$ is a recursive function, so the diagonal has turned into $\varphi_{g(n)}$. Since a composition of recursive functions is itself recursive, $g$ is recursive. So we have some index $e$ for it, i.e. $g = \varphi_e$. So $\alpha$ applied to the diagonal results in $\varphi_{\varphi_e(n)}$, which is one of the rows (the $e$th row). This means $\alpha$ has a fixed point, in the $e$th entry, i.e. at $\varphi_{\varphi_e(e)}$. So we expect $\alpha(\varphi_{\varphi_e(e)}) = \varphi_{\varphi_e(e)}$. Since $\alpha(\varphi_{\varphi_e(e)}) = \varphi_{f(\varphi_e(e))}$, the "real" fixed point for the operator will be at $n = \varphi_e(e)$. Indeed, $\varphi_{\varphi_e(e)} = \varphi_{f(\varphi_e(e))}$.
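In symbols (with $g(n) = f(\varphi_n(n))$ and $e$ an index for $g$), applying $\alpha$ to the diagonal of the matrix $(\varphi_{\varphi_m(n)})_{m,n}$ gives

$$\alpha(\varphi_{\varphi_n(n)}) = \varphi_{f(\varphi_n(n))} = \varphi_{g(n)} = \varphi_{\varphi_e(n)},$$

which is the $e$th row. Comparing the transformed diagonal with the $e$th row at column $e$ gives $\alpha(\varphi_{\varphi_e(e)}) = \varphi_{\varphi_e(e)}$, i.e. $\varphi_{f(\varphi_e(e))} = \varphi_{\varphi_e(e)}$, so $n = \varphi_e(e)$ is the desired fixed point.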
Now we have to verify that $\alpha$ doesn't need to be well-defined.
Another route: take Cantor's theorem, generalize it to mention fixed points, then take the contrapositive. See the Yanofsky paper for details.
This version still has some mystery for me, e.g. replacing "the set has at least two elements" with "there is a function from the set to itself without a fixed point". The logical equivalence is easy to see, but getting the idea for rephrasing this condition to mention fixed points is not obvious at all.
The use of the s-m-n theorem also isn't obvious to me. Why use it at all? Why use it on $(x, y) \mapsto \varphi_{\varphi_x(x)}(y)$? Why do we care about the index of $f \circ d$?
It's also not clear to me why we use $\mathbb{N}$ and the set of partial recursive functions. In some sense it does make sense: the natural numbers are all the algorithms, and the partial recursive functions are the "properties" (a.k.a. "the objects being named").
Some things to notice:
- The two theorems are essentially identical, with identical proofs, as seen by the matching rows. The analogy breaks down slightly at the very end, where we apply $n \mapsto \varphi_n$ vs $B$ (the latter having corresponded to $f$ until the very end).
- In the partial recursive functions world, it's easy to go from the index (e.g. $n$) to the partial function ($\varphi_n$). In the formulas world it's the reverse: it's easy to go from a formula (e.g. $A$) to its Gödel number ($\ulcorner A \urcorner$). I wonder if there is something essential here, or if it is simply some sort of historical accident in notation.
- For the diagonalization lemma, we have done the semantic version here (I think), but usually the manipulations are done inside a formal system with reference to some theory, to derive a syntactic result (i.e. we have some theory that is strong enough to do all these manipulations within the object-level language). For partial recursive functions, as far as I know, there is no analogous distinction between semantics and syntax.
- The diagonalization part is not completely correct/as strong as possible in either proof. On the partial recursive functions side, we want to make sure that $\varphi_x(x)$ is actually defined in each case. On the logic side, I think the diagonalization is often defined as $\mathrm{diag}(\ulcorner A \urcorner) = \ulcorner \exists x (x = \ulcorner A \urcorner \wedge A) \urcorner$ so that it is defined for all formulas, not just ones with one free variable. But the essential ideas are all present below, and since this makes the comparison easier, the presentation is simplified.
|Step||Rogers's fixed point theorem||Diagonalization lemma|
|Theorem statement (note: quantifiers are part of the metalanguage)||For every total computable $f$, there is an $n$ with $\varphi_n = \varphi_{f(n)}$||For every formula $B(y)$ with one free variable, there is a sentence $G$ with $G$ iff $B(\ulcorner G \urcorner)$|
|Definition of diagonal function||$d(x) = \varphi_x(x)$||$\mathrm{diag}(\ulcorner A \urcorner) = \ulcorner A(\ulcorner A \urcorner) \urcorner$|
|Composition of given mapping with diagonal function||$f \circ d$||$B(\mathrm{diag}(x))$|
|Naming the composition||(name not given because compositions are easy to express outside a formal language)||$C(x)$ is $B(\mathrm{diag}(x))$|
|Index of composition||$e$, where $\varphi_e = f \circ d$||$\ulcorner C \urcorner$|
|Expanding using definition of diagonal||$d(e) = \varphi_e(e)$||$\mathrm{diag}(\ulcorner C \urcorner) = \ulcorner C(\ulcorner C \urcorner) \urcorner$|
|The composition applied to own index (i.e. diagonalization of the composition)||$\varphi_e(e) = f(d(e))$||$C(\ulcorner C \urcorner)$ is $B(\mathrm{diag}(\ulcorner C \urcorner))$|
|G defined||(no equivalent definition)||$G$ is $C(\ulcorner C \urcorner)$|
|Leibniz law to previous row||Apply $m \mapsto \varphi_m$ to $d(e) = f(d(e))$ to obtain $\varphi_{d(e)} = \varphi_{f(d(e))}$||Apply $B$ to $\mathrm{diag}(\ulcorner C \urcorner) = \ulcorner G \urcorner$ to obtain $B(\mathrm{diag}(\ulcorner C \urcorner))$ iff $B(\ulcorner G \urcorner)$|
|Use definition of G||$\varphi_n = \varphi_{f(n)}$ with $n = d(e)$||$G$ iff $B(\ulcorner G \urcorner)$|
|(Definition of G)?||$n$ is $d(e)$||$G$ is $C(\ulcorner C \urcorner)$|
"All of these theorems tend to strain one's intuition; in fact, many people find them almost paradoxical. The most popular proofs of these theorems only serve to aggravate the situation because they are completely unmotivated, seem to depend upon a low combinatorial trick, and are so barbarically short as to be nearly incapable of rational analysis."
"This is just a lovely result, insightful in its concept and far reaching in its consequences. We’d love to say that the proof was also lovely and enlightening, but to be honest, we don’t have an enlightening sort of proof to show you. Sometimes the best way to describe a proof is that the argument sort of picks you up and shakes you until you agree that it does, in fact, establish what it is supposed to establish. That’s what you get here."
"The brevity of the proof does not make for transparency; it has the aura of a magician’s trick. How did Gödel ever come up with the idea? As a matter of fact, Gödel did not come up with that idea."
Questions/things to explain
- In Peter Smith's book, he defines Gdl(m, n) as Prf(m, diag(n)). What is the analogue of Gdl for Rogers's fixed point theorem?
- I like the motivation that begins this answer, but what is the analogue for partial functions? It seems like the analogous object does exist (because we are allowed to have undefined values). So the motivation that works for the logic version doesn't work for the partial functions version, which bugs me.
- Haim Gaifman. "Naming and Diagonalization, from Cantor to Gödel to Kleene".
- James C. Owings, Jr. "Diagonalization and the Recursion Theorem". 1973.
- Christopher C. Leary; Lars Kristiansen. A Friendly Introduction to Mathematical Logic (2nd ed). p. 172.
- Noson S. Yanofsky. "A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points". 2003.