# User:IssaRice/Computability and logic/Diagonalization lemma


## Revision as of 06:17, 8 February 2019

The **diagonalization lemma**, also called the Godel-Carnap fixed point theorem, is a fixed point theorem in logic.

## Rogers's fixed point theorem

Let <math>h</math> be a total computable function. Then there exists an index <math>n</math> such that <math>\varphi_n = \varphi_{h(n)}</math>.

(simplified)

Define <math>d(x) = \varphi_x(x)</math> (this is actually slightly wrong, but it brings out the analogy better).

Consider the function <math>h \circ d</math>. This is partial recursive, so <math>h \circ d = \varphi_e</math> for some index <math>e</math>.

Now <math>\varphi_e(e) = h(d(e))</math> since <math>\varphi_e = h \circ d</math>. This is equivalent to <math>d(e) = h(d(e))</math> by definition of <math>d</math>. Thus, we may take <math>n = d(e)</math> to complete the proof: applying <math>x \mapsto \varphi_x</math> to both sides gives <math>\varphi_{d(e)} = \varphi_{h(d(e))}</math>, i.e. <math>\varphi_n = \varphi_{h(n)}</math>.
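The construction above can be sketched at the string level in Python. This is an informal analogue, not the arithmetized theorem: "indices" are source strings of one-argument lambdas, <math>\varphi_e</math> is modeled as eval(e), and the particular <math>h</math> below (mapping an index <math>e</math> to an index of the constant function returning <math>e</math>) is a hypothetical choice made so that the resulting fixed point is a quine. Note that <math>d</math> here is the standard repaired version that returns an index *without* running <math>\varphi_x(x)</math>, avoiding the definedness issue with the simplified <math>d</math> used in the proof.

```python
def d(x):
    # Repaired diagonal function: d(x) is an *index* of the program that,
    # on input y, first computes phi_x(x) (an index) and then runs that
    # program on y.  d is total: phi_x(x) is only evaluated when the
    # returned program is actually run.
    return "(lambda y: eval(eval(" + repr(x) + ")(" + repr(x) + "))(y))"

def h(e):
    # The given total computable map on indices: h(e) is an index of the
    # constant function returning e.
    return "(lambda y: " + repr(e) + ")"

# An index e for the composition h . d, i.e. phi_e(x) = h(d(x)).
# (Relies on h and d being visible in eval's global scope.)
e = "(lambda x: h(d(x)))"

# The fixed point n = d(e):
# phi_n(y) = eval(phi_e(e))(y) = eval(h(d(e)))(y) = eval(h(n))(y) = n.
n = d(e)

assert eval(n)(0) == n        # program n returns its own index: a quine
assert eval(h(n))(0) == n     # phi_{h(n)} agrees with phi_n
```

Since <math>h</math> was chosen to be the constant-program transformer, <math>\varphi_n = \varphi_{h(n)}</math> here means program <math>n</math> outputs its own source; other choices of <math>h</math> give other self-referential programs by the same construction.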

## Diagonalization lemma

(semantic version)

Let <math>A</math> be a formula with one free variable. Then there exists a sentence <math>G</math> such that <math>G</math> is true iff <math>A(\ulcorner G\urcorner)</math> is true.

Define <math>d</math> to be the function <math>n \mapsto \ulcorner B(\ulcorner B\urcorner)\urcorner</math> where <math>n = \ulcorner B\urcorner</math>. In other words, given a number <math>n</math>, the function <math>d</math> finds the formula with that Godel number, then diagonalizes it (i.e. substitutes the Godel number of the formula into the formula itself), then returns the Godel number of the resulting sentence.

Let <math>F(x)</math> be <math>A(d(x))</math>, and let <math>G</math> be <math>F(\ulcorner F\urcorner)</math>.

Then <math>G</math> is <math>A(d(\ulcorner F\urcorner))</math>, by substituting in the definition of <math>F</math>.

We also have <math>d(\ulcorner F\urcorner) = \ulcorner F(\ulcorner F\urcorner)\urcorner</math> by definition of <math>d</math>. By definition of <math>G</math>, this is <math>\ulcorner G\urcorner</math>, so we have <math>d(\ulcorner F\urcorner) = \ulcorner G\urcorner</math>.

To complete the proof, apply <math>A</math> to both sides of the final equality to obtain <math>A(d(\ulcorner F\urcorner))</math> iff <math>A(\ulcorner G\urcorner)</math>; since <math>A(d(\ulcorner F\urcorner))</math> is <math>G</math>, this simplifies to <math>G</math> iff <math>A(\ulcorner G\urcorner)</math>.
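The same construction can be sketched in Python at the string level, skipping the arithmetization entirely: "formulas" are Python expressions with free variable X, and a string plays the role of its own Godel number. The sample property A below ("X has even length") is a hypothetical stand-in for an arbitrary formula with one free variable.

```python
def diag(s):
    # d: substitute the quotation of s for the free variable X in s,
    # returning (the "Godel number" of) the diagonalized sentence.
    # Occurrences of X introduced by the replacement are not re-replaced.
    return s.replace("X", repr(s))

# A(X): a sample "formula" with free variable X -- "X has even length".
A = "len(X) % 2 == 0"

# F(X) is A(d(X)): substitute diag(X) for X in A.
F = A.replace("X", "diag(X)")   # "len(diag(X)) % 2 == 0"

# G is the diagonalization of F, i.e. F applied to its own quotation.
G = diag(F)

# Key fact: evaluating G gives the same truth value as asking whether
# A holds of (the quotation of) G -- the fixed point of the lemma.
lhs = eval(G)                    # G
rhs = eval(A.replace("X", "G"))  # A(<quotation of G>)
assert lhs == rhs
```

The self-reference happens because running G recomputes diag(F), which is G itself; so G ends up asserting A of its own quotation, mirroring <math>G</math> iff <math>A(\ulcorner G\urcorner)</math>.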


## Comparison table

Some things to notice:

- The two theorems are essentially identical, with identical proofs, as seen by the matching rows. The analogy breaks down slightly at the very end, where we apply <math>x \mapsto \varphi_x</math> vs <math>A</math> (the latter corresponds to <math>h</math> until the very end).
- In the partial recursive functions world, it's easy to go from the index (e.g. <math>e</math>) to the partial function (<math>\varphi_e</math>). In the formulas world it's the reverse, where it's easy to go from a formula (e.g. <math>A</math>) to its Godel number (<math>\ulcorner A\urcorner</math>). I wonder if there is something essential here, or if it is simply some sort of historical accident in notation.
- For the diagonalization lemma, here we have done the semantic version (? I think...), but usually the manipulations are done inside a formal system with reference to some theory <math>T</math> to derive a syntactic result (i.e. we have some theory that is strong enough to do all these manipulations within the object-level language). For partial recursive functions, as far as I know, there is no analogous distinction between semantics vs syntax.
- The diagonalization part is not completely correct/as strong as possible for both proofs. For the partial recursive functions side, we want to make sure that <math>\varphi_{\varphi_x(x)}</math> is actually defined in each case. For the logic side, I think often the diagonalization is defined as <math>\exists x(x = \ulcorner A\urcorner \wedge A)</math> so that it is defined for all formulas, not just ones with one free variable.

Step | Rogers's fixed point theorem | Diagonalization lemma
---|---|---
Theorem statement (note: quantifiers are part of the metalanguage) | For every total computable <math>h</math> there is an index <math>n</math> with <math>\varphi_n = \varphi_{h(n)}</math> | For every formula <math>A</math> with one free variable there is a sentence <math>G</math> with <math>G</math> iff <math>A(\ulcorner G\urcorner)</math>
Given mapping | <math>h</math> | <math>A</math>
Definition of diagonal function | <math>d(x) = \varphi_x(x)</math> | <math>d(\ulcorner B\urcorner) = \ulcorner B(\ulcorner B\urcorner)\urcorner</math>
Composition of given mapping with diagonal function | <math>h \circ d</math> | <math>A(d(x))</math>
Naming the composition | (name not given because compositions are easy to express outside a formal language) | <math>F(x)</math> is <math>A(d(x))</math>
Index of composition | <math>e</math>, where <math>\varphi_e = h \circ d</math> | <math>\ulcorner F\urcorner</math>
Expanding using definition of diagonal | <math>d(e) = \varphi_e(e)</math> | <math>d(\ulcorner F\urcorner) = \ulcorner F(\ulcorner F\urcorner)\urcorner</math> | 
The composition applied to own index (i.e. diagonalization of the composition) | <math>\varphi_e(e) = h(d(e))</math> | <math>F(\ulcorner F\urcorner)</math>, i.e. <math>A(d(\ulcorner F\urcorner))</math>
G defined | (no equivalent definition) | <math>G</math> is <math>F(\ulcorner F\urcorner)</math>
Renaming index | <math>n = d(e)</math> | <math>\ulcorner G\urcorner = d(\ulcorner F\urcorner)</math>
Leibniz law to previous row | Apply <math>x \mapsto \varphi_x</math> to obtain <math>\varphi_{d(e)} = \varphi_{h(d(e))}</math> | Apply <math>A</math> to obtain <math>A(d(\ulcorner F\urcorner))</math> iff <math>A(\ulcorner G\urcorner)</math>
Use definition of G | <math>\varphi_n = \varphi_{h(n)}</math> | <math>G</math> iff <math>A(\ulcorner G\urcorner)</math>
(Definition of G)? | <math>n</math> is <math>d(e)</math> | <math>G</math> is <math>F(\ulcorner F\urcorner)</math>