User:IssaRice/Computability and logic/Semantic completeness


Semantic completeness is sometimes written as: if Σ ⊨ ϕ, then Σ ⊢ ϕ.

Semantic completeness is the completeness that is the topic of Gödel's completeness theorem.

Semantic completeness differs from negation completeness.

Semantic completeness is about the completeness of a logic (not about the completeness of a theory).

Definition

I want to make sure all these definitions are saying the same thing, so let me list some from several textbooks so I can explicitly compare.

Smith's definition: a logic is semantically complete iff for any set of wffs Σ and any sentence ϕ, if Σ ⊨ ϕ then Σ ⊢ ϕ.[1]

Leary/Kristiansen's definition: A deductive system consisting of logical axioms Λ and a collection of rules of inference is said to be complete iff for every set of nonlogical axioms Σ and every L-formula ϕ, if Σ ⊨ ϕ, then Σ ⊢ ϕ.[2]

Alternative formulation

It is possible to formulate completeness by saying that a consistent set of sentences is satisfiable. In other words, the following are equivalent:

  1. Let Γ be a set of sentences, and let ϕ be a sentence. If Γ ⊨ ϕ, then Γ ⊢ ϕ.
  2. Let Γ be a set of sentences. If Γ is consistent, then Γ is satisfiable (has a model).
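Making the quantifiers explicit (each statement binds its own Γ, which is the point exploited in the proof), the claimed equivalence can be written as:

```latex
\bigl[\,\forall \Gamma\,\forall \varphi\;(\Gamma \vDash \varphi \implies \Gamma \vdash \varphi)\,\bigr]
\quad\Longleftrightarrow\quad
\bigl[\,\forall \Gamma\;(\Gamma \text{ is consistent} \implies \Gamma \text{ is satisfiable})\,\bigr]
```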

Proof idea

The first trick in the proof is to notice that the Γ that appears in (1) is not the same as the Γ that appears in (2). Since Γ is an arbitrary set of sentences in each of (1) and (2) (i.e. each statement is surrounded by "for all Γ …"), we can use a different set of sentences than the arbitrary Γ we are given. If you try to use the same Γ in both places, your proof will go nowhere.

In other words, the statement of the proposition obfuscates the situation a bit by using the same symbol for "different things" in the proof. One might ask why one would obfuscate things this way; one response is that if you're just stating one of the formulations in isolation, you would just use your "default variable" for a set of sentences. For instance, if you tend to use Γ for a set of sentences, then if someone asked you to state (1), you would use Γ. Then if someone asked you to state (2) a few weeks later, you wouldn't use Δ or Σ unless you had (1) in mind.

The other trick is to figure out just what to use as Γ when going from (2) to (1), and to figure out what to do with ϕ when going from (1) to (2).

The rest of the proof is some routine manipulations.

Proof

Suppose (1) is true. We will prove the contrapositive of (2). Let Γ be a set of sentences, and suppose Γ is unsatisfiable. Let ⊥ be a contradictory sentence, such as (ϕ ∧ ¬ϕ). Now, since Γ is unsatisfiable, there is no structure A such that A ⊨ Γ. This means that vacuously, for every A such that A ⊨ Γ, we have A ⊨ ⊥. Thus we have Γ ⊨ ⊥. Now by (1) (with ⊥ serving the role of "ϕ"), this means Γ ⊢ ⊥, which shows that Γ is inconsistent, as desired.
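Summarized as a chain of implications, the (1) ⇒ (2) direction is:

```latex
\Gamma \text{ unsatisfiable}
\;\Longrightarrow\; \Gamma \vDash \bot
\;\overset{(1)}{\Longrightarrow}\; \Gamma \vdash \bot
\;\Longrightarrow\; \Gamma \text{ inconsistent}
```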

Now suppose (2) is true. Let Γ be a set of sentences, let ϕ be a sentence, and suppose Γ ⊨ ϕ. We will first show that Γ ∪ {¬ϕ} is unsatisfiable. Suppose for the sake of contradiction that Γ ∪ {¬ϕ} is satisfiable. Then we would have a structure A such that A ⊨ Γ ∪ {¬ϕ}. This means A ⊨ Γ and A ⊨ ¬ϕ. Since Γ ⊨ ϕ and A ⊨ Γ, this means A ⊨ ϕ. So we have both A ⊨ ϕ and A ⊨ ¬ϕ, a contradiction. This proves that Γ ∪ {¬ϕ} is unsatisfiable.

Since Γ ∪ {¬ϕ} is unsatisfiable, by (2) we have that Γ ∪ {¬ϕ} is inconsistent. This means Γ ∪ {¬ϕ} ⊢ ⊥, so Γ ⊢ ¬¬ϕ, which means Γ ⊢ ϕ, which is what we wanted.
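The key semantic fact used above — Γ ⊨ ϕ exactly when Γ ∪ {¬ϕ} is unsatisfiable — can be checked mechanically in the propositional case, where satisfiability is decidable by brute force over valuations. The following is a minimal sketch (the helper names are my own, not from any of the cited books), with formulas encoded as nested tuples:

```python
from itertools import product

# Formulas as nested tuples:
#   ("var", "p"), ("not", f), ("and", f, g), ("imp", f, g)

def holds(f, valuation):
    """Evaluate formula f under a {variable: bool} valuation."""
    op = f[0]
    if op == "var":
        return valuation[f[1]]
    if op == "not":
        return not holds(f[1], valuation)
    if op == "and":
        return holds(f[1], valuation) and holds(f[2], valuation)
    if op == "imp":
        return (not holds(f[1], valuation)) or holds(f[2], valuation)
    raise ValueError(f"unknown connective: {op}")

def variables(f):
    """Collect the variable names occurring in f."""
    if f[0] == "var":
        return {f[1]}
    return set().union(*(variables(g) for g in f[1:]))

def satisfiable(gamma):
    """Brute force: does some valuation make every formula in gamma true?"""
    vs = sorted(set().union(set(), *(variables(f) for f in gamma)))
    return any(
        all(holds(f, dict(zip(vs, bits))) for f in gamma)
        for bits in product([False, True], repeat=len(vs))
    )

def entails(gamma, phi):
    """Gamma |= phi  iff  Gamma together with {not phi} is unsatisfiable."""
    return not satisfiable(list(gamma) + [("not", phi)])

p, q = ("var", "p"), ("var", "q")
print(entails([p, ("imp", p, q)], q))   # modus ponens: True
print(entails([p], q))                  # False
print(satisfiable([p, ("not", p)]))     # contradictory set: False
```

For first-order logic satisfiability is not decidable, so no such brute-force check exists there; the propositional case is only an illustration of the reduction itself.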

Proof commentary

In proving (1) ⇒ (2), we only needed that "if Γ is unsatisfiable, then Γ ⊨ ⊥". However, it is also possible to prove the converse, and show that if Γ is satisfiable, then Γ ⊭ ⊥. Some books factor out both sides of this implication as a lemma.
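Stated as a single biconditional, that factored-out lemma reads:

```latex
\Gamma \text{ is satisfiable} \quad\Longleftrightarrow\quad \Gamma \nvDash \bot
```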

References

  1. Peter Smith. An Introduction to Gödel's Theorems. p. 33.
  2. Leary; Kristiansen. A Friendly Introduction to Mathematical Logic. p. 74.