# User:IssaRice/Computability and logic/Semantic completeness

Semantic completeness is sometimes written as: if $\Gamma \models \varphi$, then $\Gamma \vdash \varphi$.

Semantic completeness is the notion of completeness at issue in Gödel's completeness theorem.

Semantic completeness differs from negation completeness.

Semantic completeness is about the completeness of a logic (not about the completeness of a theory).

## Definition

I want to make sure all these definitions are saying the same thing, so let me list some from several textbooks so I can explicitly compare.

Smith's definition: a logic is semantically complete iff for any set of wffs $\Gamma$ and any sentence $\varphi$, if $\Gamma \models \varphi$ then $\Gamma \vdash \varphi$.^{[1]}

Leary/Kristiansen's definition: A deductive system consisting of logical axioms and a collection of rules of inference is said to be complete iff for every set of nonlogical axioms $\Sigma$ and every $\mathcal{L}$-formula $\phi$, if $\Sigma \models \phi$, then $\Sigma \vdash \phi$.^{[2]}

## Alternative formulation

It is possible to formulate completeness by saying that a consistent set of sentences is satisfiable. In other words, the following are equivalent:

1. Let $\Gamma$ be a set of sentences, and let $\varphi$ be a sentence. If $\Gamma \models \varphi$, then $\Gamma \vdash \varphi$.
2. Let $\Gamma$ be a set of sentences. If $\Gamma$ is consistent, then $\Gamma$ is satisfiable (has a model).
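The two formulations can be written down abstractly in Lean. This is only a sketch: `Entails`, `Proves`, `Consistent`, and `Satisfiable` are assumed primitives with illustrative names (not definitions from any library), and a set of sentences is modeled as a predicate.

```lean
-- Hypothetical abstract interfaces; all names are illustrative.
-- `Entails Γ φ` stands for Γ ⊨ φ and `Proves Γ φ` for Γ ⊢ φ;
-- a "set of sentences" is modeled as a predicate `Sentence → Prop`.
variable {Sentence : Type}
variable (Entails Proves : (Sentence → Prop) → Sentence → Prop)
variable (Consistent Satisfiable : (Sentence → Prop) → Prop)

/-- Formulation (1): whatever Γ semantically entails, it derives. -/
def Formulation1 : Prop :=
  ∀ (Γ : Sentence → Prop) (φ : Sentence), Entails Γ φ → Proves Γ φ

/-- Formulation (2): every consistent set of sentences has a model. -/
def Formulation2 : Prop :=
  ∀ (Γ : Sentence → Prop), Consistent Γ → Satisfiable Γ
```

Note that both statements quantify over all sets of sentences, which is the point exploited in the proof below.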

### Proof idea

The first trick in the proof is to notice that the $\Gamma$ that appears in (1) is not the same as the $\Gamma$ that appears in (2). Since $\Gamma$ is an arbitrary set of sentences in each of (1) and (2) (i.e. each statement is surrounded by "for all …"), we can use a different set of sentences than the arbitrary $\Gamma$ we are given. If you try to use the same $\Gamma$ in both places, your proof will go nowhere.

In other words, the statement of the proposition obfuscates the situation a bit by using the same symbol for "different things" in the proof. One might ask why one would obfuscate things this way; one response is that if you're just stating one of the formulations in isolation, you would just use your "default variable" for a set of sentences. For instance, if you tend to use $\Gamma$ for a set of sentences, then if someone asked you to state (1), you would use $\Gamma$. Then if someone asked you to state (2) a few weeks later, you wouldn't use $\Delta$ or $\Sigma$ unless you had (1) in mind.

The other trick is to figure out just what to use as $\Gamma$ when going from (2) to (1), and to figure out what to do with $\varphi$ when going from (1) to (2).

The rest of the proof is some routine manipulations.

### Proof

Suppose (1) is true. We will prove the contrapositive of (2). Let $\Gamma$ be a set of sentences, and suppose $\Gamma$ is unsatisfiable. Let $\varphi$ be a contradictory sentence, such as $(\psi \wedge \neg\psi)$. Now, since $\Gamma$ is unsatisfiable, there is no structure $\mathcal{M}$ such that $\mathcal{M} \models \Gamma$. This means that vacuously, for every $\mathcal{M}$ such that $\mathcal{M} \models \Gamma$, we have $\mathcal{M} \models \varphi$. Thus we have $\Gamma \models \varphi$. Now by (1) (with this contradictory $\varphi$ serving the role of "$\varphi$"), this means $\Gamma \vdash \varphi$, which shows that $\Gamma$ is inconsistent, as desired.
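Under stated assumptions, this direction can be checked in Lean. The two auxiliary facts used in the paragraph above (vacuous entailment from an unsatisfiable set, and that deriving a contradictory sentence yields inconsistency) are packaged as hypotheses; every name here is illustrative, not from any library.

```lean
-- A sketch of (1) ⇒ (2); all names are illustrative.
theorem one_implies_two {Sentence : Type}
    (Satisfiable Consistent : (Sentence → Prop) → Prop)
    (Entails Proves : (Sentence → Prop) → Sentence → Prop)
    (contradiction : Sentence)  -- a fixed contradictory sentence, e.g. (ψ ∧ ¬ψ)
    -- vacuous entailment: an unsatisfiable Γ entails any sentence
    (vacuous : ∀ Γ φ, ¬ Satisfiable Γ → Entails Γ φ)
    -- deriving the contradictory sentence makes Γ inconsistent
    (derives_contra : ∀ Γ, Proves Γ contradiction → ¬ Consistent Γ)
    -- formulation (1)
    (h1 : ∀ Γ φ, Entails Γ φ → Proves Γ φ) :
    ∀ Γ, Consistent Γ → Satisfiable Γ := by
  intro Γ hcons
  -- prove the contrapositive, as in the text
  refine Classical.byContradiction fun hunsat => ?_
  exact derives_contra Γ (h1 Γ contradiction (vacuous Γ contradiction hunsat)) hcons
```

The proof term mirrors the prose exactly: unsatisfiability gives $\Gamma \models \varphi$ vacuously, (1) converts that to $\Gamma \vdash \varphi$, and deriving a contradiction refutes consistency.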

Now suppose (2) is true. Let $\Gamma$ be a set of sentences, let $\varphi$ be a sentence, and suppose $\Gamma \models \varphi$. We will first show that $\Gamma \cup \{\neg\varphi\}$ is unsatisfiable. Suppose for the sake of contradiction that $\Gamma \cup \{\neg\varphi\}$ is satisfiable. Then we would have a structure $\mathcal{M}$ such that $\mathcal{M} \models \Gamma \cup \{\neg\varphi\}$. This means $\mathcal{M} \models \Gamma$ and $\mathcal{M} \models \neg\varphi$. Since $\Gamma \models \varphi$, this means $\mathcal{M} \models \varphi$. So we have both $\mathcal{M} \models \varphi$ and $\mathcal{M} \models \neg\varphi$, a contradiction. This proves that $\Gamma \cup \{\neg\varphi\}$ is unsatisfiable.

Since $\Gamma \cup \{\neg\varphi\}$ is unsatisfiable, by (2) (with $\Gamma \cup \{\neg\varphi\}$ in place of "$\Gamma$") we have that $\Gamma \cup \{\neg\varphi\}$ is inconsistent. This means $\Gamma \cup \{\neg\varphi\} \vdash \bot$, so $\Gamma \vdash \neg\varphi \to \bot$ by the deduction theorem, which means $\Gamma \vdash \varphi$, which is what we wanted. Note that the exact details of the derivations in this paragraph will differ based on the logical system one is using. For instance, going from "$\Gamma \vdash \neg\varphi \to \bot$" to "$\Gamma \vdash \varphi$" is not automatic; one must provide an entirely syntactic derivation in the logical system one is using.
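This direction can also be sketched in Lean, with the semantic step and the syntactic step (the part going through the deduction theorem) packaged as hypotheses; again every name is illustrative, not from any library.

```lean
-- A sketch of (2) ⇒ (1); all names are illustrative.
-- The lambda `fun ψ => Γ ψ ∨ ψ = neg φ` plays the role of Γ ∪ {¬φ}.
theorem two_implies_one {Sentence : Type}
    (Satisfiable Consistent : (Sentence → Prop) → Prop)
    (Entails Proves : (Sentence → Prop) → Sentence → Prop)
    (neg : Sentence → Sentence)
    -- the semantic step: if Γ ⊨ φ, then Γ ∪ {¬φ} is unsatisfiable
    (unsat : ∀ Γ φ, Entails Γ φ → ¬ Satisfiable (fun ψ => Γ ψ ∨ ψ = neg φ))
    -- the syntactic step (deduction theorem, double-negation elimination):
    -- if Γ ∪ {¬φ} is inconsistent, then Γ ⊢ φ
    (reductio : ∀ Γ φ, ¬ Consistent (fun ψ => Γ ψ ∨ ψ = neg φ) → Proves Γ φ)
    -- formulation (2)
    (h2 : ∀ Γ, Consistent Γ → Satisfiable Γ) :
    ∀ Γ φ, Entails Γ φ → Proves Γ φ := by
  intro Γ φ hΓφ
  exact reductio Γ φ fun hcons => unsat Γ φ hΓφ (h2 _ hcons)
```

Here the instantiation trick is visible in the term: (2) is applied not to the given $\Gamma$ but to $\Gamma \cup \{\neg\varphi\}$.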

### Proof commentary

In proving (1) ⇒ (2), we only needed that "if $\Gamma$ is unsatisfiable, then $\Gamma \models \varphi$" (for our contradictory $\varphi$). However, it is also possible to prove the converse, and show that if $\Gamma$ is satisfiable, then $\Gamma \not\models \varphi$. Some books factor out both sides of this implication as a lemma.

Similarly, in proving (2) ⇒ (1), we only needed that "if $\Gamma \models \varphi$, then $\Gamma \cup \{\neg\varphi\}$ is unsatisfiable", but the converse holds as well.
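The two factored-out lemmas suggested by this commentary can be stated abstractly as biconditionals. As before, this is a sketch with illustrative names, and sets of sentences are modeled as predicates.

```lean
-- Illustrative abstract statements; not definitions from any library.
variable {Sentence : Type}
variable (Satisfiable : (Sentence → Prop) → Prop)
variable (Entails : (Sentence → Prop) → Sentence → Prop)
variable (neg : Sentence → Sentence) (contradiction : Sentence)

/-- Γ is unsatisfiable iff Γ semantically entails a contradictory sentence. -/
def UnsatIffEntailsContra : Prop :=
  ∀ Γ, ¬ Satisfiable Γ ↔ Entails Γ contradiction

/-- Γ ⊨ φ iff Γ ∪ {¬φ} is unsatisfiable. -/
def EntailsIffNegUnsat : Prop :=
  ∀ Γ φ, Entails Γ φ ↔ ¬ Satisfiable (fun ψ => Γ ψ ∨ ψ = neg φ)
```

Only one direction of each biconditional was used in the proofs above; the other directions are routine semantic arguments.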