
☞ Worth quoting from the previous lecture PDF that we saw earlier:
0.0.1 Theorem. If Γ ⊢ X ≡ Y , then also Γ ⊢ (∀x)X ≡ (∀x)Y , as long as Γ does not contain a wff with x free.
0.0.2 Theorem. If ⊢ A ≡ B, then ⊢ (∀x)A ≡ (∀x)B.
0.0.3 Metatheorem. (Weak (1st-order) Leibniz —Acronym “WL”) If ⊢ A ≡ B, then also ⊢ C[p\A] ≡ C[p\B].
Proof. This generalises 0.0.2, repeated above from the previous lecture PDF that we saw.
The metatheorem is proved by induction on the wff C.

Basis. The atomic case:

(1) C is p. The metatheorem boils down to “if ⊢ A ≡ B, then ⊢ A ≡ B”, which trivially holds!
(2) C is NOT p —that is, it is q (other than p), or is ⊥ or ⊤, or is t = s, or it is φ(t1,…,tn).
Then our Metatheorem statement becomes “if ⊢ A ≡ B, then ⊢ C ≡ C”.
Given that ⊢ C ≡ C is correct by axiom 1, the “if” part is irrelevant. Done.
The complex cases.
(i) C is ¬D. From the I.H. we have ⊢ D[p\A] ≡ D[p\B], hence ⊢ ¬D[p\A] ≡ ¬D[p\B] by Post, and thus

⊢ (¬D)[p\A] ≡ (¬D)[p\B]

—the two sides being C[p\A] and C[p\B]— since (¬D)[p\A] is the same wff as ¬D[p\A].

(ii) C is D ◦ E, where ◦ ∈ {∧, ∨, →, ≡}.
The I.H. yields ⊢ D[p\A] ≡ D[p\B] and ⊢ E[p\A] ≡ E[p\B] hence
⊢ D[p \ A] ◦ E[p \ A] ≡ D[p \ B] ◦ E[p \ B] by Post.
Thus

⊢ (D ◦ E)[p\A] ≡ (D ◦ E)[p\B]

—the two sides being C[p\A] and C[p\B]— due to the way substitution works, namely, (D ◦ E)[p\A] is the same wff as D[p\A] ◦ E[p\A].

(iii) C is (∀x)D. This is the “interesting case”.
From the I.H. follows ⊢ D[p \ A] ≡ D[p \ B].
From 0.0.2 we get ⊢ (∀x)D[p \ A] ≡ (∀x)D[p \ B], also written as
⊢ ((∀x)D)[p\A] ≡ ((∀x)D)[p\B]

—the two sides being C[p\A] and C[p\B]— since ((∀x)D)[p\A] is the same wff as (∀x)D[p\A].
□

☞ WL is the only “Leibniz” we will need (practically) in our use of 1st-order logic.
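For instance (a quick illustration of my own, not from the slides): ⊢ X ≡ ¬¬X by Post, since X ≡ ¬¬X is a tautology; WL with “Denom” (∀x)p then yields ⊢ (∀x)X ≡ (∀x)¬¬X —no side condition on x is needed, precisely because the hypothesis is an absolute theorem.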
Why “weak”? Because of the restriction on the Rule’s Hypothesis: A ≡ B must be an absolute theorem. (Recall that the Boolean Leibniz was not so restricted).
Why not IGNORE the restriction and “adopt” the strong rule (i) below?
Well, in logic you do NOT arbitrarily “adopt” derived rules; you prove them.
BUT, CAN I prove (i) below then?
NO, our logic does not allow it; here is why: If I can prove (i) then I can also prove STRONG generalisation (ii) from (i).
(i)  A ≡ B ⊢ C[p\A] ≡ C[p\B]

(ii) strong generalisation: A ⊢ (∀x)A

Here is why (i) ⇒ (ii): So, assume I have “Rule” (i). THEN

(1) A                  ⟨hyp⟩
(2) A ≡ ⊤              ⟨(1) + Post⟩
(3) (∀x)A ≡ (∀x)⊤      ⟨(2) + (i); “Denom:” (∀x)p⟩
(4) (∀x)A ≡ ⊤          ⟨(3) + ⊢ (∀x)⊤ ≡ ⊤ + Post⟩
(5) (∀x)A              ⟨(4) + Post⟩

So if I have (i) I have (ii) too.
☞ Question: Why is it ⊢ (∀x)⊤ ≡ ⊤?

Answer: Ping-Pong, plus

(∀x)⊤ → ⊤   (this is Ax2)    and    ⊤ → (∀x)⊤   (this is Ax3)
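In full, a three-line sketch (my own, in the style of the Hilbert-style proofs above):

(1) (∀x)⊤ → ⊤     ⟨Ax2⟩
(2) ⊤ → (∀x)⊤     ⟨Ax3; x is not free in ⊤⟩
(3) (∀x)⊤ ≡ ⊤     ⟨(1, 2) + Post⟩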

BUT: Here is an informal reason I cannot have (ii).
☞ It is a provable fact —this is 1st-order Soundness†— that all absolute theorems of 1st-order logic are true in every informal interpretation I build for them.
So IF I have (ii), then by the DThm I also have

⊢ A → (∀x)A

Interpret the above over the natural numbers, taking A to be x = 0:

⊢ x = 0 → (∀x)x = 0     (1)

☞ By 1st-order Soundness, IF I have (1), then the wff x = 0 → (∀x)x = 0 is true for all values of (the free) x.

Well, try x = 0. We get 0 = 0 → (∀x)x = 0. The lhs of “→” is true but the rhs is false.

So I cannot have (ii) —nor (i), which implies it!
†For a proof, wait until near the end of the course.

We CAN have a MODIFIED (i) where the substitution into p is restricted.

0.0.4 Metatheorem. (Strong Leibniz —Acronym “SL”) A ≡ B ⊢ C[p := A] ≡ C[p := B].

☞ It goes without saying that if the rhs of ⊢ is NOT defined, then there is nothing to prove, since the expression “C[p := A] ≡ C[p := B]” represents no wff.

Remember this comment during the proof!
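For instance (an illustration of my own, not from the slides): take C to be (∀x)p and A to be x = 0. Then C[p := A] is undefined —the free x of A would be captured by the (∀x) of C— so in this situation SL claims nothing at all.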
Proof. As we did for WL, the proof is an induction on the definition/formation of C.
Basis. C is atomic: subcases
• C is p. We need to prove A ≡ B ⊢ A ≡ B, which is the familiar X ⊢ X.

• C is not p. The metatheorem now claims A ≡ B ⊢ C ≡ C, which is correct since C ≡ C is an axiom.

The complex cases.
(i) C is ¬D. By the I.H. we have A ≡ B ⊢ D[p := A] ≡ D[p := B], thus A ≡ B ⊢ ¬D[p := A] ≡ ¬D[p := B] by Post.

We can rewrite the above as

A ≡ B ⊢ (¬D)[p := A] ≡ (¬D)[p := B]

—the two sides being C[p := A] and C[p := B]— since, when substitution is allowed, (¬D)[p := A] is the same as ¬D[p := A], etc.

(ii) C is D ◦ E. By the I.H. we get A ≡ B ⊢ D[p := A] ≡ D[p := B] and

A ≡ B ⊢ E[p := A] ≡ E[p := B].

Thus, by Post,
A ≡ B ⊢ D[p := A] ◦ E[p := A] ≡ D[p := B] ◦ E[p := B]
The way substitution works (when defined), the above says
A ≡ B ⊢ (D ◦ E)[p := A] ≡ (D ◦ E)[p := B]

—the two sides being C[p := A] and C[p := B].

(iii) C is (∀x)D. This is the “interesting case”.
From the I.H. we get
A ≡ B ⊢ D[p := A] ≡ D[p := B]
Now, since the expressions C[p := A] and C[p := B] ARE defined —else we wouldn’t be doing all this— the definition of conditional (restricted) substitution implies that neither A nor B has any free occurrences of x.
Then x does not occur free in A ≡ B either. From 0.0.1 we get
A ≡ B ⊢ (∀x)D[p := A] ≡ (∀x)D[p := B]

which —the way substitution works— is the same as

A ≡ B ⊢ ((∀x)D)[p := A] ≡ ((∀x)D)[p := B]

—the two sides being C[p := A] and C[p := B].
□
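A quick sketch of a legitimate use of SL (my own illustration, not from the slides): take C to be p ∨ z = z. C contains no quantifier, so both substitutions are defined, and SL yields

x = 0 ≡ y = 0 ⊢ (x = 0 ∨ z = z) ≡ (y = 0 ∨ z = z)

Note that the hypothesis x = 0 ≡ y = 0 is not an absolute theorem; being allowed to use such a hypothesis is exactly the extra strength of SL over WL.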

0.1. More Useful Tools
☞ Since

A1 ≡ A2, A2 ≡ A3, …, An−1 ≡ An |=taut A1 ≡ An

holds in 1st-order logic, we also have by Post

A1 ≡ A2, A2 ≡ A3, …, An−1 ≡ An ⊢ A1 ≡ An     (1)

As we know, (1) enables Equational proofs, including the fundamental metatheorem for such proofs:
0.1.1 Metatheorem. If each “Ai ≡ Ai+1” in (1) is a Γ-theorem, then we have Γ ⊢ A1 ≡ An (this just repeats (1)) and Γ ⊢ A1 iff Γ ⊢ An.
Trivially, we also have

A1 (→ or ≡) A2, A2 (→ or ≡) A3, …, An−1 (→ or ≡) An |=taut A1 → An

and thus, by Post,

A1 (→ or ≡) A2, A2 (→ or ≡) A3, …, An−1 (→ or ≡) An ⊢ A1 → An     (2)
The fundamental metatheorem for (2) is:
0.1.2 Metatheorem. If each “Ai (→ or ≡) Ai+1” in (2) is a Γ-theorem, then we have Γ ⊢ A1 → An (this just repeats (2)) and IF Γ ⊢ A1 THEN Γ ⊢ An.
This last metatheorem extends Equational proofs so they can have a mix of → and ≡, BUT
• ALL → go in the same direction and
• ALL → are replaced by the conjunctional implication ⇒.
That is, unlike A → B → C —which means A → (B → C), or equivalently A ∧ B → C— the chain A ⇒ B ⇒ C means A → B AND B → C.

☞ The Equational proofs thus extended are called Calculational Proofs ([DS90, GS94, Tou08]) and have the following layout:
A1
◦ ⟨annotation⟩
A2
◦ ⟨annotation⟩
⋮
An−1
◦ ⟨annotation⟩
An
◦ ⟨annotation⟩
An+1

where “◦” here —in each line where it occurs— is one of ⇔ or ⇒.
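Here is a toy instance of the layout (my own, purely propositional, so every annotation just invokes Post):

A ∧ B
⇒ ⟨Post: A ∧ B → A is a tautology⟩
A
⇔ ⟨Post: A ≡ A ∨ A is a tautology⟩
A ∨ A

By 0.1.2 (with empty Γ) this establishes ⊢ A ∧ B → A ∨ A.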

More Examples and “Techniques”.
0.1.3 Theorem. ⊢ (∀x)(A → B) ≡ (A → (∀x)B), as long as x has no free occurrences in A.
Proof. Ping-Pong using the DThm.

(→) I want

⊢ (∀x)(A → B) → (A → (∀x)B)

Better still, let me do (DThm)

(∀x)(A → B) ⊢ A → (∀x)B

and, even better, (DThm!) I will do

(∀x)(A → B), A ⊢ (∀x)B
(1) (∀x)(A → B)     ⟨hyp⟩
(2) A               ⟨hyp⟩
(3) A → B           ⟨(1) + spec⟩
(4) B               ⟨(2, 3) + MP⟩
(5) (∀x)B           ⟨(4) + gen; OK: no free x in (1) or (2)⟩

(←) I want

⊢ (A → (∀x)B) → (∀x)(A → B)

or better still (DThm)

A → (∀x)B ⊢ (∀x)(A → B)     (1)

Seeing that A → (∀x)B has no free x, I can prove the even easier

A → (∀x)B ⊢ A → B     (2)

and after the proof is done I can apply gen to A → B to get (∀x)(A → B).

OK! By the DThm I can prove the even simpler than (2)

A → (∀x)B, A ⊢ B     (3)

Here it is:
(1) A → (∀x)B     ⟨hyp⟩
(2) A             ⟨hyp⟩
(3) (∀x)B         ⟨(1, 2) + MP⟩
(4) B             ⟨(3) + spec⟩                                      □
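A quick instance of 0.1.3 (my own illustration): take A to be y = 0 and B to be x = x; since x is not free in y = 0, the theorem yields ⊢ (∀x)(y = 0 → x = x) ≡ (y = 0 → (∀x)x = x).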

☞ As a curiosity, here is a Calculational proof of the → direction:

(→)
(∀x)(A → B)
⇒ ⟨Ax4⟩
(∀x)A → (∀x)B
⇒ ⟨Ax3 + Post⟩
A → (∀x)B

Do you buy the second step?

Think of A as p and (∀x)A as q. Axiom 3 says “p → q” and I say

p → q |=taut (q → (∀x)B) → (p → (∀x)B)

Do you believe this? Exercise!

Lecture #17. Nov. 13
0.1.4 Corollary. ⊢ (∀x)(A ∨ B) ≡ A ∨ (∀x)B, as long as x does not occur free in A.
Proof.
(∀x)(A ∨ B)
⇔ ⟨WL + the tautology A ∨ B ≡ ¬A → B (an axiom!); “Denom:” (∀x)p⟩
(∀x)(¬A → B)
⇔ ⟨“forall vs arrow” (0.1.3)⟩
¬A → (∀x)B
⇔ ⟨tautology, hence axiom⟩
A ∨ (∀x)B                                                            □

☞ Most of the statements we prove in what follows have Dual counterparts obtained by swapping ∀ and ∃ and ∨ and ∧.
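For instance (previewing what follows): the dual of Corollary 0.1.4, ⊢ (∀x)(A ∨ B) ≡ A ∨ (∀x)B, is obtained by swapping ∀ with ∃ and ∨ with ∧, namely ⊢ (∃x)(A ∧ B) ≡ A ∧ (∃x)B —which is exactly Corollary 0.1.5 proved below.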
Let us give a theorem version of the definition of ∃. This is useful in Equational/Calculational proofs.

Definition (Recall):

(∃x)A is short for ¬(∀x)¬A     (1)

Next consider the axiom

¬(∀x)¬A ≡ ¬(∀x)¬A     (2)

Let me use the ABBREVIATION (1) ONLY on ONE side of “≡” in (2). I get the theorem

(∃x)A ≡ ¬(∀x)¬A

So I can write the theorem without words:

⊢ (∃x)A ≡ ¬(∀x)¬A     (3)

I can apply (3) in Equational proofs —via WL— easily!

I will refer to (3) in proofs as “Def of E”.
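As a small worked use of (3) via WL (my own example, in the calculational layout above):

¬(∃x)A
⇔ ⟨WL + Def of E (3); “Denom:” ¬p⟩
¬¬(∀x)¬A
⇔ ⟨Post: ¬¬X ≡ X is a tautology⟩
(∀x)¬A

which establishes ⊢ ¬(∃x)A ≡ (∀x)¬A.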

Here’s something useful AND good practice too!
0.1.5 Corollary. ⊢ (∃x)(A ∧ B) ≡ A ∧ (∃x)B, as long as x does not occur free in A.
☞ In annotations we may call the above the “∃∧ theorem”.

Proof.
(∃x)(A ∧ B)
⇔ ⟨Def of E⟩
¬(∀x)¬(A ∧ B)
⇔ ⟨WL + axiom (deM); “Denom:” ¬(∀x)p⟩
¬(∀x)(¬A ∨ ¬B)
⇔ ⟨WL + forall over or (0.1.4) —no free x in ¬A; “Denom:” ¬p⟩
¬(¬A ∨ (∀x)¬B)
⇔ ⟨Ax1⟩
A ∧ ¬(∀x)¬B
⇔ ⟨WL + Def of E; “Denom:” A ∧ p⟩
A ∧ (∃x)B
□
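A quick instance of the ∃∧ theorem (my own illustration): taking A to be y = 0 and B to be x = y —x is not free in y = 0— we get ⊢ (∃x)(y = 0 ∧ x = y) ≡ y = 0 ∧ (∃x)x = y.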

Bibliography
[DS90] Edsger W. Dijkstra and Carel S. Scholten, Predicate Calculus and Program Semantics, Springer-Verlag, New York, 1990.
[GS94] David Gries and Fred B. Schneider, A Logical Approach to Discrete Math, Springer-Verlag, New York, 1994.
[Tou08] G. Tourlakis, Mathematical Logic, John Wiley & Sons, Hoboken, NJ, 2008.