First Order Logic
COMP1600 / COMP6260

Dirk Pattinson Victor Rivera
Australian National University

Semester 2, 2021

First Order Natural Deduction: Example

1 ∀x(elephant(x)→ happy(x))

2 elephant(Appu)

3 elephant(Appu)→ happy(Appu) ∀-E, 1

4 happy(Appu) →-E, 2, 3

2 / 50

Natural deduction in first-order logic

Proof rules for propositional natural deduction + quantifier rules:

∀-E universal elimination;
∀-I universal introduction;
∃-E existential elimination;
∃-I existential introduction;

Proof in first-order logic is usually based on these rules, together with the
rules for propositional logic.

3 / 50

Elimination

∀-E (universal elimination)

∀x . P(x)
----------
P(a)

n ∀x . Fish(x)→ HasFins(x)

m Fish(a)→ HasFins(a) ∀-E, n

If a predicate is true for all members of a domain, then it is also true for a
specific one (a must be a member of the domain).

4 / 50

Introduction

∀-I (universal introduction)
P(a)        (a an arbitrary variable)
----------
∀x . P(x)

n      a

m      Cat(a)→ EatsFish(a)

m + 1  ∀x . Cat(x)→ EatsFish(x)    ∀-I, n–m

The a on the left of the bar is a guard which reminds us that:

this variable is local to the inner derivation, and

it cannot be free in an assumption.

It is like an “assumption” that a is an arbitrary member of the
domain.

That is, the proof from lines n to m must work for anything in place
of a.

5 / 50

Free and bound variables

Bound: Every occurrence of variable x in ∀ x . p(x) and in ∃ x . p(x)
is bound.

Free: Every occurrence of a variable that is not bound is free.

Example.

(∀ x . x + y = y + x) ∧ (∀ x . ∃ y . y > x + z)

Q. Which occurrences of variables are free and which are bound?

A. All occurrences of x are bound; no occurrence of z is bound; and only
the last two occurrences of y are bound.
Hence z is free, as are the first two occurrences of y.
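
To make the definition concrete, here is a minimal Haskell sketch (not part of the course material; the Formula type, freeVars, and the flattening of atoms to their variables are all made up for illustration):

import Data.List (nub)

-- Hypothetical, simplified syntax: an atom is a predicate name applied to
-- variables, so a term like x + y is flattened to the variables it mentions.
data Formula
  = Pred String [String]
  | And Formula Formula
  | Forall String Formula
  | Exists String Formula

-- Every variable of an atom occurs free in it; a quantifier binds its
-- variable in its body, so it is removed from the body's free variables.
freeVars :: Formula -> [String]
freeVars (Pred _ vs)  = nub vs
freeVars (And p q)    = nub (freeVars p ++ freeVars q)
freeVars (Forall x p) = filter (/= x) (freeVars p)
freeVars (Exists x p) = filter (/= x) (freeVars p)

-- The example above, writing x + y = y + x as Pred "Eq" ["x","y"] and
-- y > x + z as Pred "Gt" ["y","x","z"]:
example :: Formula
example = And (Forall "x" (Pred "Eq" ["x", "y"]))
              (Forall "x" (Exists "y" (Pred "Gt" ["y", "x", "z"])))

-- freeVars example == ["y","z"], matching the answer above.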

6 / 50

Breaching the arbitrariness requirement

When we generalise for a variable a, the same proof steps must be possible
for all members of the domain.

1 (Cat(kitty)→ HasFur(kitty)) ∧ Cat(kitty)

2 Cat(kitty)→ HasFur(kitty) ∧-E, 1

3 Cat(kitty) ∧-E, 1

4 HasFur(kitty) →-E, 2, 3

5 ∀x . HasFur(x) WRONG ∀-I, 4

WRONG because kitty appears in an assumption (step 1) (and step 4 is
still in the scope of that assumption)

7 / 50

Example

(∀x∀yP(x , y)) ↔ (∀y∀xP(x , y))

1 ∀x∀yP(x , y)

2 b a ∀yP(a, y) ∀-E, 1

3 P(a, b) ∀-E, 2

4 ∀xP(x , b) ∀-I, 3

5 ∀y∀xP(x , y) ∀-I, 4

Exercise: also show the reverse to get equivalence, ↔

8 / 50


Introduction

∃-I (existential introduction)

P(a)
----------
∃xP(x)

n Dog(fido)


m ∃xDog(x) ∃-I, n

9 / 50

An invalid argument

This argument is invalid if the domain is empty.

1 ∀xP(x)

2 P(a) ∀-E, 1

3 ∃xP(x) ∃-I, 2

Which step is invalid ??

10 / 50

Elimination

∃-E (existential elimination)

                  [P(a)]
                    ...
∃xP(x)               q
------------------------------   (a an arbitrary variable, a not free in q)
              q

Rationale: if P(x) holds for some individual x ,

let that individual be called a (so P(a) holds)

prove that q follows

as q doesn’t involve our choice of a,
q holds regardless of which individual has P true

The proof of q from P(a) must work for any individual in place of a

11 / 50

Example

Prove

∃xElephant(x)        ∀xElephant(x)→ Huge(x)
---------------------------------------------
∃xHuge(x)

1 ∃xElephant(x)

2 ∀xElephant(x)→ Huge(x)

3 a Elephant(a)

4 Elephant(a)→ Huge(a) ∀-E, 2

5 Huge(a) →-E, 3, 4

6 ∃xHuge(x) ∃-I, 5

7 ∃xHuge(x) ∃-E, 1, 3–6

The notation reflects an assumption: since there is some individual x such
that Elephant(x), assume that individual is a

12 / 50


Swapping the order of existential quantifiers

(∃x∃yP(x , y)) ↔ (∃y∃xP(x , y))

1 ∃x∃yP(x , y)

2 a ∃yP(a, y)

3 b P(a, b)

4 ∃xP(x , b) ∃-I, 3

5 ∃y∃xP(x , y) ∃-I, 4

6 ∃y∃xP(x , y) ∃-E, 2, 3–5

7 ∃y∃xP(x , y) ∃-E, 1, 2–6

Exercise: also show the converse to get equivalence, ↔

13 / 50


Swapping the order of existential and universal quantifiers

Proof of

∃x∀yP(x , y)
--------------
∀y∃xP(x , y)

1 ∃x∀yP(x , y)

2 b a ∀yP(a, y)

3 P(a, b) ∀-E, 2

4 ∃xP(x , b) ∃-I, 3

5 ∃xP(x , b) ∃-E, 1, 2–4

6 ∀y∃xP(x , y) ∀-I, 5

14 / 50


Swapping the order of existential and universal quantifiers

Another proof of

∃x∀yP(x , y)
--------------
∀y∃xP(x , y)

We got the previous proof by first looking at the goal (∀y . . .), so using
∀-I. Here we first look at what we have (∃x . . .), and so use ∃-E.

1 ∃x∀yP(x , y)

2 a ∀yP(a, y)

3 b P(a, b) ∀-E, 2

4 ∃xP(x , b) ∃-I, 3

5 ∀y∃xP(x , y) ∀-I, 4

6 ∀y∃xP(x , y) ∃-E, 1, 2–5

15 / 50


Can quantifiers always be swapped?

∃x∀yEats(x , y)  →  ∀y∃xEats(x , y)
If there is an animal that can eat all foods, then all foods can be eaten
by some animal.

∀y∃xEats(x , y)  →  ∃x∀yEats(x , y)
If all foods can be eaten by some animal, then there is an animal that can
eat all foods.

Is this second version true? Try to prove it. What happens?
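
To see concretely why the second implication can fail, here is a small finite-domain sketch (the animals, foods and the eats relation below are invented for illustration); over finite lists the quantifiers become Haskell's all and any:

animals, foods :: [String]
animals = ["ant", "bird"]
foods   = ["crumbs", "seeds"]

-- a made-up relation: each food has some eater, but no animal eats everything
eats :: String -> String -> Bool
eats "ant"  "crumbs" = True
eats "bird" "seeds"  = True
eats _      _        = False

-- ∀y∃x Eats(x,y): every food is eaten by some animal   (evaluates to True)
forallExists :: Bool
forallExists = all (\y -> any (\x -> eats x y) animals) foods

-- ∃x∀y Eats(x,y): some animal eats every food          (evaluates to False)
existsForall :: Bool
existsForall = any (\x -> all (\y -> eats x y) foods) animals

So ∀y∃xEats(x , y) can hold while ∃x∀yEats(x , y) fails: the second implication is not valid.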

16 / 50

The “quantifier negation” equivalence

(∀x . ¬P(x)) ↔ ¬(∃x . P(x))

Prove

¬(∃x . P(x))
--------------
∀x . ¬P(x)

1 ¬(∃x . P(x))

2 a P(a)

3 ∃x . P(x) ∃-I, 2

4 F ¬E, 1, 3

5 ¬P(a) ¬I, 2–4

6 ∀x . ¬P(x) ∀-I, 5

17 / 50


The “quantifier negation” equivalence

Proof of the converse:

1 ∀x . ¬P(x)

2 ∃x . P(x)

3 a P(a)

4 ¬P(a) ∀-E, 1

5 F ¬E, 3, 4

6 F ∃-E, 2, 3–5

7 ¬(∃x . P(x)) ¬I, 2–6
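
As an illustration only (this is not the natural deduction proof): over a finite domain, ∀ becomes all and ∃ becomes any, and the quantifier negation equivalence is the familiar all/any duality. A minimal Haskell check, with quantNeg an ad-hoc helper name:

-- (∀x.¬P(x)) ↔ ¬(∃x.P(x)), read over a finite list of individuals xs
quantNeg :: (a -> Bool) -> [a] -> Bool
quantNeg p xs = all (not . p) xs == not (any p xs)

-- e.g. quantNeg even [1,3,5] == True
--      quantNeg even [1,2,3] == True   (both sides are False)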

18 / 50


Again: Two sides of (the same?) Coin

Validity. A formula is valid if it is true in all structures.

Provability. A formula is provable if it has a natural deduction proof.

Recall propositional logic:

a formula is provable in natural deduction iff it’s true for all
truth-value assignments

Soundness. All provable formulae are valid.

(As application of proof rules maintains validity of formulae)

Completeness. All valid formulae are provable

(Difficult proof, via so-called “Henkin Models”.)

Soundness and completeness together are the glue between validity and provability.

19 / 50

Metalogic of first order logic

First order natural deduction is sound and complete

So you can find a proof of any valid statement

But there is no analogue of truth tables: you cannot decide validity by
checking all interpretations, because there are infinitely many of them

If there is a proof you can find it by mindlessly trying all sequences of
rules of length 1, length 2, . . .

But if you don’t find a proof, you haven’t established anything

Small Gain.

checking validity directly would mean checking infinitely many models
(including infinite ones), which is impossible

trying all proofs may eventually yield a proof.

First order logic is semi-decidable (later in the course)

20 / 50

Structural Induction

So Far.

the “mechanics” of reasoning

fully generic, applies to all sets

Now. Induction Principles

allow us to prove properties about “special” sets

can be thought of as additional axioms to natural deduction

but will leave natural deduction implicit from now on.

In more detail.

Induction on the natural numbers: review

Structural induction over Lists

Structural induction over Trees

The principle that the structural induction rule for a particular data
type follows from its definition

21 / 50

Natural Number Induction

This is the induction you already know.

To prove a property P for all natural numbers:

Prove it for 0

Prove that, if it is true for n it is true for n + 1.

The principle is usually expressed as a rule of inference:

P(0)        ∀n.P(n)→ P(n + 1)
-------------------------------
∀n.P(n)

It is an additional principle that allows us to prove facts.

22 / 50

Why does it Work?

The natural numbers are an inductively defined set:

1. 0 is a natural number;

2. If n is a natural number, so is n + 1;

No object is a natural number unless justified by these clauses.

From the assumptions:

P(0) ∀n. P(n)→ P(n + 1)

we get a sequence of deductions:

P(0), P(1), P(2), P(3), …

which justifies the conclusion for any n you choose:

P(0) is given

obtain P(0)→ P(1) by ∀-E, and then get P(1) using →-E

obtain P(2), P(3), . . . in the same way.
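
The two clauses of the inductive definition can be written directly as a Haskell data type; the sketch below (Nat, natRec and toInt are illustrations, not part of the slides) also shows that recursion over this type has exactly the shape of the induction principle:

-- 0 is a natural number; if n is one, so is n + 1 (and nothing else is).
data Nat = Zero | Succ Nat

-- Recursion over Nat mirrors induction: give a value for Zero and a way to
-- go from the result for n to the result for Succ n.
natRec :: b -> (b -> b) -> Nat -> b
natRec base _    Zero     = base
natRec base step (Succ n) = step (natRec base step n)

-- e.g. converting back to Int by counting the Succs
toInt :: Nat -> Int
toInt = natRec 0 (+1)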

23 / 50

Example of Mathematical Induction

Let’s prove this property of natural numbers:

∑_{i=0}^{n} i = (n × (n + 1)) / 2

◦ ◦ ◦ ◦ •
◦ ◦ ◦ • •
◦ ◦ • • •
◦ • • • •

(The filled dots form 1 + 2 + 3 + 4, half of the 4 × 5 rectangle of dots.)

First the Base case, P(0):

∑_{i=0}^{0} i = (0 × (0 + 1)) / 2

This is obviously true because both sides equal 0.

24 / 50

The Step Case

The step case is of the form ∀n.P(n)→ P(n + 1).

Q. How do we prove a formula of this form?

A1. How would we do it in natural deduction?

1 a P(a)

2 horrendous proof

3 P(a + 1)

4 P(a)→ P(a + 1) →-I, 1–3

5 ∀n.P(n)→ P(n + 1) ∀-I, 4

25 / 50


The Step Case, Again

The step case is of the form ∀n.P(n)→ P(n + 1).

Q. How do we prove a formula of this form?

A2. What the natural deduction proof really means

pick an arbitrary variable a

assume that P(a) and prove P(a + 1)

this amounts to P(a)→ P(a + 1)
as a was arbitrary, this amounts to ∀n.P(n)→ P(n + 1)

26 / 50

How the Step Case Plays Out

Recall. Want to prove ∀n. P(n), where P(n) is

∑_{i=0}^{n} i = (n × (n + 1)) / 2

Step case.

∀n. ( ∑_{i=0}^{n} i = (n × (n + 1)) / 2 )
       → ( ∑_{i=0}^{n+1} i = ((n + 1) × (n + 1 + 1)) / 2 )

Let a be arbitrary and assume P(a), i.e.

(IH)    ∑_{i=0}^{a} i = (a × (a + 1)) / 2

The assumption (IH) is called the induction hypothesis. Need to use it to
prove P(a + 1).

27 / 50

Step Case – Detailed Proof

Assume P(a), that is, ∑_{i=0}^{a} i = (a × (a + 1)) / 2.

Prove P(a + 1), that is, ∑_{i=0}^{a+1} i = ((a + 1) × ((a + 1) + 1)) / 2.

∑_{i=0}^{a+1} i = ∑_{i=0}^{a} i + (a + 1)

                = (a × (a + 1)) / 2 + (a + 1)              (by IH)

                = (a × (a + 1)) / 2 + (2 × (a + 1)) / 2

                = ((a + 2) × (a + 1)) / 2

                = ((a + 1) × (a + 2)) / 2
28 / 50

Wrapping up the proof

Recall. Proof rule for induction over natural numbers:

P(0)        ∀n.P(n)→ P(n + 1)
-------------------------------
∀n.P(n)

We have proved both premises of the induction rule,

P(0)   and   ∀n.P(n)→ P(n + 1),

so that applying the rule gives ∀n.P(n).

We have demonstrated this for a particular P(n), namely

P(n):   ∑_{i=0}^{n} i = (n × (n + 1)) / 2

in both the base case and the induction step.

29 / 50

Back to Programs

Q. How would we implement summation, e.g. in Haskell?

A. For example, like so:

sfz :: Int -> Int

sfz 0 = 0

sfz n = n + sfz (n-1)

Similarity to induction proofs

to prove ∀n.P(n):

prove P(0)

prove P(a)→ P(a + 1) for an arbitrary a

Slogan.

Recursive definitions ≈ inductive proofs
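
In that spirit, a quick sanity check (checkSfz is an ad-hoc helper, not from the slides) compares sfz against the closed form proved earlier, for small inputs:

-- True if sfz agrees with n * (n + 1) / 2 for n = 0, ..., 100
checkSfz :: Bool
checkSfz = and [ sfz n == n * (n + 1) `div` 2 | n <- [0..100] ]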

30 / 50

Example: Proofs about a Program

Given. The definition of the program, in our case:

sfz :: Int -> Int

sfz 0 = 0              -- SFZ0

sfz n = n + sfz (n-1)  -- SFZ1

Goal. To prove that ∀n. sfz(n) = (n × (n + 1)) / 2.

Strategy. Induction on n.

Base Case.

sfz(0) = 0 = (0 × (0 + 1)) / 2    (by SFZ0)

31 / 50

Example: Proofs about a Program

Given. The definition of the program, in our case:

sfz :: Int -> Int

sfz 0 = 0              -- SFZ0

sfz n = n + sfz (n-1)  -- SFZ1

Step Case. Pick an arbitrary a and assume

(IH)    sfz a = (a × (a + 1)) / 2

Goal. Show that sfz(a + 1) = ((a + 1) × (a + 1 + 1)) / 2.

sfz(a + 1) = (a + 1) + sfz(a + 1 − 1)         (by SFZ1)
           = (a + 1) + sfz(a)                 (by arithmetic)
           = (a + 1) + (a × (a + 1)) / 2      (by IH)
           = ((a + 1) × (a + 1 + 1)) / 2      (arithmetic, see before)

32 / 50

Basic Anatomy of an Induction Proof

Base Case (n = 0).

usually trivial

uses base case of recursive definition

Step Case a→ a + 1
assume the property for an arbitrary a – the induction hypothesis (IH)

massage the goal (the property for a + 1) so that (IH) applies

this usually uses the recursive step in the definition

apply (IH) to prove the step case – the property for a + 1

Justification.

simple facts (e.g. arithmetic) can be justified by saying just that

applied equations need to be justified explicitly.

33 / 50

Why do we care?

Program Correctness.

have formal proof that a function computes what it should

function is operational whereas property is descriptive

two different angles on the same function.

Optimisation.

given: slow implementation of a function – say slow

hypothesis: faster implementation – say fast does the same job

proof of ∀n.slow(n) = fast(n) allows us to swap slow for fast

34 / 50

Another Example

Given.

sumodd :: Int -> Int

sumodd 0 = 0

sumodd n = (2 * n - 1) + sumodd (n-1)

Q. What does this function do?

Answer. It computes the square of n, for n ≥ 0.
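
Before proving this in the next slides, the claim can be tested empirically (checkSumodd is an ad-hoc helper name, assuming the definition of sumodd above):

-- True if sumodd agrees with squaring for n = 0, ..., 50
checkSumodd :: Bool
checkSumodd = and [ sumodd n == n * n | n <- [0..50] ]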

35 / 50

Inductive Proof of sumodd

Given.

sumodd :: Int -> Int

sumodd 0 = 0                           -- SO1

sumodd n = (2 * n - 1) + sumodd (n-1)  -- SO2

Goal. ∀n. sumodd n = n².

Base Case. Show that sumodd 0 = 0².

sumodd 0 = 0 = 0²    (by SO1 and arithmetic)

36 / 50

Inductive Proof of sumodd

Given.

sumodd :: Int -> Int

sumodd 0 = 0                           -- SO1

sumodd n = (2 * n - 1) + sumodd (n-1)  -- SO2

Step Case.

assume (IH): sumodd a = a²

prove that sumodd (a + 1) = (a + 1)².

sumodd (a + 1) = 2 ∗ (a + 1) − 1 + sumodd (a + 1 − 1)   (by SO2)
               = 2a + 1 + sumodd (a)                    (arithmetic)
               = 2a + 1 + a²                            (by IH)
               = (a + 1)²                               (arithmetic)

37 / 50

Optimisation Example: Towers of Hanoi

Rules.

three poles with disks of varying sizes

larger disks may never be on top of smaller ones

may only move one disk at a time.

Q. How many moves does it take to get n disks from pole 1 to pole 3?

A. Here’s a program!

t :: Int -> Int

t 0 = 0

t n = t (n-1) + 1 + t (n-1)
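
The recurrence in t mirrors the actual solution: to move n disks, first move n − 1 disks to the spare pole, then move the largest disk, then move the n − 1 disks back on top. The sketch below (hanoi is not part of the slides) produces the move sequence, so that t n agrees with its length:

-- moves needed to shift n disks from `from` to `to`, using `spare`
hanoi :: Int -> String -> String -> String -> [(String, String)]
hanoi 0 _    _  _     = []
hanoi n from to spare =
  hanoi (n-1) from spare to ++ [(from, to)] ++ hanoi (n-1) spare to from

-- e.g. length (hanoi 3 "pole1" "pole3" "pole2") == 7 == t 3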

38 / 50

Critique 1: This is super inefficient

Compare the two programs:

t :: Int -> Int

t 0 = 0

t n = t (n-1) + 1 + t (n-1)

tb :: Int -> Int

tb 0 = 0

tb n = 2*tb (n-1) + 1

Clearly the first program, t, is bogged down by two identical recursive calls!

Show that ∀a.t (a) = tb (a) by induction:

Base Case. t (0) = 0 = tb (0)

Step Case. If t (a) = tb (a), then t (a + 1) = tb (a + 1):

t (a + 1) = t (a) + 1 + t (a) (def’n of t)

= 2 ∗ t (a) + 1 (arith)
= 2 ∗ tb (a) + 1 (IH)
= tb (a + 1) (def’n of tb)
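
The inefficiency can be made precise by counting calls; callsT below is an ad-hoc helper (not from the slides) that mirrors the call structure of t, which doubles at every step, whereas tb makes only one recursive call per step:

-- number of calls to t made while evaluating t n (grows like 2^(n+1) - 1)
callsT :: Int -> Int
callsT 0 = 1
callsT n = 1 + 2 * callsT (n-1)

-- e.g. callsT 20 == 2097151, versus 21 calls for tb 20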

39 / 50

Critique 2: Even tb is not tail recursive

Compare the two programs:

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb (n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int

tt n = ta n 0

Observation. tt is even better (faster) than tb. Let’s see if we can prove
it . . .

Goal. ∀n.tb (n) = tt (n).

40 / 50

Health Warning

The following slides contain several failed proof attempts.

don’t be discouraged: the example is difficult by design

it’s intended to demonstrate how things can be fixed

41 / 50

Proof Take 1: Let’s just do it!

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb(n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int

tt n = ta n 0

Base Case. tb (0) = 0 (def’n of tb) = ta 0 0 (def’n of ta) = tt 0
(def’n of tt)

Step Case. Assume that tb (n) = tt (n), prove tb (n + 1) = tt (n + 1)

tb (n + 1) = 2 ∗ tb (n) + 1     (def’n of tb)
           = 2 ∗ tt (n) + 1     (IH)
           = 2 ∗ ta n 0 + 1     (def’n of tt)
           = ???                (we’re stuck!)

           = ta (n + 1) 0
           = tt (n + 1)         (def’n of tt)
42 / 50

Analysis of Failure

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb(n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int

tt n = ta n 0

Step Case. Assume that tb (n) = tt (n), prove tb (n + 1) = tt (n + 1)

Failure. We couldn’t go

from 2 ∗ ta n 0 + 1 (which we have obtained by applying IH)
to ta (n + 1) 0 (which is equal to tt (n + 1))

Analysis.

the recursion really happens in ta

so maybe need a statement that relates tb and ta?

43 / 50

Proof Take 2: Relate ta and tb

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb(n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int

tt n = ta n 0

Show. ∀n. tb n = ta n 0.

Base Case. Left as exercise ©

Step Case. Assume tb n = ta n 0, prove tb (n + 1) = ta (n + 1) 0

tb(n + 1) = 2 ∗ tb (n) + 1 (def’n of tb)
= 2 ∗ ta n 0 + 1 (IH)
= ??? (stuck again!)

= ta n (2 ∗ 0 + 1)
= ta (n + 1) 0 (def’n of ta)

44 / 50

Analysis of Failure, Again . . .

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb(n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int

tt n = ta n 0

We wanted. 2 ∗ ta n 0 + 1 = ta n (2 ∗ 0 + 1).

Problem. The second argument of ta changes!

Solution. Find a property that involves the second argument of ta.

45 / 50

Experiments

tb 3 = 7

ta 3 0 = 7 ta 3 1 = 15 ta 3 2 = 23 ta 3 3 = 31

tb 4 = 15

ta 4 0 = 15 ta 4 1 = 31 ta 4 2 = 47 ta 4 3 = 63

Wild Guess. How about ta n a = (tb n) + a ∗ (tb n + 1) ??

This would give

tb n = (tb n) + 0 ∗ (tb n + 1) = ta n 0 = tt n

so would solve our problem.
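
The guess can also be tested before attempting a proof (checkGuess is an ad-hoc helper name, assuming the definitions of ta and tb above):

-- True if the guessed relationship holds for small n and a
checkGuess :: Bool
checkGuess = and [ ta n a == tb n + a * (tb n + 1) | n <- [0..12], a <- [0..12] ]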

46 / 50

Proof Take 3: Stronger Property

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb (n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int

tt n = ta n 0

Show. ∀n.∀a.ta n a = tb n + a ∗ (tb n + 1).

Base Case.

ta 0 a = a (def’n ta)

= 0 + a ∗ (0 + 1) (arith)
= (tb 0) + a ∗ ((tb 0) + 1) (def’n of tb)

so base case still works.

47 / 50

Proof Take 3: Stronger Property

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb (n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int

tt n = ta n 0

Step Case.

Assume ta n a = tb n + a ∗ (tb n + 1)
Show that ta (n + 1) a = tb (n + 1) + a ∗ (tb (n + 1) + 1)

ta (n + 1) a = ta n (2 ∗ a + 1)                     (def’n of ta)
             = tb n + (2 ∗ a + 1) ∗ (tb n + 1)      (IH)
             = 2 ∗ tb n + 1 + 2 ∗ a ∗ (tb n + 1)    (lots of arith)
             = tb (n + 1) + a ∗ (tb (n + 1) + 1)    (def’n of tb)

so step case also works!
48 / 50

Finally: Wrapping Up!

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb(n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int

tt n = ta n 0

Show. ∀n.tb n = tt n.

tt n = ta n 0                      (def’n of tt)
     = (tb n) + 0 ∗ (tb n + 1)     (by the property just proved)
     = tb n                        (arith)

49 / 50

Conceptual Digression

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

Changing Arguments.

ta is a two-place (binary) function

recursion is on first argument (n)

but second argument (a) is not constant!

Solution.

find a stronger property that involves the second argument

usually: universally quantified

Example.
P(n) = ∀a.ta n a = tb n + a ∗ (tb n + 1)

as a is universally quantified, property holds for all a

even if a changes in recursive call!
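
The same pattern applies to the earlier sfz example. A sketch (sfzAcc is a made-up name, not from the slides): its accumulator changes in the recursive call, so the property to prove must quantify over it, e.g. ∀n. ∀a. sfzAcc n a = a + sfz n.

sfzAcc :: Int -> Int -> Int
sfzAcc 0 a = a
sfzAcc n a = sfzAcc (n-1) (a + n)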
50 / 50
