COMP3702 Artificial Intelligence – Module 2: Reasoning and planning with certainty
Dr Archie Chapman
Semester 2, 2021
The University of Queensland
School of Information Technology and Electrical Engineering
Thanks to Dr Alina Bialkowski and Dr Hanna Kurniawati
Table of contents
1. Constraint Satisfaction Problems
2. Backtracking algorithms
3. Consistency algorithms
4. Arc-consistency
5. Propositions and Inference
6. Validity
7. Conjunctive Normal Form (CNF)
8. Satisfiability (again)
1
Constraint Satisfaction Problems
Constraint Satisfaction Problems
Constraint satisfaction problems (CSPs) are a subset of search problems. They have the same
assumptions about the world:
• a single agent,
• deterministic actions,
• fully observed state,
• (typically) discrete state space
2
Constraint Satisfaction Problems
CSPs are specialised to identification problems, i.e. finding assignments to variables:
• The goal itself is important, not the path
• All paths are at the same depth (for most formulations)
At the end of the first part of this module you should be able to:
• recognise and represent constraint satisfaction problems
• show how constraint satisfaction problems can be solved with search
• implement and trace arc-consistency of a constraint graph
3
Constraint Satisfaction Problems: Definition
A Constraint Satisfaction Problem (CSP) is given by:
• A set of variables, V1,V2, . . . ,Vn.
• Each variable Vi has an associated domain, domVi , of possible values.
• A set of constraints on various subsets of the variables which are logical predicates
specifying legal combinations of values for these variables.
• A model of a CSP is an assignment of values to variables that satisfies all of the constraints.
There are also constraint optimisation problems, in which there is a function that gives a cost for each
assignment of a value to each variable:
• A solution is an assignment of values to the variables that minimises the cost function.
• We will skip constraint optimisation problems (despite your lecturer’s keen interest in them).
4
Example: Scheduling activities
• Variables: X = {A,B,C ,D,E} that represent the starting times of various activities.
• Domains: Four start times for the activities
domA = {1, 2, 3, 4}, domB = {1, 2, 3, 4}
domC = {1, 2, 3, 4}, domD = {1, 2, 3, 4}
domE = {1, 2, 3, 4}
• Constraints: represent illegal conflicts between variables
(B ≠ 3) (C ≠ 2) (A ≠ B) (B ≠ C)
(C < D) (A = D) (E < A) (E < B)
(E < C) (E < D) (B ≠ D)
5
Example: Graph colouring
Problem: assign each state (and territory) a colour such that no two adjacent states have the
same colour.
• Variables: ?
• Domains: ?
• Constraints: ?
6
Example: Graph colouring
Problem: assign each state (and territory) a colour such that no two adjacent states have the
same colour.
• Variables: X = {NSW,VIC,QLD,WA,SA,TAS,NT}
• Domains: domx = {r, g, b} for each x ∈ X.
• Constraints: (WA ≠ SA) ∧ (WA ≠ NT) ∧ (NT ≠ QLD) ∧ . . .
7
RiPPLE Activity: Sudoku
[9×9 Sudoku grid with some given digits]
Variables:
Domains:
Constraints:
8
Generate-and-Test Algorithm
• Generate the assignment space dom = domV1 × domV2 × . . .× domVn (i.e. Cartesian
product).
• Test each assignment with the constraints.
• How many assignments need to be tested for n variables each with domain size d?
• Example:
How many leaf nodes are expanded in the worst case? 3^7 = 2187.
9
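The generate-and-test procedure above can be sketched in Python for the scheduling example (a brute-force illustration, not code from the course):

```python
# Generate-and-test on the scheduling CSP: enumerate the full Cartesian
# product of the domains and keep assignments satisfying every constraint.
from itertools import product

variables = ["A", "B", "C", "D", "E"]
domain = [1, 2, 3, 4]

constraints = [
    lambda a: a["B"] != 3,
    lambda a: a["C"] != 2,
    lambda a: a["A"] != a["B"],
    lambda a: a["B"] != a["C"],
    lambda a: a["C"] < a["D"],
    lambda a: a["A"] == a["D"],
    lambda a: a["E"] < a["A"],
    lambda a: a["E"] < a["B"],
    lambda a: a["E"] < a["C"],
    lambda a: a["E"] < a["D"],
    lambda a: a["B"] != a["D"],
]

def generate_and_test():
    solutions = []
    for values in product(domain, repeat=len(variables)):  # 4^5 = 1024 tests
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            solutions.append(assignment)
    return solutions

print(generate_and_test())
```

All 4^5 = 1024 assignments are tested; this instance has the single solution A = 4, B = 2, C = 3, D = 4, E = 1.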
Naively apply DFS to a CSP
• States defined by the values assigned so far (partial assignments)
• Initial state: the empty assignment, {}
• Successor function: assign a value to an unassigned variable
• Goal test: the current assignment is complete and satisfies all constraints
What can go wrong?
10
Backtracking algorithms
Backtracking Algorithms
• Systematically explore dom by instantiating the variables one at a time
• evaluate each constraint predicate as soon as all its variables are bound
• any partial assignment that doesn’t satisfy the constraint can be pruned.
Scheduling example: Assignment A = 1 ∧ B = 1 is inconsistent with constraint A ≠ B
regardless of the value of the other variables.
11
Backtracking Algorithms: Graph colouring example
12
Backtracking Algorithms: Graph colouring example
13
CSP as Graph Searching
A CSP can be solved by graph-searching with variable-ordering and fail-on-violation
• A node is an assignment of values to some of the variables.
• Suppose node N is the assignment X1 = v1, . . . ,Xk = vk . Select a variable Y that isn’t
assigned in N (using your favourite graph search algorithm).
For each value yi ∈ dom(Y )
X1 = v1, . . . ,Xk = vk ,Y = yi is a neighbour if it is consistent with the constraints.
• The start node is the empty assignment.
• A goal node is a total assignment that satisfies the constraints.
14
Recursive implementation of Backtracking Search
15
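The recursive implementation appears as a figure on the slide; as a stand-in, a minimal Python sketch of recursive backtracking on the graph-colouring example, with a fixed variable order, might look like this (names are illustrative):

```python
# Recursive backtracking search for the Australia map-colouring CSP.
NEIGHBOURS = {
    "WA": ["NT", "SA"],
    "NT": ["WA", "SA", "QLD"],
    "SA": ["WA", "NT", "QLD", "NSW", "VIC"],
    "QLD": ["NT", "SA", "NSW"],
    "NSW": ["QLD", "SA", "VIC"],
    "VIC": ["SA", "NSW"],
    "TAS": [],
}
COLOURS = ["r", "g", "b"]

def consistent(var, colour, assignment):
    # fail-on-violation: check only constraints whose variables are bound
    return all(assignment.get(n) != colour for n in NEIGHBOURS[var])

def backtrack(assignment):
    if len(assignment) == len(NEIGHBOURS):
        return assignment                 # total, consistent assignment
    var = next(v for v in NEIGHBOURS if v not in assignment)
    for colour in COLOURS:
        if consistent(var, colour, assignment):
            assignment[var] = colour
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]           # undo and try the next value
    return None                           # no value works: backtrack

solution = backtrack({})
print(solution)
```

Any partial assignment that violates a constraint is pruned immediately, so far fewer than 3^7 assignments are examined.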
Worked example: Sudoku
[9×9 Sudoku grid, partially solved]
Partial solution
(2,5)
(2,7)
(2,8)
...
16
Consistency algorithms
Consistency Algorithms: Prune the search space
Idea: prune the domains as much as possible before selecting values from them.
A variable is domain consistent (or 1-consistent) if no value in its domain is ruled
impossible by any of the constraints.
Example: Is the scheduling example domain consistent?
Variables: X = {A,B,C ,D,E} that represent the starting times of various activities.
Domains: domi = {1, 2, 3, 4}, ∀i ∈ X
Constraints: (B ≠ 3), (C ≠ 2), (A ≠ B), (B ≠ C), (C < D)
(A = D), (E < A), (E < B), (E < C), (E < D), (B ≠ D)
No, domB = {1, 2, 3, 4} isn't domain consistent as B = 3 violates the constraint B ≠ 3.
Even better, we can propagate this information to other unassigned variables.
17
Constraint Network: a bipartite graph
• There is a circle or oval-shaped node for each variable.
• There is a rectangular node for each constraint.
• There is a domain of values associated with each variable node.
• There is an arc from variable X to each constraint that involves X .
Example binary constraint network:
Variables: {A,B,C}, Domains: (not specified)
Constraints: r1(A,B) = (A < B), r2(B,C ) = (B < C )
Arcs: 〈A, r1(A,B)〉, 〈B, r1(A,B)〉,. . .
18
Constraint Network: Scheduling example
19
Arc-consistency
Recap: Forward checking
Idea:
• Keep track of remaining legal values for unassigned variables
• Terminate search when any variable has no legal values
This is embedded in "vanilla" backtracking search
20
Constraint propagation
Forward checking propagates information from assigned to unassigned variables, but doesn’t
provide early detection for all failures:
• NT and SA cannot both be blue!
• Constraint propagation algorithms repeatedly enforce constraints locally. . .
21
Arc Consistency
Arc consistency is the simplest form of constraint propagation, which repeatedly enforces
constraints locally.
Arc consistency
An arc ⟨X, r(X,Y)⟩ is arc consistent if, for each value x ∈ dom(X), there is some value
y ∈ dom(Y) such that r(x, y) is satisfied. A network is arc consistent if all its arcs are arc
consistent.
• What if arc ⟨X, r(X,Y)⟩ is not arc consistent?
• All values of X in dom(X) for which there is no corresponding value in dom(Y) can be
deleted from dom(X) to make the arc ⟨X, r(X,Y)⟩ consistent.
• This removal can never rule out any models (do you see why?)
22
Arc Consistency
Arc consistency is the simplest form of constraint propagation, which repeatedly enforces
constraints locally.
An arc ⟨X, r(X,Y)⟩ is consistent if for every value x of X there is some allowed y.
• If X loses a value, neighbors of X need to be rechecked
• Arc consistency detects failure earlier than forward checking
• Can be run as a preprocessor or after each assignment (e.g. within backtracking search).
26
Arc Consistency Algorithm
• The arcs can be considered in turn making each arc consistent.
• When an arc has been made arc consistent, does it ever need to be checked again? An arc
⟨X, r(X,Y)⟩ needs to be revisited if the domain of one of the Y's is reduced.
• Regardless of the order in which arcs are considered, we will terminate with the same
result: an arc consistent network.
• Three possible outcomes when all arcs are made arc consistent: (Is there a solution?)
• One domain is empty ⇒ no solution
• Each domain has a single value ⇒ unique solution
• Some domains have more than one value ⇒ there may or may not be a solution
Need to solve this new (usually simpler) CSP: same constraints, domains have been reduced
27
Implementation of Arc-Consistency — AC-3 algorithm
28
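The AC-3 pseudocode appears as a figure on the slide; below is a sketch of AC-3 in Python, applied to the scheduling example (the representation and names are illustrative, not from the course materials; unary constraints are folded into the domains first):

```python
# AC-3 arc-consistency on the scheduling CSP.
from collections import deque

domains = {v: {1, 2, 3, 4} for v in "ABCDE"}
domains["B"] -= {3}          # unary constraint B != 3
domains["C"] -= {2}          # unary constraint C != 2

# (X, Y): relation r(x, y) that must hold between X and Y
constraints = {
    ("A", "B"): lambda x, y: x != y,
    ("B", "C"): lambda x, y: x != y,
    ("C", "D"): lambda x, y: x < y,
    ("A", "D"): lambda x, y: x == y,
    ("E", "A"): lambda x, y: x < y,
    ("E", "B"): lambda x, y: x < y,
    ("E", "C"): lambda x, y: x < y,
    ("E", "D"): lambda x, y: x < y,
    ("B", "D"): lambda x, y: x != y,
}

def arcs():
    # each binary constraint yields two arcs: <X, r(X,Y)> and <Y, r(X,Y)>
    for (x, y), r in constraints.items():
        yield (x, y, r)
        yield (y, x, lambda a, b, r=r: r(b, a))

def ac3(domains):
    queue = deque(arcs())
    while queue:
        x, y, r = queue.popleft()
        # values of X with no supporting value in dom(Y) are deleted
        removed = {vx for vx in domains[x]
                   if not any(r(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False              # empty domain: no solution
            # X lost values: revisit every arc <Z, r(Z,X)> with Z != Y
            queue.extend(a for a in arcs() if a[1] == x and a[0] != y)
    return True

ac3(domains)
print(domains)
```

On this instance arc consistency alone reduces every domain to a single value (A = 4, B = 2, C = 3, D = 4, E = 1), which by the outcomes listed on the next slide means a unique solution.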
Complexity of the arc consistency algorithm
Worst-case complexity of this procedure:
• let the max size of a variable domain be d
• let the number of constraints be e
• complexity is O(ed^3)
Some special cases are faster:
• e.g. if the constraint graph is a tree, arc consistency is O(ed)
29
Finding solutions when AC finishes
If some variable domains have more than one element ⇒ search
• Advanced alternatives (beyond this course):
• Variable and arc ordering heuristics speed up arc consistency and search (by an order of
magnitude).
• Split a domain, then recursively solve each part.
• Use conditioning (fix a variable and prune its neighbours’ domains) or cutset conditioning
(instantiate, in all ways, a set of variables such that the remaining constraint graph is a
tree).
• Many other heuristics for exploiting graph structure.
30
Final note on hard and soft constraints
• Given a set of variables, assign a value to each variable that either
• satisfies some set of constraints: satisfiability problems — “hard constraints”
• minimises some cost function, where each assignment of values to variables has some cost:
optimisation problems — “soft constraints”
• For soft constraint optimisation problems, value propagation algorithms are used in the same
way that constraint propagation algorithms can be used.
• In fact, the abstract algebra underpinning these two methods — the generalised distributive
law applied to c-semirings — is identical, and the same as the belief propagation algorithms used
in Bayes nets, the message-passing schemes used for turbo-codes, and the Nash propagation
algorithms used in graphical games, . . .
• Many problems are a mix of hard and soft constraints
(called constrained optimisation problems).
31
Propositions and Inference
Propositions and Inference
• What is logic?
• Propositional logic: Syntax and Semantics
• Example of using logic to represent a problem
• Two types of problems:
• Validity
• (Back to) satisfiability
32
What is logic?
• A formal language to represent sets of states
• A convenient abstraction for dealing with many states
• Regardless of whether there’s a natural notion of “near” or not (i.e., not a metric space),
we can use logic to group different states together
• E.g.:
I have a laptop ⇒ includes any brand and model
There is a laptop on the table ⇒ can be at any position on the table
33
Propositions
Some definitions:
• An interpretation is an assignment of values to all variables.
• A model is an interpretation that satisfies the constraints (as in CSPs).
• Often we don’t want to just find a model, but want to know what is true in all models.
• A proposition is a statement that is true or false in each interpretation.
34
Why propositions?
• Specifying logical formulae is often more natural than filling in tables
• It is easier to check correctness and debug formulae than tables (think of checking a set of
linear algebraic expressions versus matrices containing a set of linear equations)
• We can exploit the Boolean nature for efficient reasoning (pruning, consistency)
We need a language for asking queries (of what follows in all models) that may be more
complicated than asking for the value of a variable
• You may only have partial knowledge of some constraints
• It is easy to incrementally add formulae
• It can be extended to infinitely many variables with infinite domains (using logical
quantification)
35
A formal language
The formal language representation and reasoning system is made up of:
• syntax: specifies the legal sentences
E.g.: if x in input:
• semantics: specifies the meaning of the symbols
• Atoms are logical formulae with no deeper propositional structure, i.e. the primitive
components of the formal language.
These are usually associated with a reasoning theory or proof procedure: a (nondeterministic)
specification of how an answer can be produced.
Many types of logic:
• Propositional logic
• Predicate / first order logic
• High order logic
36
Semantics — Human’s view of semantics
Step 1 Begin with a task domain.
Step 2 Choose atoms in the computer to denote propositions. These atoms have
meaning to the knowledge base designer.
Step 3 Tell the system knowledge about the domain.
Step 4 Ask the system questions.
• The system can tell you whether the question is a logical consequence.
• You can interpret the answer with the meaning associated with the atoms.
37
Electrical Environment
38
Role of semantics
In computer:
• light1_broken ⇐ sw_up ∧ power ∧ unlit_light1.
• sw_up.
• power ⇐ lit_light2.
• unlit_light1.
• lit_light2.
In user's mind:
• light1_broken: light #1 is broken
• sw_up: the switch is up
• power: there is power in the building
• unlit_light1: light #1 isn't lit
• lit_light2: light #2 is lit
Conclusion: light1_broken
• The computer doesn’t know the meaning of the symbols
• The user can interpret the symbol using their meaning
39
Simple language: propositional definite clauses
• An atom is a symbol (starting with a lower case letter)
• A sentence (or body) is an atom or is of the form s1 ∧ s2 where s1 and s2 are sentences.
• A definite clause is an atom or is a rule of the form h⇐ s where h is an atom and s is a
sentence.
• A knowledge base is a set of definite clauses
40
Propositional logic –– Syntax
An atomic proposition, or just an atom, is a symbol.
We use the convention that propositions consist of letters, digits and the underscore (_) and
start with a lower-case letter.
Each atom is either true or false
RiPPLE activity: Are these propositions?
• What is the distance between Mars and Earth?
• x + 2 = 2x
• x + 2 = 2x when x = 1
• 2x < 3x
Atoms are often represented with a symbol called a propositional variable, e.g. p, q (see the
notes in Assignment 0).
41
Propositional logic — Syntax
Complex propositions, or sentences, can be built from simpler propositions using logical
connectives:
Brackets ( ), negation (¬), and (∧), or (∨), implication (⇒), biconditional (⇔)
¬p “not p” negation
p ∧ q “p and q” conjunction
p ∨ q “p or q (or both)” disjunction
p ⇒ q “if p then q” implication
p ⇔ q “p iff q” biconditional or equivalence
42
Rules for evaluating truth
¬p is TRUE iff p is FALSE
p ∧ q is TRUE iff p is TRUE and q is TRUE
p ∨ q is TRUE iff p is TRUE or q is TRUE
p ⇒ q is TRUE iff p is FALSE or q is TRUE
p ⇒ q is FALSE iff p is TRUE and q is FALSE
p ⇔ q is TRUE iff p ⇒ q and q ⇒ p
43
Semantics
• An interpretation I assigns a truth value to each atom.
• A sentence s1 ∧ s2 is true in I if s1 is true in I and s2 is true in I .
• A rule h⇐ b is false in I if b is true in I and h is false in I . The rule is true otherwise.
• A knowledge base KB is true in I if and only if every clause in KB is true in I .
44
Models and Logical Consequence
• A model of a set of clauses is an interpretation in which all the clauses are TRUE.
• If KB is a set of clauses and g is a conjunction of atoms, g is a logical consequence of
KB, written KB ⊨ g, if g is TRUE in every model of KB.
• That is, KB ⊨ g if there is no interpretation in which KB is TRUE and g is FALSE.
45
Simple Example
KB =
p ⇐ q
q
r ⇐ s
p q r s Is a model?
I1 TRUE TRUE TRUE TRUE
I2 FALSE FALSE FALSE FALSE
I3 TRUE TRUE FALSE FALSE
I4 TRUE TRUE TRUE FALSE
I5 TRUE TRUE FALSE TRUE
46
Simple Example
KB =
p ⇐ q
q
r ⇐ s
p q r s Is a model?
I1 TRUE TRUE TRUE TRUE is a model of KB
I2 FALSE FALSE FALSE FALSE not a model of KB
I3 TRUE TRUE FALSE FALSE is a model of KB
I4 TRUE TRUE TRUE FALSE is a model of KB
I5 TRUE TRUE FALSE TRUE not a model of KB
RiPPLE activity: Which of p, q, r, s logically follow from KB?
47
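The table above can be checked mechanically by enumerating all interpretations (a small Python sketch; the encoding is illustrative):

```python
# Enumerate all 16 interpretations of {p, q, r, s}, keep the models of
# KB = {p <= q, q, r <= s}, and find which atoms are true in every model
# (i.e. the logical consequences of KB).
from itertools import product

atoms = ["p", "q", "r", "s"]

def is_model(i):
    # a rule h <= b is false only when b is true and h is false
    return (not (i["q"] and not i["p"])        # p <= q
            and i["q"]                          # q
            and not (i["s"] and not i["r"]))    # r <= s

models = [dict(zip(atoms, vals))
          for vals in product([True, False], repeat=4)
          if is_model(dict(zip(atoms, vals)))]
entailed = [a for a in atoms if all(m[a] for m in models)]
print(entailed)
```

Only p and q are true in all three models (I1, I3, I4 in the table), so KB ⊨ p and KB ⊨ q, but neither r nor s follows.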
User’s view of Semantics — How logic is used to represent a problem
Formulate information as propositional logic sentences, to create a knowledge base (KB)
1. Choose a task domain: intended interpretation.
2. Associate an atom with each proposition you want to represent.
3. Tell the system clauses that are true in the intended interpretation: axiomatizing the
domain.
4. Ask questions about the intended interpretation.
5. If KB ⊨ g, then g must be true in the intended interpretation.
6. Users can interpret the answer using their intended interpretation of the symbols.
48
Computer’s view of semantics
• The computer doesn’t have access to the intended interpretation.
• All it knows is the knowledge base.
• The computer can determine if a formula is a logical consequence of KB.
• If KB ⊨ g then g must be true in the intended interpretation.
• If KB ⊭ g then there is a model of KB in which g is false. This could be the intended
interpretation.
Deduction: Sentences entailed by the current KB can be added to the KB
49
Electrical Environment
50
Representing the Electrical Environment
light_l1.
light_l2.
down_s1.
up_s2.
up_s3.
ok_l1.
ok_l2.
ok_cb1.
ok_cb2.
live_outside.
lit_l1 ⇐ live_w0 ∧ ok_l1.
live_w0 ⇐ live_w1 ∧ up_s2.
live_w0 ⇐ live_w2 ∧ down_s2.
live_w1 ⇐ live_w3 ∧ up_s1.
live_w2 ⇐ live_w3 ∧ down_s1.
lit_l2 ⇐ live_w4 ∧ ok_l2.
live_w4 ⇐ live_w3 ∧ up_s3.
live_p1 ⇐ live_w3.
live_w3 ⇐ live_w5 ∧ ok_cb1.
live_p2 ⇐ live_w6.
live_w6 ⇐ live_w5 ∧ ok_cb2.
live_w5 ⇐ live_outside.
51
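A bottom-up (forward-chaining) procedure for definite clauses can be applied to this knowledge base: repeatedly add the head of any rule whose body atoms have all been derived, until nothing changes. A sketch (atom names use underscores in place of the slides' spacing; the fixed-point loop is the standard procedure, not code from the slides):

```python
# Forward chaining over the electrical-environment definite-clause KB.
facts = {"light_l1", "light_l2", "down_s1", "up_s2", "up_s3",
         "ok_l1", "ok_l2", "ok_cb1", "ok_cb2", "live_outside"}

rules = [  # (head, [body atoms])
    ("lit_l1",  ["live_w0", "ok_l1"]),
    ("live_w0", ["live_w1", "up_s2"]),
    ("live_w0", ["live_w2", "down_s2"]),
    ("live_w1", ["live_w3", "up_s1"]),
    ("live_w2", ["live_w3", "down_s1"]),
    ("lit_l2",  ["live_w4", "ok_l2"]),
    ("live_w4", ["live_w3", "up_s3"]),
    ("live_p1", ["live_w3"]),
    ("live_w3", ["live_w5", "ok_cb1"]),
    ("live_p2", ["live_w6"]),
    ("live_w6", ["live_w5", "ok_cb2"]),
    ("live_w5", ["live_outside"]),
]

def consequences(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                        # iterate to a fixed point
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

c = consequences(facts, rules)
print("lit_l2" in c, "lit_l1" in c)
```

Here lit_l2 is derivable (via live_outside → live_w5 → live_w3 → live_w4), but lit_l1 is not, because neither up_s1 nor down_s2 is in the KB.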
Terminology
• A sentence is valid: Its truth value is TRUE for all possible interpretations
E.g. p ∨ ¬p
• A sentence is satisfiable: Its truth value is TRUE for at least one of the possible
interpretations. E.g. ¬p
Everything that’s valid is also satisfiable
• A sentence is unsatisfiable: Its truth value is FALSE for all possible interpretations. E.g.
p ∧ ¬p
• For propositional logic, we can always decide if a sentence is valid/satisfiable/unsatisfiable
in finite time (decidable problems).
52
Validity
Terminology
A model of a sentence: An interpretation that makes the sentence true
A sentence A entails another sentence B (denoted A ⊨ B) iff every model of A is also a
model of B (equivalently, A ⇒ B is valid)
53
(Simple) Model checking
KB:
R1: ¬P1,1
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
R4: ¬B1,1
R5: B2,1
Check if ¬P1,2 is entailed by KB
54
(Simple) Model checking
Enumerate the models:
• All true/false values for P1,1,B1,1,P1,2,P2,1,B2,1,P2,2,P3,1
• Check if ¬P1,2 is true in all models where the knowledge base (R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5) is
true
55
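The enumeration above is easy to sketch in Python (symbol names are illustrative; `==` on Booleans plays the role of ⇔):

```python
# Model checking by enumeration: list all 2^7 truth assignments, keep
# those where the KB (R1 ^ R2 ^ R3 ^ R4 ^ R5) is true, and check that
# ~P12 holds in every one of them.
from itertools import product

symbols = ["P11", "B11", "P12", "P21", "B21", "P22", "P31"]

def kb(m):
    return (not m["P11"]                                        # R1
            and m["B11"] == (m["P12"] or m["P21"])              # R2
            and m["B21"] == (m["P11"] or m["P22"] or m["P31"])  # R3
            and not m["B11"]                                    # R4
            and m["B21"])                                       # R5

interpretations = [dict(zip(symbols, v))
                   for v in product([True, False], repeat=7)]
kb_models = [m for m in interpretations if kb(m)]
entailed = all(not m["P12"] for m in kb_models)
print(len(kb_models), entailed)
```

Only three of the 128 interpretations are models of the KB, and P12 is false in all of them, so KB ⊨ ¬P12.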
(Simple) Model checking
Sound: The result is correct
Complete: It always gives an answer
Complexity: n is # propositional variables
• Time: O(2^n) ⇐ Bad!
• Space: O(n)
56
Theorem proving — Check validity without checking all models
Use logical equivalences:
α, β sentences (atomic or complex)
Two sentences are logically equivalent iff they are true in the same models: α ≡ β iff α ⊨ β and β ⊨ α
57
Theorem proving — Check validity without checking all models
Transformations for logical expressions:
Modus ponens (mode that affirms):
α⇒ β
α
β
Modus tollens (mode that denies):
α⇒ β
¬β
¬α
And-elimination (for a conjunction, any of the conjuncts can be inferred):
α ∧ β
α
58
Theorem proving — Natural deduction
KB:
S1: ¬P1,1
S2: B1,1 ⇔ (P1,2 ∨ P2,1)
S3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
S4: ¬B1,1
S5: B2,1
Check if ¬P1,2 is entailed by KB?
59
Theorem proving — Natural deduction
KB:
S1: ¬P1,1
S2: B1,1 ⇔ (P1,2 ∨ P2,1)
S3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
S4: ¬B1,1
S5: B2,1
Check if ¬P1,2 is entailed by KB?
60
Theorem proving — Natural deduction using search
State space: All possible sets of sentences
Action space: All inference rules (see more soon)
World dynamics: Applying an inference rule to sentences that match its premises (the part
above the line) produces the sentence that lies below the line of the
inference rule
Initial state: Initial knowledge base
Goal state: The state contains the sentence we’re trying to prove
This procedure is sound, but it is complete only if we can provide a complete list of inference
rules. Even when we can, the branching factor can be very high.
61
Resolution: A single inference rule (action)
Given α, β and γ sentences (atomic or complex):
α ∨ β
¬β ∨ γ
α ∨ γ
This single inference rule is sound and complete only when applied to propositional logic
sentences written in Conjunctive Normal Form (CNF)
62
Conjunctive Normal Form (CNF)
Conjunctive Normal Form (CNF)
Conjunctions of disjunctions, e.g. (¬A ∨ B) ∧ (C ∨ D) ∧ (E ∨ F )
Some terminology:
• Clause: a disjunction of literals. e.g., (¬A ∨ B)
• Literals: variables or the negation of variables, e.g., A,¬A,B
CNF is quite useful for model checking and for solving satisfiability problems too — in fact,
we’ve seen it before! Where?
63
Every sentence in propositional logic can be written in CNF
Three step conversion:
1. Eliminate arrows using definitions
2. Drive in negations using De Morgan’s Laws
3. Distribute OR over AND
E.g. Convert (A ∨ B)⇒ (C ⇒ D) to CNF
1. Eliminate arrows: ¬(A ∨ B) ∨ (¬C ∨ D)
2. Drive in negations: (¬A ∧ ¬B) ∨ (¬C ∨ D)
3. Distribute OR over AND: (¬A ∨ ¬C ∨ D) ∧ (¬B ∨ ¬C ∨ D)
64
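The three steps can be sketched as recursive transformations over a small tuple-based formula representation (the encoding — ('var', name), ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g), ('iff', f, g) — is chosen here for illustration, not taken from the course):

```python
# Three-step CNF conversion: eliminate arrows, drive in negations, distribute.
def elim_arrows(f):
    if f[0] == 'var':
        return f
    if f[0] == 'not':
        return ('not', elim_arrows(f[1]))
    a, b = elim_arrows(f[1]), elim_arrows(f[2])
    if f[0] == 'imp':   # a => b  ==  ~a v b
        return ('or', ('not', a), b)
    if f[0] == 'iff':   # a <=> b  ==  (~a v b) ^ (~b v a)
        return ('and', ('or', ('not', a), b), ('or', ('not', b), a))
    return (f[0], a, b)

def nnf(f):             # drive negations inwards with De Morgan's Laws
    if f[0] == 'not':
        g = f[1]
        if g[0] == 'var':
            return f
        if g[0] == 'not':
            return nnf(g[1])
        flip = 'or' if g[0] == 'and' else 'and'
        return (flip, nnf(('not', g[1])), nnf(('not', g[2])))
    if f[0] == 'var':
        return f
    return (f[0], nnf(f[1]), nnf(f[2]))

def distribute(f):      # distribute OR over AND
    if f[0] in ('var', 'not'):
        return f
    a, b = distribute(f[1]), distribute(f[2])
    if f[0] == 'and':
        return ('and', a, b)
    if a[0] == 'and':
        return distribute(('and', ('or', a[1], b), ('or', a[2], b)))
    if b[0] == 'and':
        return distribute(('and', ('or', a, b[1]), ('or', a, b[2])))
    return ('or', a, b)

def to_cnf(f):
    return distribute(nnf(elim_arrows(f)))

def clauses(f):         # flatten a CNF tree into a set of literal-sets
    if f[0] == 'and':
        return clauses(f[1]) | clauses(f[2])
    def lits(g):
        if g[0] == 'or':
            return lits(g[1]) | lits(g[2])
        return {'~' + g[1][1]} if g[0] == 'not' else {g[1]}
    return {frozenset(lits(f))}

A, B, C, D = (('var', v) for v in 'ABCD')
print(clauses(to_cnf(('imp', ('or', A, B), ('imp', C, D)))))
```

Running this on (A ∨ B) ⇒ (C ⇒ D) reproduces the slide's result: the two clauses (¬A ∨ ¬C ∨ D) and (¬B ∨ ¬C ∨ D).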
Automated theorem proving – Resolution refutation
Three steps:
1. Convert all sentences into CNF
2. Negate the desired conclusion
3. Apply resolution rule until: (i) derive false (a contradiction), or (ii) can’t apply the rule
anymore
Sound and complete (for propositional logic):
• If we derive a contradiction, the conclusion follows from the axioms
• If we can’t apply any more, the conclusion cannot be proved from the axioms
65
Resolution refutation — Examples
KB: (P ∨ Q) ∧ (P ⇒ R) ∧ (Q ⇒ R)
Does KB ⊨ R?
KB: (P ∧ ¬P)
Does KB ⊨ R?
66
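Resolution refutation over clause sets can be sketched as follows (a naive saturation loop for a single-literal query; literals are strings and '~' marks negation — an illustrative encoding, not the course's):

```python
# Resolution refutation: KB |= g iff KB ^ ~g is unsatisfiable,
# i.e. the empty clause is derivable by resolution.
from itertools import combinations

def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    out = set()
    for lit in c1:
        if neg(lit) in c2:
            resolvent = (c1 - {lit}) | (c2 - {neg(lit)})
            if not any(neg(l) in resolvent for l in resolvent):
                out.add(frozenset(resolvent))   # skip tautologous resolvents
    return out

def entails(kb_clauses, query_lit):
    clauses = ({frozenset(c) for c in kb_clauses}
               | {frozenset({neg(query_lit)})})  # negate the conclusion
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            resolvents = resolve(c1, c2)
            if frozenset() in resolvents:
                return True          # empty clause: contradiction derived
            new |= resolvents
        if new <= clauses:
            return False             # saturated without a contradiction
        clauses |= new

# KB = (P v Q) ^ (P => R) ^ (Q => R), written as clauses
print(entails([{'P', 'Q'}, {'~P', 'R'}, {'~Q', 'R'}], 'R'))
print(entails([{'P'}, {'~P'}], 'R'))
```

Both examples on the slide come out entailed: the first derives R directly, and the second is an unsatisfiable KB, from which anything follows.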
Satisfiability (again)
Davis-Putnam-Logemann-Loveland (DPLL) tree search algorithm
An assign-and-simplify strategy
• Consider a search tree where at each level we consider the possible assignments to one
variable, say V
• On one branch, we assume V is FALSE and on the other that it is TRUE
• Given an assignment for a variable, we can simplify the sentence and then repeat the
process for another variable
• DPLL is a backtracking algorithm for systematically doing this
67
Davis-Putnam-Logemann-Loveland (DPLL) tree search algorithm
The algorithm is building a solution while trying assignments — you have a partial solution
which might prove successful or not as you go.
Provides an ordering of which variables to check first — this is the key to its efficiency
Definitions:
• Clause: a disjunction of literals
• Literals: variables or the negation of variables
• Unit clause: clause which has exactly one literal which is still unassigned and the other
(assigned) literals are all assigned to FALSE
• Pure variable: appears as positive only or negative only in S
If the current assignment is valid, you can determine the value of the unassigned literal in a
unit clause, because that literal must be true
E.g. if we have (X1 ∨ X2 ∨ X3) ∧ (¬X1 ∨ X4 ∨ X5) ∧ (X1 ∨ X4) and we have already
assigned X1 = FALSE: then (X1 ∨ X4) becomes a unit clause, so X4 must be set to TRUE
68
Davis-Putnam-Logemann-Loveland (DPLL) tree search algorithm
Provides an ordering of which variables to check first
DPLL(Sentence S)
    If S is empty, return TRUE
    If S has an empty clause, return FALSE
    If S has a unit clause U, return DPLL(S(U))
        (unit clause: exactly one unassigned literal; S(U) is S simplified after assigning U)
    If S has a pure variable U, return DPLL(S(U))
        (pure variable: appears as positive only or negative only in S)
    Pick a variable v:
        If DPLL(S(v)) then return TRUE
        Else return DPLL(S(¬v))
Heuristics to pick the variable: Max #occurrences, Min size clauses. Basically, pick the most
constrained variable first (if it’s going to fail, better fail early)
69
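The pseudocode above might be rendered in Python as follows (the clause representation is illustrative; assigning a literal TRUE simplifies S by deleting satisfied clauses and removing the complementary literal from the rest):

```python
# DPLL over CNF given as a list of frozensets of literals ('~' = negation),
# with the unit-clause and pure-literal rules.
def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def simplify(clauses, lit):
    # assign lit := TRUE: drop satisfied clauses, shrink the others
    return [c - {neg(lit)} for c in clauses if lit not in c]

def dpll(clauses):
    if not clauses:
        return True                       # S empty: all clauses satisfied
    if frozenset() in clauses:
        return False                      # empty clause: dead end
    for c in clauses:                     # unit clause rule
        if len(c) == 1:
            return dpll(simplify(clauses, next(iter(c))))
    lits = {l for c in clauses for l in c}
    for l in lits:                        # pure literal rule
        if neg(l) not in lits:
            return dpll(simplify(clauses, l))
    v = next(iter(lits))                  # branch on some variable
    return dpll(simplify(clauses, v)) or dpll(simplify(clauses, neg(v)))

S = [frozenset({'X1', 'X2', 'X3'}), frozenset({'~X1', 'X4', 'X5'}),
     frozenset({'X1', 'X4'})]
print(dpll(S))                                        # satisfiable
print(dpll([frozenset({'P'}), frozenset({'~P'})]))    # unsatisfiable
```

The variable-selection heuristics on the slide (max occurrences, min clause size) would replace the arbitrary `next(iter(lits))` choice in the branching step.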
DPLL
Sound (the result is correct)
Complete (it always gives an answer)
Speed and memory consumption depends a lot on:
• Which symbol is being assigned first
• Which assignment is being followed first
A lot of other methods, heuristics, and meta heuristics have been proposed for SAT problems,
e.g. http://www.cs.ubc.ca/labs/beta/Projects/SATzilla/
70
Attributions and References
Thanks to Dr Alina Bialkowski and Dr Hanna Kurniawati for their materials.
Many of the slides in this course are adapted from David Poole and Alan Mackworth, Artificial Intelligence:
Foundations of Computational Agents, 2E, CUP, 2017, https://artint.info/. These materials are
copyright © Poole and Mackworth, 2017, licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike 4.0 International License.
Other materials derived from Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3E,
Prentice Hall, 2009.
Some materials are derived from Chris Amato (Northeastern University), Tuomas Sandholm (CMU) and Kevin
Leyton-Brown (UBC).
All remaining errors are Archie’s — please email if you find any: archie. .au