Unit 3 – Abductive Inference
Course: C231 Introduction to AI
© Alessandra Russo
• Informal definition
• Formalizing the task
• Algorithm
• Semantic properties
• Example applications
» Diagnosis problems
» Automated Planning
When Newton saw the apple falling, he must have performed an
abductive inference and come up with the theory of gravity.
Ø The apple fell down.
Ø If the earth pulled everything towards it, then of course
the apple too would fall down.
Ø So the earth is pulling everything towards it.
Handling incomplete information
“If I push the switch button, the light in my room will switch on.”
“The light does not switch on! The lightbulb must be broken.”
Default reasoning:
reasoning about “normal circumstances”, by making
assumptions on what is false.
Abductive reasoning:
reasoning about possible explanations, by making
assumptions on what might be false and what might be true.
Given a theory and an observation, find an explanation such that
theory ∪ explanation ⊨ observation
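The task above can be sketched as a search over candidate explanations. The following is a minimal, hypothetical Python sketch (the `entails` helper and the propositional encoding of the switch/lightbulb scenario are invented for illustration), where definite clauses are `(head, body)` pairs and entailment is tested by forward chaining:

```python
from itertools import chain, combinations

def entails(clauses, facts, query):
    """Forward-chain definite clauses (head, body) from facts; test the query."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return query in derived

# Invented propositional rendering of the slide's scenario.
theory = [("light_on", ["switch_pushed", "bulb_ok"])]
abducibles = ["switch_pushed", "bulb_ok"]

# Every subset of abducibles that, added to the theory, entails the observation:
explanations = [
    set(s)
    for s in chain.from_iterable(combinations(abducibles, k)
                                 for k in range(len(abducibles) + 1))
    if entails(theory, s, "light_on")
]
print(explanations)  # the only explanation is {'switch_pushed', 'bulb_ok'}
```

This brute-force enumeration is exponential in the number of abducibles; the proof procedure later in the unit is goal-directed instead.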
Desirable properties of explanations
◉ Explanations should be basic
theory:
flies(X) ← bird(X), not abnormal(X)
abnormal(X) ← penguin(X)
bird(X) ← penguin(X)
bird(X) ← sparrow(X)
observation: flies(tweety)
E1: {sparrow(tweety)} (basic explanation)
E2: {bird(tweety)} (non-basic explanation)
Basic explanations are restricted to abducibles: ground literals whose
predicates are not defined in the theory.
Desirable properties of explanations
◉ Explanations should be minimal
theory:
flies(X) ← bird(X), not abnormal(X)
abnormal(X) ← penguin(X)
bird(X) ← penguin(X)
bird(X) ← sparrow(X)
bird(X) ← woodpecker(X)
observation: flies(tweety)
E1: {sparrow(tweety)} (minimal explanation)
E2: {sparrow(tweety), woodpecker(tweety)} (non-minimal explanation)
A minimal explanation is not subsumed by any other explanation.
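The minimality test amounts to a subset filter over candidate explanations. A small sketch (the `minimal` helper is invented, not the lecture's algorithm):

```python
def minimal(cands):
    """Keep only candidates that are not proper supersets of another candidate."""
    return [e for e in cands
            if not any(other < e for other in cands)]  # strict-subset test

E1 = {"sparrow(tweety)"}
E2 = {"sparrow(tweety)", "woodpecker(tweety)"}
print(minimal([E1, E2]))  # [{'sparrow(tweety)'}]
```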
Desirable properties of explanations
◉ Explanations should be consistent with the theory
theory:
flies(X) ← bird(X), not abnormal(X)
abnormal(X) ← penguin(X)
bird(X) ← penguin(X)
bird(X) ← sparrow(X)
bird(X) ← woodpecker(X)
← woodpecker(tweety)
observation: flies(tweety)
E1: {sparrow(tweety)} (consistent explanation)
E2: {woodpecker(tweety)} (non-consistent explanation: theory ∪ E2 is inconsistent)
Defining abductive reasoning
theory:
flies(X) ← bird(X), not abnormal(X)
abnormal(X) ← penguin(X)
bird(X) ← penguin(X)
bird(X) ← sparrow(X)
bird(X) ← woodpecker(X)
← woodpecker(tweety) (constraint)
observation: flies(tweety)
Given a theory BK, a set of integrity constraints IC and a set of abducibles
A, an abductive reasoning framework is the tuple ⟨BK, A, IC⟩.
A includes ground literals whose predicate names are not defined in BK.
A = {sparrow(tweety), penguin(tweety), woodpecker(tweety)}
Formal definition of abduction
An abductive logic program ⟨KB, A, IC⟩, for a given problem domain, is:
KB = set of normal clauses
A = set of ground undefined literals
IC = set of normal denials
Given an abductive framework, an abductive solution (called an explanation) for a
given goal G is a set ∆ of ground literals such that:
Ø ∆ ⊆ A (its elements belong to the predefined language of abducibles)
Ø KB ∪ ∆ ⊨ G (it provides the missing information needed to solve the goal)
Ø KB ∪ ∆ ⊭ ⊥ (it is consistent with the knowledge base)
Ø KB ∪ ∆ ⊨ IC (it satisfies the integrity constraints)
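These conditions can be checked directly for ground, negation-free programs. A sketch under that simplification (the `is_explanation` helper and the clause encoding are invented; a denial is encoded as a body that must not become entirely derivable):

```python
def consequences(kb, delta):
    """All atoms derivable from the assumed facts delta by forward chaining."""
    derived = set(delta)
    changed = True
    while changed:
        changed = False
        for head, body in kb:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def is_explanation(kb, ics, abducibles, delta, goal):
    if not delta <= abducibles:                     # condition: delta subset of A
        return False
    d = consequences(kb, delta)
    if goal not in d:                               # condition: KB + delta entails G
        return False
    return all(not all(b in d for b in body)        # condition: no denial violated
               for body in ics)

# Ground, negation-free simplification of the tweety theory from the slides.
kb = [("flies(tweety)", ["bird(tweety)"]),
      ("bird(tweety)", ["sparrow(tweety)"]),
      ("bird(tweety)", ["woodpecker(tweety)"])]
ics = [["woodpecker(tweety)"]]                      # <- woodpecker(tweety)
A = {"sparrow(tweety)", "penguin(tweety)", "woodpecker(tweety)"}

print(is_explanation(kb, ics, A, {"sparrow(tweety)"}, "flies(tweety)"))     # True
print(is_explanation(kb, ics, A, {"woodpecker(tweety)"}, "flies(tweety)"))  # False
```

`{woodpecker(tweety)}` entails the goal but violates the denial, matching the consistency requirement on the previous slide.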
Abduction: extending SLD
What happens when our knowledge base is incomplete?
KB:
likes(peter, S) ← studentOf(S, peter)
likes(X, Y) ← friend(Y, X)
Query: KB ⊨ likes(peter, paul)?
← likes(peter, paul)
← studentOf(paul, peter)    ← friend(paul, peter)
?                           ?
SLD would fail, due to lack of information. We could instead
assume (as possible explanations) what is not known.
Multiple equally good explanations:
∆1 = {studentOf(paul, peter)}
∆2 = {friend(paul, peter)}
Abductive reasoning computes explanations of observations
with respect to the given KB.
Abduction: what about NAF?
How do we guarantee consistency with KB when we have NAF?
KB:
flies(X) ← bird(X), not abnormal(X)
abnormal(X) ← penguin(X)
bird(X) ← penguin(X)
bird(X) ← sparrow(X)
Query: KB ⊨ flies(tweety)?
← flies(tweety)
← bird(tweety), not abnormal(tweety)
← penguin(tweety), not abnormal(tweety)    ← sparrow(tweety), not abnormal(tweety)
?                                          ?
Multiple explanations:
∆1 = {penguin(tweety), not abnormal(tweety)}
∆2 = {sparrow(tweety), not abnormal(tweety)}
Ø ∆1 is inconsistent with KB
Ø not abnormal is not abducible
We need to reason with not explicitly.
Abduction: explaining NAF
How to maintain consistency when negated literals are assumed?
KB:
flies(X) ← not abnormal(X), bird(X)
abnormal(X) ← penguin(X)
bird(X) ← penguin(X)
bird(X) ← sparrow(X)
Query: KB ⊨ flies(tweety)?
← flies(tweety)
← not abnormal(tweety), bird(tweety)    ∆1 = {not abnormal(tweety)}
Consistency check for not abnormal(tweety):
← abnormal(tweety)
← penguin(tweety)
← not penguin(tweety)
?
∆1 = {not abnormal(tweety), not penguin(tweety)}
Explicit explanation for not abnormal(tweety).
Abduction: explaining NAF
Consider the following example:
KB:
flies(X) ← not abnormal(X), bird(X)
abnormal(X) ← penguin(X)
abnormal(X) ← dead(X)
bird(X) ← penguin(X)
bird(X) ← sparrow(X)
Query: KB ⊨ flies(tweety)?
← flies(tweety)
← not abnormal(tweety), bird(tweety)    ∆1 = {not abnormal(tweety)}
Consistency check for not abnormal(tweety):
← abnormal(tweety)
← penguin(tweety)        ← dead(tweety)
← not penguin(tweety)    ← not dead(tweety)
?                        ?
∆1 = {not abnormal(tweety), not penguin(tweety)}
∆1 = {not abnormal(tweety), not penguin(tweety), not dead(tweety)}
Remaining goal:
← bird(tweety)
← sparrow(tweety)    ← penguin(tweety)
?
∆1 = {not abnormal(tweety),
not penguin(tweety),
not dead(tweety),
sparrow(tweety)}
Abduction: explaining NAF
The order in which positive and negated literals appear in a
clause only influences the order in which abducibles are
added to the explanation, but not the explanation itself.
KB (negated literal first):
flies(X) ← not abnormal(X), bird(X)
abnormal(X) ← penguin(X)
abnormal(X) ← dead(X)
bird(X) ← penguin(X)
bird(X) ← sparrow(X)
KB (negated literal last):
flies(X) ← bird(X), not abnormal(X)
abnormal(X) ← penguin(X)
abnormal(X) ← dead(X)
bird(X) ← penguin(X)
bird(X) ← sparrow(X)
Query: flies(tweety)?
∆1 = {not abnormal(tweety), not penguin(tweety), not dead(tweety), sparrow(tweety)}
∆1 = {sparrow(tweety), not abnormal(tweety), not penguin(tweety), not dead(tweety)}
Abduction: satisfying constraints?
Consider the following example:
KB:
headache(X) ← overworked(X)
headache(X) ← migrane(X)
migrane(X) ← wrongdiet(X)
migrane(X) ← jetlag(X)
Ab = {overworked(jane), wrongdiet(jane), jetlag(jane)}
G = headache(jane)
Alternative explanations:
∆1 = {overworked(jane)}
∆2 = {wrongdiet(jane)}
∆3 = {jetlag(jane)}
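The three alternative explanations can be reproduced by a small sketch (the `derives` helper and ground encoding are invented for illustration):

```python
kb = [("headache(jane)", ["overworked(jane)"]),
      ("headache(jane)", ["migrane(jane)"]),
      ("migrane(jane)", ["wrongdiet(jane)"]),
      ("migrane(jane)", ["jetlag(jane)"])]
Ab = ["overworked(jane)", "wrongdiet(jane)", "jetlag(jane)"]

def derives(kb, facts, goal):
    """Forward chaining over ground clauses; does kb + facts entail goal?"""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in kb:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return goal in derived

explanations = [{a} for a in Ab if derives(kb, {a}, "headache(jane)")]
print(explanations)
# [{'overworked(jane)'}, {'wrongdiet(jane)'}, {'jetlag(jane)'}]
```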
Abduction: satisfying constraints?
Consider the following example:
KB:
headache(X) ← overworked(X)
headache(X) ← migrane(X)
migrane(X) ← wrongdiet(X)
migrane(X) ← jetlag(X)
student(jane)
← student(X), overworked(X)
Ab = {overworked(jane), wrongdiet(jane), jetlag(jane)}
G = headache(jane)
∆1 = {overworked(jane)} (eliminated by the constraint)
∆2 = {wrongdiet(jane)}
∆3 = {jetlag(jane)}
Constraints may eliminate explanations.
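The elimination of ∆1 by the integrity constraint can be sketched as follows (helper names are invented; a denial is a body that must not become entirely derivable):

```python
kb = [("headache(jane)", ["overworked(jane)"]),
      ("headache(jane)", ["migrane(jane)"]),
      ("migrane(jane)", ["wrongdiet(jane)"]),
      ("migrane(jane)", ["jetlag(jane)"]),
      ("student(jane)", [])]                        # fact
ic = [["student(jane)", "overworked(jane)"]]        # <- student(X), overworked(X)
Ab = ["overworked(jane)", "wrongdiet(jane)", "jetlag(jane)"]

def consequences(facts):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in kb:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def ok(delta):
    """delta explains the goal and violates no denial."""
    d = consequences(delta)
    return "headache(jane)" in d and not any(
        all(b in d for b in body) for body in ic)

explanations = [{a} for a in Ab if ok({a})]
print(explanations)  # [{'wrongdiet(jane)'}, {'jetlag(jane)'}]
```

`{overworked(jane)}` still entails the goal, but together with the fact `student(jane)` it makes the denial body fully derivable, so it is rejected.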
Abduction: satisfying constraints?
KB:
headache(X) ← overworked(X)
headache(X) ← migrane(X)
migrane(X) ← wrongdiet(X)
migrane(X) ← jetlag(X)
student(jane)
← student(X), overworked(X)
Ab = {overworked(jane), wrongdiet(jane), jetlag(jane)}
G = headache(jane)
← headache(jane)    ∆1 = {}
← overworked(jane)    ∆1 = {overworked(jane)}
consistency check: ← student(jane) succeeds, so ∆1 is eliminated
← migrane(jane)
← wrongdiet(jane)    ∆2 = {wrongdiet(jane)}
← jetlag(jane)    ∆3 = {jetlag(jane)}
Abduction: satisfying constraints?
Consider the following example:
KB:
headache(X) ← overworked(X)
headache(X) ← migrane(X)
migrane(X) ← wrongdiet(X)
← not jetlag(X), overworked(X)
Ab = {overworked(jane), wrongdiet(jane), jetlag(jane)}
G = headache(jane)
∆1 = {overworked(jane), jetlag(jane)}
∆2 = {wrongdiet(jane)}
Constraints may force abducibles into explanations.
Abduction: satisfying constraints?
KB:
headache(X) ← overworked(X)
headache(X) ← migrane(X)
migrane(X) ← wrongdiet(X)
← not jetlag(X), overworked(X)
Ab = {overworked(jane), wrongdiet(jane), jetlag(jane)}
G = headache(jane)
← headache(jane)
← overworked(jane)    ∆1 = {overworked(jane)}
consistency check: ← not jetlag(jane) must fail, so jetlag(jane) is abduced
∆1 = {overworked(jane), jetlag(jane)}
← migrane(jane)
← wrongdiet(jane)    ∆2 = {wrongdiet(jane)}
Abductive proof procedure
Two reasoning phases:
q Abductive derivation: proceeds similarly to SLDNF
resolution, but with the additional feature of assuming
abducibles when they are encountered as sub-goals to be proved.
q Consistency derivation: resolves the assumed literal with all
relevant integrity constraints and proves that each of the
resolvents fails (possibly adding more assumptions if needed).
Note:
1. All negated literals are considered to be abducibles.
2. IC implicitly contains ← P, not P (for every predicate P).
Abductive Proof Procedure
Let ⟨KB, A, IC⟩ be an abductive framework in clausal logic and let G be a
ground observation.
Abductive phase: a sequence of states (G1, ∆1), (G2, ∆2), …, (∅, ∆n),
with G1 = G and initially ∆1 = {}.
Select a subgoal L from Gi; let Gi’ = Gi – {L}.
A1. If L ∉ A and L is a positive atom:
if there is H ← B in KB such that L = Hθ,
then Gi+1 = Bθ ∪ Gi’ and ∆i+1 = ∆i.
A2. If L ∈ ∆i, then Gi+1 = Gi’ and ∆i+1 = ∆i.
A3. If L ∈ A, L ∉ ∆i and not L ∉ ∆i, then
if there is a successful consistency derivation starting from (L, ∆i ∪ {L})
and ending in (∅, ∆’), then Gi+1 = Gi’ and ∆i+1 = ∆’.
Abductive Proof Procedure
Consistency phase: a sequence of states (F1, ∆1), …, (Fn, ∆n), starting
from (B, ∆i ∪ {B}), where F1 is the set of all denials in IC resolved with B.
Select a denial ← φi in Fi and a literal L from it;
let φi’ = φi – {L} and Fi’ = Fi – {← φi}.
C1. If L ∉ A, perform SLDNF failure with L as a subgoal.
C2. If L ∈ ∆i, then Fi+1 = {← φi’} ∪ Fi’ and ∆i+1 = ∆i.
C3. If L ∈ A and not L ∈ ∆i, then Fi+1 = Fi’ and ∆i+1 = ∆i.
C4. If L ∈ A, L ∉ ∆i and not L ∉ ∆i, then
if there is a successful abductive derivation of not L, from (not L, ∆i)
to (∅, ∆’), then Fi+1 = Fi’ and ∆i+1 = ∆’;
otherwise Fi+1 = {← φi’} ∪ Fi’ and ∆i+1 = ∆i.
Example of an abductive proof
KB:
p(X) ← not q(X)
q(X) ← b(X)
A = {b(a)}
G = p(a)
Example of an abductive proof
KB:
p(X) ← not q(X)
q(X) ← b(X)
IC:
← not p(X), p(X)
← not q(X), q(X)
← not b(X), b(X)
A = {b(a), not b(a), not p(a), not q(a)}
G = p(a)
Abductive phase:
← p(a)    (A1)
← not q(a)    (A3)    ∆1 = {not q(a)}
Consistency phase for not q(a):
← q(a)    (C1)
← b(a)    (C4)
Abductive phase:
← not b(a)    (A3)    ∆2 = {not q(a), not b(a)}
Consistency phase for not b(a):
← b(a)    fails
Abductive solution:
∆ = {not q(a), not b(a)}
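The two-phase procedure can be sketched for ground programs as a pair of mutually recursive functions. This is a simplified, invented rendering (function names `solve` and `fail` are not from the lecture): it treats every negative literal as abducible, implements only the implicit constraints ← P, not P, and does not handle looping programs.

```python
# Ground program: atom -> list of clause bodies (lists of literals).
KB = {
    "p(a)": [["not q(a)"]],   # p(a) <- not q(a)
    "q(a)": [["b(a)"]],       # q(a) <- b(a)
}
ABDUCIBLES = {"b(a)", "not b(a)", "not p(a)", "not q(a)"}

def neg(lit):
    """Complement of a literal, written with a 'not ' prefix."""
    return lit[4:] if lit.startswith("not ") else "not " + lit

def solve(goals, delta):
    """Abductive derivation: prove all goals, returning an extended delta or None."""
    if not goals:
        return delta
    l, rest = goals[0], goals[1:]
    if l in delta:                               # A2: already assumed
        return solve(rest, delta)
    if l in ABDUCIBLES:                          # A3: assume l, then check consistency
        if neg(l) in delta:
            return None
        d = fail(neg(l), delta | {l})            # implicit IC: complement must fail
        return solve(rest, d) if d is not None else None
    for body in KB.get(l, []):                   # A1: resolve with a program clause
        d = solve(body + rest, delta)
        if d is not None:
            return d
    return None

def fail(l, delta):
    """Consistency derivation: make l unprovable; may extend delta (C4)."""
    if l in delta:
        return None                              # l is assumed: it cannot fail
    if l in ABDUCIBLES:
        if neg(l) in delta:                      # C3: complement already assumed
            return delta
        return solve([neg(l)], delta)            # C4: abduce the complement
    d = delta
    for body in KB.get(l, []):                   # C1: every clause for l must fail
        d = fail_body(body, d)
        if d is None:
            return None
    return d

def fail_body(body, delta):
    """A clause body fails if some literal in it can be made to fail."""
    for lit in body:
        d = fail(lit, delta)
        if d is not None:
            return d
    return None

print(solve(["p(a)"], set()))  # reproduces the slide: {'not q(a)', 'not b(a)'}
```

Running `solve(["p(a)"], set())` follows exactly the trace above: assuming `not q(a)` forces `q(a)` to fail, which forces `b(a)` to fail, which abduces `not b(a)`.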
Semantic properties
• Knowledge assimilation through abduction
– Addition of new information to the KB:
the explanation of the new information is computed abductively
and added to the KB.
KB:
p ← q
r ← q
r ← s
New information: r
Two possible explanations:
∆1 = {q}
∆2 = {s}
Sometimes q is preferred, as it
allows the inference of more
information (here also p).
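The preference for q can be made concrete by comparing the consequences each explanation enables (an invented sketch over the propositional KB above):

```python
kb = [("p", ["q"]), ("r", ["q"]), ("r", ["s"])]

def consequences(delta):
    """Everything derivable from kb plus the assumed facts delta."""
    derived = set(delta)
    changed = True
    while changed:
        changed = False
        for head, body in kb:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

print(consequences({"q"}))  # assuming q also yields p and r
print(consequences({"s"}))  # assuming s yields only r
```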
• Abduction is non-monotonic
– Default reasoning as abduction:
new information can invalidate previous conclusions, when
these are based on unproven assumptions that are contradicted
by the new information.
Making assumptions about the applicability of a default rule is a form of abduction.
Default theory:
F: bird(X) ← penguin(X)
   ¬fly(X) ← penguin(X)
   penguin(tweety)
   bird(john)
D: fly(X) ← bird(X)
Abductive framework:
KB: bird(X) ← penguin(X)
    not_fly(X) ← penguin(X)
    penguin(tweety)
    bird(john)
    fly(X) ← bird(X), birdsFly(X)
IC: ← birdsFly(X), not_fly(X)
∆ = {birdsFly(john)}
• Abduction is non-monotonic
– The abductive interpretation of NAF shows even further the
suitability of abduction for default reasoning.
Default assumptions are expressed as abductive hypotheses on non-abnormality.
Default theory:
F: bird(X) ← penguin(X)
   ¬fly(X) ← penguin(X)
   penguin(tweety)
   bird(john)
D: fly(X) ← bird(X)
Abductive framework:
KB: bird(X) ← penguin(X)
    abnormal(X) ← penguin(X)
    penguin(tweety)
    bird(john)
    fly(X) ← bird(X), not abnormal(X)
IC: ∅
∆ = {not abnormal(john)}
• Similarity between abduction and NAF
– NAF as abduction: negative literals can be seen as abducibles,
and can be assumed to be true provided that, together with the
program, they do not violate the integrity constraints.
Integrity constraints play an important role in capturing the semantics of NAF.
KB:
p(X) ← not q(X)
q(X) ← b(X)
Transformed framework ⟨KB*, A*, IC*⟩:
KB*: p(X) ← q*(X)
     q(X) ← b(X)
IC*: ← q*(X), q(X)
     ← p*(X), p(X)
     ← b*(X), b(X)
A* = {q*(X), p*(X), b*(X), b(X)}
KB ⊢ Q iff Q has an abductive solution in ⟨KB*, A*, IC*⟩.
For G = p(a): ∆ = {q*(a), b*(a)}
Applications of Abduction
• Diagnosis problems
– Medical diagnosis: the background knowledge is the doctor’s
expertise, the goals to explain are the patient’s symptoms, the
abducibles are all possible medical conditions, and the abductive
solutions are assumptions on specific medical conditions that
explain the patient’s symptoms.
– Fault diagnosis: find explanations for a system’s wrong
behaviours. The goals are faulty traces of the system, the
knowledge base is a description of the system’s behaviour, the
integrity constraints are relevant constraints that the system has
to maintain, and the abducibles are the system’s events.
Fault Diagnosis: example
xor(0,1,1).
xor(1,0,1).
xor(1,1,0).
xor(0,0,0).

and(1,0,0).
and(0,1,0).
and(0,0,0).
and(1,1,1).

xorg(N, X, Y, Z) :- xor(X, Y, Z).
xorg(N, 0, 0, 1) :- fault(N, s1).
xorg(N, 0, 1, 0) :- fault(N, s0).
xorg(N, 1, 0, 0) :- fault(N, s0).
xorg(N, 1, 1, 1) :- fault(N, s1).

andg(N, X, Y, Z) :- and(X, Y, Z).
andg(N, 0, 0, 1) :- fault(N, s1).
andg(N, 1, 0, 1) :- fault(N, s1).
andg(N, 0, 1, 1) :- fault(N, s1).
andg(N, 1, 1, 0) :- fault(N, s0).

adder(N, A, B, Sum, Carry) :-
    xorg(N-xor1, A, B, Sum),
    andg(N-and1, A, B, Carry).

A = {fault(N, s0), fault(N, s1)}
G1 = adder(half_add,0,0,1,0)    ∆1 = {fault(half_add, s1)}
G2 = adder(half_add,0,1,0,1)    ∆2 = {fault(half_add, s1), fault(half_add, s0)}
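The Prolog fault model can be mirrored in Python to enumerate diagnoses by brute force. This is a hypothetical rendering (function and gate names are adapted from the slide; a fault value of `None` means the gate behaves normally, `"s0"`/`"s1"` mean stuck-at-0/1):

```python
def xorg(fault, x, y):
    """Faulty-or-normal xor gate."""
    if fault == "s0":
        return 0
    if fault == "s1":
        return 1
    return x ^ y

def andg(fault, x, y):
    """Faulty-or-normal and gate."""
    if fault == "s0":
        return 0
    if fault == "s1":
        return 1
    return x & y

def adder(faults, a, b):
    """Half adder built from one xor and one and gate."""
    return xorg(faults.get("xor1"), a, b), andg(faults.get("and1"), a, b)

def diagnose(a, b, sum_obs, carry_obs):
    """All fault assignments that explain the observed outputs."""
    out = []
    for fx in (None, "s0", "s1"):
        for fa in (None, "s0", "s1"):
            if adder({"xor1": fx, "and1": fa}, a, b) == (sum_obs, carry_obs):
                out.append({"xor1": fx, "and1": fa})
    return out

# G1: inputs 0,0 observed as Sum=1, Carry=0 -> the xor gate is stuck at 1.
print(diagnose(0, 0, 1, 0))
# G2: inputs 0,1 observed as Sum=0, Carry=1 -> xor stuck at 0 and `and` stuck at 1.
print(diagnose(0, 1, 0, 1))
```

As in the slide, the second observation needs both a stuck-at-0 and a stuck-at-1 assumption to be explained.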
Abduction for planning
KB:
have(X) ← buy(X)
have(X) ← hire(X)
have(X) ← borrow(X)
IC:
← hire(X), no_have_money
← hire(car), not own(driving_licence)
A = {buy(_), hire(_), borrow(_)}
G = have(car)
∆1 = {hire(car), own(driving_licence)} (plan 1)
∆2 = {borrow(car)} (plan 2)
∆3 = {buy(car)} (plan 3)
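The three plans can be reproduced by enumerating assumption sets and filtering by the constraints. A brute-force sketch (helper names are invented; we assume money is available, so the first constraint never fires):

```python
from itertools import chain, combinations

actions = ["buy(car)", "hire(car)", "borrow(car)"]
assumables = actions + ["own(driving_licence)"]

def achieves_have_car(delta):
    """have(car) holds if any of the three actions is assumed."""
    return any(a in delta for a in actions)

def satisfies_ic(delta, has_money=True):
    if "hire(car)" in delta and not has_money:          # <- hire(X), no_have_money
        return False
    if "hire(car)" in delta and "own(driving_licence)" not in delta:
        return False                                    # <- hire(car), not own(driving_licence)
    return True

subsets = chain.from_iterable(combinations(assumables, k)
                              for k in range(len(assumables) + 1))
plans = [set(s) for s in subsets
         if achieves_have_car(set(s)) and satisfies_ic(set(s))]
minimal_plans = [p for p in plans if not any(q < p for q in plans)]
print(minimal_plans)
```

The minimal plans are exactly {buy(car)}, {borrow(car)} and {hire(car), own(driving_licence)}: the second constraint forces the licence assumption into any plan that hires.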
Summary
Ø Introduced the notion of abductive reasoning
Ø Desirable properties
o Consistency of assumptions
o Minimality of explanations
Ø Algorithm
Ø Semantic Properties
o Default reasoning
o Abductive interpretation of NAF
Ø Some applications of abduction