
CM3112 Artificial Intelligence Probability theory:
Probabilities in AI

Steven Schockaert
SchockaertS1@cardiff.ac.uk
School of Computer Science & Informatics Cardiff University

Probabilities in AI
Many planning problems involve actions with uncertain effects:
If I leave for the airport 150 minutes before the time of my flight, there is a 70% chance that I will get there in time.
There are many situations in which an agent is uncertain about the state of the world:
If there is a breeze in one cell of the Wumpus world, there is a 40% chance that one of the adjacent cells contains a pit.
What do these numbers mean?

Probabilities in AI
The classical interpretation of probabilities is frequentist:
probability = the limit of the relative frequency of occurrence of a given outcome, as the number of trials tends to infinity
Example: when we observe a sufficiently high number of coin flips, the percentage of times we see heads will tend to 50%. Hence we say that the probability of heads as the outcome of a coin flip is 0.5 (or 50%)
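The coin-flip example can be illustrated with a quick simulation (a minimal sketch; the seed and trial counts are illustrative choices, not from the slides):

```python
import random

# Frequentist view: estimate P(heads) as the relative frequency of heads
# over an increasing number of simulated fair coin flips.
def relative_frequency(trials, seed=0):
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(trials))
    return heads / trials

for n in (100, 10_000, 1_000_000):
    # the estimate tends towards 0.5 as the number of trials grows
    print(n, relative_frequency(n))
```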
We need a different interpretation to make sense of statements like “there is a 50% chance of rain tomorrow”:
‣ They are not about experiments that can be repeated
‣ They are subjective

Probabilities in AI
To express degrees of belief we need a subjective interpretation of probability
The probability p of event E is the value in [0,1] for which:
‣ A transaction where we pay £(1-p) if the event E occurs and receive £p otherwise would be considered fair
‣ A transaction where we receive £(1-p) if the event E occurs and pay £p otherwise would be considered fair
Probabilities = fair betting odds
Bruno de Finetti
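The fairness claim can be checked directly: if our degree of belief in E is p, the bet "receive £(1-p) if E occurs, pay £p otherwise" has zero expected value (a sketch of the arithmetic, not part of the slides):

```python
# If P(E) = p, then with probability p we gain (1 - p), and with
# probability (1 - p) we lose p. The expected gain is therefore
# p * (1 - p) - (1 - p) * p = 0, i.e. the bet is fair.
def expected_gain(p):
    return p * (1 - p) + (1 - p) * (-p)

print(expected_gain(0.7))  # → 0.0
print(expected_gain(0.1))  # → 0.0
```

The same cancellation holds for any p, which is why subjective probabilities can be read as fair betting odds.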

Probabilities in AI
In AI, we are often interested in assigning probabilities to propositional formulas
The sample space in such a case is the set of all possible worlds (i.e. propositional interpretations)
We are interested in reasoning about our degrees of belief in states of the world, and in how such beliefs are affected by available evidence
In some sense, probability theory can thus be seen as a generalisation of propositional logic

Probabilities in AI
Let the sample space S be the set of all propositional interpretations over the set of atoms {a, b}. Let the events A and B be defined as:
A = {ω | ω ⊨ a}
B = {ω | ω ⊨ b}
Then
A ⋂ B = {ω | ω ⊨ a ⋀ b}
A ⋃ B = {ω | ω ⊨ a ⋁ b}
coA = {ω | ω ⊨ ¬a}
We will write expressions like P(a⋀b) and P(a⋁b) as a shorthand for P(A⋂B) and P(A⋃B)
Two useful identities (with B_1, …, B_k a partition of the sample space):
P(A) = Σ_{i=1..k} P(A | B_i) · P(B_i)    (law of total probability)
P(A_1 ⋀ … ⋀ A_k) = Π_{i=1..k} P(A_i | A_1 ⋀ … ⋀ A_{i-1})    (chain rule)
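The correspondence between formulas and events can be sketched by enumerating the possible worlds over {a, b} (a minimal sketch; the uniform distribution over worlds is an illustrative assumption, not from the slides):

```python
from itertools import product

# Each world is a pair (truth value of a, truth value of b);
# four worlds in total, here given a uniform probability of 0.25.
worlds = list(product([False, True], repeat=2))
prob = {w: 0.25 for w in worlds}

A = frozenset(w for w in worlds if w[0])  # worlds satisfying a
B = frozenset(w for w in worlds if w[1])  # worlds satisfying b

def P(event):
    # the probability of an event is the total mass of its worlds
    return sum(prob[w] for w in event)

print(P(A & B))  # P(a ⋀ b) = P(A ⋂ B) → 0.25
print(P(A | B))  # P(a ⋁ b) = P(A ⋃ B) → 0.75
```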

Example (Bayes' rule):
P(stiff-neck | meningitis) = 0.8
P(stiff-neck) = 0.1
P(meningitis) = 0.0001
P(meningitis | stiff-neck) = P(stiff-neck | meningitis) · P(meningitis) / P(stiff-neck)
= (0.8 · 0.0001) / 0.1 = 0.0008

Random variables
The kind of atomic propositions we are interested in often take the form X = a, where X is a variable that can take values from a fixed domain {a1, a2, …, an}
Such variables are called random variables, and probability distributions can express our beliefs about how likely each of their possible values is
Example:
P(Functionality = excellent ⋀ Design = good)
Remark: often we write a conjunction ⋀ as a comma, e.g.
P(Marks = high | Functionality = excellent, Design = good)
= P(Marks = high | Functionality = excellent ⋀ Design = good)
P(rain, sunny) = P(rain ⋀ sunny)
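The meningitis numbers from the slides can be plugged into Bayes' rule directly (a sketch of the calculation only):

```python
# Bayes' rule: P(m | s) = P(s | m) * P(m) / P(s)
p_s_given_m = 0.8    # P(stiff-neck | meningitis)
p_m = 0.0001         # P(meningitis)
p_s = 0.1            # P(stiff-neck)

p_m_given_s = p_s_given_m * p_m / p_s
print(round(p_m_given_s, 6))  # → 0.0008
```

Even though the symptom is a strong indicator (0.8), the posterior stays tiny because the prior P(meningitis) is so small.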