
COMP9414 Review 1

Lectures

• Artificial Intelligence and Agents

• Problem Solving and Search

• Constraint Satisfaction Problems

• Logic and Knowledge Representation

• Reasoning with Uncertainty

• Machine Learning

• Natural Language Processing

• Knowledge Based Systems

• Neural Networks and Reinforcement Learning


COMP9414: Artificial Intelligence

Lecture 10: Review

Wayne Wobcke

e-mail: w. .au


COMP9414 Review 3

Environment Types

Fully Observable vs Partially Observable
Agent's sensors give access to complete state of environment (no internal state required)

Deterministic vs Stochastic
Next state of environment determined only by current state and agent's choice of action

Episodic vs Sequential
Agent's experience divided into "episodes"; agent doesn't need to think ahead in episodic environment

Static vs Dynamic
Environment changes while agent deliberates

Discrete vs Continuous
Limited number of distinct, clearly defined percepts and actions


COMP9414 Review 2

What is an Agent?

An entity

• situated: operates in a dynamically changing environment

• reactive: responds to changes in a timely manner

• autonomous: can control its own behaviour

• proactive: exhibits goal-oriented behaviour

• communicating: coordinates with other agents??

Examples: humans, dogs, …, insects, sea creatures, …, thermostats?

Where do current robots sit on the scale?


COMP9414 Review 5

Example Agents

Agent Type: Medical diagnosis system
  Percepts: symptoms, findings, patient responses
  Actions: questions, tests, treatments
  Goals: healthy patient, minimise costs
  Environment: patient, hospital

Agent Type: Satellite image system
  Percepts: pixels of varying intensity, colour
  Actions: print categorisation of scene
  Goals: correct categorisation
  Environment: images from orbiting satellite

Agent Type: Automated taxi driver
  Percepts: cameras, speedometer, GPS, sonar, microphone
  Actions: steer, accelerate, brake, talk to passenger
  Goals: safe, fast, legal, comfortable trip, maximise profits
  Environment: roads, other traffic, pedestrians, customers

Agent Type: Robocup robot
  Percepts: camera images, laser range finder readings, sonar readings
  Actions: move motors, "kick" ball
  Goals: score goals
  Environment: playing field with ball and other robots

Based on Russell and Norvig (2010) Figure 2.5.


COMP9414 Review 4

Specifying Agents

• percepts: inputs to the agent via sensors

• actions: outputs available to the agent via effectors

• goals: objectives or performance measure of the agent

• environment: world in which the agent operates

Most generally, a function from percept sequences to actions

Ideally rational agent does whatever action is expected to maximize some performance measure – the agent may not know the performance measure (Russell and Norvig 2010)

Resource bounded agent must make "good enough" decisions based on its perceptual, computational and memory limitations (design tradeoffs)


COMP9414 Review 7

Example Problem – Romania Map

[Figure: map of Romania showing cities, road connections labelled with distances, and a table of straight-line distances to Bucharest (Russell and Norvig 2010).]


COMP9414 Review 6

State Space Search Problems

• State space — set of all states reachable from initial state(s) by any action sequence

• Initial state(s) — element(s) of the state space

• Transitions

◮ Operators — set of possible actions at agent's disposal; describe state reached after performing action in current state, or

◮ Successor function — s(x) = set of states reachable from state x by performing a single action

• Goal state(s) — element(s) of the state space

• Path cost — cost of a sequence of transitions used to evaluate solutions (applies to optimization problems)


COMP9414 Review 9

A∗ Search

• Idea: Use both cost of path generated and estimate to goal to order nodes on the frontier

• g(n) = cost of path from start to n; h(n) = estimate from n to goal

• Order priority queue using function f(n) = g(n) + h(n)

• f(n) is the estimated cost of the cheapest solution extending this path

• Expand node from frontier with smallest f-value

• Essentially combines uniform-cost search and greedy search
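A minimal sketch of this procedure in Python (illustrative, not from the lecture; the graph representation, neighbours and h are assumptions):

import heapq
from itertools import count

def astar(start, goal, neighbours, h):
    # neighbours(n) yields (successor, step_cost); h(n) estimates cost from n to goal
    tie = count()                          # tie-breaker so the heap never compares nodes
    frontier = [(h(start), next(tie), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, _, g, node, path = heapq.heappop(frontier)   # smallest f = g + h first
        if node == goal:
            return path, g
        for succ, cost in neighbours(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), next(tie), g2, succ, path + [succ]))
    return None

On the Romania map, neighbours would return the road connections with their distances, and h the straight-line distance to Bucharest.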


COMP9414 Review 8

Summary — Blind Search

Criterion    Breadth-First  Uniform-Cost  Depth-First  Depth-Limited  Iterative-Deepening  Bidirectional
Time         b^d            b^d           b^m          b^l            b^d                  b^(d/2)
Space        b^d            b^d           b·m          b·l            b·d                  b^(d/2)
Optimal      Yes            Yes           No           No             Yes                  Yes
Complete     Yes            Yes           No           Yes, if l ≥ d  Yes                  Yes

b — branching factor

d — depth of shallowest solution

m — maximum depth of tree

l — depth limit


COMP9414 Review 11

Forward Checking

Idea: Keep track of remaining legal values for unassigned variables

Terminate search when any variable has no legal values

[Figure: forward checking on the Australia map-colouring problem, showing the remaining legal colours for WA, NT, Q, NSW, V, SA, T after each assignment.]


COMP9414 Review 10

Constraint Satisfaction Problems

• Constraint Satisfaction Problems are defined by a set of variables Xi, each with a domain Di of possible values, and a set of constraints C

• Aim is to find an assignment to each of the variables Xi (a value from the domain Di) such that all of the constraints C are satisfied


COMP9414 Review 13

Hill Climbing by Min-Conflicts

[Figure: hill climbing by min-conflicts on a Queens board; squares are labelled with the number of constraint violations each candidate queen position would incur.]

• Variable selection: randomly select any conflicted variable

• Value selection by min-conflicts heuristic

◮ Choose value that violates fewest constraints

◮ Can (often) solve n-Queens for n ≈ 10,000,000
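A minimal sketch of min-conflicts for n-Queens in Python (illustrative, not the lecture's code; one queen per column, so only rows and diagonals need checking):

import random

def min_conflicts_nqueens(n, max_steps=100000):
    queens = [random.randrange(n) for _ in range(n)]   # queens[c] = row of queen in column c

    def conflicts(col, row):
        # number of other queens attacking square (col, row)
        return sum(1 for c in range(n)
                   if c != col and (queens[c] == row or
                                    abs(queens[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, queens[c]) > 0]
        if not conflicted:
            return queens                              # no violated constraints: solved
        col = random.choice(conflicted)                # randomly select a conflicted variable
        # min-conflicts value selection: move to a row violating fewest constraints
        best = min(conflicts(col, r) for r in range(n))
        queens[col] = random.choice([r for r in range(n) if conflicts(col, r) == best])
    return None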


COMP9414 Review 12

Arc Consistency

Simplest form of constraint propagation is arc consistency

Arc (constraint) X → Y is arc consistent if for every value x in dom(X) there is some allowed y in dom(Y)

[Figure: arc consistency on the Australia map-colouring problem with variables WA, NT, Q, NSW, V, SA, T.]

Make X → Y arc consistent by removing from dom(X) every x for which there is no such allowed y
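A minimal sketch of arc consistency (AC-3 style) in Python (illustrative; constraints is assumed to map each directed arc (X, Y) to an allowed(x, y) predicate):

from collections import deque

def revise(domains, constraints, X, Y):
    # make arc X -> Y consistent: remove x from dom(X) with no allowed y in dom(Y)
    allowed = constraints[(X, Y)]
    removed = False
    for x in list(domains[X]):
        if not any(allowed(x, y) for y in domains[Y]):
            domains[X].remove(x)
            removed = True
    return removed

def ac3(domains, constraints):
    queue = deque(constraints)                   # all arcs (X, Y)
    while queue:
        X, Y = queue.popleft()
        if revise(domains, constraints, X, Y):
            if not domains[X]:
                return False                     # a domain was wiped out: inconsistent
            # dom(X) shrank, so re-check every arc pointing into X
            queue.extend((Z, W) for (Z, W) in constraints if W == X and Z != Y)
    return True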


COMP9414 Review 15

Truth Table Semantics

• The semantics of the connectives can be given by truth tables

P      Q      ¬P     P∧Q    P∨Q    P→Q    P↔Q
True   True   False  True   True   True   True
True   False  False  False  True   False  False
False  True   True   False  True   True   False
False  False  True   False  False  True   True

• One row for each possible assignment of True/False to variables

• Important: P and Q are any sentences, including complex sentences
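A minimal sketch of truth-table enumeration in Python (illustrative; a sentence is represented as a function of an assignment):

from itertools import product

def truth_table(sentence, variables):
    # one row per assignment of True/False to the variables
    for values in product([True, False], repeat=len(variables)):
        row = dict(zip(variables, values))
        print(row, '->', sentence(row))

# P -> Q is equivalent to (not P) or Q
truth_table(lambda v: (not v['P']) or v['Q'], ['P', 'Q'])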


COMP9414 Review 14

Propositional Logic

• Use letters to stand for "basic" propositions; combine them into more complex sentences using operators for not, and, or, implies, iff

• Propositional connectives:

¬ negation       ¬P    "not P"
∧ conjunction    P∧Q   "P and Q"
∨ disjunction    P∨Q   "P or Q"
→ implication    P→Q   "If P then Q"
↔ bi-implication P↔Q   "P if and only if Q"


COMP9414 Review 17

Conversion to Conjunctive Normal Form

• Eliminate ↔ by rewriting P↔Q as (P→Q) ∧ (Q→P)

• Eliminate → by rewriting P→Q as ¬P∨Q

• Use De Morgan's laws to push ¬ inwards (repeatedly)

◮ Rewrite ¬(P∧Q) as ¬P∨¬Q

◮ Rewrite ¬(P∨Q) as ¬P∧¬Q

• Eliminate double negations: rewrite ¬¬P as P

• Use the distributive laws to get CNF [or DNF] – if necessary

◮ Rewrite (P∧Q)∨R as (P∨R)∧(Q∨R) [for CNF]

◮ Rewrite (P∨Q)∧R as (P∧R)∨(Q∧R) [for DNF]
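A worked example of these steps (not on the slide): converting P ↔ (Q ∨ R) to CNF.

1. Eliminate ↔: (P → (Q ∨ R)) ∧ ((Q ∨ R) → P)
2. Eliminate →: (¬P ∨ Q ∨ R) ∧ (¬(Q ∨ R) ∨ P)
3. De Morgan: (¬P ∨ Q ∨ R) ∧ ((¬Q ∧ ¬R) ∨ P)
4. Distribute ∨ over ∧: (¬P ∨ Q ∨ R) ∧ (¬Q ∨ P) ∧ (¬R ∨ P)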


COMP9414 Review 16

Definitions

• A sentence is valid if it is True under all possible assignments of True/False to its variables (e.g. P∨¬P)

• A tautology is a valid sentence

• Two sentences are equivalent if they have the same truth table, e.g. P∧Q and Q∧P

◮ So P is equivalent to Q if and only if P↔Q is valid

• A sentence is satisfiable if there is some assignment of True/False to its variables for which the sentence is True

• A sentence is unsatisfiable if it is not satisfiable (e.g. P∧¬P)

◮ Sentence is False for all assignments of True/False to its variables

◮ So P is a tautology if and only if ¬P is unsatisfiable


COMP9414 Review 19

Applying Resolution Refutation

• Negate query to be proven (resolution is a refutation system)

• Convert knowledge base and negated query into CNF

• Repeatedly apply resolution until either the empty clause (contradiction) is derived or no more clauses can be derived

• If the empty clause is derived, answer 'yes' (query follows from knowledge base), otherwise answer 'no' (query does not follow from knowledge base)
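A small worked refutation (not on the slide): KB = {P → Q, P}, query Q. In CNF the clauses are ¬P ∨ Q, P, and the negated query ¬Q. Resolving ¬P ∨ Q with ¬Q gives ¬P; resolving ¬P with P gives the empty clause □, so the answer is 'yes'.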


COMP9414 Review 18

Resolution Rule of Inference

A1 ∨ ··· ∨ Am ∨ B        ¬B ∨ C1 ∨ ··· ∨ Cn
-------------------------------------------
A1 ∨ ··· ∨ Am ∨ C1 ∨ ··· ∨ Cn

where B is a propositional variable and Ai and Cj are literals

• B and ¬B are complementary literals

• A1 ∨ ··· ∨ Am ∨ C1 ∨ ··· ∨ Cn is the resolvent of the two clauses

• Special case: If no Ai and Cj, resolvent is empty clause, denoted □


COMP9414 Review 21

Conditional Probability by Enumeration

              toothache            ¬toothache
              catch    ¬catch      catch    ¬catch
cavity        0.108    0.012       0.072    0.008
¬cavity       0.016    0.064       0.144    0.576

P(¬cavity|toothache) = P(¬cavity ∧ toothache) / P(toothache)
                     = (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064)
                     = 0.4


COMP9414 Review 20

Random Variables

• Propositions are random variables that can take on several values

P(Weather = Sunny) = 0.8
P(Weather = Rain) = 0.1
P(Weather = Cloudy) = 0.09
P(Weather = Snow) = 0.01

• Every random variable X has a domain of possible values 〈x1, x2, ···, xn〉

• Probabilities of all possible values P(Weather) = 〈0.8, 0.1, 0.09, 0.01〉 is a probability distribution

• P(Weather, Appendicitis) is a combination of random variables represented by cross product (can also use logical connectives P(A∧B) to represent compound events)


COMP9414 Review 23

Bayesian Networks

• Example (Pearl, 1988)

P(Burglary) = 0.001        P(Earthquake) = 0.002

Alarm:
  Burglary  Earthquake  P(Alarm)
  True      True        0.95
  True      False       0.94
  False     True        0.29
  False     False       0.001

JohnCalls:                 MaryCalls:
  Alarm  P(JohnCalls)        Alarm  P(MaryCalls)
  True   0.90                True   0.70
  False  0.05                False  0.01

• Probabilities summarize potentially infinite set of possible circumstances


COMP9414 Review 22

Bayes’ Rule

P(B|A) = P(A|B)P(B) / P(A)

• AI systems abandon joint probabilities and work directly with conditional probabilities using Bayes' Rule

• Deriving Bayes' Rule:

P(A∧B) = P(A|B)P(B) (Definition)
P(B∧A) = P(B|A)P(A) (Definition)
So P(A|B)P(B) = P(B|A)P(A) since P(A∧B) = P(B∧A)
Hence P(B|A) = P(A|B)P(B) / P(A) if P(A) ≠ 0

• Note: If P(A) = 0, P(B|A) is undefined
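A small illustrative calculation (numbers invented for the example): if P(symptom|disease) = 0.9, P(disease) = 0.01 and P(symptom) = 0.1, then P(disease|symptom) = 0.9 × 0.01 / 0.1 = 0.09.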


COMP9414 Review 25

Bigram Model

Maximize P(w1, ···, wn | t1, ···, tn) · P(t1, ···, tn)

• Apply independence assumptions (Markov assumptions)

◮ P(w1, ···, wn | t1, ···, tn) = Π P(wi|ti)

◮ Observations (words) depend only on states (tags)

◮ P(t1, ···, tn) = P(tn|tn−1) · ··· · P(t1|φ), where φ = start

◮ Bigram model: state (tag) depends only on previous state (tag)

• Estimate probabilities

◮ P(ti|tj) = #(tj, ti occurs) / #(tj starts a bigram)

◮ Choose tag sequence that maximizes Π P(wi|ti) · P(ti|ti−1)

◮ Parts of speech generated by finite state machine


COMP9414 Review 24

Example – Causal Inference

• P(JohnCalls|Burglary)

• P(J|B) = P(J|A∧B)·P(A|B) + P(J|¬A∧B)·P(¬A|B)
         = P(J|A)·P(A|B) + P(J|¬A)·P(¬A|B)
         = P(J|A)·P(A|B) + P(J|¬A)·(1 − P(A|B))

• Now P(A|B) = P(A|B∧E)·P(E|B) + P(A|B∧¬E)·P(¬E|B)
             = P(A|B∧E)·P(E) + P(A|B∧¬E)·P(¬E)
             = 0.95 × 0.002 + 0.94 × 0.998 = 0.94002

• Therefore P(J|B) = 0.90 × 0.94002 + 0.05 × 0.05998 = 0.849017

• Fact 3: P(X|Z) = P(X|Y∧Z)·P(Y|Z) + P(X|¬Y∧Z)·P(¬Y|Z), since X∧Z ⇔ (X∧Y∧Z) ∨ (X∧¬Y∧Z) (conditional version of Fact 2)


COMP9414 Review 27

Viterbi Algorithm

1. Sweep forward (one word at a time) saving only the most likely sequence (and its probability) for each tag ti of wi

2. Select highest probability final state

3. Follow chain backwards to extract tag sequence
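A minimal sketch of the Viterbi sweep in Python (illustrative; p_word[t][w] = P(w|t) and p_trans[prev][t] = P(t|prev) are assumed lookup tables, and whole sequences are stored instead of backpointers, which plays the role of step 3):

def viterbi(words, tags, p_word, p_trans, start='<s>'):
    # best[t] = (probability, tag sequence) of the most likely path ending in tag t
    best = {t: (p_trans[start].get(t, 0.0) * p_word[t].get(words[0], 0.0), [t])
            for t in tags}
    for w in words[1:]:                     # forward sweep, one word at a time
        new = {}
        for t in tags:
            prob, seq = max((best[s][0] * p_trans[s].get(t, 0.0), best[s][1])
                            for s in tags)
            new[t] = (prob * p_word[t].get(w, 0.0), seq + [t])
        best = new
    prob, seq = max(best.values())          # highest-probability final state
    return seq, prob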


COMP9414 Review 26

Hidden Markov Model for POS Tagging


COMP9414 Review 29

Restaurant Training Data

Alt Bar F/S Hun Pat Price Rain Res Type Est Wait?

X1 T F F T Some $$$ F T French 0–10 T

X2 T F F T Full $ F F Thai 30–60 F

X3 F T F F Some $ F F Burger 0–10 T

X4 T F T T Full $ F F Thai 10–30 T

X5 T F T F Full $$$ F T French >60 F

X6 F T F T Some $$ T T Italian 0–10 T

X7 F T F F None $ T F Burger 0–10 F

X8 F F F T Some $$ T T Thai 0–10 T

X9 F T T F Full $ T F Burger >60 F

X10 T T T T Full $$$ F T Italian 10–30 F

X11 F F F F None $ F F Thai 0–10 F

X12 T T T T Full $ F F Burger 30–60 T


COMP9414 Review 28

Supervised Learning

• Given a training set and a test set, each consisting of a set of items; for each item in the training set, a set of features and a target output

• Learner must learn a model that can predict the target output for any given item (characterized by its set of features)

• Learner is given the input features and target output for each item in the training set

◮ Items may be presented all at once (batch) or in sequence (online)

◮ Items may be presented at random or in time order (stream)

◮ Learner cannot use the test set at all in defining the model

• Model is evaluated by its performance on predicting the output for each item in the test set


COMP9414 Review 31

Information Gain

[Figure: the two candidate splits, on Patrons (None/Some/Full) and on Type (French/Italian/Thai/Burger).]

For Patrons, Entropy = (1/6)(0) + (1/3)(0) + (1/2)[−(1/3)log(1/3) − (2/3)log(2/3)]
                     = 0 + 0 + (1/2)[(1/3)(1.585) + (2/3)(0.585)]
                     = 0.459

For Type, Entropy = (1/6)(1) + (1/6)(1) + (1/3)(1) + (1/3)(1) = 1
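These numbers can be checked with a short Python sketch (illustrative; each split is given as (positive, negative) counts per child):

from math import log2

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

def remainder(splits):
    # weighted average entropy of the children of a split
    total = sum(p + n for p, n in splits)
    return sum((p + n) / total * entropy([p, n]) for p, n in splits)

# Patrons: None (0+, 2-), Some (4+, 0-), Full (2+, 4-)
print(remainder([(0, 2), (4, 0), (2, 4)]))                  # ~0.459
# Type: French (1+, 1-), Italian (1+, 1-), Thai (2+, 2-), Burger (2+, 2-)
print(remainder([(1, 1), (1, 1), (2, 2), (2, 2)]))          # 1.0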


COMP9414 Review 30

Choosing an Attribute to Split

[Figure: candidate splits on Patrons (None/Some/Full) and on Type (French/Italian/Thai/Burger).]

Patrons is a "more informative" attribute than Type, because it splits the examples more nearly into sets that are "all positive" or "all negative"

This notion of "informativeness" can be quantified using the mathematical concept of "entropy"

A parsimonious tree can be built by minimizing the entropy at each step


COMP9414 Review 33

Laplace Error and Pruning

Following Ockham's Razor, prune branches that do not provide much benefit in classifying the items (aids generalization, avoids overfitting)

For a leaf node, all items will be assigned the majority class at that node. Estimate error rate on the (unseen) test items using the Laplace error

E = 1 − (n+1)/(N+k)

N = total number of (training) items at the node
n = number of (training) items in the majority class
k = number of classes

If the average Laplace error of the children exceeds that of the parent node, prune off the children
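A worked case (not on the slide): a leaf holding N = 6 training items, n = 4 of them in the majority class, with k = 2 classes, has Laplace error E = 1 − (4 + 1)/(6 + 2) = 0.375.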


COMP9414 Review 32

Induced Decision Tree

Patrons?
  None: F
  Some: T
  Full: Hungry?
    F: F
    T: Type?
      French: T
      Italian: F
      Thai: Fri/Sat?
        F: F
        T: T
      Burger: T


COMP9414 Review 35

Bernoulli Model

Maximize P(x1, ···, xn | c) · P(c)

• Features are presence or absence of word wi in document

• Apply independence assumptions

◮ P(x1, ···, xn | c) = P(x1|c) · ··· · P(xn|c)

◮ Probability of word w (not) in class c independent of context

• Estimate probabilities

◮ P(w|c) = #(w in document in class c) / #(documents in class c)

◮ P(¬w|c) = 1 − P(w|c)

◮ P(c) = #(documents in class c) / #documents


COMP9414 Review 34

Text Classification

• Input: A document (e-mail, news article, review, tweet)

• Output: One class drawn from a fixed set of classes

◮ So text classification is a multi-class classification problem

◮ ... and sometimes a multi-label classification problem

• Learning Problem

◮ Input: Training set of labelled documents {(d1, c1), ···}

◮ Output: Learned classifier that maps d to predicted class c


COMP9414 Review 37

Bag of Words Model

"I love this movie! It's sweet, but with satirical humor. The dialogue is great and the adventure scenes are fun... It manages to be whimsical and romantic while laughing at the conventions of the fairy tale genre. I would recommend it to just about anyone. I've seen it several times, and I'm always happy to see it again whenever I have a friend who hasn't seen it yet!"

Word counts: it 6, I 5, the 4, to 3, and 3, seen 2, yet 1, would 1, whimsical 1, times 1, sweet 1, satirical 1, adventure 1, genre 1, fairy 1, humor 1, have 1, great 1


COMP9414 Review 36

Naive Bayes Classification

w1 w2 w3 w4 Class

1 0 0 1 1

0 0 0 1 0

1 1 0 1 0

1 0 1 1 1

0 1 1 0 0

1 0 0 0 0

1 0 1 0 1

0 1 0 0 1

0 1 0 1 0

1 1 1 0 0

Class = 1 Class = 0

P(Class) 0.40 0.60

P(w1|Class) 0.75 0.50

P(w2|Class) 0.25 0.67

P(w3|Class) 0.50 0.33

P(w4|Class) 0.50 0.50

To classify document with w2, w3, w4:

• P(Class = 1 | ¬w1, w2, w3, w4) ≈ ((1 − 0.75) × 0.25 × 0.5 × 0.5) × 0.4 = 0.00625

• P(Class = 0 | ¬w1, w2, w3, w4) ≈ ((1 − 0.5) × 0.5 × 0.67 × 0.33) × 0.6 = 0.03333


COMP9414 Review 39

Natural Languages – Ambiguity

• Natural languages exhibit ambiguity

"The fisherman went to the bank" (lexical)
"The boy saw a girl with a telescope" (structural)
"Every student took an exam" (semantic)
"The table won't fit through the doorway because it is too [wide/narrow]" (pragmatic)

• Ambiguity makes it difficult to interpret meaning of phrases/sentences

◮ But also makes inference harder to define and compute

• Resolve ambiguity by mapping to unambiguous representation


COMP9414 Review 38

MNB Example

Words Class

d1 Chinese Beijing Chinese c

d2 Chinese Chinese Shanghai c

d3 Chinese Macao c

d4 Tokyo Japan Chinese j

d5 Chinese Chinese Chinese Tokyo Japan ?

P(Chinese|c) = (5+1)/(8+6) = 3/7
P(Tokyo|c) = (0+1)/(8+6) = 1/14
P(Japan|c) = (0+1)/(8+6) = 1/14
P(Chinese|j) = (1+1)/(3+6) = 2/9
P(Tokyo|j) = (1+1)/(3+6) = 2/9
P(Japan|j) = (1+1)/(3+6) = 2/9

To classify document d5:

• P(c|d5) ∝ [(3/7)³ · 1/14 · 1/14] · 3/4 ≈ 0.0003

• P(j|d5) ∝ [(2/9)³ · 2/9 · 2/9] · 1/4 ≈ 0.0001

• Choose Class c
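A minimal sketch of multinomial Naive Bayes with add-one smoothing that reproduces these numbers (illustrative; the function names are not from the lecture):

from collections import Counter
from math import prod

def train_mnb(docs):
    # docs: list of (word list, class); returns smoothed estimates
    class_counts = Counter(c for _, c in docs)
    word_counts = Counter((w, c) for ws, c in docs for w in ws)
    vocab = {w for ws, _ in docs for w in ws}
    totals = {c: sum(word_counts[(w, c)] for w in vocab) for c in class_counts}
    p_word = lambda w, c: (word_counts[(w, c)] + 1) / (totals[c] + len(vocab))
    p_class = lambda c: class_counts[c] / sum(class_counts.values())
    return class_counts, p_word, p_class

def classify(doc, classes, p_word, p_class):
    return max(classes, key=lambda c: p_class(c) * prod(p_word(w, c) for w in doc))

docs = [("Chinese Beijing Chinese".split(), 'c'),
        ("Chinese Chinese Shanghai".split(), 'c'),
        ("Chinese Macao".split(), 'c'),
        ("Tokyo Japan Chinese".split(), 'j')]
classes, p_word, p_class = train_mnb(docs)
print(classify("Chinese Chinese Chinese Tokyo Japan".split(), classes, p_word, p_class))  # 'c'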


COMP9414 Review 41

Syntactic Structure

Syntactically ambiguous = more than one parse tree


COMP9414 Review 40

Typical (Small) Grammar

S → NP VP
NP → [Det] Adj∗ N [AP | PP | Rel Clause]∗
VP → V [NP] [NP] PP∗
AP → Adj PP
PP → P NP
Det → a | an | the | ...
N → John | park | telescope | ...
V → saw | likes | believes | ...
Adj → hot | hotter | ...
P → in | ...

Special notation: ∗ is "0 or more"; [...] is "optional"


COMP9414 Review 43

Example Chart


COMP9414 Review 42

Chart Parsing

• Use a chart to record parsed fragments and hypotheses

• Hypothesis N → α • β, where N → αβ is a grammar rule, means "trying to parse N as αβ and have so far parsed α"

• One node in chart for each word gap, start and end

• One arc in chart for each hypothesis

• At each step, apply fundamental rule

◮ If chart has N → α•Bβ from n1 to n2 and B → γ• from n2 to n3, add N → αB•β from n1 to n3

• Accept sentence when S → α• is added from start to end

• Can produce any sort of derivation


COMP9414 Review 45

Converting English into First-Order Logic

• Everyone likes lying on the beach – ∀x likes_lying_on_beach(x)

• Someone likes Fido – ∃x likes(x, Fido)

• No one likes Fido – ¬∃x likes(x, Fido) (or ∀x ¬likes(x, Fido))

• Fido doesn't like everyone – ¬∀x likes(Fido, x)

• All cats are mammals – ∀x (cat(x) → mammal(x))

• Some mammals are carnivorous – ∃x (mammal(x) ∧ carnivorous(x))

• Note: ∀x A(x) ⇔ ¬∃x ¬A(x), ∃x A(x) ⇔ ¬∀x ¬A(x)


COMP9414 Review 44

First-Order Logic

• Terms: constants, variables, functions applied to terms (refer to objects)

◮ e.g. a, f(a), mother_of(Mary), ...

• Atomic formulae: predicates applied to tuples of terms

◮ e.g. likes(Mary, mother_of(Mary)), likes(x, a)

• Quantified formulae:

◮ e.g. ∀x likes(x, a), ∃x likes(x, mother_of(y))

◮ here the second occurrences of x are bound by the quantifier (∀ in the first case, ∃ in the second) and y in the second formula is free


COMP9414 Review 47

First-Order Resolution

A1 ∨ ··· ∨ Am ∨ B        ¬B′ ∨ C1 ∨ ··· ∨ Cn
--------------------------------------------
(A1 ∨ ··· ∨ Am ∨ C1 ∨ ··· ∨ Cn)θ

where B, B′ are positive literals, Ai, Cj are literals, and θ is an mgu of B and B′

• B and ¬B′ are complementary literals

• (A1 ∨ ··· ∨ Am ∨ C1 ∨ ··· ∨ Cn)θ is the resolvent of the two clauses

• Special case: If no Ai and Cj, resolvent is empty clause, denoted □


COMP9414 Review 46

Defining Semantic Properties

Brothers are siblings
∀x∀y (brother(x, y) → sibling(x, y))

"Sibling" is symmetric
∀x∀y (sibling(x, y) ↔ sibling(y, x))

One's mother is one's female parent
∀x∀y (mother(x, y) ↔ (female(x) ∧ parent(x, y)))

A first cousin is a child of a parent's sibling
∀x∀y (firstcousin(x, y) ↔ ∃p∃s (parent(p, x) ∧ sibling(p, s) ∧ parent(s, y)))


COMP9414 Review 49

Examples

• {P(x, a), P(b, c)} is not unifiable

• {P(f(x), y), P(a, w)} is not unifiable

• {P(x, c), P(b, c)} is unifiable by {x/b}

• {P(f(x), y), P(f(a), w)} is unifiable by σ = {x/a, y/w}, τ = {x/a, y/a, w/a}, υ = {x/a, y/b, w/b}

Note that σ is an mgu and τ = σθ where θ = ...?

• {P(x), P(f(x))} is not unifiable (cf. occurs check!)


COMP9414 Review 48

Unification

• A unifier of two atomic formulae is a substitution of terms for variables that makes them identical

◮ Each variable has at most one associated term

◮ Substitutions are applied simultaneously

• Unifier of P(x, f(a), z) and P(z, z, u): {x/f(a), z/f(a), u/f(a)}

• Substitution σ1 is a more general unifier than a substitution σ2 if for some substitution τ, σ2 = σ1τ (i.e. σ1 followed by τ)

• Theorem. If two atomic formulae are unifiable, they have a most general unifier (mgu).
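A minimal sketch of unification with an occurs check in Python (illustrative; variables are strings starting with '?', compound terms are tuples (functor, arg1, ...), and the returned substitution may bind a variable to another variable, so bindings are read off by following the chain):

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, s):
    while is_var(t) and t in s:      # follow variable bindings
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if v == t:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def unify(a, b, s=None):
    # returns an mgu extending s, or None if the terms are not unifiable
    s = {} if s is None else s
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return None if occurs(a, b, s) else {**s, a: b}
    if is_var(b):
        return unify(b, a, s)
    if isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None                      # e.g. two different constants or functors

print(unify(('P', '?x', ('f', 'a'), '?z'), ('P', '?z', '?z', '?u')))
print(unify('?x', ('f', '?x')))      # None: fails the occurs check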


COMP9414 Review 51

Perceptron Learning Rule

Adjust the weights as each input is presented

Recall s = w1x1 + w2x2 + w0

if g(s) = 0 but should be 1:
    wk ← wk + ηxk
    w0 ← w0 + η
so s ← s + η(1 + Σk xk²)

if g(s) = 1 but should be 0:
    wk ← wk − ηxk
    w0 ← w0 − η
so s ← s − η(1 + Σk xk²)

otherwise weights are unchanged (η > 0 is called the learning rate)

Theorem: This will eventually learn to classify the data correctly, as long as they are linearly separable
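A minimal sketch of this rule in Python (illustrative; a step transfer function g and 0/1 targets are assumed):

def perceptron_train(data, eta=0.1, epochs=1000):
    # data: list of (inputs, target) with target 0 or 1; w[0] is the bias weight w0
    n = len(data[0][0])
    w = [0.0] * (n + 1)
    g = lambda s: 1 if s >= 0 else 0
    for _ in range(epochs):
        converged = True
        for x, target in data:
            s = w[0] + sum(wk * xk for wk, xk in zip(w[1:], x))
            error = target - g(s)              # +1 if output 0 but should be 1, -1 if vice versa
            if error:
                converged = False
                w[0] += eta * error            # w0 <- w0 +/- eta
                for k in range(n):
                    w[k + 1] += eta * error * x[k]   # wk <- wk +/- eta*xk
        if converged:
            break                              # all training items classified correctly
    return w

# AND is linearly separable, so this converges:
print(perceptron_train([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]))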


COMP9414 Review 50

McCulloch & Pitts Model of a Single Neuron

[Figure: a single neuron; inputs x1 and x2 feed through weights w1 and w2 (plus a bias input 1 with weight w0 = −θ) into a summation unit Σ producing s, followed by transfer function g producing output g(s).]

s = w1x1 + w2x2 − θ
  = w1x1 + w2x2 + w0

x1, x2 are inputs
w1, w2 are synaptic weights
θ is a threshold
w0 is a bias weight
g is transfer function


COMP9414 Review 53

Reinforcement Learning Framework

• Agent interacts with its environment

• There is a set S of states and a set A of actions

• At each time step t, the agent is in some state st and must choose an action at, whereupon it goes into state st+1 = δ(st, at) and receives reward r(st, at)

• In general, r() and δ() can be multi-valued, with a random element

• The aim is to find an optimal policy π : S → A which maximizes the cumulative reward


COMP9414 Review 52

Multi-Layer Neural Networks

[Figure: a two-layer network computing XOR: a hidden AND unit (weights +1, +1, bias −1.5) and a hidden NOR unit (weights −1, −1, bias +0.5) feed a NOR output unit (weights −1, −1, bias +0.5).]

Question: Given an explicit logical function, we can design a multi-layer neural network by hand to compute that function – but if we are just given a set of training data, can we train a multi-layer network to fit this data?
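A quick check of the hand-designed network above in Python (a minimal sketch, assuming step-threshold units as in the McCulloch & Pitts model):

def step(s):
    return 1 if s >= 0 else 0

def xor_net(x1, x2):
    and_unit = step(x1 + x2 - 1.5)               # AND of the inputs
    nor_unit = step(-x1 - x2 + 0.5)              # NOR of the inputs
    return step(-and_unit - nor_unit + 0.5)      # NOR of the two hidden units = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', xor_net(a, b))         # prints the XOR truth table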


COMP9414 Review 55

Calculation

Theorem: In a deterministic environment, for an optimal policy, the value function V* satisfies the Bellman equations: V*(s) = r(s, a) + γV*(δ(s, a)) where a = π*(s) is the optimal action at state s.

Let δ*(s) be the transition function for π*(s) and suppose γ = 0.9

1. Suppose δ*(s1) = s1. Then V*(s1) = 5 + 0.9 V*(s1), so V*(s1) = 50
   Suppose δ*(s2) = s2. Then V*(s2) = 10 + 0.9 V*(s2), so V*(s2) = 100

2. Suppose δ*(s1) = s2. Then V*(s1) = 2 + 0.9 V*(s2), so V*(s1) = 92
   Suppose δ*(s2) = s2. Then V*(s2) = 10 + 0.9 V*(s2), so V*(s2) = 100

3. Suppose δ*(s1) = s2. Then V*(s1) = 2 + 0.9 V*(s2), so V*(s1) = 81.6
   Suppose δ*(s2) = s1. Then V*(s2) = 15 + 0.9 V*(s1), so V*(s2) = 88.4

So 2 is the optimal policy
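The same answer can be checked by value iteration (a minimal sketch in Python; the two-state rewards below are read off the cases above, with γ = 0.9):

# each state maps to its available (next_state, reward) pairs
mdp = {'s1': [('s1', 5), ('s2', 2)],
       's2': [('s2', 10), ('s1', 15)]}
gamma = 0.9
V = {s: 0.0 for s in mdp}
for _ in range(500):                         # iterate the Bellman optimality update
    V = {s: max(r + gamma * V[nxt] for nxt, r in mdp[s]) for s in mdp}
print(V)                                     # converges to V*(s1) = 92, V*(s2) = 100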


COMP9414 Review 54

Example: Delayed Rewards


COMP9414 Review 57

Examination Conditions

1. The exam opens at 2pm and closes at 5pm on the day of the exam;

2. You have 2 hours, 10 minutes to complete the exam (a timer will count down the remaining time);

3. You may start the exam at any time after 2pm, but you must finish within 2 hours, 10 minutes of starting and by 5pm;

4. For ELS students there is a different closing time, but you must finish by that time;

5. Remember to submit your answers – the latest answers should be submitted automatically;

6. Submit answers at the end of exam by clicking on "Finish attempt ..." then "Submit all and finish";

7. You are not permitted to make copies of the questions on the exam;


COMP9414 Review 56

Examination Format

1. READING TIME – 10 MINUTES

2. TIME ALLOWED – 2 HOURS

3. THIS EXAMINATION COUNTS FOR 60% OF THE FINAL MARK

4. A MARK OF 40% (24 OUT OF 60) IS REQUIRED TO PASS THE COURSE

5. TOTAL NUMBER OF QUESTIONS – 40

6. ALL QUESTIONS CARRY 1 MARK – NO QUESTIONS ARE OPTIONAL

7. CHOOSE AT MOST ONE ANSWER PER QUESTION

8. INCORRECT ANSWERS RECEIVE A REDUCTION OF 0.1 MARK

For any queries during the exam, contact the Course Convenor (w. .au) or Course Admin ( .au). Any announcements during the exam will be sent to students by e-mail using the course e-mail alias.


COMP9414 Review 59

Examination Rules

Fit to Sit Rule: By sitting this exam, you are declaring that you are fit to do so and cannot later apply for Special Consideration. If, during the exam, you feel unwell to the point that you cannot continue with the exam, you should take the following steps:

1. Stop working on the exam and take note of the time;

2. Contact the Course Convenor (w. .au) or Course Admin ( .au) immediately by e-mail and advise them that you are unwell;

3. Immediately submit a Special Consideration application saying that you felt ill during the exam and were unable to continue;

4. Obtain a doctor's certificate within 24 hours and attach it to the Special Consideration application;

5. If you were able to advise the Course Convenor or Course Admin of the illness during the exam, attach screenshots of this conversation to the Special Consideration application.


COMP9414 Review 58

Examination Conditions

8. You are not permitted to communicate (by any means including e-mail, phone, chat, talk, etc.) with anyone during this exam except course staff (in particular, no communication with other students, tutors or contract cheating agencies is allowed);

9. The exam must be all your own work – you are not permitted to receive help from anyone except course staff during the exam;

10. During and even after you finish the exam, do not communicate exam questions or answers to anyone at any time, including distributing anywhere on the Internet, and make sure that no other person (including those in your household) can access your work;

11. Do not disclose your zpass to any other person; if you have disclosed your zpass to another person, change it immediately;

12. Deliberate violation of these exam conditions and other conditions contained in the Student Code of Conduct will be referred to the Student Conduct and Integrity Unit as serious misconduct.


COMP9414 Review 60

Examination Rules

Technical Issues: If you experience a technical issue during the exam, take the following steps:

1. Take screenshots of as many of the following as possible (all screenshots must include the date and time the issue occurred):

• error messages
• screen(s) not loading
• timestamped speed tests
• power outage maps
• messages or information from your internet provider regarding the issues experienced

2. Contact the Course Convenor (w. .au) or Course Admin ( .au) by e-mail as soon as possible to advise them of the issue;

3. Submit a Special Consideration application immediately after the exam, including all appropriate screenshots.

UNSW ©W. Wobcke et al. 2019–2021