
COMP3702 Artificial Intelligence – Module 1: Search – Part 1: Blind/Uninformed Search

COMP3702

Artificial Intelligence

Module 1: Search – Part 1: Blind/Uninformed Search

Dr Alina Bialkowski

Semester 2, 2021

The University of Queensland

School of Information Technology and Electrical Engineering

Table of contents

1. Agent design components

2. Introduction to search problems

3. Formulating a problem as a search problem

4. General structure of search algorithms

5. Blind search methods

6. Search with edge costs: Uniform cost search


Agent design components

Intelligent agents

A computer agent gathers information about an environment and takes actions autonomously based on that information.

• Perceives the environment (e.g. using sensors).
• Maintains models/representations of the environment and uses them for reasoning.
• Performs actions (e.g. via actuators) that change the state of the environment to achieve its goal.
• May learn from data.


Recall our goal: To build a useful, intelligent agent

In this class, the goal is to design agents that act rationally.

To achieve our goal, we need to define our “agent” in a way that we can program it:

• The problem of constructing an agent is usually called the agent design problem
• Simply, it’s about defining the components of the agent so that, when the agent acts rationally, it will accomplish the task it is supposed to perform, and do it well.


Agent design components

The following components are required to solve an agent design problem:

• Action Space (A): The set of all possible actions the agent can perform.
• Percept Space (P): The set of all possible things the agent can perceive.
• State Space (S): The set of all possible configurations of the world the agent is operating in.
• World Dynamics/Transition Function (T: S × A → S′): A function that specifies how the configuration of the world changes when the agent performs actions in it.
• Perception Function (Z: S → P): A function that maps a state to a perception.
• Utility Function (U: S → R): A function that maps a state (or a sequence of states) to a real number, indicating how desirable it is for the agent to occupy that state/sequence of states.
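To see how these six components fit together, here is a purely illustrative Python skeleton (the class and parameter names are my own, not part of any course API):

```python
from typing import Callable, Generic, Iterable, TypeVar

S = TypeVar("S")  # a state
A = TypeVar("A")  # an action
P = TypeVar("P")  # a percept

class AgentDesign(Generic[S, A, P]):
    """Bundle of the six agent design components (names are illustrative)."""
    def __init__(
        self,
        actions: Iterable[A],             # Action Space A
        percepts: Iterable[P],            # Percept Space P
        states: Iterable[S],              # State Space S
        transition: Callable[[S, A], S],  # T: S x A -> S'
        perceive: Callable[[S], P],       # Z: S -> P
        utility: Callable[[S], float],    # U: S -> R
    ) -> None:
        self.actions = list(actions)
        self.percepts = list(percepts)
        self.states = list(states)
        self.transition = transition
        self.perceive = perceive
        self.utility = utility
```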


The agent design components

Recall:

• Best action: the action that maximizes a given performance criterion
• A rational agent selects an action that it believes will maximize its performance criterion, given the available knowledge, time, and computational resources.

Utility function, U: S → R:

• A function that maps a state (or a sequence of states) to a real number, indicating how desirable it is for the agent to occupy that state/sequence of states.

• Crafting the utility function is a key step in the agent design process.


Defining the environment

Properties of the environment itself, or of the agent’s knowledge about the environment:

• Discrete vs. continuous
  • Are the state / action / percept spaces finite?

• Fully observable vs. partially observable
  • Does the agent know the state of the world/itself exactly?
  • Is the percept function (Z) a bijection? [one-to-one mapping]

• Deterministic vs. stochastic/non-deterministic
  • Does the agent always know exactly which state it will be in after performing an action from a state?
  • Is the world dynamics (T) a function, i.e. does it map each (state, action) pair to exactly one next state?

• Single agent vs. multiple agents
  • Are there other agents interacting?

• Static vs. dynamic
  • Can the world change while the agent is “thinking”?


Example: 8-puzzle

• Environment type: flat, discrete, static
• Sensing uncertainty: fully observable vs partially observable
• Effect uncertainty: deterministic vs stochastic
• Number of agents: single agent vs multiple agents


[Figure: the 8-puzzle initial state and goal state]

A classic search problem


Example: Noughts-and-crosses or Tic-tac-toe

• Environment type: Flat, discrete, static
• Sensing uncertainty: fully observable vs partially observable
• Effect uncertainty: deterministic vs stochastic
• Number of agents: single agent vs multiple agents

• Note: classification sometimes depends on time discretization
  • Here, 1 time step = a single move by the agent & the opponent

Adversarial search problem or zero-sum game – covered in Module 5

Introduction to search problems

Introduction to search

• In general, the agent design problem is to find a mapping from sequences of percepts to actions (P → A) that maximises the utility function (U)
• Given the sequence of percepts or stimuli it has seen so far, what should the agent do next, so that the utility function can be maximized?

• Search is a way to solve this problem


When to apply search methods?

When we have:

• Sensing uncertainty: fully observable
• Effect uncertainty: deterministic
• Number of agents: single agent

then the agent design only needs to consider:

• Action Space (A)
• Percept Space (P)
• State Space (S)
• Transition Function (T: S × A → S′)
• Perception Function (Z: S → P)
• Utility Function (U: S → R) . . . WHY?


What is search?

A general framework to solve problems by exploring possibilities:

• There are multiple possible paths or options to explore
• The possibilities come from knowledge about the problem


What is search?

Using the world dynamics, we can foresee the future paths that different actions lead to

Goal: How to find the solution with the smallest exploration cost?


Types of search methods

Two types:

• Blind search: Does not use any additional information to “guess” the cost of moving from the current node to the goal
  • DFS, BFS, iterative-deepening DFS, and uniform cost search

• Informed search: Uses additional information to “guess” the cost of moving from the current node to the goal, and decides where to explore next using this information
  • Greedy best-first search and A* search


Formulating a problem as a search problem

Formulating a problem as a search problem

Problem: For a given problem/system, find a sequence of actions to move the agent from being in the initial state to being in the goal state, such that the utility is maximised (or the cost of moving is minimised)

The design task is to map this problem to the agent components:

• Action Space (A)
• Percept Space (P)
• State Space (S)
• Transition Function (T: S × A → S′)
• Perception Function (Z)
• Utility Function (U : S → R) or cost function
• Initial & goal state


Example: 8-puzzle

• Action Space (A): {up, down, left, right} for the empty cell
• State Space (S): All possible permutations of the tiles
• Transition Function (T: S × A → S′): Given by the tile moves; the initial state is given
• Utility: +1 for the goal state, 0 for all other states, i.e. the agent’s objective is to find a solution; the goal state is given


[Figure: the 8-puzzle initial state and goal state]
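To make the formulation concrete, here is a hedged Python sketch of the 8-puzzle transition function. The tuple state encoding, the MOVES table, and the function name are illustrative choices, not prescribed by the slides:

```python
# 8-puzzle sketch: a state is a tuple of 9 entries, read row by row,
# with 0 marking the empty cell. Actions move the empty cell.
MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}  # index offsets

def successors(state):
    """Return {action: next_state} for every legal move of the empty cell."""
    result = {}
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, offset in MOVES.items():
        if action == "up" and row == 0:
            continue  # would fall off the top edge
        if action == "down" and row == 2:
            continue
        if action == "left" and col == 0:
            continue
        if action == "right" and col == 2:
            continue
        tiles = list(state)
        target = blank + offset
        tiles[blank], tiles[target] = tiles[target], tiles[blank]
        result[action] = tuple(tiles)
    return result

# With the blank in the bottom-right corner, only 'up' and 'left' apply:
print(successors((1, 2, 3, 4, 5, 6, 7, 8, 0)))
```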


How to do the search?

We want the agent to be able to search for its own solution.

We need to embed this problem definition into a representation that is easy to search.

• Overview (discrete space)
• State graph representation
• General structure of search algorithms
• Uninformed (blind) search
• Informed search


State graph representation

• A way to represent the problem concretely in programs

• Also a way of thinking about the problem

• We may or may not explicitly represent the state graph

• In problems with continuous or very large state spaces, the state graph is often used as a compact representation of the state space (more on this later)


State graph representation

• Graph: (V, E)
  • Vertices (V) represent states
  • Edges (E) represent world dynamics

Each edge ss′ is labelled with the cost to move from s to s′. It may also be labelled with the action that moves the agent from s to s′.

• Initial & goal states: initial & goal vertices
• A solution is a path from the initial vertex to a goal vertex in the state graph
• Cost: the sum of the costs associated with each edge in the path
• The optimal solution is the shortest (lowest-cost) path
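As one concrete (and entirely illustrative) encoding, a small state graph can be stored as an adjacency map from each vertex to its out-edges, with each edge carrying its action label and cost:

```python
# Illustrative state graph: vertex -> list of (action, next_vertex, cost).
# The vertices and costs here are made up for demonstration.
state_graph = {
    "s": [("go-b", "b", 1), ("go-c", "c", 2)],
    "b": [("go-h", "h", 1)],
    "c": [("go-h", "h", 4)],
    "h": [("go-g", "g", 1)],
    "g": [],  # goal vertex
}

def out_edges(v):
    """All labelled, costed edges leaving vertex v."""
    return state_graph[v]

# The cost of path s -> c -> h -> g is 2 + 4 + 1 = 7; the optimal
# solution s -> b -> h -> g costs 1 + 1 + 1 = 3.
```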


Example: 8-puzzle


State graph representation

The state graph may have multiple connected components

• Connected component: a sub-graph in which there is at least one path from each vertex in the sub-graph to every other vertex in the sub-graph.
• Reachability: If I’m in a particular state, can I reach another state?
• Example: In the state graph representation of the 8-puzzle, there are 2 connected components. If the initial and goal states are in different connected components, there is no solution. We can check for this using a parity check, covered in Tutorial 2 (see the sketch below).
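As a preview of that tutorial exercise, here is a hedged sketch of the standard parity test: for the 3×3 puzzle, two states lie in the same connected component exactly when their tile sequences (blank excluded) have the same inversion parity.

```python
def inversion_parity(state):
    """Parity of inversions among the tiles (blank excluded), 3x3 puzzle.

    Two 8-puzzle states are mutually reachable iff their parities match.
    """
    tiles = [t for t in state if t != 0]  # drop the blank
    inversions = sum(
        1
        for i in range(len(tiles))
        for j in range(i + 1, len(tiles))
        if tiles[i] > tiles[j]
    )
    return inversions % 2

# No solution exists when the parities differ:
start = (1, 2, 3, 4, 5, 6, 8, 7, 0)  # one swapped pair -> odd parity
goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # sorted -> even parity
assert inversion_parity(start) != inversion_parity(goal)
```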


General structure of search algorithms

General structure of search algorithms

Generic search algorithm

• Given a graph and start and goal nodes, incrementally explore paths from the start nodes.
• Maintain a frontier (fringe) of paths from the start node that have been explored.
• As search proceeds, the frontier expands into the unexplored nodes until a goal node is encountered.
• The way in which the frontier is expanded defines the search algorithm or search strategy.


Generic search algorithm (P&M Figure 3.3)


https://artint.info/2e/html/ArtInt2e.Ch3.S4.html

General structure of search algorithms

Put the initial vertex in a “container” of states to be expanded

Loop:

1. Select a vertex, v, from the “container”

2. If v is the goal vertex, then return

3. Expand v (i.e., put the results of successor(v) into the “container”)

successor(v) is a function that:

• Takes a vertex v as input
• Outputs the set of immediate next vertices that can be visited from v (i.e., the end-points of out-edges from v)
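This loop translates almost directly into code. Below is a minimal Python sketch (function and variable names are my own); the container is deliberately abstract, since the choice of container is exactly what distinguishes the search methods that follow:

```python
def generic_search(initial, is_goal, successor, container):
    """Generic search skeleton; the container policy sets the strategy.

    `container` needs put(v), get() -> v, and empty(), as provided by
    queue.Queue (FIFO -> BFS), queue.LifoQueue (stack -> DFS), and
    queue.PriorityQueue (ordered by path cost -> uniform cost search).
    """
    container.put(initial)
    while not container.empty():
        v = container.get()       # 1. select a vertex from the container
        if is_goal(v):            # 2. stop if it is the goal vertex
            return v
        for w in successor(v):    # 3. expand v into the container
            container.put(w)
    return None                   # container exhausted: no solution found
```

A full implementation would also record each expanded vertex’s parent, so the solution path can be reconstructed, and (for graph search) a visited set so states are not re-expanded.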


Search tree

An abstract representation of the visited nodes (expanded + container):

State Space Graph → Search Tree (first 3 levels)

[Figure: a state-space graph over vertices s, a, b, c, d, f, h, z, and the first three levels of the corresponding search tree rooted at s]

If states can be revisited, the search tree may be infinite, even though the state graph (and state space) is finite.


The container is the fringe of the tree

Fringe or frontier: List of nodes in the search tree that have not been expanded yet


General structure of search algorithms with a search tree

Put the initial vertex as the root of the search tree

Loop:

1. Select a node, t, from the fringe

2. If t is the goal vertex, then return

3. Expand t (i.e., put the results of successor(t) into the fringe)

Consider successor(t) again: a vertex v of the state graph now corresponds to a node t in the search tree, so the elements of successor(t) are the children of t in the search tree.


General structure of search algorithms with a search tree

Various search methods differ in:

• Which node to expand next (i.e. retrieval order of the container), and

• How to expand the fringe.

• In Tutorial 2 (week 3) you will implement breadth-first search (BFS) and depth-first search (DFS).


Tree search and graph search algorithms

Source: R&N Figure 3.7

Performance measures for search algorithms

Completeness

• Complete: The algorithm will find the solution whenever one exists.

Optimality

• Optimal: Return a minimum cost path whenever one exists.

Complexity

• Time (#steps) and space (memory) complexity
• Complexity analysis tells us how the time and memory needed to solve the problem grow as the input size increases
  • Input size: the size of the state and action spaces of the search problem
  • In the state graph representation: the size of the graph
• Use computational complexity notation (e.g. Big-O)


Performance measures for search algorithms

Big-O notation

• Suppose f(n) is the time/space required to solve the problem when the input size is n
• Then, we say f(n) is of complexity O(g(n)) if there are constants k and n0 such that:

0 ≤ f(n) ≤ k g(n) for all n ≥ n0

where n0 is an input size beyond which the highest-order term g(n) dominates the growth in required time or space, i.e. asymptotic analysis. For example, f(n) = 3n² + 5n + 2 is O(n²), since 3n² + 5n + 2 ≤ 4n² for all n ≥ 6.

Branching factor (b): the maximum number of successors of a node; used to characterise graph topologies


Blind search methods

Blind search algorithms

• Blind/uninformed search: does not estimate the cost from the current node to the goal

• Examples:
  • Breadth-first search (BFS)
  • Depth-first search (DFS)
  • Iterative deepening DFS (IDDFS)
  • Uniform cost search (UCS)

• The first three don’t use any cost information (they treat the graph as unweighted)
• Uniform cost search uses the edge weights


Breadth-first search

• Breadth-first search treats the frontier as a first-in first-out queue.

• It always selects one of the earliest elements added to the frontier.

• If the list of paths on the frontier is [p1, p2, . . . , pn]:
  • p1 is selected. Its neighbours are added to the end of the queue, after pn.
  • p2 is selected next.
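A hedged sketch of this FIFO discipline, keeping whole paths on the frontier as in the description above; the simple adjacency map (vertex to successor vertices) and the function name are my own illustrative choices:

```python
from collections import deque

def bfs_paths(graph, start, is_goal):
    """BFS over paths; graph maps each vertex to its successor vertices."""
    frontier = deque([[start]])       # frontier holds paths, oldest first
    while frontier:
        path = frontier.popleft()     # select the earliest-added path
        v = path[-1]
        if is_goal(v):
            return path
        for w in graph[v]:            # extend the path; add to the end
            frontier.append(path + [w])
    return None

# Swapping popleft() for pop() turns the frontier into a stack,
# giving depth-first search with the same code.
```

On graphs with cycles, a visited set should be added so the same state is not placed on the frontier repeatedly.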


Illustrative Graph — Breadth-first Search


Example — Navigating UQ

Ignore costs on these edges


Breadth-first search: Properties and analysis

Parameters: b, branching factor; d, depth of the shallowest goal node

Complete? Will BFS find a solution?

• Complete if b is finite

Generate optimal solution? Does BFS guarantee to find the path with fewest edges?

• Yes in #steps

Complexity:

• Time: O(b^d) ⇐ 1 + b + b² + · · · + b^d = (b^{d+1} − 1)/(b − 1)
• Space: O(b^d) nodes to remember: O(b^{d−1}) explored nodes + O(b^d) unexplored nodes
• Finds the minimum-step path, but requires exponential space!

Let’s get some intuition: Practical numbers for BFS
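The table of concrete numbers on this slide has not survived conversion, but the growth it illustrates is easy to regenerate. A quick sketch, assuming a branching factor of b = 10 (the value used in Russell & Norvig’s classic version of this table):

```python
# Nodes generated by BFS to depth d with branching factor b:
# 1 + b + b^2 + ... + b^d = (b^(d+1) - 1) / (b - 1)
b = 10
for d in (2, 4, 6, 8, 10):
    nodes = (b ** (d + 1) - 1) // (b - 1)
    print(f"depth {d:2d}: ~{nodes:.1e} nodes")
```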


Bidirectional Strategy

• 2 search trees and hence 2 fringe queues: one tree grows forward from the start, the other backward from the goal
• Time and space complexity: O(b^{d/2}) << O(b^d)

Depth-first search

• Depth-first search treats the frontier as a stack, or a last-in first-out queue.
• It always selects one of the last elements added to the frontier, i.e. the most recently added.
• If the list of paths on the frontier is [p1, p2, . . . , pn−1, pn]:
  • pn is selected. Paths that extend pn are added to the end of the stack.
  • pn−1 is only selected when all paths from pn have been explored.

Illustrative Graph — Depth-first Search

Depth-first search: Properties and analysis

Parameters: b, branching factor; m, maximum depth; d, depth of the shallowest goal node

Complete? Will DFS find a solution?

• Complete, if m and b are finite and nodes are not revisited

Generate optimal solution? Does DFS guarantee to find the path with fewest edges?

• No

Complexity:

• Time: O(b^m) ⇐ 1 + b + b² + · · · + b^m = (b^{m+1} − 1)/(b − 1)
• Space: Can be implemented using O(bm) space, or O(m) using backtracking DFS. But be careful of revisiting vertices (states)!
• Efficient in use of space

Example — Navigating UQ

Ignore costs on these edges

Iterative deepening DFS (IDDFS)

BFS: Finds the minimum-step path, but requires exponential space.
DFS: Efficient in space, but gives no path-length guarantee.

Iterative deepening DFS

• Multiple DFS runs with increasing depth cutoff until the goal is found.
• For k = 1, 2, . . .: perform DFS with depth cutoff k.
• Only generates nodes with depth ≤ k.

IDDFS: Properties and analysis

Parameters: b, branching factor; m, maximum depth; d, depth of the shallowest goal node

Complete? Will IDDFS find a solution?

• Complete if b is finite

Generate optimal solution? Does IDDFS guarantee to find the path with fewest edges?

• Yes in #steps

Complexity:

• Time: O(b^d) ⇐ db + (d − 1)b² + · · · + (1)b^d
• Space: O(bd)
• Finds the minimum-step path, and doesn’t require exponential space!

IDDFS: Time complexity — expanding the geometric sequence

db + (d − 1)b² + (d − 2)b³ + · · · + (1)b^d
= b^d (1 + 2/b + 3/b² + · · · + d/b^{d−1})
≤ b^d Σ_{i=1}^∞ i/b^{i−1}
= b^d · b · Σ_{i=1}^∞ i/b^i
= b^d · b · (1/b)/(1 − 1/b)²
= b^d (b/(b − 1))²
= O(b^d), since (b/(b − 1))² is a constant for fixed b

Search with edge costs: Uniform cost search

Search with edge costs: Uniform cost search

• Sometimes there are costs associated with edges.
• The cost of a path is the sum of the costs of its edges: cost(n0, . . . , nk) = Σ_{i=1}^k cost(n_{i−1}, n_i)
• An optimal solution is one with minimum cost.

Uniform cost search

• At each stage, uniform cost search selects a path on the frontier with the lowest cost.
• The first path to a goal is a least-cost path to a goal node.
• When edge costs are equal ⇒ breadth-first search.

Uniform cost search

• Uniform cost search treats the frontier as a priority queue ordered by path cost.
• It always selects one of the highest-priority (lowest-cost) paths on the frontier.
• If the list of paths on the frontier is [p1, p2, . . .]:
  • p1 is selected to be expanded.
  • Its successors are inserted into the priority queue.
  • The highest-priority path is selected next (and it might be a newly added one).

UCS: Properties and analysis

Parameters: b, branching factor; m, maximum depth; d, depth of the shallowest goal node; C*, cost of the optimal solution; ε, minimum cost of a step

Complete? Will UCS find a solution?

• Complete if b is finite and all edges have a cost ≥ ε > 0

Generate optimal solution? Does UCS guarantee to find the path with the lowest cost?

• Yes if all edges have positive cost

Complexity:

• Time and Space: O(b^{1+⌊C*/ε⌋})
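A minimal uniform cost search sketch using Python’s heapq module as the priority queue. The graph encoding matches the earlier illustrative adjacency map; a production version would need more care with tie-breaking and with states reached again at lower cost:

```python
import heapq

def ucs(graph, start, is_goal):
    """Uniform cost search; graph maps v to (action, next_v, cost) edges."""
    frontier = [(0, [start])]                # min-heap of (path_cost, path)
    best_cost = {start: 0}
    while frontier:
        cost, path = heapq.heappop(frontier) # cheapest frontier path first
        v = path[-1]
        if is_goal(v):
            return cost, path                # first goal popped is optimal
        for _, w, step_cost in graph[v]:
            new_cost = cost + step_cost
            if new_cost < best_cost.get(w, float("inf")):
                best_cost[w] = new_cost
                heapq.heappush(frontier, (new_cost, path + [w]))
    return None

# On the illustrative graph from earlier:
# ucs(state_graph, "s", lambda v: v == "g") -> (3, ["s", "b", "h", "g"])
```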


Example — Navigating UQ


Blind search: Summary

• Breadth-first search

• Depth-first search

• Iterative-deepening depth-first search

• Uniform cost search


Attributions and References

Thanks to Dr Archie Chapman and Dr Hanna Kurniawati for their materials.

Many of the slides are adapted from David Poole and Alan Mackworth, Artificial Intelligence: Foundations of Computational Agents, 2E, CUP, 2017, https://artint.info/. These materials are copyright © Poole and Mackworth, 2017, licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Other materials derived from Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3E, Prentice Hall, 2009.

