1. Introduction: Search
A key perspective on an AI application is to interpret it as searching for a solution. For example, answering the question “How should I furnish my living room?”
A systematic search algorithm that is certain to yield all solutions is called brute force. Such algorithms do not, in general, account for efficiency, so they may be not only inefficient but also unending. AI approaches (and many non-AI approaches, for that matter) are designed to avoid such disadvantages.
Learning Objectives
Identify the role of searching in AI applications
Apply heuristics and greedy methods, or
Solve problems via constraints, or
Apply A* algorithms, or
Introduce adversarial search
Many problems of “intelligence” can be thought of as searching for a solution to a given problem. There are common steps to carry this out; we will elaborate on the terms used.
Agenda: Search
Agent Searching
Greedy
Constraint Satisfaction
A*
Adversarial Search
In this section, we will view the search process from the perspective of an agent.
An Agent Searching
Adapted from: Russell & Norvig
As an example, suppose that we want a route to the town of Eforie (Romania). We can view this as a car agent “finding” its way to a destination.
What screen should Matt show next?
Example of Search: AI Math Tutor (Matt)
Based on: Russell & Norvig p. 67
Another example of search is an intelligent math tutor. Its task can be reduced to searching for the next screen to show the student.
simple-problem-solving-agent( Perception aPercept ) {
/*
* Preconditions: (1) this (agent) has state theState (2) aPercept != null
* Postcondition: Appropriate actions were taken
*/
//Outcome 1: theState known
theState = updateState(theState, aPercept);
//O2: actionSeq is the sequence of actions that accomplish agentGoal
if (actionSeq is empty) {
    agentGoal = formGoal(goalState);
    problem = formProb(theState, agentGoal); // as in “obstacle”
    actionSeq = search(problem); } // ignore failure for now
//O3 = Postcondition
nextAction = first(actionSeq); // take the next action …
actionSeq = removeFirst(actionSeq); // … then drop it from the sequence
execute(nextAction);
}
Based on: Russell & Norvig p. 67
A common answer to the question “what (or who) is searching?” is … an agent.
The figure shows an outline of a search process. The assumptions are that the agent (this object) is in state theState, and that the parameter aPercept is not empty. The postcondition—the desired outcome—can’t be more specific than shown, in general.
The algorithm outline is in the form of accumulating code goals (specific objectives G1, G2, …). “Accumulating” means that, in fulfilling a code (implementation) goal, we maintain all those already attained.
The first code goal is that theState reflects the true current state. The second is that actionSeq is the sequence of actions that accomplish agentGoal, the agent’s next goal; this involves performing a search, given the problem formed from the current state and the agent’s goal. The third is to fulfill the postcondition by carrying out the actions in the sequence.
Classification Example 1/2
aPercept: file encountered (or given)
Constraint: actions are EXAMINE, CLASSIFY, and EXTRACT
Postcondition: Appropriate actions executed
theState == …
actionSeq == {…}
agentGoal == …
In this code outline, we explore the preceding code in the case that the percept is the appearance of a file.
Classification Example 2/2
aPercept: file encountered (or given)
Postcondition: Appropriate actions executed
theState == EXAMINING (or CLASSIFYING, EXTRACTING)
actionSeq == {CLASSIFY X, EXAMINE X, EXTRACT X}
agentGoal == A COMPLETE CLASSIFICATION
This figure shows possible values. These could be simple constants, depending on the depth of the application.
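To make these values concrete, here is a minimal Java sketch of the classification agent; the class, enum, and method names are illustrative assumptions, not part of the original outline.

import java.util.ArrayDeque;
import java.util.Deque;

public class ClassificationAgent {
    // States named on the slide
    enum State { EXAMINING, CLASSIFYING, EXTRACTING }
    // The constrained action vocabulary
    enum Action { EXAMINE, CLASSIFY, EXTRACT }

    private State theState = State.EXAMINING;
    private final Deque<Action> actionSeq = new ArrayDeque<>();

    // Invoked when the percept "file encountered" arrives
    void onFilePercept(String fileName) {
        if (actionSeq.isEmpty()) {
            // agentGoal == A COMPLETE CLASSIFICATION, realized here as a fixed plan
            // (one plausible ordering of the slide's action set)
            actionSeq.add(Action.EXAMINE);
            actionSeq.add(Action.CLASSIFY);
            actionSeq.add(Action.EXTRACT);
        }
        execute(actionSeq.removeFirst(), fileName);
    }

    private void execute(Action a, String fileName) {
        theState = switch (a) {            // keep theState consistent with the action
            case EXAMINE -> State.EXAMINING;
            case CLASSIFY -> State.CLASSIFYING;
            case EXTRACT -> State.EXTRACTING;
        };
        System.out.println(a + " " + fileName + " (state: " + theState + ")");
    }
}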
Agenda: Search
Agent Searching
Greedy
Constraint Satisfaction
A*
Adversarial Search
In this section, we will explore greedy search. You can think of greedy steps as local, opportunistic moves made with an overall objective in mind.
Definition of Greedy Algorithms
Goal: “optimizing” solutions
not necessarily true optimization (AI)
Use available (state) information
(e.g., do not rely on recorded sub-problem solutions as in dynamic programming)
Greedy algorithms strive for optimizing (tending toward optimization) rather than optimized (perfectly efficient) steps. They do not attempt to retain past information to decide on the next step.
Example: Where Greed …
[Figure: a road from Start to Finish with forks; the greedy selection takes the fork that looks best on the “available information”.]
Greedy algorithms are local, and like all locally made decisions, they can go seriously wrong. This is illustrated here by a car that takes the fork pointing most directly toward the destination.
Example: Where Greed Does Not Produce Optimality
[Figure: the same roads from Start to Finish; the selection needed for the optimal result is a different fork.]
The red route is not, after all, the best.
However, despite this danger, greedy search can be surprisingly successful in many cases.
Greedy Algorithms
SolutionType getOptimizingSolution(…)
GOAL a (Parts): returnS is part of a solution
[GOAL b] (Greed used): All changes to returnS have been additive
and optimizing relative to the available information
GOAL c (Complement): returnS is complete
The figure shows the structure of greedy algorithms. The first goal (a, which is maintained throughout) expresses that a solution is being built. The second goal (b) says that all changes to this solution-under-construction have been additive (i.e., there have been no subtractions or alterations to what’s present). Goal c expresses completion.
Example: Requested Meetings
Available time
0–[====)———————–[=============)
0—–[============requested============)——
0—–[=======)——————-[=========)–
0———–[====)——[=========)————
Maximize number of meetings
A classic example of a greedy search is to maximize the number of meetings in a room given a set of requests like those shown in the figure.
Optimizing Solutions
Pick meeting that ends earliest—
optimizes number of meetings
Or heuristic (AI):
“most of the important meetings”
… but “most”? “important”? Such criteria are merely optimizing.
The question is what “greed” means in picking the next meeting. Is it “pick the one that starts soonest from now,” or something else? In short, there may be more than one greedy algorithm for a given problem. For this problem, it turns out that picking the meeting that ends earliest actually optimizes the solution. In general, though, greedy search may not be optimizing.
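Here is a minimal Java sketch of the earliest-end-time rule; the Meeting record and method names are illustrative assumptions.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class GreedyMeetings {
    record Meeting(int start, int end) {}

    static List<Meeting> maximizeCount(List<Meeting> requests) {
        List<Meeting> sorted = new ArrayList<>(requests);
        sorted.sort(Comparator.comparingInt(Meeting::end)); // greed: earliest end first
        List<Meeting> chosen = new ArrayList<>();           // GOAL a: part of a solution
        int roomFreeAt = 0;
        for (Meeting m : sorted) {
            if (m.start() >= roomFreeAt) { // GOAL b: additive, optimizing changes only
                chosen.add(m);
                roomFreeAt = m.end();
            }
        }
        return chosen;                     // GOAL c: the solution is complete
    }

    public static void main(String[] args) {
        System.out.println(maximizeCount(List.of(
            new Meeting(0, 4), new Meeting(3, 9), new Meeting(5, 8))));
    }
}

Sorting by end time is the design choice that makes this particular greedy rule provably maximize the meeting count.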
Agenda: Search
Agent Searching
Greedy
Constraint Satisfaction
A*
Adversarial Search
Constraint satisfaction is a practical way to view search.
Using Constraints to Search
Use constraints to increasingly bound the problem.
The idea of constraint satisfaction is to recognize and leverage the limits (constraints) of the needed search in the hope that they narrow the process down to a manageable number of alternatives. It’s the AI version of “we have no choice but to …”
Satisfying Constraints ≡ Sub- … -sub states
[Figure: overlapping sets of states satisfying Constraint 1, Constraint 2, Constraint 3, …; solutions lie in their intersection.]
A constraint = a set of relationships between variables.
One can visualize a constraint as the set of states (situations) that satisfy it. Constraint satisfaction can then be implemented as the intersection of all the constraints, as in the figure.
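As a minimal Java sketch of this intersection view (the state names are illustrative stand-ins):

import java.util.HashSet;
import java.util.Set;

public class ConstraintIntersection {
    public static void main(String[] args) {
        // The set of states satisfying each constraint (stand-in values)
        Set<String> constraint1 = Set.of("s1", "s2", "s3", "s5");
        Set<String> constraint2 = Set.of("s2", "s3", "s5", "s7");
        Set<String> constraint3 = Set.of("s3", "s5", "s8");

        // Constraint satisfaction == intersecting the satisfying sets
        Set<String> satisfying = new HashSet<>(constraint1);
        satisfying.retainAll(constraint2);
        satisfying.retainAll(constraint3);
        System.out.println(satisfying); // s3 and s5 remain
    }
}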
Example Constraints
Nature-like
Contains trees
Not very busy …
Computer splash screen for user preferences.
An example is setting an AI application to insert a splash screen based on user preferences such as Nature-like AND Contains trees AND Not very busy (i.e., not cluttered with details). Imposing these successively focuses the search.
Constraint 1: Student knows linear equations
Constraint 2: Student knows exponents
Constraint 3: Student does not know quadratic equations
…
What screen(s) satisfy these?
Constraint Satisfaction Applied to Matt
Based on: Russell & Norvig p. 67
Recall the search example discussed earlier—the AI Math Tutor (Matt)—and the problem of what screen Matt should show next.
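A Java sketch of how such constraints might narrow Matt’s candidate screens; the Screen record and the predicates are illustrative assumptions, not Matt’s actual design.

import java.util.List;
import java.util.function.Predicate;

public class MattScreens {
    record Screen(String name, List<String> prerequisites, List<String> teaches) {}

    public static void main(String[] args) {
        List<String> known = List.of("linear equations", "exponents"); // Constraints 1 and 2

        // Constraint: the student knows all of a screen's prerequisites
        Predicate<Screen> prereqsMet = s -> known.containsAll(s.prerequisites());
        // Constraint 3: the screen teaches something not yet known
        Predicate<Screen> teachesNew = s -> !known.containsAll(s.teaches());

        List<Screen> candidates = List.of(
            new Screen("Quadratics I",
                List.of("linear equations", "exponents"), List.of("quadratic equations")),
            new Screen("Linear review", List.of(), List.of("linear equations")));

        // Satisfying all constraints at once: intersection as AND-ed predicates
        candidates.stream()
                  .filter(prereqsMet.and(teachesNew))
                  .forEach(s -> System.out.println("Show: " + s.name()));
    }
}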
8-Queens Example 1
A classic toy example of constraint-based search is to find a stable deployment of 8 queens on a chessboard. “Stable” means that no queen threatens any other. The figure shows only three such queens.
8-Queens Example 2
This figure shows 7 such queens. It is natural to think of searching for “the next queen for a given partial deployment.” However, this is not necessarily the most appropriate framing of the problem.
A good framing of the search is to search for all stable configurations on a limited board. This has the added advantage of recognizing that there is more than one stable configuration.
This is an example of how framing the problem is itself a key element of AI. Frame it one way, and the problem seems intractable; frame it another, and it seems solvable.
8-Queens Example 3: Classical Solution
Outcome 1 (On partial board):
stable_n = all stable n-queen configurations on an n-by-8 board.
O2 (Complement): n = 8
To fulfill Outcome 1 (easily), use n = 1.
These goals will be successful if a solution exists because …
if s is a stable configuration of k queens on an n-by-8 chessboard, there is a stable (k-1)-sized subset of s on an (n-1)-by-8 chessboard.
We thus make the first code goal to possess all stable n-queen configurations on an n-by-8 board. By selecting n = 1, this is easy to accomplish. A goal can be thought of as a constraint—as in, the process is constrained by having to possess all stable n-queen configurations on an n-by-8 board.
The second goal, n = 8, is accomplished by repeatedly incrementing n and exploring every configuration in stable_n together with every possible placement of a queen in the new row. The next figure shows this for n = 5: one of the known stable configurations on a 5-by-8 board, plus the new row.
8-Queens Example 4: n-by-8 Board
[Figure: a stable configuration of 5 queens (n = 5) on a 5-by-8 portion of the board, with the new row awaiting a queen.]
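A compact Java sketch of this classical solution, following the slide’s framing of growing stable_n one row at a time (the names are ours):

import java.util.ArrayList;
import java.util.List;

public class EightQueens {
    // A configuration stores queens.get(row) = column, one queen per filled row.
    static boolean staysStable(List<Integer> queens, int newCol) {
        int newRow = queens.size();
        for (int row = 0; row < newRow; row++) {
            int col = queens.get(row);
            if (col == newCol) return false;                          // same column
            if (Math.abs(col - newCol) == newRow - row) return false; // same diagonal
        }
        return true;
    }

    public static void main(String[] args) {
        // Outcome 1 with n = 1: all stable 1-queen configurations on a 1-by-8 board
        List<List<Integer>> stable = new ArrayList<>();
        for (int col = 0; col < 8; col++) stable.add(List.of(col));

        // O2: increment n to 8, trying every configuration with every new-row column
        for (int n = 2; n <= 8; n++) {
            List<List<Integer>> next = new ArrayList<>();
            for (List<Integer> cfg : stable)
                for (int col = 0; col < 8; col++)
                    if (staysStable(cfg, col)) {
                        List<Integer> grown = new ArrayList<>(cfg);
                        grown.add(col);
                        next.add(grown);
                    }
            stable = next; // all stable n-queen configurations on an n-by-8 board
        }
        System.out.println(stable.size() + " stable configurations"); // 92
    }
}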
Agenda: Search
Agent Searching
Greedy
Constraint Satisfaction
A*
Adversarial Search
A* is a common approach to many searches.
Classic Tree Search (non-AI)
Breadth-first
Depth-first
…
We view the search space as a tree. Classical data structures already provide various brute-force searches, such as breadth-first and depth-first.
Bidirectional
If the goal state can be expanded backwards, then it is possible to generate paths from both directions. Unless we apply intelligent methods, however, such searches remain brute force; the gain is that each direction need only search about half the depth.
Tree Search
Ideal but …
Problem may not be well-defined
Tree search may be unacceptably slow
In theory, tree searches are “perfect.” However, they assume a well-defined objective, which may not be the case in the real world, and even when it is, the search may be unacceptably slow.
Example Objective
[Figure: a weighted directed graph on nodes B, c, f, h, i, k, E; edge weights include 12, 14, 11, 13, 15, 19, 17, and 6.]
Shortest path B to E
Consider the problem of finding the shortest path from node B to node E in a weighted, directed graph. This formulation expresses many problems, not just roads or flights. Nodes can, for example, be mental states of a recovering addict, and the cost of an edge the average number of months to get from one to the other.
Objective
[Figure: the same graph unrolled into a tree of paths from B; partial paths carry accumulated costs such as 23, 26, 29, and 35, and the goal node is E.]
A* is a type of search that selects among paths—more precisely, among path families defined by the root of a subtree.
The figure shows a search from a beginning node B, intended to end at node E. You can think of this as selecting from all root-to-leaf paths in the tree (which is conceptual—you don’t actually build it).
The Idea of A*
Utilize heuristic
As is common in AI, a heuristic is used. But this kind of heuristic is somewhat unusual in that we can prove something about its effectiveness—and that’s what makes A* noteworthy.
The Idea of A*
[Figure: the same weighted graph, with dashed crow-flies lines from two nodes to the end node E representing heuristic estimates.]
Utilize heuristic (estimate).
The heuristic used is an estimate of the remaining cost of getting to the end node. A graphic example of this, when the nodes are physical points in two dimensions, is the crow-flies distance to the destination. Two examples are shown in the figure. Observe that crow-flies distances are underestimates.
The Idea of A*
[Figure: a path from Begin (B) through current node n to End (E); the known cost from B to n is g(n), and the estimated cost from n to E is h(n).]
We define notation: g(n) is the cost to get from node B to node n, and h(n) is the (heuristic) estimate of the cost of getting from n to E. These are shown in the figure.
The Idea of A*
Estimated cost of the cheapest path through n:
f(n) = g(n) + h(n)
[Figure: as before, with the known cost g(n) from Begin (B) to the current node n and the estimated cost h(n) from n to End (E).]
These are used to formulate f(n), the estimated cheapest cost from Begin to End via node n:
f(n) = g(n) + h(n)
The A* Algorithm
[Figure: A* expanding from B on the example graph. Frontier nodes carry f-values such as 12 + 6 = 18 and 13 + 14 = 27; upon each expansion, all frontier f-values are compared.]
See Russell & Norvig p. 87 for a complete example.
At each step, the node with the lowest value of f() is expanded.
A* Theorem:
A* produces an optimal solution if h() …
… never overestimates (“admissible”)
… satisfies the triangle inequality
[Figure: a triangle with sides a, b, c, illustrating a + b ≥ c.]
The tree-search version of the A* theorem states that if h() never overestimates the true cost, then always expanding the node with the lowest value of f() produces an optimal solution.
A* Algorithm Pre- and Postconditions
Tree aStar( Graph aGraph, Vertex aBeg, Vertex anEnd )
/*
Precondition 1: aGraph has only non-negative (edge) weights
Pre 2: There is an admissible heuristic h()
Pre 3: Triangle condition
Post: Tree returnT reflects the cheapest paths from aBeg
*/
Here is the signature of aStar, with its pre- and postconditions.
A* Algorithm Outline
//—- Outcome 1 (Globally Optimized)
// returnT is a tree of distances from aBeg AND
// the distances in returnT are optimal relative to aGraph
//—- O2 (Complete)
// returnT contains anEnd
If we implement these two goals, we’ll be done.
A* Algorithm Outline
//—- Outcome 1 (Globally Optimized)
// returnT is a tree of distances from aBeg AND
// the distances in returnT are optimal relative to aGraph
returnT = {cheapest f(n), n connecting to aBeg}
Implementing the first goal is straightforward: this statement fulfills Goal 1. (It involves altering returnT.)
A* Algorithm Outline
//—- Outcome 1 (Globally Optimized)
// returnT is a tree of distances from aBeg AND
// the distances in returnT are optimal relative to aGraph
returnT = {cheapest f(n), n connecting to aBeg}
//—- O2 (Complete)
// returnT contains anEnd
while anEnd not in returnT
    (t,v) = edge with t in returnT, v not, and f(v) minimal
    add (t,v) to returnT
Fulfilling Goal 2 while maintaining Goal 1 can be accomplished as shown, but this requires proof that Goal 1 is indeed kept valid once the operations are complete. This follows.
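For concreteness, here is a runnable Java sketch of the outline; the adjacency-list representation, the names, and the tiny example graph are our assumptions, and the admissible heuristic h is supplied by the caller (Pre 2). It returns the cheapest cost rather than the full tree returnT.

import java.util.*;

public class AStar {
    record Edge(String to, double weight) {} // non-negative weights (Pre 1)

    static double aStar(Map<String, List<Edge>> graph, String beg, String end,
                        Map<String, Double> h) {
        record Entry(String node, double f) {}    // f(n) = g(n) + h(n)
        Map<String, Double> g = new HashMap<>();  // known cost from beg
        PriorityQueue<Entry> open = new PriorityQueue<>(Comparator.comparingDouble(Entry::f));
        Set<String> done = new HashSet<>();       // the nodes of "returnT"
        g.put(beg, 0.0);
        open.add(new Entry(beg, h.getOrDefault(beg, 0.0)));
        while (!open.isEmpty()) {
            String n = open.poll().node();        // expand the node with lowest f()
            if (!done.add(n)) continue;           // skip stale queue entries
            if (n.equals(end)) return g.get(n);   // O2: "returnT" contains anEnd
            for (Edge e : graph.getOrDefault(n, List.of())) {
                double cost = g.get(n) + e.weight();
                if (cost < g.getOrDefault(e.to(), Double.POSITIVE_INFINITY)) {
                    g.put(e.to(), cost);          // Outcome 1: distances stay optimal
                    open.add(new Entry(e.to(), cost + h.getOrDefault(e.to(), 0.0)));
                }
            }
        }
        return -1; // anEnd unreachable
    }

    public static void main(String[] args) {
        Map<String, List<Edge>> graph = Map.of(
            "B", List.of(new Edge("c", 12), new Edge("f", 13)),
            "c", List.of(new Edge("E", 7)),
            "f", List.of(new Edge("E", 5)));
        Map<String, Double> h = Map.of("B", 15.0, "c", 6.0, "f", 5.0, "E", 0.0);
        System.out.println(aStar(graph, "B", "E", h)); // 18.0 via B-f-E
    }
}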
Agenda: Search
Agent Searching
Greedy
Constraint Satisfaction
A*
Adversarial Search
Adversarial Search: MinMax Algorithm
Russell & Norvig
“My” most advantageous move(s)
“My” turn
Adversarial search is the kind that takes place in the presence of an agent that’s trying to thwart your objectives. This is the case in games like tic-tac-toe.
Russell & Norvig
“Opponent’s” most advantageous move(s)
Adversarial Search: MinMax Algorithm
We assume that the adversary will make the best moves we can imagine.
Adversarial Search: MinMax Algorithm
Russell & Norvig
(from “my” perspective)
This is the essence of the MinMax algorithm (minimum penalty for me at my turn; maximum at my adversary’s turn).
MinMax Without Pruning
Russell & Norvig
Points “I” would get if the adversary moved best for them.
My move options
His move options
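A minimal Java sketch of MinMax on an explicit game tree; GameNode is an illustrative structure, and the leaf values reproduce Russell & Norvig’s classic three-branch example.

import java.util.List;

public class MinMax {
    // A leaf holds the points "I" would get; an inner node holds move options.
    record GameNode(int value, List<GameNode> children) {
        static GameNode leaf(int v) { return new GameNode(v, List.of()); }
        static GameNode node(GameNode... kids) { return new GameNode(0, List.of(kids)); }
    }

    static int minmax(GameNode n, boolean myTurn) {
        if (n.children().isEmpty()) return n.value();
        int best = myTurn ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (GameNode child : n.children()) {
            int v = minmax(child, !myTurn);
            best = myTurn ? Math.max(best, v)   // "my" most advantageous move
                          : Math.min(best, v);  // opponent's most advantageous move
        }
        return best;
    }

    public static void main(String[] args) {
        GameNode root = GameNode.node(
            GameNode.node(GameNode.leaf(3), GameNode.leaf(12), GameNode.leaf(8)),
            GameNode.node(GameNode.leaf(2), GameNode.leaf(4), GameNode.leaf(6)),
            GameNode.node(GameNode.leaf(14), GameNode.leaf(5), GameNode.leaf(2)));
        System.out.println(minmax(root, true)); // 3: the best "I" can guarantee
    }
}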
Alpha-Beta Pruning
Problem with minimax:
exponential in the depth of the tree.
Pruning can dramatically reduce this—in the best case, effectively halving the exponent:
Prune to eliminate large parts of the tree from consideration—branches that can’t influence the final decision.
Russell & Norvig
When you apply this to a real-world problem—or even a nontrivial board game such as chess—the adversarial possibilities snowball. Pruning is a necessity, certainly for nodes that will ultimately have no influence.
With Alpha-Beta Pruning
Notation: [min I can get, max I can get]
Russell & Norvig
In alpha-beta pruning, we bracket each potential move with the minimum I can get from the move and the maximum.
With Alpha-Beta Pruning…[min I can get, max I can get]
Russell & Norvig
Prune
We then stop considering moves (and their descendants) whose bracket indicates no advantage. For example, there is no point in considering a node with [-∞, 2] (unlimited downside and a maximum of 2) when I already have a [3, 3] course of action.
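The same search with alpha-beta pruning, as a sketch; alpha and beta carry the [min I can get, max I can get] bracket down the tree (GameNode is reused from the MinMax sketch above).

public class AlphaBeta {
    // alpha = best value "I" can already guarantee on this path;
    // beta  = best the opponent can hold me to. Together: the bracket.
    static int alphaBeta(MinMax.GameNode n, int alpha, int beta, boolean myTurn) {
        if (n.children().isEmpty()) return n.value();
        if (myTurn) {
            int best = Integer.MIN_VALUE;
            for (MinMax.GameNode child : n.children()) {
                best = Math.max(best, alphaBeta(child, alpha, beta, false));
                alpha = Math.max(alpha, best);
                if (alpha >= beta) break; // prune: this branch cannot influence the decision
            }
            return best;
        } else {
            int best = Integer.MAX_VALUE;
            for (MinMax.GameNode child : n.children()) {
                best = Math.min(best, alphaBeta(child, alpha, beta, true));
                beta = Math.min(beta, best);
                if (alpha >= beta) break; // prune: descendants no longer matter
            }
            return best;
        }
    }
}

Called as alphaBeta(root, Integer.MIN_VALUE, Integer.MAX_VALUE, true), this returns the same value as minmax(root, true) while skipping branches whose bracket offers no advantage.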
With Alpha-Beta Pruning…[min I can get, max I can get]
Russell & Norvig
The figure shows the pruning process completed.
Search Summary
AI: scientific/empirical duality
Greedy searching “powered by” local heuristics
Constraint satisfaction “boxes solutions in”
A* tree/graph search relies on optimistic heuristic
Adversarial searches can possibly be pruned