Lecture 1: supplement
Additional disclaimer: we use small graphs because they make it easy to reason about these searches. However, if we were really looking for a path from start to goal on a reasonably sized graph, there are better algorithms to use (see: algorithms class).
The techniques here generalize to searching any state space, where perhaps the states are unknown or simply too large to easily enumerate (see N-queens, etc.).
You need only: a current state, a method of enumerating actions to get to successor states, and a goal test (a minimal sketch of that interface follows).
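For concreteness, here is a minimal Python sketch of that interface; the class and method names are illustrative choices, not something defined in the class notes.

class SearchProblem:
    """All a search needs: a current state, a way to enumerate successor
    states, and a goal test (names here are illustrative)."""

    def __init__(self, edges, start, goal):
        self.edges = edges          # adjacency dict: state -> list of successor states
        self.start = start
        self.goal = goal

    def successors(self, state):
        """Enumerate the states reachable by one action from `state`."""
        return self.edges.get(state, [])

    def is_goal(self, state):
        return state == self.goal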

Using the graph from the class notes, we look for a path from S (“start”) to G (“goal”):
In search you sometimes have a heuristic to aid move selection and sometimes not (blind search); in the blind case you typically just fix a rule so that move selection is deterministic (and therefore repeatable/debuggable).
Breadth-First Search (BFS):
Starting at S, we expand all outgoing edges. In practice this means we start by putting a path of just S in the ready queue; then, for each element in the queue, we add all successor paths to the queue by following one additional edge.
S->A
S->C
# now we expand A and then C
S->A->B
S->A->C
S->A->E
S->C->A
S->C->D
# note that A and C are reached multiple ways, but in most searches you keep a per-path visited list so we do not consider loops such as S->C->S and so forth.
# now we expand the 5 open paths
S->A->B->E
S->A->C->D
S->A->E->B
S->C->A->B
S->C->D->G # goal reached, terminate
# in this case we expanded all of depth 3; depending on the conditions of the goal check it may make sense to terminate as soon as a goal state is reached, OR it may make sense to expand the entire Kth level (in this case 3) and then pick a goal if there is one.
Note: under unit weights (that is, if all edges have the same weight and we are looking for the path with the fewest edges), BFS is optimal; otherwise it is not.
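A minimal Python sketch of this BFS over paths follows; the adjacency list is reconstructed from the walkthrough above (it may not exactly match the figure in the class notes), and BFS ignores edge weights so they are omitted.

from collections import deque

# Adjacency list reconstructed from the walkthrough (illustrative; neighbor
# order chosen to reproduce the expansion order shown above).
GRAPH = {
    "S": ["A", "C"],
    "A": ["B", "C", "E", "S"],
    "B": ["A", "E"],
    "C": ["A", "D", "S"],
    "D": ["C", "G"],
    "E": ["A", "B"],
    "G": ["D"],
}

def bfs(graph, start, goal):
    # The queue holds whole paths; each step extends a path by one edge.
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path                  # first goal popped = fewest edges
        for nxt in graph[node]:
            if nxt not in path:          # per-path visited check: no loops like S->C->S
                queue.append(path + [nxt])
    return None

print(bfs(GRAPH, "S", "G"))              # ['S', 'C', 'D', 'G']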

Depth-First Search (DFS):
Starting at S, we expand until we hit a dead-end, then backtrack.
S->A->B->A->B->…
# oops, there is a problem, we get stuck in an infinite cycle
So unlike BFS, for DFS we need the concept of a “visited list” that remembers all the nodes visited so far. Starting again:
S->A->B->E # because A is on the visited-list and E comes next
# we cannot reach anywhere from E that is not on the visited list, so we backtrack
S->A->B->? # we also cannot reach anywhere from B that is not on the visited list, so we backtrack again
S->A->C->D # B is already visited, so from A we continue to C; from C we continue to D since A and S are on the visited-list
S->A->C->D->G
Goal reached. Note that it is not optimal in any way: DFS produces “a path”, not “the path”. Problem: what if from B there were a sub-graph with thousands of nodes and edges? We would get lost traversing the whole thing…
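A minimal Python sketch of DFS with the global visited list; it reuses the illustrative GRAPH from the BFS sketch above.

def dfs(graph, node, goal, visited=None, path=None):
    # Dive until a dead-end, then backtrack; returns "a path", not "the path".
    if visited is None:
        visited, path = set(), []
    visited.add(node)
    path = path + [node]
    if node == goal:
        return path
    for nxt in graph[node]:
        if nxt not in visited:           # the global visited list that breaks cycles
            result = dfs(graph, nxt, goal, visited, path)
            if result is not None:
                return result
    return None                          # dead-end: backtrack

print(dfs(GRAPH, "S", "G"))              # ['S', 'A', 'C', 'D', 'G']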
Iterative Deepening:
Problem: how do we blend the “on target” power of BFS, which does not get lost down the rabbit-hole, with the memory efficiency of DFS?
ID does a DFS but only until depth=D, then terminates/backtracks. If no solution is found at depth=D, increment D by some constant (often 1) and go again (see the sketch after this walkthrough).
ID depth=2:
S->A->B # terminate depth
S->A->C # terminate depth
S->A->E # terminate depth
S->C->A # terminate depth
S->C->D # terminate depth
Note: we do not need a full (global) visited list because depth limiting will avoid infinite cycles, but we do use a per-path visited list since there is no need for a path to revisit its own nodes; because the list is per-path rather than global, we still explore both S->A->C and S->C->A.
ID depth=3 # increase depth by 1
S->A->B->E # terminate depth
S->A->C->D # terminate depth
S->A->E->B # terminate depth
S->C->A->B # terminate depth
S->C->A->E # terminate depth
S->C->D->G # goal found, terminate
Goal found; in this case the result is unit-optimal, like BFS.
Note: it is only unit-optimal because we worked up to the right depth. For example, if we had started with D=5 then the DFS solution S->A->C->D->G would have been found instead; the key is that the depth limit started below and worked its way up to the depth of the unit-optimal solution.
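A minimal Python sketch of iterative deepening, again over the illustrative GRAPH: a depth-limited DFS wrapped in a loop that raises the limit by 1 until a goal is found.

def depth_limited(graph, path, goal, limit):
    node = path[-1]
    if node == goal:
        return path
    if limit == 0:
        return None                      # "terminate depth": cut off and backtrack
    for nxt in graph[node]:
        if nxt not in path:              # per-path visited list only, as noted above
            result = depth_limited(graph, path + [nxt], goal, limit - 1)
            if result is not None:
                return result
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    for depth in range(max_depth + 1):   # increase the limit by 1 each round
        result = depth_limited(graph, [start], goal, depth)
        if result is not None:
            return result                # found at the shallowest possible depth
    return None

print(iterative_deepening(GRAPH, "S", "G"))   # ['S', 'C', 'D', 'G']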
A*: Informed Search
Up until now we haven’t tried to find an optimal solution using the actual weights on the graph. A* is a best-first search that keeps a priority queue of candidate paths: select the smallest, expand it, and repeat. A path ending at the goal is accepted only when it is the smallest entry popped from the queue.
We use d(n) = g(n) + h(n) (often written f(n)), where g(n) is the actual cost of the path from the start to n and h(n) is the heuristic function’s prediction of what it would cost to get from n to the goal.
Using the h table from the class notes:
0) Add all edges from S into the queue; we also avoid paths that return to already-visited nodes.
S->A (g=7+h=2 => 9)
S->C (g=4+h=6 => 10)
1) Expand the smallest, S->A at 9; replace S->A with the expansions from A.
S->A->B (g=15+h=19 => 34)
S->A->C (g=11+h=6 => 17)
S->A->E (g=18+h=22 => 40)
S->C (g=4+h=6 => 10)
2) Expand the smallest, S->C at 10; replace it with its expansions.
S->A->B (g=15+h=19 => 34)
S->A->C (g=11+h=6 => 17)
S->A->E (g=18+h=22 => 40)
S->C->A (g=8+h=2 => 10)
S->C->D (g=10+h=1 => 11)
3) Expand the smallest, S->C->A at 10; replace it with its expansions.
S->A->B (g=15+h=19 => 34)
S->A->C (g=11+h=6 => 17)
S->A->E (g=18+h=22 => 40)
S->C->A->B (g=16+h=19 => 35)
S->C->A->E (g=19+h=22 => 41)
S->C->D (g=10+h=1 => 11)
4) Expand the smallest, S->C->D at 11.
S->A->B (g=15+h=19 => 34)
S->A->C (g=11+h=6 => 17)
S->A->E (g=18+h=22 => 40)
S->C->A->B (g=16+h=19 => 35)
S->C->A->E (g=19+h=22 => 41)
S->C->D->G (g=11+h=0 => 11)
Note: when an expansion reaches the goal we are NOT done; rather, we put that expansion into the queue.
Only when we pop the smallest entry for expansion and it ends at the goal do we declare a solution found.
Popping S->C->D->G (the smallest, at 11) gives the optimal solution. We over-expanded because h(A) was overly optimistic (a large underestimate of the true cost from A to the goal), but an optimistic heuristic does not prevent A* from finding the optimal solution.
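A minimal Python sketch of this A*; the edge weights and h values are read off the trace above (the B-E edge weight never appears in the trace, so that edge is simply omitted), so treat the numbers as illustrative rather than the full class-notes graph.

import heapq

WEIGHTS = {                              # edge weights taken from the trace above
    "S": {"A": 7, "C": 4},
    "A": {"B": 8, "C": 4, "E": 11},
    "C": {"A": 4, "D": 6},
    "D": {"G": 1},
}
H = {"A": 2, "B": 19, "C": 6, "D": 1, "E": 22, "G": 0}   # h table from the notes

def a_star(weights, h, start, goal):
    # Priority queue of (d, g, path) where d = g + h; a path is accepted only
    # when it is popped from the queue and ends at the goal.
    frontier = [(h.get(start, 0), 0, [start])]   # h(S) is not given in the trace; it only orders the first pop
    while frontier:
        d, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, g               # smallest entry popped is the goal: done
        for nxt, w in weights.get(node, {}).items():
            if nxt not in path:          # avoid returning to already-visited nodes on this path
                heapq.heappush(frontier, (g + w + h[nxt], g + w, path + [nxt]))
    return None, None

print(a_star(WEIGHTS, H, "S", "G"))      # (['S', 'C', 'D', 'G'], 11)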
