
CIS 471/571(Fall 2020): Introduction to Artificial Intelligence
Lecture 5: Constraint Satisfaction Problems (Part 2)
Thanh H. Nguyen
Source: http://ai.berkeley.edu/home.html

Announcements
§ Project 1:
  § Deadline: Oct 13th, 2020
§ Homework 2:
  § Deadline: Oct 24th, 2020
  § Will be posted today
Thanh H. Nguyen
10/13/20

Reminder: CSPs
§ CSPs:
  § Variables
  § Domains
  § Constraints
    § Implicit (provide code to compute)
    § Explicit (provide a list of the legal tuples)
    § Unary / Binary / N-ary
§ Goals:
  § Here: find any solution
  § Also: find all, find best, etc.
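As a concrete sketch, the Australia map-coloring CSP used throughout these slides can be written down in Python with explicit variables and domains plus an implicit constraint check. The dict-based representation below is an illustrative assumption, not the course's project code:

```python
# Map-coloring CSP: variables, domains, and constraints.
# Variable names (WA, NT, SA, Q, NSW, V, T) follow the slides.

variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}

# Explicit binary constraints: adjacent regions must differ in color.
neighbors = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"},
    "SA": {"WA", "NT", "Q", "NSW", "V"}, "Q": {"NT", "SA", "NSW"},
    "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"}, "T": set(),
}

def consistent(var, value, assignment):
    """Implicit constraint check: no assigned neighbor has this color."""
    return all(assignment.get(n) != value for n in neighbors[var])
```

Here the constraint is "implicit" (a function to call) rather than "explicit" (a list of legal tuples), matching the distinction above.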

Backtracking Search

Improving Backtracking
§ General-purpose ideas give huge gains in speed
§ Filtering: Can we detect inevitable failure early?
  § Arc consistency
  § Forward checking
  § Constraint propagation
§ Ordering:
  § Which variable should be assigned next?
  § In what order should its values be tried?
§ Structure: Can we exploit the problem structure?
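The plain backtracking search these ideas improve on can be sketched as a recursive depth-first search over partial assignments. This is a minimal, self-contained Python sketch on the map-coloring example (the helper structure is an assumption for illustration):

```python
# Minimal backtracking search for map coloring (illustrative sketch).
neighbors = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"},
    "SA": {"WA", "NT", "Q", "NSW", "V"}, "Q": {"NT", "SA", "NSW"},
    "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"}, "T": set(),
}
domains = {v: ["red", "green", "blue"] for v in neighbors}

def backtrack(assignment):
    if len(assignment) == len(neighbors):          # all variables assigned
        return assignment
    var = next(v for v in neighbors if v not in assignment)
    for value in domains[var]:
        # Check constraints against the partial assignment only.
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]                    # undo, try the next value
    return None                                    # dead end: backtrack

solution = backtrack({})
```

Filtering, ordering, and structure all plug into this skeleton: filtering prunes `domains` as assignments are made, and ordering changes which `var` and which `value` are tried first.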

Example: Map Coloring

Example: Map Coloring
§ An arc X → Y is consistent iff for every x in the tail there is some y in the head which could be assigned without violating a constraint
§ Enforcing consistency of X → Y: filter values of the tail X to make X → Y consistent
§ Forward checking: enforcing consistency of arcs pointing to each new assignment
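Forward checking can be sketched as a single pruning step run after each new assignment: remove the assigned value from every unassigned neighbor's domain, and report failure early if any domain empties. A hedged Python sketch on the map-coloring CSP (function name and structure are assumptions):

```python
import copy

neighbors = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"},
    "SA": {"WA", "NT", "Q", "NSW", "V"}, "Q": {"NT", "SA", "NSW"},
    "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"}, "T": set(),
}

def forward_check(domains, var, value):
    """After assigning var = value, prune that value from each neighbor's
    domain (enforcing the arcs pointing to var). Returns the pruned
    domains, or None if some domain empties (failure detected early)."""
    domains = copy.deepcopy(domains)   # keep the caller's domains intact
    domains[var] = {value}
    for n in neighbors[var]:
        domains[n].discard(value)
        if not domains[n]:
            return None                # a neighbor has no legal value left
    return domains
```

Inside backtracking search, a `None` result means this assignment is doomed and the value can be skipped without recursing.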

Example: Map Coloring
§ Constraint propagation: enforce arc consistency of entire CSP
  § Maintain a queue of arcs to enforce consistency
§ Important: If X loses a value, neighbors of X need to be rechecked!
  § After enforcing consistency on X → Y, if X loses a value, all arcs pointing to X need to be added back to the queue
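This queue-based propagation is the classic AC-3 algorithm. A compact sketch for the map-coloring CSP, where every constraint is "neighbors must differ" (the code itself is an illustrative assumption):

```python
from collections import deque

neighbors = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"},
    "SA": {"WA", "NT", "Q", "NSW", "V"}, "Q": {"NT", "SA", "NSW"},
    "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"}, "T": set(),
}

def revise(domains, x, y):
    """Make arc x -> y consistent: drop values of the tail x that have no
    compatible (here: different) value in the head y. Returns True iff
    x lost a value."""
    removed = False
    for vx in set(domains[x]):
        if not any(vx != vy for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains):
    """Enforce arc consistency over the whole CSP; False means failure."""
    queue = deque((x, y) for x in neighbors for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y):
            if not domains[x]:
                return False           # empty domain: no solution possible
            # x lost a value, so recheck every arc pointing into x.
            queue.extend((z, x) for z in neighbors[x] if z != y)
    return True
```

Note the re-enqueueing step: it implements exactly the "arcs pointing to X go back on the queue" rule from the bullet above.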

Ordering

Ordering: Minimum Remaining Values
§ Variable Ordering: Minimum remaining values (MRV):
  § Choose the variable with the fewest legal values left in its domain
§ Why min rather than max?
  § Also called “most constrained variable”
  § “Fail-fast” ordering
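MRV is a one-line selection rule once current domains are tracked. A generic Python sketch (the function name is an assumption):

```python
def select_mrv_variable(domains, assignment):
    """Pick the unassigned variable with the fewest legal values left in
    its domain (the 'most constrained variable'); ties are broken
    arbitrarily. Illustrative sketch."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))
```

Choosing the most constrained variable first means a variable that is doomed to fail gets tried (and fails) early, before effort is wasted elsewhere.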

Ordering: Least Constraining Value
§ Value Ordering: Least Constraining Value
  § Given a choice of variable, choose the least constraining value
  § I.e., the one that rules out the fewest values in the remaining variables
  § Note that it may take some computation to determine this! (E.g., rerunning filtering)
§ Why least rather than most?
§ Combining these ordering ideas makes 1000 queens feasible
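A cheap approximation of LCV for map coloring counts, for each candidate value, how many neighboring domains still contain it; fewer appearances means fewer values ruled out. A hedged sketch (the counting shortcut stands in for the full "rerun filtering" computation):

```python
neighbors = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"},
    "SA": {"WA", "NT", "Q", "NSW", "V"}, "Q": {"NT", "SA", "NSW"},
    "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"}, "T": set(),
}

def order_lcv(domains, var):
    """Order var's values least-constraining first: a value that appears
    in fewer neighboring domains rules out fewer choices elsewhere.
    Illustrative sketch."""
    def conflicts(value):
        return sum(value in domains[n] for n in neighbors[var])
    return sorted(domains[var], key=conflicts)
```

Least rather than most: we only need one value to succeed, so we try the one most likely to leave room for a full solution.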

Structure

Problem Structure
§ Extreme case: independent subproblems
  § Example: Tasmania and mainland do not interact
§ Independent subproblems are identifiable as connected components of the constraint graph
§ Suppose a graph of n variables can be broken into subproblems of only c variables:
  § Worst-case solution cost is O((n/c)(d^c)), linear in n
  § E.g., n = 80, d = 2, c = 20
  § 2^80 = 4 billion years at 10 million nodes/sec
  § (4)(2^20) = 0.4 seconds at 10 million nodes/sec

Tree-Structured CSPs
§ Theorem: if the constraint graph has no loops, the CSP can be solved in O(n d^2) time
  § Compare to general CSPs, where worst-case time is O(d^n)

Tree-Structured CSPs
§ Algorithm for tree-structured CSPs:
  § Order: Choose a root variable, order variables so that parents precede children
  § Remove backward: For i = n : 2, apply RemoveInconsistent(Parent(Xi), Xi)
  § Assign forward: For i = 1 : n, assign Xi consistently with Parent(Xi)
§ Runtime: O(n d^2) (why?)
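The two passes can be sketched on a tiny tree-structured CSP: a chain A - B - C with a not-equal constraint on each edge. The chain, domains, and function names below are assumptions chosen for illustration:

```python
# Two-pass algorithm for a tree-structured CSP (illustrative sketch).
parent = {"B": "A", "C": "B"}          # root A; parents precede children
order = ["A", "B", "C"]
domains = {"A": {"red"}, "B": {"red", "green"}, "C": {"red", "green"}}

def remove_inconsistent(domains, xp, xc):
    """Make arc xp -> xc consistent: drop parent values with no
    compatible (here: different) child value."""
    domains[xp] = {vp for vp in domains[xp]
                   if any(vp != vc for vc in domains[xc])}

def solve_tree_csp():
    # Remove backward: for i = n..2, apply RemoveInconsistent(Parent(Xi), Xi).
    for child in reversed(order[1:]):
        remove_inconsistent(domains, parent[child], child)
        if not domains[parent[child]]:
            return None                # a domain emptied: no solution
    # Assign forward: pick any value consistent with the parent's value.
    assignment = {}
    for var in order:
        p = parent.get(var)
        assignment[var] = next(v for v in domains[var]
                               if p is None or v != assignment[p])
    return assignment
```

Each of the n arcs is enforced once, and each enforcement compares at most d parent values against d child values, which is where the O(n d^2) bound comes from.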

Tree-Structured CSPs
§ Claim 1: After backward pass, all root-to-leaf arcs are consistent
  § Proof: Each X → Y was made consistent at one point and Y’s domain could not have been reduced thereafter (because Y’s children were processed before Y)
§ Claim 2: If root-to-leaf arcs are consistent, forward assignment will not backtrack
  § Proof: Induction on position
§ Why doesn’t this algorithm work with cycles in the constraint graph?
§ Note: we’ll see this basic idea again with Bayes’ nets

Improving Structure

Nearly Tree-Structured CSPs
§ Conditioning: instantiate a variable, prune its neighbors’ domains
§ Cutset conditioning: instantiate (in all ways) a set of variables such that the remaining constraint graph is a tree
§ Cutset size c gives runtime O((d^c)(n-c)(d^2)), very fast for small c

Cutset Conditioning
Choose a cutset
Instantiate the cutset (all possible ways)
Compute residual CSP for each assignment
Solve the residual CSPs (tree structured)
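The four steps above can be sketched in Python for the Australia graph, where deleting SA leaves a tree (the WA-NT-Q-NSW-V chain plus T). The residual solver below uses plain backtracking for brevity; a tree-CSP solver would also work and gives the O(d^c (n-c) d^2) bound. Names and structure are illustrative assumptions:

```python
from itertools import product

neighbors = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"},
    "SA": {"WA", "NT", "Q", "NSW", "V"}, "Q": {"NT", "SA", "NSW"},
    "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"}, "T": set(),
}
colors = ["red", "green", "blue"]
cutset = ["SA"]                        # removing SA leaves a tree

def solve_residual(assignment):
    """Solve the residual CSP with the cutset variables fixed (sketch)."""
    rest = [v for v in neighbors if v not in assignment]
    def bt(a, i):
        if i == len(rest):
            return dict(a)
        var = rest[i]
        for val in colors:
            if all(a.get(n) != val for n in neighbors[var]):
                a[var] = val
                result = bt(a, i + 1)
                if result:
                    return result
                del a[var]
        return None
    return bt(dict(assignment), 0)

def cutset_conditioning():
    # Instantiate the cutset in all d^c ways; solve each residual CSP.
    for values in product(colors, repeat=len(cutset)):
        assignment = dict(zip(cutset, values))
        solution = solve_residual(assignment)
        if solution:
            return solution
    return None
```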

Cutset Quiz
§Find the smallest cutset for the graph below.

Tree Decomposition*
§ Idea: create a tree-structured graph of mega-variables
  § Each mega-variable encodes part of the original CSP
  § Subproblems overlap to ensure consistent solutions
[Figure: mega-variables M1..M4 over the Australia graph, e.g. M1 = (WA, SA, NT) with domain {(WA=r,SA=g,NT=b), (WA=b,SA=r,NT=g), ...} and M2 = (NT, SA, Q) with domain {(NT=r,SA=g,Q=b), (NT=b,SA=g,Q=r), ...}; adjacent mega-variables must agree on shared vars, e.g. Agree: (M1,M2) ∈ {((WA=g,SA=g,NT=g), (NT=g,SA=g,Q=g)), ...}]

Iterative Improvement

Iterative Algorithms for CSPs
§ Local search methods typically work with “complete” states, i.e., all variables assigned
§ To apply to CSPs:
  § Take an assignment with unsatisfied constraints
  § Operators reassign variable values
  § No fringe! Live on the edge.
§ Algorithm: While not solved,
  § Variable selection: randomly select any conflicted variable
  § Value selection: min-conflicts heuristic:
    § Choose a value that violates the fewest constraints
    § I.e., hill climb with h(n) = total number of violated constraints
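The loop above translates almost line for line into Python. This is a hedged sketch on the map-coloring CSP (seeding, step limit, and tie-breaking are assumptions):

```python
import random

neighbors = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"},
    "SA": {"WA", "NT", "Q", "NSW", "V"}, "Q": {"NT", "SA", "NSW"},
    "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"}, "T": set(),
}
colors = ["red", "green", "blue"]

def conflicts(assignment, var, value):
    """Number of constraints violated if var takes this value."""
    return sum(assignment[n] == value for n in neighbors[var])

def min_conflicts(max_steps=100000, seed=0):
    rng = random.Random(seed)
    # Start from a random complete assignment (all variables assigned).
    assignment = {v: rng.choice(colors) for v in neighbors}
    for _ in range(max_steps):
        conflicted = [v for v in neighbors
                      if conflicts(assignment, v, assignment[v]) > 0]
        if not conflicted:
            return assignment                 # solved: no violations left
        var = rng.choice(conflicted)          # random conflicted variable
        # Min-conflicts value selection, breaking ties at random.
        scores = {c: conflicts(assignment, var, c) for c in colors}
        best = min(scores.values())
        assignment[var] = rng.choice([c for c in colors
                                      if scores[c] == best])
    return None
```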

Example: 4-Queens
§ States: 4 queens in 4 columns (4^4 = 256 states)
§ Operators: move queen in column
§ Goal test: no attacks
§ Evaluation: c(n) = number of attacks

Performance of Min-Conflicts
§ Given a random initial state, can solve n-queens in almost constant time for arbitrary n with high probability (e.g., n = 10,000,000)!
§ The same appears to be true for any randomly-generated CSP, except in a narrow range of the ratio R = (number of constraints) / (number of variables)

Summary: CSPs
§ CSPs are a special kind of search problem:
  § States are partial assignments
  § Goal test defined by constraints
§ Basic solution: backtracking search
§ Speed-ups:
  § Ordering
  § Filtering
  § Structure
§ Iterative min-conflicts is often effective in practice

Local Search

Local Search
§ Tree search keeps unexplored alternatives on the fringe (ensures completeness)
§ Local search: improve a single option until you can’t make it better (no fringe!)
§ New successor function: local changes
§ Generally much faster and more memory efficient (but incomplete and suboptimal)

Hill Climbing
§ Simple, general idea:
  § Start wherever
  § Repeat: move to the best neighboring state
  § If no neighbors better than current, quit
§ What’s bad about this approach?
  § Complete?
  § Optimal?
§ What’s good about it?
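The loop above is short enough to write generically. A minimal Python sketch, with a toy one-dimensional objective as an assumed example:

```python
def hill_climb(start, neighbors_fn, score):
    """Greedy ascent: move to the best neighbor until none improves.
    Illustrative sketch; gets stuck on local maxima and plateaus."""
    current = start
    while True:
        candidates = neighbors_fn(current)
        if not candidates:
            return current
        best = max(candidates, key=score)
        if score(best) <= score(current):
            return current            # no better neighbor: quit
        current = best

# Toy example: maximize f(x) = -(x - 3)^2, stepping by +/- 1.
f = lambda x: -(x - 3) ** 2
result = hill_climb(0, lambda x: [x - 1, x + 1], f)
```

On this toy objective the climb walks from 0 up to the single maximum at x = 3; on a multimodal objective the same code stops at whichever local maximum it reaches first.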

Hill Climbing Diagram

Hill Climbing Quiz
Starting from X, where do you end up?
Starting from Y, where do you end up?
Starting from Z, where do you end up?

Simulated Annealing
§ Idea: Escape local maxima by allowing downhill moves
  § But make them rarer as time goes on
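The acceptance rule can be sketched generically: always take uphill moves, and take a downhill move of size delta with probability exp(delta / T), where the temperature T decreases over time so bad moves become rarer. The toy objective and geometric cooling schedule below are assumptions for illustration:

```python
import math
import random

def simulated_annealing(start, neighbor_fn, score, schedule, seed=0):
    """Sketch of simulated annealing. `schedule` is a decreasing sequence
    of temperatures; downhill moves (delta < 0) are accepted with
    probability exp(delta / T), which shrinks as T cools."""
    rng = random.Random(seed)
    current = start
    for t in schedule:
        nxt = neighbor_fn(current, rng)
        delta = score(nxt) - score(current)
        if delta > 0 or rng.random() < math.exp(delta / t):
            current = nxt
    return current

# Toy run: maximize f(x) = -(x - 3)^2 with random +/- 1 steps.
f = lambda x: -(x - 3) ** 2
step = lambda x, rng: x + rng.choice([-1, 1])
cooling = [10.0 * 0.95 ** i for i in range(500)]
result = simulated_annealing(0, step, f, cooling)
```

Early on (high T) the walk explores almost freely; by the end (tiny T) it behaves like hill climbing and settles near the maximum.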