
Restart of DPLL()
Logic in Computer Science
Modern SAT solvers
• Smart Decisions
• Fast unit propagation (BCP)
• Learning from Conflicts
– Derive new clauses by resolving existing clauses, starting from the conflicting clause (a minimal sketch of the resolution step follows this list)
• For UNSAT instances, solver can produce a refutation proof upon termination
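A minimal sketch of the resolution step used to derive learned clauses, with clauses encoded as sets of integer literals and negation as a sign flip; this encoding is an illustrative assumption, not something fixed by the slides.

def resolve(c1, c2, pivot):
    """Resolve clauses c1 and c2 (sets of int literals) on variable pivot.

    pivot must occur positively in c1 and negatively in c2; the resolvent
    keeps every other literal of both clauses.
    """
    assert pivot in c1 and -pivot in c2
    return (c1 - {pivot}) | (c2 - {-pivot})

# Example from the DPLL slides: resolving (x|y) with (x|-y) on y yields (x).
# Encoding: x = 1, y = 2, so (x|y) = {1, 2} and (x|-y) = {1, -2}.
print(resolve({1, 2}, {1, -2}, 2))   # {1}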

DPLL Example
(a|b|-c) (b|c|d) (c|-d) (y|-w) (x|y|w) (x|-y|z) (-y|-z|-u) (x|-z|u) (-x|z) (-x|u) (-z|w) (-u|-w)
• a = 0 @ 1
DPLL Example
(a|b|-c) (b|c|d) (c|-d) (y|-w) (x|y|w) (x|-y|z) (-y|-z|-u) (x|-z|u) (-x|z) (-x|u) (-z|w) (-u|-w)
• a = 0 @ 1
• b = 0 @ 2
• c = 0, d = 1/0 (conflict)

DPLL Example
(a|b|-c) (b|c|d) (c|-d) (y|-w) (x|y|w) (x|-y|z) (-y|-z|-u) (x|-z|u) (-x|z) (-x|u) (-z|w) (-u|-w) (b|c)
• a = 0 @ 1
• b = 0 @ 2
• c = 0, d = 1/0 (conflict)
(a|b)
DPLL Example
(a|b|-c) (b|c|d) (c|-d) (y|-w) (x|y|w) (x|-y|z) (-y|-z|-u) (x|-z|u) (-x|z) (-x|u) (-z|w) (-u|-w) (b|c)
• a = 0 @ 1
• b = 1
(a|b)

DPLL Example
(a|b|-c) (b|c|d) (c|-d) (y|-w) (x|y|w) (x|-y|z) (-y|-z|-u) (x|-z|u) (-x|z) (-x|u) (-z|w) (-u|-w) (b|c)
• a = 0 @ 1
• b = 1
• c = 0 @ 2
• d = 0
(a|b)
DPLL Example
(a|b|-c) (b|c|d) (c|-d) (y|-w) (x|y|w) (x|-y|z) (-y|-z|-u) (x|-z|u) (-x|z) (-x|u) (-z|w) (-u|-w) (b|c)
• a = 0 @ 1
• b = 1
• c = 0 @ 2
• d = 0
• x = 0 @ 3
(a|b)

DPLL Example
(a|b|-c) (b|c|d) (c|-d) (y|-w) (x|y|w) (x|-y|z) (-y|-z|-u) (x|-z|u) (-x|z) (-x|u) (-z|w) (-u|-w) (b|c)
• a = 0 @ 1
• b = 1
• c = 0 @ 2
• d = 0
• x = 0 @ 3
• y = 0 @ 4
• w = 1/0 (conflict)
(a|b)
DPLL Example
(a|b|-c) (b|c|d) (c|-d) (y|-w) (x|y|w) (x|-y|z) (-y|-z|-u) (x|-z|u) (-x|z) (-x|u) (-z|w) (-u|-w) (b|c)
• a = 0 @ 1
• b = 1
• c = 0 @ 2
• d = 0
• x = 0 @ 3
• y = 0 @ 4
• w = 1/0 (conflict)
(x|y)
(a|b)

DPLL Example
(a|b|-c) (b|c|d) (c|-d) (y|-w) (x|y|w) (x|-y|z) (-y|-z|-u) (x|-z|u) (-x|z) (-x|u) (-z|w) (-u|-w) (b|c)
• a = 0 @ 1
• b = 1
• c = 0 @ 2
• d = 0
• x = 0 @ 3
• y = 1, z = 1, u = 1/0 (conflict)
(x|y)
(a|b)
DPLL Example
(a|b|-c) (b|c|d) (c|-d) (y|-w) (x|y|w) (x|-y|z) (-y|-z|-u) (x|-z|u) (-x|z) (-x|u) (-z|w) (-u|-w)
(b|c)
• a = 0 @ 1
• b = 1
• c = 0 @ 2
• d = 0
• x = 0 @ 3
• y = 1, z = 1, u = 1/0 (conflict)
(x|-y|-z) (x|-y)
(a|b)
(x|y)
(x)

DPLL Example
(a|b|-c) (b|c|d) (c|-d) (y|-w) (x|y|w) (x|-y|z) (-y|-z|-u) (x|-z|u) (-x|z) (-x|u) (-z|w) (-u|-w)
(b|c)
@0: x = 1, z = 1, u = 1, w = 1/0 (conflict)
(x|-y|-z) (x|-y)
(a|b)
(x|y)
(x)
DPLL Example
(a|b|-c) (b|c|d) (c|-d) (y|-w) (x|y|w) (x|-y|z) (-y|-z|-u) (x|-z|u) (-x|z) (-x|u) (-z|w) (-u|-w)
(b|c)
@0: x = 1, z = 1, u = 1, w = 1/0 (conflict)
(x|-y|-z) (x|-y)
false
(-z|-u) (-x|-z)
(-x)
(a|b)
(x|y)
(x)

DPLL Example
(a|b|-c) (b|c|d) (c|-d) (y|-w) (x|y|w) (x|-y|z) (-y|-z|-u) (x|-z|u) (-x|z) (-x|u) (-z|w) (-u|-w)
(a|b)
(b|c)
(x|-y|-z) (x|-y)
(-z|-u) (-x|-z)
(-x)
(x|y)
(x)
false
Refutation proof
What happens if the search on variables a, b, c, d, …, consumes too much time and effort?
Existing Restart Policies
Count the number of conflict clauses learned; restart once a threshold is exceeded.
• Arithmetic: threshold += C
– zChaff, Berkmin, …
• Geometric: threshold *= C
– Minisat 1.14, 2.0
• Luby's: threshold = L_i, where L = x, x, 2x, x, x, 2x, 4x, … and x is Luby's unit, i.e. L_i = x · t_i with
t_i = 2^(k−1) if i = 2^k − 1;  t_i = t_(i − 2^(k−1) + 1) otherwise
(see the sketch after this list)
– TiniSat, Rsat, Minisat
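A minimal sketch of the Luby sequence t_i and of the resulting restart thresholds x · t_i; the function name and the unit value chosen below are illustrative assumptions.

def luby(i):
    """Return t_i of the Luby sequence (1-based): 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, ..."""
    k = 1
    while (1 << k) - 1 < i:                   # find k with 2^(k-1) <= i <= 2^k - 1
        k += 1
    if i == (1 << k) - 1:                     # i = 2^k - 1  ->  t_i = 2^(k-1)
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)       # otherwise t_i = t_(i - 2^(k-1) + 1)

unit = 32                                     # Luby's unit x (an assumed value; solvers pick their own)
print([unit * luby(i) for i in range(1, 11)]) # [32, 32, 64, 32, 32, 64, 128, 32, 32, 64]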

Properties of Refutation Proofs
• Resolution Proof:
– Size = # of clauses in proof
– Width = length of largest clause in proof
• Every UNSAT CNF with a short proof MUST have a narrow proof.
• If a CNF has a proof of width k, we can find a proof in time n^O(k)
– Just derive all resolvents of size <= k

Width-Based Restart Policy
• Depends on the size of conflict clauses
• Intuition:
– Bias the solver to focus on narrow proofs
– Long clauses = bad solver behavior
• Restart when a lot of long clauses have been learned (a minimal sketch follows this list)
– A clause is violating if its size > W
– Restart if there are > N violating clauses
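A minimal sketch of this width-based trigger; the class and method names and the default values of W and N are illustrative assumptions, not taken from any particular solver.

class WidthRestartPolicy:
    """Restart when more than N learned clauses are longer than W literals."""

    def __init__(self, W=8, N=100):
        self.W = W            # width threshold: longer clauses are "violating"
        self.N = N            # number of violating clauses that triggers a restart
        self.violating = 0    # violating clauses learned since the last restart

    def on_learned_clause(self, clause):
        if len(clause) > self.W:
            self.violating += 1

    def should_restart(self):
        if self.violating > self.N:
            self.violating = 0    # reset the counter for the next restart interval
            return True
        return False

A solver would call on_learned_clause after each conflict and restart whenever should_restart() returns True.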

Comparison of Restart Strategies
[Chart: running time and number of conflicts of the compared restart policies on the grid-pebbling formula (UNSAT)]
Comparison of Restart Strategies
[Chart: running time and number of conflicts of the compared restart policies on the grid-pebbling formula (SAT)]

3SAT vs 2SAT
• SAT: Decide if a set of clauses is satisfiable.
• 3SAT: Decide if a 3CNF is satisfiable.
• 2CNF: Each clause contains at most 2 literals.
• 2SAT: Decide if a 2CNF is satisfiable.
• Theorem: 2SAT can be solved in polynomial time.
• Proof:
– Construct a graph: the nodes are the literals appearing in the 2CNF. For each clause (A | B) of the 2CNF, we create two edges: from -A to B and from -B to A.
– The 2CNF is unsatisfiable iff there is a cycle in the graph that contains both p and -p for some variable p, i.e., iff some variable and its negation lie in the same strongly connected component.
2SAT is solvable in linear time
C = { c1: (p|q), c2: (-p|-r), c3: (p|-r), c4: (-q|r), c5: (q|s), c6: (-r|-s) }
Implication graph of C:
[Diagram: the implication graph over the literals q, -p, -s, r, -r, s, p, -q]
SCC: Strongly Connected Components

2SAT is solvable in linear time
C = { c1: (p|q), c2: (-p|-r), c3: (p|-r), c4: (-q|r), c5: (q|s), c6: (-r|-s) }
[Diagram: the same implication graph, partitioned into two strongly connected components S1 and S2]
SCC: S1 = { q, r, -s, -p }, S2 = { p, -q, -r, s }
G' = ({ S1, S2 }, { (S1, S2) })
G' has no cycles; the topological order is S1, S2. Assign 0 (false) to the literals in S1 and 1 (true) to the literals in S2, giving p = 1, q = 0, r = 0, s = 1. A linear-time code sketch of this SCC-based procedure follows.
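A minimal sketch of the linear-time procedure, using Kosaraju's SCC algorithm; the integer-literal encoding, the numbering p, q, r, s = 1..4, and the choice of Kosaraju over Tarjan are assumptions made here.

from collections import defaultdict

def solve_2sat(n, clauses):
    """Return a satisfying assignment {var: bool} or None if unsatisfiable.

    clauses is a list of pairs of int literals, e.g. (1, -2) means (x1 | -x2).
    Each clause (a | b) contributes the implication edges -a -> b and -b -> a.
    """
    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:
        for u, v in ((-a, b), (-b, a)):
            graph[u].append(v)
            rgraph[v].append(u)
    lits = [l for v in range(1, n + 1) for l in (v, -v)]

    # First DFS on the implication graph records literals by finish time.
    order, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for l in lits:
        if l not in seen:
            dfs1(l)

    # Second DFS on the reversed graph, in reverse finish order, labels the SCCs;
    # the labels follow a topological order of the condensation G'.
    comp, c = {}, 0
    def dfs2(u, c):
        comp[u] = c
        for v in rgraph[u]:
            if v not in comp:
                dfs2(v, c)
    for l in reversed(order):
        if l not in comp:
            dfs2(l, c)
            c += 1

    # Unsatisfiable iff some variable shares an SCC with its own negation.
    if any(comp[v] == comp[-v] for v in range(1, n + 1)):
        return None
    # Otherwise a variable is true iff its component comes later in topological order.
    return {v: comp[v] > comp[-v] for v in range(1, n + 1)}

# The example from the slides, with p, q, r, s numbered 1..4:
print(solve_2sat(4, [(1, 2), (-1, -3), (1, -3), (-2, 3), (2, 4), (-3, -4)]))
# -> {1: True, 2: False, 3: False, 4: True}, i.e. p = 1, q = 0, r = 0, s = 1
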
Symmetry Breaking
• Symmetry is common in practice (e.g., identical trucks in vehicle routing)
• SAT encoding throws away this information.
• Symmetry is useful for some proofs:
– e.g., the pigeonhole principle: it is impossible to place n+1 birds into n holes such that each hole gets at most one bird.

Pigeon Hole Principle
• Exponentially long proofs via resolution.
• Polynomially long proofs via cutting planes
Propositional variable x(i,j) = assign bird i to hole j.
1) Each bird i gets a hole: ∑(holes j) x(i,j) = 1
2) Each hole j has capacity one: ∑(birds i) x(i,j) ≤ 1
Summing (1) over the n+1 birds: ∑(birds i) ∑(holes j) x(i,j) = n+1
Summing (2) over the n holes: ∑(holes j) ∑(birds i) x(i,j) ≤ n
Combining the two sums gives n+1 ≤ n, i.e., 1 ≤ 0, a contradiction.
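A minimal sketch of the usual CNF encoding behind these constraints: the "at least one hole" half of (1) plus the pairwise clause form of (2), which already suffices for unsatisfiability. The integer variable numbering and the function name are choices made here, not part of the slides.

def pigeonhole_cnf(n):
    """Clauses for 'n+1 birds into n holes, at most one bird per hole' (UNSAT for n >= 1).

    Variable x(i,j), 'bird i sits in hole j', is encoded as the integer i*n + j + 1.
    """
    var = lambda i, j: i * n + j + 1
    clauses = []
    # (1) each bird i gets at least one hole
    for i in range(n + 1):
        clauses.append([var(i, j) for j in range(n)])
    # (2) each hole j holds at most one bird: no two distinct birds share hole j
    for j in range(n):
        for i1 in range(n + 1):
            for i2 in range(i1 + 1, n + 1):
                clauses.append([-var(i1, j), -var(i2, j)])
    return clauses

print(len(pigeonhole_cnf(2)))   # 3 birds, 2 holes: 3 + 2*3 = 9 clauses
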
Symmetry Breaking
• Symmetries are provided as part of the input, or detected automatically (typically via graph automorphism detection)
• Impose constraints that admit only the lexicographically minimal assignment in each symmetry class
• Simple case: if {xi : i = 1, 2, …, k} are all interchangeable, add the constraints (xi v ¬xj) for all i < j: "if there is a solution with r of the k variables set true, let them be x1 through xr".

Symmetry Breaking
• Order the variables and view an assignment as a vector x.
• Identify permutations π on the variables such that if p is a model, then so is p•π.
• Add constraints x ≤ xπ.
• With such constraints, the computing time on Pigeon-Hole can become linear.
Example: [Diagram: a vector x next to its permuted copy xπ, for π = (2, 1, 3)]

Local Search for SAT Solving
Logic in Computer Science

Incomplete Algorithms
• Returns a satisfying assignment or FAILURE
• Based on heuristic search for a solution
• Faster than complete algorithms for many classes of satisfiable instances
• Examples:
– Local search algorithms
• Simulated annealing, tabu search, ...
– Approximate algorithms
• Linear programming

Local search for SAT
• Make a guess (smart or dumb) about the values of the variables
• How'm I doin'? Repeat until satisfied
• Try flipping a variable to make things better
– Algorithms differ on which variable to flip

Local search for SAT
• Flip a randomly chosen variable?
– No, blundering around blindly takes exponential time
– Ought to pick a variable that improves ... what?
• Increase the # of satisfied clauses as much as possible
• Break ties randomly
• Note: flipping a variable will repair some clauses and break others
– This is the "GSAT" (Greedy SAT) algorithm (almost)

Local search for SAT
• Flip the most-improving variable. Guaranteed to work?
1. A v C
2. A v B
3. ~C v ~B
4. ~B v ~A
• What if the first guess is A=1, B=1, C=1?
– 2 clauses satisfied (1, 2); 2 falsified (3, 4)
– Flip A to 0 → 3 clauses satisfied (1, 2, 4)
– Flip B to 0 → all 4 clauses satisfied (pick this!)
– Flip C to 0 → 3 clauses satisfied (1, 2, 3)

Local search for SAT
• Flip the most-improving variable. Guaranteed to work?
1. A v C
2. A v B
3. ~C v ~B
4. ~B v ~A
• But what if the first guess is A=0, B=1, C=1?
– 3 clauses satisfied (1, 2, 4)
– Flip A to 1 → 2 clauses satisfied (1, 2)
– Flip B to 0 → 3 clauses satisfied (1, 3, 4)
– Flip C to 0 → 3 clauses satisfied (2, 3, 4)
– Pick one anyway ... (hoping it wins on the next step)

Local search for SAT
• Flip the most-improving variable. Guaranteed to work?
– Yes for 2SAT: succeeds with probability → 1 within O(n^2) steps
– No in general: it can, and usually does, get stuck in local optima
– Therefore, GSAT simply restarts periodically with a new random initial guess

Greedy SAT (GSAT)
proc GSAT(formula F)
  For (r = 0 to MAX-TRIES)
    Pick random assignment p.   // Restart
    For (t = 0 to MAX-FLIPS)
      If p satisfies F, return p;
      Else
        Find the variable v that, if flipped, maximizes the increase in satisfied clauses of F;
        Flip(v);
  Return FAILURE;

Flip(boolean v)
  v := ¬v;
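
A runnable Python rendering of the GSAT pseudocode above. The clause representation (lists of integer literals) and the default values standing in for MAX-TRIES and MAX-FLIPS are assumptions made here.

import random

def gsat(clauses, num_vars, max_tries=10, max_flips=1000, seed=None):
    """GSAT: clauses are lists of int literals, e.g. [1, -2] means (x1 | -x2).

    Returns a satisfying assignment {var: bool}, or None (FAILURE).
    """
    rng = random.Random(seed)
    sat_count = lambda p: sum(any(p[abs(l)] == (l > 0) for l in c) for c in clauses)
    for _ in range(max_tries):                                    # restart loop
        p = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}
        for _ in range(max_flips):
            if sat_count(p) == len(clauses):
                return p
            # Greedy step: find the flips that maximize the number of satisfied clauses.
            best_score, best_vars = -1, []
            for v in range(1, num_vars + 1):
                p[v] = not p[v]
                score = sat_count(p)
                p[v] = not p[v]
                if score > best_score:
                    best_score, best_vars = score, [v]
                elif score == best_score:
                    best_vars.append(v)
            flip = rng.choice(best_vars)                          # break ties randomly
            p[flip] = not p[flip]
    return None

# The 4-clause example from the slides, with A, B, C numbered 1, 2, 3:
print(gsat([[1, 3], [1, 2], [-3, -2], [-2, -1]], 3, seed=0))      # prints a satisfying assignment
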
Discrete vs. Continuous Optimization
• In calculus, we maximize a real-valued function of n real variables
– In SAT, we maximize the number of satisfied clauses over n Boolean variables; this problem is called MaxSAT
– This objective function is discrete, since the variables cannot change gradually
• You may already know how to find the maximum of a continuous function
– From your calculus class: maximize a function by requiring all of its partial derivatives to be 0
– The partial derivatives tell you which direction to change each variable if you want to increase the function

Gradient Descent/Ascent
• GSAT is a greedy local optimization algorithm: like a discrete version of gradient ascent.
• Could we make a continuous version of MaxSAT?
– What would happen if we tried to solve it by gradient methods?
[Figure: a nice, smooth convex cost function vs. a nasty, non-differentiable cost function with local maxima]

Problems with Hill Climbing
• Local optima (foothills): no neighbor is better, but we are not at the global optimum.
– (Maze: may have to move AWAY from the goal to find the (best) solution)
• Plateaus: all neighbors look the same.
– (15-puzzle: perhaps no action will change the # of tiles out of place)
• Ridges: improvement only in a narrow direction.
– Suppose there is no gain going South or going East, but a big win going SE: we would have to flip 2 variables at once
• Ignorance of the peak: am I done?

How to escape local optima?
• Restart every so often
• Don't be so greedy (more randomness)
– Walk algorithms: with some probability, flip a randomly chosen variable instead of a best-improving variable
– WalkSAT algorithms: confine the selection to variables in a single, randomly chosen unsatisfied clause (also faster!)
– Simulated annealing (a general technique): the probability of flipping is related to how much it helps/hurts
• Force the algorithm to move away from the current solution
– Tabu algorithms: refuse to flip back a variable flipped in the past t steps
– Novelty algorithms for WalkSAT: with some probability, refuse to flip back the most recently flipped variable in a clause
– Dynamic local search algorithms: gradually increase the weight of unsatisfied clauses

Useful Ideas
• With probability 1%
– Choose randomly among all variables that appear in at least one unsatisfied clause
• Else be greedier (the other 99%)
– Randomly choose an unsatisfied clause C
• Flipping any variable in C will at least fix C
– Choose the most-improving variable in C ... except ...
– with probability p, ignore the most recently flipped variable in C
• The "adaptive" part:
– If we improved the # of satisfied clauses, decrease p slightly
– If we haven't gotten an improvement for a while, increase p
– If we've been searching too long, restart the whole algorithm

WalkSAT (the first good local search algorithm for SAT, very influential)
• Choose an unsatisfied clause C
– Flipping any variable in C will at least fix C
• Compute a "break score" for each variable in C
– Flipping it would break how many other clauses?
• If C has any variables with break score 0
– Pick at random among the variables with break score 0
• Else, with probability p
– Pick at random among the variables with minimum break score
• Else
– Pick at random among all variables in C

Simulated Annealing (a popular general local search technique: often very effective, rather slow)
• Pick a variable at random. If flipping it improves the assignment, do it. Else flip anyway with probability p = e^(-Δ/T), where
– Δ = the damage to the score
– T = the "temperature"
• What is p for Δ = 0? For large Δ? What is p as T tends to infinity?
• Higher T = more random, non-greedy exploration
• As we run, decrease T from a high temperature to near 0 (a sketch of this flip rule follows).
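
A minimal sketch of the flip rule p = e^(-Δ/T) from the simulated-annealing slide. The helper name, the toy temperature, and the Δ value below are illustrative assumptions; the cooling schedule that lowers T over the run is left out.

import math, random

def sa_flip_step(assignment, var, delta, T, rng=random):
    """Flip var if it does not hurt; otherwise flip with probability exp(-delta / T).

    delta is the damage to the score (the drop in satisfied clauses) caused by
    flipping var; delta <= 0 means the flip helps or is neutral, so p = 1.
    """
    if delta <= 0 or rng.random() < math.exp(-delta / T):
        assignment[var] = not assignment[var]
        return True
    return False

# Damaging flips are rarely accepted at low temperature:
accepted = sum(sa_flip_step({1: True}, 1, delta=3, T=0.5) for _ in range(1000))
print(accepted, "of 1000 damaging flips accepted at T = 0.5")    # about 1000 * e^-6 ≈ 2
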
Genetic (Evolutionary) Algorithms (another popular general technique)
• Many local searches at once
– Consider 20 random initial assignments
– Each one gets to try flipping 3 different variables ("reproduction with mutation")
– Now we have 60 assignments
– Keep the best 20 ("natural selection"), and continue

Genetic (Evolutionary) Algorithms (another popular general technique, at least for evolutionary algorithms)
• Derive each new assignment by somehow combining two old assignments ("recombination" or "crossover"), not just by modifying one ("mutation")
[Figure: two parent bit-strings recombined by crossover into two child bit-strings, with one further bit changed by mutation]
• Good idea?
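
A minimal sketch of crossover plus mutation for bit-string assignments; single-point crossover, the mutation rate, and the example parents are illustrative assumptions, not taken from the slides.

import random

def crossover(parent1, parent2, rng=random):
    """Single-point crossover: swap the tails of two equal-length bit lists."""
    point = rng.randrange(1, len(parent1))        # cut strictly inside the strings
    return parent1[:point] + parent2[point:], parent2[:point] + parent1[point:]

def mutate(bits, rate=0.05, rng=random):
    """Flip each bit independently with probability rate."""
    return [b ^ (rng.random() < rate) for b in bits]

child1, child2 = crossover([1, 0, 1, 0, 1, 1, 1], [1, 1, 0, 0, 0, 1, 1])
print(mutate(child1), mutate(child2))

For SAT, the bit-strings would be truth assignments and the fitness used for "natural selection" would be the number of satisfied clauses, as in the population scheme described above.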