Lecture 4:
Beyond Classical Search
C.-C. Hung
Kennesaw State University
(Slides used in the classroom only)
Outline
Chapter 4: Beyond classical search
Hill Climbing (Recap)
Simulated Annealing
Local Beam Search
Genetic Algorithms
In Chapter 3
Chapter 3 addresses a single category of problems (the solution is a sequence of actions), in which environments are:
Observable,
Deterministic,
Known.
In Chapter 4
The algorithms are suitable for problems in which all that matters is the solution state, not the path cost to reach it.
Local search
Simulated annealing (from statistical physics)
Genetic algorithms (from evolutionary biology)
Local search algorithms
Search algorithms like BFS, DFS, or A* explore the search space systematically by keeping one or more paths in memory and by recording which alternatives have been explored.
When a goal is found, the path to that goal constitutes a solution.
Local search algorithms can be very helpful if we are interested in the solution state but not in the path to that goal. They operate only on the current state and move to neighboring states.
Recap: Hill Climbing
Hill Climbing …
Hill-climbing
Goal: Optimizing an objective function.
Does not require differentiable functions
Can be applied to “goal” predicate types of problems, e.g.:
Boolean satisfiability problems (SAT), with the number of satisfied clauses as the objective function.
Intuition: Always move to a better state
Heuristic Searches
Hill Climbing (the continuous analogue is gradient ascent/descent)
Special type of problem:
Don’t care how we got there.
Technique
Specify an evaluation function, e.
How close a state is to the solution
Randomly choose a state.
Only choose actions which improve e.
If cannot improve e, then perform a random restart
Choose another random state to restart the search from
Advantage
Only ever have to store one state (the present one)
No cycles: revisiting a state would require e to decrease, which the algorithm never allows.
Hill-climbing search
Problem: depending on initial state, can get stuck in local maxima (Figure 4.1)
Example – 8 queens problem
Place 8 queens on board
No queen can “take” another
Hill Climbing:
Throw queens on randomly
Evaluation
How many pairs attack each other
Move a queen out of another’s way
(this improves the evaluation function)
If this can’t be done
Throw queens on randomly again
Hill Climbing …
Mathematical Description
Hill climbing attempts to maximize (or minimize) a function f(x), where x ranges over discrete states. These states are typically represented by vertices in a graph, where edges encode nearness or similarity of states.
Hill climbing will follow the graph from vertex to vertex, always locally increasing (or decreasing) the value of f, until a local maximum (or local minimum) xm is reached.
Hill climbing can also operate on a continuous space: in that case, the algorithm is called gradient ascent (or gradient descent if the function is minimized).
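For the continuous case, here is a minimal gradient-ascent sketch in Python (the fixed step size alpha and the example function are illustrative assumptions, not from the slides):

def gradient_ascent(f_prime, x0, alpha=0.01, steps=1000):
    # Follow the gradient of f uphill from x0, using a fixed step size.
    x = x0
    for _ in range(steps):
        x += alpha * f_prime(x)  # move in the direction of increasing f
    return x

# Example: maximize f(x) = -(x - 3)^2, whose derivative is -2(x - 3).
x_max = gradient_ascent(lambda x: -2 * (x - 3), x0=0.0)  # converges near x = 3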
A function for Hill Climbing …
A function f(x,y)
Another function for Hill Climbing
Another function f(x,y)
Hill Climbing – Algorithm (Pseudo-Code)
Step 1: Pick a random point in the search space.
Step 2: Consider all the neighbors of the current state.
Step 3: Choose the neighbor with the best quality and move to that state.
Step 4: Repeat steps 2 through 3 until all the neighboring states are of lower quality.
Step 5: Return the current state as the solution state.
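A minimal Python sketch of these five steps, assuming the problem supplies a random_state() generator, a neighbors(state) function, and an objective f to maximize (all hypothetical names):

def hill_climbing(f, neighbors, random_state):
    # Steepest-ascent hill climbing: move to the best neighbor until stuck.
    current = random_state()              # Step 1: random starting point
    while True:
        candidates = neighbors(current)   # Step 2: all neighbors of the current state
        best = max(candidates, key=f)     # Step 3: neighbor with the best quality
        if f(best) <= f(current):         # Step 4: no neighbor is of higher quality
            return current                # Step 5: return the current state
        current = best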
Hill-climbing search
“Like climbing Everest in thick fog with amnesia”
Hill-climbing search: 8-queens problem
h = number of pairs of queens that are attacking each other, either directly or indirectly
h = 17 for the above state
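As a concrete illustration, a small Python helper that computes h for the usual one-queen-per-column encoding (state[c] is the row of the queen in column c; this encoding is an assumption, not fixed by the slide):

def attacking_pairs(state):
    # h = number of pairs of queens attacking each other (same row or same diagonal).
    n = len(state)
    h = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_row = state[i] == state[j]
            same_diagonal = abs(state[i] - state[j]) == j - i
            if same_row or same_diagonal:
                h += 1
    return h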
Hill-climbing search: 8-queens problem
A local minimum with h = 1
Properties of Hill-climbing
Hill-climbing resembles depth-first search because it tends to steadfastly move down one path.
Hill-climbing is sometimes also called greedy local search.
“Pure” hill-climbing does not support backtracking so each move is irrevocable.
The initial state is chosen at random! Ties are also broken randomly. The algorithm may return different solutions depending on the start state!
Some of the problems….
Local Maxima: Local peaks that aren’t as high as the globally maximum peak.
Plateaus: Regions of the state space that are flat (all states have virtually the same evaluation measure).
Ridges (1): Impossible to get to the higher slope in a single move; must move down first in order to move up later. (Figure 4.4, page 124 Textbook)
Ridges (2): Areas with steep slopes at the bottom but gentle slopes near the top. Can wander around aimlessly on the top.
Hill Climbing
Simulated Annealing
What is Annealing?
Annealing is a thermal process for obtaining low energy states of a solid in a heat bath.
The process contains two steps:
Increase the temperature of the heat bath to a maximum value at which the solid melts.
Decrease carefully the temperature of the heat bath until the particles arrange themselves in the ground state of the solid. Ground state is a minimum energy state of the solid.
The ground state of the solid is obtained only if the maximum temperature is high enough and the cooling is done slowly.
Simulated Annealing: Motivation
Annealing in metals.
Heat the solid state metal to a high temperature.
Cool it down very slowly according to a specific schedule.
If the heating temperature is sufficiently high to ensure a random state and the cooling process is slow enough to ensure thermal equilibrium, then the atoms will place themselves in a pattern that corresponds to the global energy minimum of a perfect crystal.
Simulated Annealing
The process of annealing can be simulated with the Metropolis algorithm, which is based on Markov Chain Monte Carlo (MCMC) techniques.
We can apply this algorithm to generate a solution to combinatorial optimization problems assuming an analogy between them and physical many-particle systems with the following equivalences:
Solutions in the problem are equivalent to states in a physical system.
The cost of a solution is equivalent to the “energy” of a state.
What is MCMC?
The process of annealing can be simulated with the Metropolis algorithm, which is based on Markov Chain Monte Carlo (MCMC) techniques.
Markov Chain – where we go next depends only on our current state (the Markov property).
Monte Carlo – estimating quantities by simulating random samples.
Simulated Annealing
Like hill-climbing, but probabilistically allows downhill moves, controlled by the current temperature T[k] and by how bad the move is.
Let T[1], T[2], … be a temperature schedule:
usually T[1] is high, and T[k] decreases gradually,
for example, T[k] = 0.9 * T[k-1].
Let E be the quality measure of a state.
Goal: maximize E.
Simulated Annealing Algorithm (Pseudo-Code)
Current = random state, k = 1, T[1] = initial temperature.
If T[k] = 0, stop and return Current.
Next = a random neighboring state.
If Next is better than Current, move there.
If Next is worse, apply the Metropolis criterion:
Let Δ = E(Current) – E(Next) (so Δ > 0 for a worse move).
Move to Next with probability e^(–Δ/T[k]).
k = k + 1; repeat from the second step.
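A compact Python sketch of this loop, assuming we maximize E, use a hypothetical random_neighbor(state) move generator, and cool geometrically as above:

import math, random

def simulated_annealing(E, random_neighbor, state, T=1.0, T_min=1e-3, alpha=0.9):
    # Maximize E; a worse move is accepted with probability exp(-delta/T).
    while T > T_min:
        nxt = random_neighbor(state)
        delta = E(state) - E(nxt)      # delta > 0 means nxt is worse
        if delta <= 0 or random.random() < math.exp(-delta / T):
            state = nxt                # always accept good moves, sometimes bad ones
        T *= alpha                     # cooling schedule T[k] = 0.9 * T[k-1]
    return state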
Boltzmann acceptance criterion
exp(–Δ/kT) is called the Boltzmann acceptance criterion, where k is Boltzmann’s constant (we set k = 1 here).
The Boltzmann criterion is then compared with a random number between 0 and 1.
When T is large, e^(–Δ/T) is close to e^0, or 1. So for large T, you go anywhere.
When T is small, e^(–Δ/T) is close to e^(–∞), or 0. So you avoid most bad moves.
After T reaches 0, one often finishes with simple hill-climbing.
Execution time depends on the cooling schedule; memory use is trivial.
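To make the effect of T concrete, here is the acceptance probability for the same bad move (Δ = 1) at a few temperatures (illustrative numbers only):

import math

delta = 1.0                      # size of the bad move
for T in (10.0, 1.0, 0.1):
    p = math.exp(-delta / T)
    print(f"T = {T}: accept with probability {p:.3f}")
# T = 10.0: accept with probability 0.905   (high T: go almost anywhere)
# T = 1.0:  accept with probability 0.368
# T = 0.1:  accept with probability 0.000   (low T: avoid most bad moves)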
Difficulty in Searching Global Optima
[Figure: following the descent direction from the starting point leads to a local minimum; a barrier to local search separates it from the global minimum.]
Local search techniques, such as the steepest-descent method, are very good at finding local optima. However, difficulties arise when the global optimum differs from a local optimum. Since all the immediate neighbors of a local optimum are worse than it in performance value, local search cannot proceed once trapped at a local optimum. We need some mechanism that helps us escape the trap of local optima, and simulated annealing is one such method.
Possible Solutions
Solution 1:
Randomly select another starting point whenever trapped at a local optimum — shake the picture horizontally.
Possible Solutions
Solution 2:
In addition to Solution 1, we allow the search to occasionally move in an ascending direction — shake the picture vertically.
Intuition of Simulated Annealing
Origin:
The annealing process of heated solids.
Intuition:
By allowing occasional ascent in the search process, we might be able to escape the trap of local minima.
The name simulated annealing originates from the simulation of the annealing process of heated solids. “In condensed matter physics, annealing denotes a physical process in which a solid in a heat bath is heated up by increasing the temperature of the heat bath to a maximum value at which all particles of the solid randomly arrange themselves in the liquid phase, followed by cooling through slowly lowering the temperature of the heat bath. In this way, all particles arrange themselves in the low energy ground state of a corresponding lattice.” (quoted from Simulated Annealing: Theory and Applications)
In solving combinatorial optimization problems, we make an analogy to the aforementioned process. The basic idea is that by allowing the search process to proceed in an unfavorable direction occasionally, we might be able to escape the trap of local optima and reach the global optima.
Consequences of the Occasional Ascents
Helps escape local optima (the desired effect).
Might pass the global optimum after reaching it (an adverse effect).
However, like a double-edged sword, allowing occasional ascent steps has two consequences. On one hand, it fulfills our desire to let the algorithm proceed beyond local optima. On the other hand, we might miss the global optimum by allowing the search process to pass through it. To maintain the desired effect and reduce the adverse effect, we need a sophisticated scheme to control the acceptance of occasional ascents, which is the heart of simulated annealing.
Simulated Annealing – Pseudo-Code
Step 1: Initialize – Start with a random initial placement. Initialize a very high “temperature”.
Step 2: Move – Perturb the placement through a defined move.
Step 3: Calculate score – Calculate the change in the score due to the move made.
Step 4: Choose – Depending on the change in score, accept or reject the move. The probability of acceptance depends on the current “temperature”.
Step 5: Update and repeat – Lower the temperature and go back to Step 2.
The process continues until the “quenching point” is reached.
Simulated Annealing: the code
create random initial solution γ
Eold = cost(γ);
for (temp = tempmax; temp >= tempmin; temp = next_temp(temp)) {
    for (i = 0; i < iterations_per_temp; i++) {   // inner loop reconstructed; body was truncated
        perturb(γ);                               // make a random move
        Enew = cost(γ);
        delta = Enew - Eold;
        if (delta > 0) {                          // candidate is worse (higher cost)
            if (random() >= exp(-delta / (K * temp)))
                undo_func(γ);                     // rejected bad move
            else
                Eold = Enew;                      // accepted bad move
        } else {
            Eold = Enew;                          // always accept good moves
        }
    }
}
Convergence of simulated annealing
Metropolis criterion
Ball on terrain example – SA vs. Greedy Algorithms
[Two figure panels: Hill Climbing vs. SA]
SA Cooling Schedule
Starting Temperature
Final Temperature
Temperature Decrement
Iterations at each temperature
Practical Issues with simulated annealing
Start at a temperature where 50% of bad moves are accepted (see the sketch after this list).
Each cooling step reduces the temperature by 10%
The number of iterations at each temperature should attempt to move between 1-10 times each “element” of the state.
The final temperature should not accept bad moves; this step is known as the quenching step.
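One hedged way to implement the first guideline (50% acceptance of bad moves): sample some bad-move deltas from random perturbations, then solve exp(–mean(Δ)/T0) = 0.5 for T0. A sketch (the sampling of bad_deltas is assumed to happen elsewhere):

import math

def initial_temperature(bad_deltas, target_acceptance=0.5):
    # Choose T0 so the average sampled bad move is accepted with the target probability.
    mean_delta = sum(bad_deltas) / len(bad_deltas)
    return -mean_delta / math.log(target_acceptance)

# Example with made-up sampled deltas:
print(initial_temperature([2.0, 3.5, 1.2, 4.1]))  # about 3.9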
Applications of SA
Basic Problems
Traveling salesman
Graph partitioning
Matching problems
Graph coloring
Scheduling
Engineering
VLSI design
Placement
Routing
Array logic minimization
Layout
Facilities layout
Image processing
Code design in information theory
Applications of SA
Circuit partitioning and placement.
Strategy scheduling for capital products with complex product structure.
Umpire scheduling in US Open Tennis tournament!
Event-based learning situations.
Jigsaw puzzles – Intuitive usage of SA
Given a jigsaw puzzle, one has to obtain the final shape by fitting all the pieces together.
Starting from a random configuration, the human brain unconditionally chooses certain moves that tend toward the solution.
However, certain moves that may or may not lead to the solution are accepted or rejected with a certain small probability.
The final shape is obtained as a result of a large number of iterations.
Conclusions for SA
SA algorithms are usually better than greedy algorithms for problems that have numerous locally optimal solutions.
SA is not the best solution to circuit partitioning or placement; a network-flow approach to these problems runs much faster.
SA guarantees convergence given a sufficiently large number of iterations (and a suitable cooling schedule).
Recap: Control of Annealing Process
Acceptance of a search step (Metropolis Criterion):
Assume the performance change in the search direction is Δ.
Accept an ascending step (Δ > 0) only if it passes a random test: exp(–Δ/T) > random[0, 1).
Always accept a descending step, i.e. Δ ≤ 0.
Suppose that a step in the search direction produces a difference of Δ in the performance value. The acceptance criterion, often referred to as the Metropolis criterion, is as follows:
If it is a favorable direction, i.e. Δ ≤ 0, we always accept it. Otherwise, the step is accepted with probability exp(–Δ/T), where T is a parameter.
Recap: Control of Annealing Process
Acceptance of a search step (Metropolis Criterion):
Use the following rule, called the Metropolis criterion, to decide whether to keep the move or instead revert to the original configuration: if Δ ≤ 0, accept the move; otherwise accept it with probability exp(–Δ/T).
Recap: Simulated Annealing Algorithm
0) k = 0;
1) Search (i → j), with performance difference Δ;
2) If Δ ≤ 0 then accept; else if exp(–Δ/T(k)) > random[0, 1) then accept;
3) Repeat 1) and 2) for L(k) steps;
4) k = k + 1;
5) Repeat 1) – 4) until the stopping criterion is met.
This is an abstract description of a simulated annealing algorithm. With proper selection of parameters, it is proven to converge to a global optimum with probability 1.
References
E. Aarts and J. Korst, Simulated Annealing and Boltzmann Machines, Wiley, 1989.
R. Duda, P. Hart, and D. Stork, Pattern Classification, Wiley-Interscience, 2001.
R. Otten and L. van Ginneken, The Annealing Algorithm, Kluwer Academic Publishers, 1989.
N. Sherwani, Algorithms for VLSI Physical Design Automation, Kluwer Academic Publishers, 1999.
Local Beam Search
Local beam search
Keep track of k states rather than just one.
Start with k randomly generated states.
At each iteration, all the successors of all k states are generated.
If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
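A minimal Python sketch of this procedure, assuming hypothetical is_goal, successors, and random_state helpers and an objective f to maximize:

import heapq

def local_beam_search(f, successors, is_goal, random_state, k=4):
    # Keep the k best states; expand all of them at each iteration.
    states = [random_state() for _ in range(k)]        # k random starting states
    while True:
        pool = [s for st in states for s in successors(st)]
        goals = [s for s in pool if is_goal(s)]
        if goals:
            return goals[0]                            # stop when any successor is a goal
        states = heapq.nlargest(k, pool, key=f)        # k best of the complete list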
Exercise: Simulated Annealing (SA)
How do we use SA to maximize the function f(x) = x² on the integer interval [0, 31]?
Review the SA algorithm first.
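One possible setup (a sketch, not the only encoding): represent x as a 5-bit string so that a random move flips one bit, and accept bad moves via the Metropolis criterion:

import math, random

def sa_max_x_squared(T=30.0, T_min=0.1, alpha=0.9, moves_per_T=20):
    # Maximize f(x) = x^2 over the integers 0..31 using one-bit-flip neighbors.
    x = random.randint(0, 31)
    while T > T_min:
        for _ in range(moves_per_T):
            nxt = x ^ (1 << random.randint(0, 4))   # flip one of the 5 bits
            delta = x * x - nxt * nxt               # delta > 0 means nxt is worse
            if delta <= 0 or random.random() < math.exp(-delta / T):
                x = nxt
        T *= alpha
    return x                                        # usually ends at x = 31 (f = 961)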
Questions & Suggestions?
The End
Simulation
Simulated Annealing — step-by-step trace (each frame plots function value vs. x):
1. Random starting point (T = very high)
2. Random step (T = very high)
3. Even though E is lower, accept (T = very high)
4. Next step; accept since E is higher (T = very high)
5. Next step; accept since E is higher (T = very high)
6. Next step; accept even though E is lower (T = high)
7. Next step; accept even though E is lower (T = high)
8. Next step; accept since E is higher (T = medium)
9. Next step; E is lower, but reject (T is falling) (T = medium)
10. Next step; accept since E is higher (T = medium)
11. Next step; accept since the change in E is small (T = low)
12. Next step; accept since E is larger (T = low)
13. Next step; reject since E is lower and T is low (T = low)
14. Eventually converges to the maximum (T = low)
Appendix (ignore)
Simulated Annealing
Acceptance criterion and cooling schedule
Hill-climbing – depth-first with a heuristic (refined version)
function hillclimbing(node)
    node = selectrandomnode;
    loop
        if (node is goal) then return(node);
        successors = generatesuccessors(node);
        bestsuccessor = selectbestnode(successors);
        if (value(bestsuccessor) <= value(node))   // no strictly better neighbor: stop
            then return(node);
        node = bestsuccessor;
    end loop
Hill Climbing
Implementation of Simulated Annealing
Select a local search scheme
Define a neighborhood structure
Sample a new design from the neighboring designs of the current design.
Accept the new design based on Metropolis Criterion.
The implementation of the simulated annealing algorithm is problem-dependent. First, a proper local search scheme must be chosen, and a neighborhood structure must be defined. The search is then usually carried out by randomly selecting one of the neighbors of the current design.
Simulated Annealing
The distribution used to decide whether we accept a bad move is known as the Boltzmann distribution.
Decrease the temperature slowly, accepting fewer bad moves at each temperature level, until at very low temperatures the algorithm becomes a greedy hill-climbing algorithm.
This distribution is very well known in solid-state physics and plays a central role in simulated annealing: P(γ) = exp(–E_γ/T) / Z, where γ is the current configuration of the system, E_γ is the energy associated with it, and Z is a normalization constant.
Practical Issues with simulated annealing
The cost function must be carefully designed; it has to be “fractal and smooth”.
The energy function on the left (labeled “Good”) would work with SA, while the one on the right (labeled “Bad”) would fail.
Practical Issues with simulated annealing
The cost function should be fast, since it is going to be called “millions” of times.
It is best if we can calculate just the delta produced by the modification instead of traversing the whole state (see the sketch below).
This is dependent on the application.
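For example, in the 8-queens problem the change in h after moving a single queen can be computed from only the pairs involving that queen, instead of recounting all 28 pairs (a sketch; conflicts is a hypothetical helper in the spirit of attacking_pairs above):

def conflicts(state, col):
    # Count attacking pairs that involve only the queen in column `col`.
    return sum(
        state[col] == state[j] or abs(state[col] - state[j]) == abs(col - j)
        for j in range(len(state)) if j != col
    )

def delta_h(state, col, new_row):
    # Change in h when the queen in column `col` moves to `new_row`.
    before = conflicts(state, col)
    moved = list(state)
    moved[col] = new_row
    return conflicts(moved, col) - before   # O(n) instead of O(n^2)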
Practical Issues with simulated annealing
Asymptotically, simulated annealing converges to globally optimal solutions.
In practice, the convergence of the algorithm depends on the cooling schedule.
There are some suggestions about the cooling schedule, but it still requires a lot of testing and usually depends on the application.
[Figure: SA cooling schedule — cost function C vs. number of iterations, from INIT_TEMP to FINAL_TEMP. Bad moves are accepted with probability e^(–ΔC/temp); good moves are accepted unconditionally; hill-climbing phases are marked along the curve.]
[Figure: ball on terrain — from the initial position of the ball, a greedy algorithm gets stuck at a locally optimal solution; simulated annealing explores more, choosing uphill moves with a small probability, and upon a large number of iterations converges to the global optimum.]