
Ants for Sampling in the Nested Partition Algorithm
Sameh Al-Shihabi
Industrial Engineering Department University of Jordan, Amman 11942, Jordan email: shihabi sameh@hotmail.com
Abstract
The Nested Partition algorithm suffers from a lack of information gathering and sharing, despite the computational effort dedicated to sampling the different subregions at each step of the algorithm. It is also noticed that the solution quality of the NP algorithm largely depends on the quality of the samples generated. Ants are used in this work as a means to improve the sample quality, through information building and sharing, to solve the Traveling Salesman Problem TSP. Some of the samples in this hybrid algorithm are found using the Max-Min Ant System MMAS, while other samples are generated by perturbing the ants' tours.
1 Introduction
The Nested Partition NP algorithm is a new optimization algorithm that is based on the concept of adaptive sampling. The NP algorithm steers the sampling effort toward a decreasing search space until the search space is indivisible. The algorithm partitions the feasible region into subregions, samples each subregion, and chooses the most promising subregion to partition further. Regions not belonging to the most promising subregion are aggregated into a surrounding subregion that is also sampled. The algorithm moves back to a larger subregion if the surrounding subregion is found to be more promising. A small sketch of this loop is given below.
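The sketch below runs this partition/sample/backtrack loop on a made-up 5-city instance. The distance matrix, the prefix encoding of regions, the sample size of 30 tours per region, and backtracking to the root region are all illustrative assumptions made for this sketch, not the method of [8].

    import random

    # Toy illustration of the pure NP loop: a region is a fixed tour prefix
    # starting at city 0; the surrounding region is every tour that does not
    # share this prefix. Distances below are made up.
    D = [[0, 2, 9, 10, 7],
         [2, 0, 6, 4, 3],
         [9, 6, 0, 8, 5],
         [10, 4, 8, 0, 6],
         [7, 3, 5, 6, 0]]
    N = len(D)

    def tour_length(tour):
        return sum(D[tour[i]][tour[(i + 1) % N]] for i in range(N))

    def random_tour_with_prefix(prefix):
        rest = [c for c in range(N) if c not in prefix]
        random.shuffle(rest)
        return list(prefix) + rest

    def random_tour_outside(prefix):
        # rejection sampling of a tour that does not extend the given prefix
        while True:
            t = random_tour_with_prefix([0])
            if t[:len(prefix)] != list(prefix):
                return t

    def nested_partition(samples_per_region=30):
        sigma = (0,)                                    # most promising region
        best = min((random_tour_with_prefix(sigma) for _ in range(samples_per_region)),
                   key=tour_length)
        while len(sigma) < N:
            remaining = [c for c in range(N) if c not in sigma]
            candidates = []                             # (length, tour, new prefix)
            for c in remaining:                         # one subregion per next city
                prefix = sigma + (c,)
                tours = [random_tour_with_prefix(prefix) for _ in range(samples_per_region)]
                t = min(tours, key=tour_length)
                candidates.append((tour_length(t), t, prefix))
            if len(sigma) > 1:                          # sample the surrounding region
                tours = [random_tour_outside(sigma) for _ in range(samples_per_region)]
                t = min(tours, key=tour_length)
                candidates.append((tour_length(t), t, None))
            length, tour, prefix = min(candidates, key=lambda c: c[0])
            if length < tour_length(best):
                best = tour
            sigma = (0,) if prefix is None else prefix  # backtrack or partition further
        return best, tour_length(best)

    print(nested_partition())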
It is shown in [8] that this algorithm forms a Markov chain in which the global optimum solution is an absorbing state, so once the algorithm reaches this state it never leaves it. Making the correct moves between the different transient states, until being absorbed in the global optimum state, depends on many factors such as the partitioning scheme implemented and the quality of the samples generated [7].
The pure NP algorithm suffers from the problem of "fresh starting" at each step of the algorithm, that is, no information about previous good samples is kept. The problem is partially solved using the concept of inheritance [4], where some of the samples taken from the most promising subregion are passed to the resulting subregions. These samples are used as the initial population of a Genetic Algorithm GA to find good samples from the partitioned subregions [4].
In this work, a Max-Min Ant System MMAS is used to find some of the samples from each subregion, and only the global best tour generated is inherited by the subregion that includes this tour. The MMAS is used to guarantee that each solution point has a positive probability of being selected, which is a necessary condition for the convergence of the NP method [8].
Ant systems all share the concept of information sharing through the pheromone matrix. Ants trying to discover the shortest tour of a Traveling Salesman Problem TSP [5] save information about good edges to a pheromone matrix and use this information to build new tours. The MMAS limits the pheromone levels between a maximum and a minimum value so that stagnation is avoided and the discovery of new tours is guaranteed.
This work extends the work of [7], where a generic partitioning scheme is used and a two-step sampling scheme is implemented. The first sampling step uses ants to find good routes in each subregion, while the second sampling step finds the rest of the tours by perturbing the ants' tours from the first step. More emphasis is given to the surrounding subregion and to the subregion inheriting the incumbent solution, such that more samples are found using ants in these two subregions.
2 NP/MMAS Algorithm
Details about solving the TSP using the NP algorithm and the MMAS can be found in [7] and [9].
Due to the large number of symbols used to explain the proposed algorithm, this section begins by presenting most of the notation used to describe the algorithm. Most of the notation is adopted from [7] and [9].
k The iteration number.
Nc The number of cities of the TSP instance.
dij The distance between city i and city j.
Lg(k) The value of the best solution found by iteration k.
Tg(k) A vector showing the best tour found by iteration k.
d(k) The depth of iteration k, which represents the number of edges fixed by the most promising subregion.
σ(k) The most promising subregion at iteration k.
s The number of tours sampled from each subregion.
Mσ(k) The number of subregions that σ(k) is divided into.
θji Sample i = 1, 2, …, s taken from subregion j = 1, 2, …, Mσ(k) + 1.
τij The pheromone value between city i and city j.
τmax(k) The maximum value of the pheromone at iteration k.
τmin(k) The minimum value of the pheromone at iteration k.
ρ The evaporation rate of the pheromone.
ηij The heuristic information, taken as 1.0/dij for the TSP.
α A constant indicating the importance of the pheromone value.
β A constant indicating the importance of the heuristic information.
Pbest The probability that the best solution is constructed again by the ants.
ET The number of cities selected using Equation 1 divided by Nc.
fsurr The frequency of finding a sample by an ant from the surrounding subregion.
fg The frequency of finding a sample by an ant from the subregion having the current solution.
Q The probability that an ant sampling the surrounding subregion avoids the first arc of the current solution.
1. Initialization: Starting with city 0, the minimum distance heuristic is used to find an initial solution, Lg(0) and Tg(0). This solution is inherited by the subregion that contains the first arc of this solution. The pheromone values of all edges are initialized to their maximum value τmax according to Equation 3. A sketch of this construction is given below.
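A minimal sketch of the minimum distance (nearest-city) construction, assuming the distances are stored in a matrix; the function name is an illustrative assumption.

    def nearest_neighbor_tour(dist, start=0):
        """Greedy minimum-distance construction: from the current city,
        always move to the nearest city that has not been visited yet."""
        n = len(dist)
        tour = [start]
        unvisited = set(range(n)) - {start}
        while unvisited:
            current = tour[-1]
            nxt = min(unvisited, key=lambda j: dist[current][j])
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour  # the tour closes implicitly by returning to the start city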
2. Generic Partitioning: For the first iteration, and starting with city 0, the solution space is divided into Nc − 1 subregions, where in each subregion the first edge is formed between city 0 and one of the other Nc − 1 cities. For this stage, the whole feasible region is considered the most promising subregion, σ(0) = Θ, and d(σ(0)) = 0.
At later iterations k > 0, the most promising subregion σ(k) is divided into Mσ(k) = Nc − d(σ(k)) subregions. Each subregion selects one of the remaining Nc − d(σ(k)) cities to occupy the d(σ(k))-th position, as shown in Figure 1. The abandoned subregions are aggregated to form the surrounding subregion Θ \ σ(k). A sketch of this partitioning step follows.
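One possible encoding of this partitioning step, assuming a subregion is identified by the prefix of cities fixed so far; the function name and the list-of-lists representation are assumptions made for illustration.

    def partition(sigma, n_cities):
        """Split the most promising subregion (a fixed tour prefix starting
        at city 0) into one subregion per remaining city."""
        remaining = [c for c in range(n_cities) if c not in sigma]
        return [sigma + [c] for c in remaining]

    # Example: after fixing the edge (0, 2), sigma = [0, 2] and the subregions
    # are [0, 2, 1], [0, 2, 3], ...; every tour that does not start with the
    # prefix [0, 2] belongs to the surrounding subregion.
    print(partition([0, 2], n_cities=6))   # [[0, 2, 1], [0, 2, 3], [0, 2, 4], [0, 2, 5]]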
Figure 1: The partitioning steps of the first 2 iterations using the generic partitioning scheme [7].

3. Sampling: The MMAS is implemented in this step, where some of the samples are found using ants while the rest are generated by perturbing the best tour found by the ants. The first tour out of each subregion is generated using ants. Each ant is equipped with a list of the cities already visited, and an ant at city i chooses city j to visit probabilistically according to
p_{ij} = \frac{\tau_{ij}^{\alpha} \cdot \eta_{ij}^{\beta}}{\sum_{l \in \{\text{cities not visited}\}} \tau_{il}^{\alpha} \cdot \eta_{il}^{\beta}}. \quad (1)
Depending on the subregion being sampled, ants copy to their lists the cities already fixed by the subregion. Ants sampling the surrounding subregion deliberately avoid an edge already selected by the most promising subregion [1]. In addition, ants sampling the surrounding subregion will, with probability Q, avoid the first edge, to give the edge connected to the other side of the first city a chance to be selected, which improves the quality of the samples generated in the surrounding subregion [1]. A sketch of this construction step is given below.
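The probabilistic choice of Equation 1, restricted to the cities not yet in the ant's list, might look like the following sketch. The roulette-wheel helper, the function name, and the optional forbidden edge (used when sampling the surrounding subregion) are assumptions; the pre-filled visited list is meant to hold the cities fixed by the subregion.

    import random

    def choose_next_city(current, visited, tau, eta, alpha, beta, forbidden=None):
        """Roulette-wheel selection according to Equation 1. `visited` already
        contains the cities fixed by the subregion being sampled; `forbidden`
        optionally names one city whose edge from `current` must be avoided."""
        n = len(tau)
        candidates = [j for j in range(n) if j not in visited and j != forbidden]
        weights = [(tau[current][j] ** alpha) * (eta[current][j] ** beta)
                   for j in candidates]
        r = random.uniform(0.0, sum(weights))
        acc = 0.0
        for j, w in zip(candidates, weights):
            acc += w
            if acc >= r:
                return j
        return candidates[-1]   # numerical safeguard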
The rest of the tours are generated using a computationally inexpensive algorithm that selects two edges of a tour, connects the first vertex of the first edge to the first vertex of the second edge, and connects the second vertex of the first edge to the second vertex of the second edge [7]. Unlike the 2-opt heuristic, this algorithm does not consider the change in performance. A fixed number of tours s is generated out of each region, and the performance value of each sample L(θji), j = 1, 2, …, Mσ(k) + 1 and i = 1, 2, …, s, is the tour length of the sample. A sketch of this perturbation move follows.
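A minimal sketch of this perturbation, assuming the reconnection described above is realised by reversing the tour segment between the two chosen edges (which produces exactly the two new edges named in the text); the function name is an assumption, and no length check is performed, matching the description.

    import random

    def random_edge_exchange(tour):
        """Pick two non-adjacent edges (a,b) and (c,d) of the tour at random,
        reconnect them as (a,c) and (b,d) by reversing the segment in between,
        and accept the new tour regardless of its length."""
        n = len(tour)
        i, j = sorted(random.sample(range(n), 2))
        if j - i < 2 or (i == 0 and j == n - 1):
            return tour[:]                      # the edges share a city: nothing to exchange
        new_tour = tour[:]
        new_tour[i + 1:j + 1] = reversed(new_tour[i + 1:j + 1])
        return new_tour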
It was observed during the development of the proposed algorithm that having high-quality samples from the surrounding subregion improves the solution quality. Due to this observation, more samples are found using ants from the surrounding subregion: one out of every fsurr samples from the surrounding subregion is found using an ant. Similarly, one out of every fg samples taken from the subregion inheriting the current solution is found using an ant.
4. Pheromone Updating: The pheromone matrix is updated according to

\tau_{ij} = \rho \cdot \tau_{ij} + \frac{1.0}{L_{sa}} \cdot \mathbf{1}\{\text{edge}(i,j) \in T_{sa}\}, \quad (2)

where L_{sa} stands for the route length of the ant selected to update the pheromone matrix, T_{sa} is the tour of this ant, and ρ is the evaporation rate of the pheromone trail.

Equation 2 clearly shows that pheromone evaporation takes place on all arcs, while the addition of more pheromone only takes place along the edges belonging to the route forming T_{sa}. In order to avoid stagnation and give the ants more chance to discover new routes, the algorithm chooses the best tour generated among all the subregions and uses this tour to update the pheromone matrix if it is within 5% of the current solution; otherwise, the current solution is used to update the pheromone matrix. A sketch of this update is given below.
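A sketch of the update of Equation 2, including the rule above for choosing between the best tour of the current iteration and the current solution. The 5% threshold follows the text; the function name, variable names, and the symmetric deposit on both directions of an edge are assumptions.

    def update_pheromone(tau, iter_best, iter_best_len, global_best, global_best_len, rho):
        """Apply Equation 2: evaporate every arc, then deposit 1/L_sa along
        the edges of the selected tour."""
        # Select the tour used for the update: the iteration-best tour if it is
        # within 5% of the current solution, otherwise the current solution.
        if iter_best_len <= 1.05 * global_best_len:
            selected, length = iter_best, iter_best_len
        else:
            selected, length = global_best, global_best_len

        n = len(tau)
        for i in range(n):                   # evaporation on every arc (rho * tau)
            for j in range(n):
                tau[i][j] *= rho
        deposit = 1.0 / length
        for a, b in zip(selected, selected[1:] + selected[:1]):
            tau[a][b] += deposit             # reinforce edges of the selected tour
            tau[b][a] += deposit             # symmetric TSP: keep tau symmetric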
The value of τ is limited between a maximum and a minimum value according to

\tau_{max}(k) = \frac{1}{1-\rho} \cdot \frac{1}{L_g(k)}, \quad (3)

while the value of τmin(k) is found using

\tau_{min}(k) = \frac{\tau_{max}(k) \cdot (1 - \sqrt[n]{p_{best}})}{(avg) \cdot \sqrt[n]{p_{best}}}. \quad (4)
Details about the calculation of these two values are explained in [9]. It should be noted that the pheromone matrix is updated even if the samples generated from a region are not generated by ants. A sketch of the computation of the two trail limits follows.
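Under the definitions of Equations 3 and 4, the trail limits can be computed as in the sketch below; the value of avg is left as a parameter, since its exact computation is only detailed in [9], and the function name is an assumption. Every entry of τ is then kept within [τmin, τmax].

    def trail_limits(best_length, rho, p_best, n_cities, avg):
        """Compute tau_max and tau_min from Equations 3 and 4; `avg` is the
        average number of choices available to an ant, as discussed in [9]."""
        tau_max = (1.0 / (1.0 - rho)) * (1.0 / best_length)
        root = p_best ** (1.0 / n_cities)        # n-th root of p_best
        tau_min = tau_max * (1.0 - root) / (avg * root)
        return tau_max, tau_min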
5. Calculating the Promising Index: The promising index of each subregion is taken as

I(\sigma_j(k)) = \min_{i=1,2,\ldots,s} L(\theta_{ji}), \quad j = 1,2,\ldots,M_{\sigma(k)}+1. \quad (5)
The subregion having the minimum promising index is considered the most promising subregion. The value of Lg(k) and the vector Tg(k) are updated if a sample has a shorter route length than the current solution of iteration k − 1. A sketch of this selection is given below.
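The promising-index computation of Equation 5 and the choice of the next most promising subregion might be sketched as follows; storing the samples as a list of (length, tour) pairs per subregion is an assumption.

    def most_promising(samples_per_subregion):
        """Equation 5: the promising index of a subregion is the shortest tour
        length among its samples; the subregion with the smallest index wins.
        `samples_per_subregion[j]` is a list of (length, tour) pairs, with the
        last entry of the outer list holding the surrounding subregion."""
        indices = [min(length for length, _ in samples)
                   for samples in samples_per_subregion]
        j_star = min(range(len(indices)), key=indices.__getitem__)
        return j_star, indices[j_star]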
6. Backtracking: If the surrounding subregion is found to be more promising than the other subregions, the algorithm moves back to the first subregion that contains the new incumbent solution.
3 Numerical Experiment
All the problem instances used in this section are taken from the TSPLIB website. The CPU time (T) presented is in seconds, measured on a Celeron/1.8 GHz processor. The term equivalent tours (ET) represents the number of cities selected using Equation 1 divided by the number of cities Nc of the chosen instance. All the tables presented show the best solution found, the worst solution, the average solution, and the standard deviations of the solutions, the CPU time, and the ET.
The default parameters used in all experiments are: α = 1.0, β = 5.0, ρ = 0.7, Pbest = 0.05, s = 1000, fsurr = 25, fg = 50 and Q = 10%. If a city is chosen using Equation 1, then this city is chosen from the nearest 20 cities to the current one. If all of these cities are already visited, the city chosen is the one with the maximum value of η_{ij}^{α}·τ_{ij}^{β}. The parameters are chosen based on the computational experience gained while developing the proposed algorithm, and more work is still needed to fine-tune the algorithm. These settings are collected in the sketch below.
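For reference, the default settings above can be collected in a single configuration, and the nearest-20 candidate-list rule can be written as a small helper. The dictionary keys, the helper name, and its exact signature are illustrative assumptions; the fallback rule follows the text.

    DEFAULT_PARAMS = {
        "alpha": 1.0, "beta": 5.0, "rho": 0.7, "p_best": 0.05,
        "s": 1000, "f_surr": 25, "f_g": 50, "Q": 0.10,
        "candidate_list_size": 20,
    }

    def candidate_cities(current, visited, dist, tau, eta, alpha, beta, size=20):
        """Restrict Equation 1 to the nearest `size` unvisited cities; if all
        of them are visited, fall back to the unvisited city with the largest
        eta^alpha * tau^beta value, as described in the text."""
        n = len(dist)
        nearest = sorted((j for j in range(n) if j != current),
                         key=lambda j: dist[current][j])[:size]
        candidates = [j for j in nearest if j not in visited]
        if candidates:
            return candidates
        unvisited = [j for j in range(n) if j not in visited and j != current]
        best = max(unvisited,
                   key=lambda j: (eta[current][j] ** alpha) * (tau[current][j] ** beta))
        return [best]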
This section first presents the sensitivity of the algorithm with respect to β and s. A comparison is then conducted between the proposed algorithm and the pure NP algorithm, where the value of α is set to 0 and the algorithm is run again. Finally, the results obtained for a number of symmetric TSP instances are presented.
3.1 Sensitivity Analysis
Two problem instances are used as test problems, namely eil51 and kro100, where all results presented are averaged over 15 independent runs. The optimum tour length for the eil51 problem instance is 426 and for kro100 it is 21282.

Table 1: The effect of changing β on the solution quality for the eil51 and kro100 problem instances.

instance   β   best    worst   avg. sol.   σsol.   avg. ET   σET    avg. T   σT
eil51      2   428     442     434.2       2.6     3420      1084   13.2     4.2
eil51      4   428     438     433.6       2.5     3947      1253   15.1     4.6
eil51      6   427     439     430.7       3.4     3537      1003   13.6     3.8
kro100     2   21543   22595   21951.7     282.9   10306     3083   140.1    38.8
kro100     4   21282   22388   21583.3     327.7   9885      2155   134.9    27.6
kro100     6   21282   21636   21343.8     105.4   10940.9   2946   148.1    38.7
Table 2: The effect of changing the value of s on the solution quality of eil51 and kro100 problem instances.

instance   s      best    worst   avg. sol.   σsol.   avg. ET   σET    avg. T   σT
eil51      250    429     441     433.6       2.9     1709      473    4.6      1.2
eil51      500    426     441     430.6       3.7     2276      485    7.5      1.6
eil51      750    426     441     430.7       2.2     3146      1055   11.3     3.8
eil51      1000   426     438     430.2       3.9     4633      1444   17.8     5.6
kro100     250    21292   22559   21686.9     295.0   6594      1877   64.3     18.1
kro100     500    21292   22500   21626.8     230.7   9959      3622   112.3    4.0
kro100     750    21282   21942   21605.1     217.2   8852      2534   112.1    30.4
kro100     1000   21292   21733   21407.8     121.3   10382     3312   144.3    42.4
3.1.1 The Effect of β
The value of β is varied between 2 and 6 in steps of 2. Table 1 shows the effect of changing β on the solution quality. It is clear from Table 1 that increasing the value of β improves the solution quality obtained. Before running this experiment it was expected that β = 2.0 would be the appropriate value to use, as in the results presented in [9]; however, it is found that a higher value of β improves the quality of the solution.
3.1.2 The Effect of s
The effect of changing the value of s is clearer for kro100 than for eil51, as shown in Table 2. It is interesting to note that the computation time for kro100 is roughly the same for s = 500, 750 and 1000. This can be explained by the number of backtracking steps performed in each trial: for s = 1000 more computational effort is spent at each depth but less backtracking takes place, while for lower values of s less sampling effort is spent at each step and more backtracking takes place.
3.2 NP Algorithm
To reflect the effect of information sharing through the pheromone matrix, the value of α is set to 0 and the same algorithm is run again. The results obtained for eil51 and kro100 are shown in Table 3, using the default parameters presented above. It is clear from this table that the solution quality is considerably worse without the information building and sharing that the MMAS offers.

Table 3: The effect of freezing the pheromone matrix on the solution quality of eil51 and kro100 problem instances. Results are averaged using 15 different runs.

instance   best    worst   avg. sol.   σsol.   avg. ET   σET    avg. T   σT
eil51      429     451     437.1       6.2     3597      1308   13.9     4.8
kro100     21353   23254   22362.2     548.1   11164     3268   158.4    45.9
Table 4: The solution details for some symmetric TSP instances. Results are averaged using 15 different runs.

instance   opt.    best    worst   avg. sol.   σsol.   avg. ET   σET      avg. T   σT
berlin52   7542    7542    7902    7608.4      105.5   3584      1237     14.1     4.6
eil76      538     538     551     544.4       3.8     6332      1896     50.9     14.5
eil101     629     630     662     643.9       8.4     11731     3107     169.3    45.1
krob150    26130   26334   27246   26725.6     255.5   23882     8515.5   851.0    284.1
d198       15780   15893   16072   15976.9     49.3    31165     12340    1418.4   538.5
3.3 Symmetric TSP
A number of symmetric TSP instances are solved using the proposed algorithm, and the results are presented in Table 4. The algorithm recognized the optimum solution for berlin52 and eil76 and found very good solutions for the rest of the instances.
4 Conclusion and Future Work
We presented in this work a novel hybrid algorithm that combines the NP method and the MMAS. The hybrid benefits from the capabilities offered by both algorithms: the NP as a tool to concentrate the sampling in a decreasing search space, and the MMAS with its pheromone matrix as a means to save information and share it between the different subregions. The proposed algorithm could be implemented on parallel processors, which would greatly decrease the computation time.
More work is still needed to fine-tune this algorithm or to adopt a local heuristic, such as the 2-opt or Lin-Kernighan heuristics, to improve it. Other partitioning and backtracking schemes can also be tried instead of the ones used in this work.
References
[1] Al-Shihabi, S.: Backtracking ant system for the traveling salesman problem. Submitted to the Fourth International Workshop on Ant Colony Optimization and Swarm Intelligence, 2004.
[2] Dorigo, M., Maniezzo, V., Colorni, A.: The ant system: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man and Cybernetics-Part B 26(1996) 29–42.
[3] Dorigo, M., Gambardella, L.: Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation 1(1997) 53–66.
[4] Kim, J.: Optimization under uncertainty with application to data clustering. Ph.D. Dissertation, Iowa State University, 2002.
[5] Lawler, E., Lenstra, J., Rinnooy Kan, A., Shmoys, D.: The traveling salesman problem. John Wiley & Sons, 1985.
[6] Shi, L., Ólafsson, S.: An integrated framework for deterministic and stochastic optimization. Proceedings of the 1997 Winter Simulation Conference (1997) 358–365.

[7] Shi, L., Ólafsson, S.: New parallel randomized algorithms for the traveling salesman problem. Computers & Operations Research 26 (1999) 371–394.
[8] Shi, L., Ólafsson, S.: Nested partition method for global optimization. Operations Research 48 (2000) 390–407.
[9] Stützle, T., Hoos, H.: Max-Min ant system. Future Generation Computer Systems 18 (2000) 889–914.