
CMS (2006) 3:331–348
DOI 10.1007/s10287-006-0023-y
ORIGINAL PAPER
An algorithm for the job shop scheduling problem based on global equilibrium search techniques
Panos M. Pardalos · Oleg V. Shylo
Published online: 13 June 2006 © Springer-Verlag 2006
Abstract The job shop scheduling problem is considered, and an algorithm based on the global equilibrium search method is proposed for its solution. Computational experiments using well-known benchmark problems are presented. Several new upper bounds for these problems are obtained.
Keywords Job shop scheduling problem · Makespan · Global equilibrium search · Metaheuristics
MSC: 90B35
Research partially supported by NSF and Air Force grants.

P. M. Pardalos · O. V. Shylo
Department of Industrial and Systems Engineering, University of Florida, 303 Weil Hall, Gainesville, FL 32611, USA
e-mail: pardalos@ufl.edu

O. V. Shylo
e-mail: o.v.shylo@ufl.edu

1 Introduction

The minimum makespan problem of the job shop (JSP) consists of scheduling a set of jobs J on a set of machines M with the objective of minimizing the completion time needed for processing all jobs, subject to the constraints that (1) each job has a predefined processing order through all machines (precedence constraints), and (2) each machine can process at most one job at a time (resource constraints). Once a machine starts processing a job, it must finish it without interruption (processing is non-preemptive). This problem is known to be NP-hard (Lawler et al. 1982), and even within this class it is considered one of the most challenging. Exact methods have been applied successfully to problems of small dimension, but for job shop problems with more than 15 machines and 15 jobs they are unable to find high-quality solutions with reasonable computational effort. For larger instances there is a need for good approximate algorithms. Local search based methods have been applied successfully to such problems: tabu search (Nowicki and Smutnicki 1996; Taillard 1993), greedy randomized adaptive search (Binato et al. 2001), and simulated annealing (Van Laarhoven et al. 1992). Detailed reviews of methods for the JSP can be found in Jain and Meeran (1999) and Lee and Pinedo (2002). In this paper, an approximate algorithm for the job shop scheduling problem based on the Global Equilibrium Search (GES) method (Shylo 1999) is proposed. The main idea of this approach is to use the history of the search to guide it into regions with potentially good solutions.
First, the minimum makespan problem of job shop scheduling is formalized as a mathematical model; then the proposed heuristic algorithm, based on the GES method, is described. Finally, the results of computational experiments on well-known benchmark instances are presented, and our approach is compared with some of the best-performing algorithms for job shop scheduling.
2 Mathematical model and notations
For each problem of job-shop scheduling we have a set M = {M1, M2, . . . , Mm} of machines and a set O = {σ0, σ1, . . . , σN+1} of operations, where σ0 and σN+1 denote fictive “start” and “finish” operations. Each operation σ ∈ O has two properties: a machine M(σ) ∈ M that must process it and its processing time p(σ) (p(σ0) = p(σN+1) = 0). Additionally, let A denote the set of pairs of operations which are tied by precedence constraints, and Ek denote the set of all pairs of operations which require machine Mk for their processing (k = 1,…,m).
Further, let s(σ) denote the start time of the operation σ, and let s(σ0) = 0. The problem is to find a starting time for each operation σ ∈ O (schedule) such that
$$\max_{\sigma \in O} \bigl[ s(\sigma) + p(\sigma) \bigr] \qquad (1)$$

is minimized, subject to

$$s(\sigma) \ge 0, \quad \sigma \in O, \qquad (2)$$

$$s(\omega) - s(\sigma) \ge p(\sigma), \quad (\sigma, \omega) \in A, \qquad (3)$$

$$s(\omega) - s(\sigma) \ge p(\sigma) \;\vee\; s(\sigma) - s(\omega) \ge p(\omega), \quad (\omega, \sigma) \in E_k,\; k = 1, \dots, m. \qquad (4)$$
The value given in (1) is often referred to as the makespan of the schedule represented by the starting times of the operations from O.

This problem can be described using the disjunctive graph model of Roy and Sussmann (1964). Let G = (O, A, E) denote the disjunctive graph, where O is the set of nodes (one node for each operation), A is the set of directed arcs (with (σ0, σ) ∈ A and (σ, σN+1) ∈ A for all σ ∈ O), and E = E1 ∪ … ∪ Em is the set of undirected arcs. The weight of each node is given by the processing time of the corresponding operation. Let S (a selection) denote a set of directed arcs obtained from E by choosing a direction for each of its arcs, and let GS = (O, A ∪ S) be the directed graph that corresponds to the selection S. If GS is acyclic, then it defines a set of feasible schedules for problem (1)–(4), and there is an obvious one-to-one correspondence between GS and the best schedule from this set: in the best schedule, s(σ) is equal to the length of the longest path from σ0 to σ in GS. Furthermore, the length of the longest path in GS is equal to the makespan of this schedule. Thus we can reformulate problem (1)–(4) as follows: find a selection S that minimizes the length of the longest path in the corresponding directed acyclic graph GS.
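To make the longest-path characterization concrete, the fragment below (our illustration, not the paper's code) evaluates the makespan of a fixed acyclic selection on a made-up 2-job, 2-machine instance; it simply relaxes the arcs of GS = (O, A ∪ S) enough times for all longest path lengths to settle.

/* Illustrative sketch (ours): makespan of a fixed selection S as the length
   of the longest path in the acyclic graph G_S = (O, A ∪ S).
   Made-up instance: Job 1 = op1 (M1, p=3) -> op2 (M2, p=2),
                     Job 2 = op3 (M2, p=2) -> op4 (M1, p=4),
   selection S: op1 before op4 on M1, op3 before op2 on M2.
   Node 0 is sigma_0, node 5 is sigma_{N+1}; both have zero processing time. */
#include <stdio.h>

#define NOPS 6
#define NARCS 8

int main(void) {
    int p[NOPS] = {0, 3, 2, 2, 4, 0};                 /* node weights (processing times) */
    int from[NARCS] = {0, 1, 2, 0, 3, 4, 1, 3};       /* arcs of A ...                   */
    int to[NARCS]   = {1, 2, 5, 3, 4, 5, 4, 2};       /* ... followed by the two S-arcs  */
    int s[NOPS] = {0};                                /* s[v] = longest path from sigma_0 to v */

    /* Bellman-Ford-style relaxation: on an acyclic graph, NOPS passes suffice */
    for (int pass = 0; pass < NOPS; pass++)
        for (int a = 0; a < NARCS; a++)
            if (s[from[a]] + p[from[a]] > s[to[a]])
                s[to[a]] = s[from[a]] + p[from[a]];

    printf("makespan = %d\n", s[NOPS - 1]);           /* prints 7 for this selection */
    return 0;
}

A real implementation would instead compute a topological order of GS, which both gives the start times in a single pass and detects the cycles created by an infeasible selection.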
The following notation is useful for the further discussion. Let JP(σ) (JS(σ)) denote the operation for which (JP(σ), σ) ∈ A ((σ, JS(σ)) ∈ A). Assuming S to be a selection with corresponding acyclic directed graph DS, let MP(σ) (MS(σ)) denote the operation for which (MP(σ), σ) ∈ S ((σ, MS(σ)) ∈ S). Let q(σ) be the length of the longest path from σ to σN+1, and s(σ) the length of the longest path from σ0 to σ (the start time). The longest path in DS is referred to as a critical path, and the operations that belong to it are called critical operations. Any critical path can be decomposed into a set of critical blocks: critical operations σ (σ ≠ σ0) and ω (ω ≠ σN+1, s(σ) < s(ω)) belong to the same critical block if and only if there is a path of length l between them with all arcs from S and s(σ) + p(σ) + l = s(ω).

3 Algorithm description

The global equilibrium search method (Shylo 1999) has some features in common with simulated annealing. It has been applied to many discrete optimization problems with very promising results (Shylo 1999). The main idea of the GES method is to collect information about the solution space and use it in the subsequent stages of the search. Here we search for a solution in the form of a binary vector. The search is organized as a series of temperature cycles. During each stage of a temperature cycle a set of solutions is randomly generated. In the early stages of each cycle the algorithm generates completely random solutions, but in later stages the solutions are generated in such a way that they are increasingly likely to share features with the best solutions found in the previous stages (in the last stage the algorithm simply regenerates the best solution found during the search). Such an organization allows us to alternate between diversification and intensification of the search in a very effective manner. The general scheme of our implementation of the GES algorithm is outlined in Fig. 1.

Fig. 1 GES for JSP

The main cycle of the algorithm is performed while some stopping criterion is not satisfied; the algorithm in Fig. 1 runs until maxiter iterations have been performed. For the generation of feasible solutions we use a set F of known solutions. Since it is impossible to keep all of these solutions in the memory of the computer, we use reduced information about them: in our implementation we store two values for each variable, the cost of the best known solution in which the variable equals zero and the cost of the best known solution in which it equals one. It turns out that for the job shop scheduling problem this simple data allows us to construct a quite effective algorithm based on the GES technique. For simplicity, the description of the algorithm uses the notation F as if all the solutions were kept in memory.

In order to generate random feasible solutions, we introduce, for each variable, the probability of it being equal to one or zero. These probabilities (pr) are uniquely defined by the set F and the current temperature. There are K temperature stages, and for each stage we have a predefined temperature value; the temperature increases as the algorithm proceeds from stage to stage. At the beginning of each stage the probabilities used for the generation of solutions are calculated.
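As a structural illustration of Fig. 1, the skeleton below is written by us, with the problem-specific routines reduced to stubs and all names our own; it shows only the control flow of the temperature cycles, the per-stage generation of solutions, and the restart mechanism described in the following paragraph.

/* Structural sketch (ours) of the GES control flow in Fig. 1; the JSP-specific
   parts (probability calculation, solution generation, local search, memory F)
   are stand-in stubs so that the skeleton compiles and runs. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define K 40           /* temperature stages per cycle (value used in Sect. 4) */
#define NGEN 100       /* solutions generated per stage                        */
#define MAXNFAIL 3     /* unsuccessful cycles before the search restarts       */
#define MAXITER 10     /* stopping criterion of the outer loop (toy value)     */

static void calculate_generation_probabilities(double mu) { (void)mu; }
static int  generate_improve_and_store(void) { return 900 + rand() % 200; } /* toy makespan */
static void keep_only_best_in_F(void) { }      /* F = { bestx } after a cycle  */
static void clear_F(void)             { }      /* F = empty set: restart       */

int main(void) {
    int best = INT_MAX, nfail = 0;
    for (int iter = 0; iter < MAXITER; iter++) {
        int improved = 0;
        for (int k = 0; k < K; k++) {                       /* one temperature cycle  */
            double mu_k = 4e-6 * k;                         /* placeholder; the real  */
            calculate_generation_probabilities(mu_k);       /* schedule is in Sect. 4 */
            for (int g = 0; g < NGEN; g++) {
                int cost = generate_improve_and_store();    /* generation + local search */
                if (cost < best) { best = cost; improved = 1; }
            }
        }
        if (improved) { nfail = 0; keep_only_best_in_F(); }
        else if (++nfail >= MAXNFAIL) { nfail = 0; clear_F(); }   /* restart */
        else keep_only_best_in_F();
    }
    printf("best makespan found: %d\n", best);
    return 0;
}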
The details of the generation procedure (calculate_generation_probabilities) are described in the next subsection. At every stage we generate ngen random feasible solutions, and the set F is updated with the new solutions. After that we change the current temperature, and a new stage begins with the recalculation of the generation probabilities. In our implementation, after a temperature cycle is finished we "forget" all solutions found during the cycle except the best one by setting F = {bestx}. Furthermore, if the algorithm runs for maxnfail cycles without an improvement of the best solution, then we set F = ∅; in other words, the search process restarts.

There are several basic elements that have to be defined more precisely. The main part of this framework is the procedure that generates random solutions. Randomly generated solutions can almost always be improved by applying some local search procedure; in our implementation, we use local search together with a heuristic procedure that conducts an extensive search for improvements in the neighborhood of the local minima. Finally, the previous search results must somehow be used to encourage the generation of high-quality solutions. In the next subsections, the implementation of these elements is described in detail.

3.1 Encoding and generation procedure

For each pair of operations that require the same machine for processing, we introduce the probability that one operation is processed before the other. These probabilities are used in the generation procedure to determine the orderings of the operations. Let pr denote the vector composed of all such probabilities, and let prσ,ω be the element of pr that corresponds to the probability of σ being processed before ω (prω,σ = 1 − prσ,ω).

We can encode any feasible schedule as a binary vector. For each ordered pair of operations processed by the same machine, we introduce a binary variable that is equal to one if the first operation is processed earlier than the second, and equal to zero otherwise. Let x denote the binary vector composed of all such variables, and let x(σ,ω) denote the element of x that corresponds to the operations σ, ω ∈ O; thus x(σ,ω) = 1 means that in the solution represented by x, operation σ is processed earlier than ω. Let X denote the set of all binary vectors x that represent feasible solutions; all these vectors encode different processing orderings, and f(x) is equal to the makespan of the solution that corresponds to x.

Actually, we do not generate solutions from scratch. Empirical analysis reveals that it is much more efficient to apply a series of random moves within the N1 neighborhood to escape from a local minimum than to construct a totally new solution; in Watson et al. (2003) it is shown that the average number of moves required to escape from a local minimum is usually very small for most job shop scheduling problems. The procedure that generates solutions is outlined in Fig. 2.

We have already introduced the probabilities used for generating the initial solutions; we now describe how the solution history is used to update them. The temperature cycle of the GES algorithm is determined by a set of K + 1 temperatures μ0, μ1, . . . , μK such that μ0 < μ1 < · · · < μK. Let F denote the set of solutions that we know for a given problem (F ⊆ X), and let F^q(σ,ω) = {x ∈ F : x(σ,ω) = q}, where q = 0, 1 and σ, ω ∈ O.
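As a concrete reading of the update rule stated in the next paragraph, the fragment below is our sketch (the function name and the test values are made up): it turns the two stored costs for a variable x(σ,ω) into its generation probability at temperature μk.

/* Our sketch of the generation-probability update of Sect. 3.1.
   f0 = best (minimum) makespan among known solutions with x(sigma,omega) = 0,
   f1 = best makespan among known solutions with x(sigma,omega) = 1,
   pr0 = initial probability (0.5 here), mu_k = current temperature. */
#include <math.h>
#include <stdio.h>

static double generation_probability(double pr0, double mu_k, double f0, double f1) {
    /* pr_k = pr0 / ( pr0 + (1 - pr0) * exp(-mu_k * (f0 - f1)) );
       if solutions with x = 1 are better (f1 < f0), pr_k rises above pr0 */
    return pr0 / (pr0 + (1.0 - pr0) * exp(-mu_k * (f0 - f1)));
}

int main(void) {
    /* x = 1 was seen in a solution with makespan 930, x = 0 only in one with 950 */
    for (int i = 0; i <= 4; i++) {
        double mu_k = 0.05 * i;
        printf("mu = %.2f  ->  pr = %.3f\n",
               mu_k, generation_probability(0.5, mu_k, 950.0, 930.0));
    }
    return 0;
}

At μk = 0 the probability is the neutral 0.5; as the temperature grows, the ordering that appeared in the better known solution becomes increasingly likely to be regenerated.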
In our implementation of the global equilibrium search method, the probabilities used for solution generation at a given temperature μk are

$$pr^{k}_{\sigma,\omega} = \frac{pr^{0}_{\sigma,\omega}}{pr^{0}_{\sigma,\omega} + \bigl(1 - pr^{0}_{\sigma,\omega}\bigr)\exp\Bigl[-\mu_k\Bigl(\min_{x \in F^{0}(\sigma,\omega)} f(x) \;-\; \min_{x \in F^{1}(\sigma,\omega)} f(x)\Bigr)\Bigr]}.$$

This simplification allows us to reduce both the computational time and the memory usage of our algorithm. It appears that for the job shop scheduling problem it does not affect the overall quality of the algorithm's performance.

Fig. 2 Procedure that generates solution

3.2 Local search

The neighborhood of a solution is often defined as the set of solutions that can be obtained from it by applying some simple transformation. A review of neighborhood structures for the JSP can be found in Jain et al. (2000). We use two different types of neighborhoods in our algorithm.

The first one was initially proposed by Van Laarhoven et al. (1992). This neighborhood consists of the solutions that can be obtained by reversing the processing order of two successive critical operations that are processed on the same machine. Assume that the initial solution is represented by a selection S. Then the full set of moves N1(S) can be defined as the set of pairs (σ, ω) such that σ and ω are critical operations, (σ, ω) ∈ S, and s(σ) + p(σ) = s(ω). Let S′ denote the selection obtained from S by applying a move (σ, ω) ∈ N1(S). Then the graph DS′ can easily be derived from DS by:

1. Removing arcs (σ, ω), (MP(σ), σ), (ω, MS(ω));
2. Adding arcs (ω, σ), (MP(σ), ω), (σ, MS(ω)).

A very useful property of this transformation is that it can never lead to a cyclic graph DS′ (Van Laarhoven et al. 1992).

The second neighborhood used in our algorithm was initially proposed by Grabowski et al. (1988). For this neighborhood, the simple transformation moves a critical operation either to the beginning or to the end of its block. If the solution is defined by a selection S, then the set of moves N2(S) for this neighborhood can be defined as the set of pairs (σ, ω) of different operations such that σ and ω are critical operations that belong to the same critical block, σ is either the first or the last operation of this block, σ ≠ σ0, and σ ≠ σN+1. Let S′ denote the selection obtained from S by applying a move (σ, ω) ∈ N2(S). If σ is the first operation of its critical block, then the graph DS′ is derived from DS by:

1. Removing arcs (MP(σ), σ), (MP(ω), ω), (ω, MS(ω));
2. Adding arcs (MP(σ), ω), (ω, σ), (MP(ω), MS(ω)).

Similarly, if σ is the last operation of its critical block, then DS′ is derived from DS by:

1. Removing arcs (MP(ω), ω), (ω, MS(ω)), (σ, MS(σ));
2. Adding arcs (MP(ω), MS(ω)), (σ, ω), (ω, MS(σ)).

However, if a critical block consists of only two operations σ and ω, then the moves (σ, ω) ∈ N2(S) and (ω, σ) ∈ N2(S) lead to identical acyclic graphs; hence, we remove one of them from N2(S). The main disadvantage of the moves of this neighborhood is that they can result in a cyclic graph, so it is necessary to check every move for feasibility.

In our realization of the local search we used a first-improvement strategy (a swap is made as soon as an improving one is found) with a small modification: if a move leads to a solution with the same makespan as the initial solution, then we accept it only if the sum of the processing times of all critical operations in the initial solution is greater than that in the resulting solution. This enhancement allows us to take into account situations in which more than one critical path exists.
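For illustration, the sketch below is our code (the names mp, ms, and apply_n1_move are introduced by us): it stores one machine's processing order as a doubly linked list and performs exactly the arc removals and additions listed above for an N1 move, i.e., it reverses the adjacent pair (σ, ω).

/* Our sketch of applying an N1 move. The machine processing order is kept as
   a doubly linked list: mp[v] / ms[v] are the machine predecessor / successor
   of operation v (NONE = first or last on its machine). Reversing the pair
   (sigma, omega) performs the removals and additions listed in the text. */
#include <stdio.h>

#define NONE (-1)

static void apply_n1_move(int mp[], int ms[], int sigma, int omega) {
    int before = mp[sigma], after = ms[omega];
    /* remove (sigma,omega), (MP(sigma),sigma), (omega,MS(omega));
       add    (omega,sigma), (MP(sigma),omega), (sigma,MS(omega)) */
    if (before != NONE) ms[before] = omega;
    mp[omega] = before;
    ms[omega] = sigma;
    mp[sigma] = omega;
    ms[sigma] = after;
    if (after != NONE) mp[after] = sigma;
}

int main(void) {
    /* one machine currently processing operations 1 -> 2 -> 3 (index 0 unused) */
    int mp[4] = {0, NONE, 1, 2};
    int ms[4] = {0, 2, 3, NONE};
    apply_n1_move(mp, ms, 1, 2);              /* reverse the adjacent pair (1, 2) */
    for (int v = 2; v != NONE; v = ms[v])     /* the machine now starts with 2    */
        printf("%d ", v);                     /* prints: 2 1 3                    */
    printf("\n");
    return 0;
}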
Nowicki and Smutnicki (2002) proposed an efficient scheme for accelerating the local search; we have modified their approach for the N2 neighborhood. Let FS = (FS(σ0), . . . , FS(σN+1)) denote a list of the operations from O that is a topological order of the nodes of the graph DS; that is, for any pair of operations σ, ω ∈ O such that (σ, ω) ∈ A ∪ S, it is required that fS(σ) < fS(ω), where fS(σ) and fS(ω) are the positions occupied by σ and ω in FS.

Consider the case when we apply a move (σ, ω) ∈ N2(S) to S, and let σ be the first operation of its critical block. If the resulting selection S′ leads to an acyclic graph DS′, then it is obvious that for every operation δ ∈ O such that fS(δ) < fS(σ), the value s(δ) does not change: s(S, δ) = s(S′, δ), where s(S, δ) and s(S′, δ) denote the length of the longest path from σ0 to δ in the graphs DS and DS′, respectively. Similarly, for every operation δ ∈ O such that fS(δ) > fS(ω), the value q(δ) does not change. The procedure for a quick move estimation that uses this property is outlined in Fig. 3; it returns a lower bound on the length of the longest path in DS′. Here s(S, δ) denotes the length of the longest path from σ0 to δ in the graph DS = (O, A ∪ S), q(S, δ) denotes the length of the longest path from δ to σN+1 in DS, and fS^{-1}(i) denotes the operation that occupies position i in FS, i.e., fS(fS^{-1}(i)) = i. This procedure can be further improved so that it returns the exact value of s(S′, σN+1), as is done in Nowicki and Smutnicki (2002). Only slight modifications of the procedure are necessary when σ is the last operation of its critical block.

Fig. 3 Estimation procedure
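To illustrate the underlying idea with the least machinery, the sketch below (ours, not the procedure of Fig. 3) gives the classical constant-time estimate for the simpler N1 swap of an adjacent critical pair (σ, ω); Fig. 3 applies the same s()/q() reasoning to N2 moves. We use our own conventions here: s[v] is the head (longest path from σ0 to the start of v), q[v] is the tail (longest path from the completion of v to σN+1), and the dummy source and sink have zero processing time, head, and tail, so no special cases are needed.

/* Our sketch: constant-time lower bound on the new longest path in D_S' after
   reversing the adjacent critical pair (sigma, omega) on their machine.
   jp/js = job predecessor/successor, mp/ms = machine predecessor/successor;
   missing predecessors point to the source (0), missing successors to the sink. */
#include <stdio.h>

static int max2(int a, int b) { return a > b ? a : b; }

static int estimate_swap(const int p[], const int s[], const int q[],
                         const int jp[], const int js[],
                         const int mp[], const int ms[],
                         int sigma, int omega) {
    int s_omega = max2(s[mp[sigma]] + p[mp[sigma]], s[jp[omega]] + p[jp[omega]]);
    int s_sigma = max2(s_omega + p[omega],          s[jp[sigma]] + p[jp[sigma]]);
    int q_sigma = max2(q[ms[omega]] + p[ms[omega]], q[js[sigma]] + p[js[sigma]]);
    int q_omega = max2(q_sigma + p[sigma],          q[js[omega]] + p[js[omega]]);
    return max2(s_sigma + p[sigma] + q_sigma,       /* longest path through sigma */
                s_omega + p[omega] + q_omega);      /* longest path through omega */
}

int main(void) {
    /* Same made-up 2x2 instance as before: nodes 0 (source) and 5 (sink),
       Job 1: op1 (M1,p=3) -> op2 (M2,p=2); Job 2: op3 (M2,p=2) -> op4 (M1,p=4);
       selection: 1 before 4 on M1, 3 before 2 on M2; current makespan is 7. */
    int p[6]  = {0, 3, 2, 2, 4, 0};
    int s[6]  = {0, 0, 3, 0, 3, 7};
    int q[6]  = {7, 4, 0, 4, 0, 0};
    int jp[6] = {0, 0, 1, 0, 3, 0};
    int js[6] = {0, 2, 5, 4, 5, 5};
    int mp[6] = {0, 0, 3, 0, 1, 0};
    int ms[6] = {0, 4, 5, 2, 5, 5};
    /* reversing the critical pair (1, 4) on M1 is estimated at makespan 11,
       which for this tiny instance coincides with the exact value after the move */
    printf("estimate = %d\n", estimate_swap(p, s, q, jp, js, mp, ms, 1, 4));
    return 0;
}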
3.3 Improving heuristic
Each time a local optimum is found by the local search procedure, we try to improve it using a simple heuristic. This heuristic transforms the local optimum with respect to the N2 neighborhood and restarts the local search from the new solution. The transformation is not as simple as the moves of the N1 or N2 neighborhoods, but the idea is transparent: if some operation σ cannot be started earlier because of its job predecessor, we try to push that predecessor back. For example, if σ cannot be started earlier because of JP(σ), then we push the processing of JP(σ) back by swapping it with MP(JP(σ)) in the processing order. If JP(σ) then no longer constrains σ, the transformation is finished; otherwise we continue pushing JP(σ) back in the processing order. Obviously, if JP(σ) cannot be started earlier because of the constraint coming from JP(JP(σ)), then pushing it back makes no sense; therefore we first try to push JP(JP(σ)) back until it no longer constrains JP(σ), and so on. The transformation is outlined in more detail in Fig. 4.
In our algorithm we apply the transformation described above to each neighbor in the N2 neighborhood of every local optimum we find, and we start the local search from the transformed solution. The scheme of the procedure is outlined in Fig. 5.
Fig. 4 Solution transform procedure

Fig. 5 Intense search procedure
4 Computational results
The GES algorithm described in this paper was implemented in the C language. All computational experiments were conducted on a personal computer with a 2.8 GHz Pentium processor and 512 MB of RAM.
The parameters of the algorithm were the same for all computational experiments: maxnfail = 3, K = 40, ngen = 100. At the beginning of each temperature cycle all generation probabilities are set to 0.5. The temperatures were chosen so that the algorithm converges to the best solution found by the end of the temperature cycle; we set μ0 = 0, μ1 = 4 × 10−6, and
$$\mu_k = \mu_{k-1}\exp\!\left(\frac{\ln(3.0) - \ln(\mu_1)}{K-2}\right), \qquad k = 2, \dots, K-1.$$
Thus there was no tuning of the algorithm for any particular instance.
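For reference, a minimal sketch (ours) of this temperature schedule: the temperature is multiplied by a constant factor at each stage, so it climbs geometrically from μ1 = 4 × 10−6 to 3.0 at the last stage of the cycle.

/* Our sketch of the temperature schedule described above (K = 40 stages). */
#include <math.h>
#include <stdio.h>

#define K 40

int main(void) {
    double mu[K];
    mu[0] = 0.0;
    mu[1] = 4e-6;
    double factor = exp((log(3.0) - log(mu[1])) / (K - 2));
    for (int k = 2; k < K; k++)
        mu[k] = mu[k - 1] * factor;                  /* geometric growth        */
    printf("mu_1 = %g, mu_20 = %g, mu_%d = %g\n",    /* mu_{K-1} equals 3.0     */
           mu[1], mu[20], K - 1, mu[K - 1]);
    return 0;
}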
The proposed algorithm has been tested on the benchmark problems taken from OR-library (http://www.mscmga.ms.ic.ac.uk/info.html). We used several
well-known problem classes from the literature:
1. 10 problems denoted ORB1–ORB10, due to Applegate and Cook (1991);
2. 40 problems denoted LA01–LA40, due to Lawrence (1984);
3. 80 problems denoted TA1–TA80, due to Taillard (1993); optimal solutions are known for 47 problems from this class;
4. 80 problems denoted DMU1–DMU80, due to Demirkol et al. (1997).

The computational results are given in the tables below. For problems with an unknown optimal solution, we provide the best lower bound (LB) and upper bound (UB). We compare our results with those obtained by Balas and Vazacopoulos (1995a) (BV) and by Pezzella and Merelli (2000) (TSSB). We report the original computational times for these approaches; TSSB was tested on a Pentium 133 MHz and BV on a Sun Sparc 330.
Additionally, we have implemented and tested a multi-start version of the TSAB algorithm proposed by Nowicki and Smutnicki (1996); we refer to it as RTSAB. This algorithm starts from a random solution, and after each run of TSAB the best solution found during that run is used to generate a new initial solution for the next run. For this generation we use the procedure outlined in Fig. 2, with all components of the vector pr set equal to 1/2. In our implementation of the TSAB algorithm we use the acceleration proposed in Nowicki and Smutnicki (2002). Each run of TSAB uses the following parameters: maxt = 8, maxl = 5, maxc = 2, maxδ = 100, and maxiter = 250,000; see Nowicki and Smutnicki (1996) for details.
We have also investigated the behavior of our algorithm when the generation probabilities are fixed at 1/2 for all iterations; in this case the algorithm does not use any of the GES features. We refer to this algorithm as RIMP; its results illustrate the performance of the simple combination of the local search and the improvement heuristic described above, and they reveal the enhancement provided by the GES strategy.
In the comparative tests we use the notation RI, the percentage by which the makespan CMax of the best solution found exceeds the best known upper bound:
$$RI = 100\% \cdot \frac{C_{\max} - UB}{UB}.$$
All results for the other approaches were taken from the literature; some results were taken from Taillard's home page (http://www.eivd.ch/ina/collaborateurs/etd/default.htm).
The results for problems La01–La40 and Orb01–Orb10 are shown in Tables 1 and 2. All of these instances were solved optimally by our approach. The comparison with algorithm TSSB (Pezzella and Merelli 2000) is given in Table 5.
The TA benchmarks are among the hardest instances for job shop scheduling; Tables 3 and 4 show the results for these problems. Despite the many time-consuming experiments on solving these problems with different algorithms, our approach was able to find several new upper bounds (TA11, TA15, TA19, TA20, TA32, TA46). In general, the proposed algorithm performs quite well on all sets of problems compared with the shifting bottleneck procedure (Balas and Vazacopoulos 1995a) and the tabu search algorithm (Pezzella and Merelli 2000). The computational results for RTSAB reveal that GES outperforms it on the vast majority of instances. The comparison with RIMP shows that GES performs significantly better: for easy problems, the GES methodology allows us to decrease the solution time, while for hard instances it allows us to obtain better solutions. The average results for all algorithms are given in Table 6.
The DMU class consists of 80 instances. Instances DMU41–DMU80 are considered particularly hard (Demirkol et al. 1997), but the computational experiments showed that our approach obtains high-quality solutions even for these problems, and for many of them the best known upper bounds were improved. Each problem was tested in such a way that the algorithm stopped either when the best known upper bound was improved or when the overall running time exceeded 30,000 seconds. The computational results for these problems are given in Tables 7 and 8.
5 Concluding Remarks
In this paper, a new heuristic approach for the job shop scheduling problem based on the global equilibrium search method has been proposed. Computational experiments on a wide range of benchmark problems were conducted, and for many of them the algorithm was able to find optimal or near-optimal solutions. The algorithm's parameters were fixed in all experiments, so there was no tuning for individual problems. In spite of the numerous, time-intensive computational experiments conducted over the last decades, our approach was able to improve many of the known upper bounds. The GES method has been tested on many combinatorial problems, and in this paper we show that the approach performs well on one of the most challenging NP-hard problems. Our adaptation of the GES algorithm is a powerful and competitive tool for job shop scheduling problems compared with other best-performing algorithms. The ideas introduced by the global equilibrium search method provide a very promising approach to the construction of algorithms for discrete optimization problems.
6 Appendix: Tables (1–8)
We use the following notations:
Opt − the optimal value of the makespan
LB − the best lower bound on the optimal makespan
UB − the best upper bound on the optimal makespan; if we found a new upper bound, the previous one is given in brackets
|J|x|M| − problem size (number of jobs × number of machines)
GES1 − best solution found by the GES algorithm run for 5000 seconds
GES2 − best solution found by the GES algorithm run for 30000 seconds
TSSB − best solution by TSSB (Pezzella and Merelli 2000)
Time (sec) − GES computing time in seconds
Tav − average computing time
RTSAB − best solution found by the RTSAB algorithm run for 5000 seconds
RIMP − best solution found by the RIMP algorithm run for 5000 seconds
BV − best solution among those provided by Balas and Vazacopoulos (1995b)
ARI − average RI for the best solutions

Table 1
Computational results by GES for ORB1-ORB10
Prob |J| |M| LB orb01 10 10 1,059
orb02 10 10 888 orb03 10 10 1,005 orb04 10 10 1,005 orb05 10 10 887 orb06 10 10 1,010 orb07 10 10 397 orb08 10 10 899 orb09 10 10 934 orb10 10 10 944
UB GES1
Time (s)
11 1 6 50 4 2 2 3 14 1
TSSB
666 655 597 590 593 926 890 863 951 958
1,222 1,039 1,150 1,292 1,207
945 784 848 842 902
1,046 927 1,032 938 979 1,218 1,235 1,216 1,168 1,355 1,784 1,850 1,719 1,721 1,888 1,268 1,411 1,201 1,240 1,233
1,059 888 1,005 1,005 887 1,010 397 899 934 944
1,059 888 1,005 1,005 887 1,010 397 899 934 944
GES2
666 655 597 590 593 926 890 863 951 958
1,222 1,039 1,150 1,292 1,207
945 784 848 842 902
1,046 927 1,032 935 977 1,218 1,235 1,216 1,152 1,355 1,784 1,850 1,719 1,721 1,888 1,268 1,397 1,196 1,233 1,222
Table 2
Computational results for LA01-LA40
Prob |J|x|M| la01 10×5
la02 10×5 la03 10×5 la04 10×5 la05 10×5 la06 15×5 la07 15×5 la08 15×5 la09 15×5 la10 15×5 la11 20×5 la12 20×5 la13 20×5 la14 20×5 la15 20×5 la16 10×10 la17 10×10 la18 10×10 la19 10×10 la20 10×10 la21 15×10 la22 15×10 la23 15×10 la24 15×10 la25 15×10 la26 20×10 la27 20×10 la28 20×10 la29 20×10 la30 20×10 la31 30×10 la32 30×10 la33 30×10 la34 30×10 la35 30×10 la36 15×15 la37 15×15 la38 15×15 la39 15×15 la40 15×15
Opt GES1
666 666 655 655 597 597 590 590 593 593 926 926 890 890 863 863 951 951 958 958
1,222 1,222 1,039 1,039 1,150 1,150 1,292 1,292 1,207 1,207
945 945 784 784 848 848 842 842 902 902
1,046 1,046 927 927 1,032 1,032 935 935 977 977 1,218 1,218 1,235 1,235 1,216 1,216 1,152 1,153 1,355 1,355 1,784 1,784 1,850 1,850 1,719 1,719 1,721 1,721 1,888 1,888 1,268 1,268 1,397 1,397 1,196 1,196 1,233 1,233 1,222 1,226

Table 3
Prob |J|x|M| ta01 15×15
ta02 15×15 ta03 15×15 ta04 15×15 ta05 15×15 ta06 15×15 ta07 15×15 ta08 15×15 ta09 15×15 ta10 15×15 ta11 20×15
ta12 20×15 ta13 20×15 ta14 20×15 ta15 20×15
ta16 20×15 ta17 20×15 ta18 20×15 ta19 20×15
ta20 20×15 ta21 20×20
ta22 20×20 ta23 20×20 ta24 20×20 ta25 20×20 ta26 20×20 ta27 20×20 ta28 20×20 ta29 20×20 ta30 20×20 ta31 30×15 ta32 30×15
ta33 30×15 ta34 30×15 ta35 30×15 ta36 30×15 ta37 30×15 ta38 30×15
Results for TA1–TA80
LB UB
1,231 1,231 1,244 1,244 1,218 1,218 1,175 1,175 1,224 1,224 1,238 1,238 1,227 1,227 1,217 1,217 1,274 1,274 1,241 1,241 1,323 1,357
(1,358) 1,351 1,367 1,282 1,342 1,345 1,345 1,304 1,339
(1,340) 1,302 1,360 1,462 1,462 1,369 1,396 1,297 1,332
(1,335) 1,318 1,348
(1,351) 1,539 1,644 1,511 1,600 1,472 1,557 1,602 1,647 1,504 1,595 1,539 1,645 1,616 1,680 1,591 1,614 1,514 1,625 1,472 1,584 1,764 1,764 1,774 1,793
(1,796) 1,778 1,793 1,828 1,829 2,007 2,007 1,819 1,819 1,771 1,778 1,673 1,673
GES2 GES1 RTSAB
1,231 1,231 1,231 1,244 1,244 1,244 1,218 1,218 1,220 1,175 1,175 1,175 1,224 1,224 1,229 1,238 1,238 1,238 1,228 1,228 1,228 1,217 1,217 1,217 1,274 1,274 1,280 1,241 1,241 1,241 1,357 1,357 1,367
1,367 1,375 1,377 1,344 1,344 1,350 1,345 1,345 1,345 1,339 1,339 1,347
1,360 1,360 1,361 1,469 1,473 1,476 1,401 1,401 1,408 1,332 1,332 1,338
1,348 1,352 1,355
1,647 1,647 1,648 1,602 1,602 1,607 1,558 1,558 1,560 1,653 1,653 1,654 1,596 1,596 1,597 1,647 1,647 1,654 1,685 1,685 1,691 1,614 1,616 1,619 1,625 1,625 1,627 1,584 1,584 1,585 1,764 1,766 1,764 1,793 1,801 1,818
1,799 1,802 1,805 1,832 1,832 1,832 2,007 2,007 2,007 1,819 1,819 1,821 1,779 1,779 1,789 1,673 1,673 1,678
GES2 GES1 RTSAB
1,795 1,795 1,806 1,680 1,683 1,685 2,022 2,029 2,035 1,956 1,957 1,968 1,870 1,875 1,875 1,991 1,992 2,000
RIMP TSSB
1,231 1,241 1,244 1,244 1,218 1,222 1,175 1,175 1,224 1,229 1,238 1,245 1,228 1,228 1,217 1,220 1,274 1,291 1,241 1,250 1,365 1,371
1,377 1,379 1,350 1,362 1,345 1,345 1,345 1,360
1,365 1,370 1,473 1,481 1,411 1,426 1,342 1,351
1,357 1,366
1,652 1,659 1,606 1,623 1,563 1,573 1,656 1,659 1,598 1,606 1,655 1,666 1,687 1,697 1,619 1,622 1,625 1,635 1,585 1,614 1,766 1,771 1,816 1,840
1,817 1,833 1,835 1,846 2,007 2,007 1,824 1,825 1,793 1,813 1,682 1,697
RIMP TSSB
1,798 1,815 1,695 1,725 2,047 2,045 1,969 1,979 1,887 1,898 2,006 2,036
BV
1,231 1,244 1,218 1,181 1,233 1,243 1,228 1,217 1,274 1,241 1,392
1,367 1,350 1,345 1,353
1,369 1,478 1,396 1,341
1,359
1,659 1,603 1,558 1,659 1,615 1,659 1,689 1,615 1,629 1,604 1,766 1,803
1,796 1,832 2,007 1,823 1,784 1,681
BV
1,798 1,686 2,026 1,967 1,881 2,004
Table 4
Prob
ta39 ta40 ta41 ta42 ta43 ta44
Results for TA1–TA80
|J|x|M| LB UB 30×15 1,795 1,795
30×15 1,631 1,674 30×20 1,859 2,014 30×20 1,867 1,956 30×20 1,809 1,859 30×20 1,927 1,984

Table 4
Prob
ta45 ta46
ta47 ta48 ta49 ta50 ta51 ta52 ta53 ta54 ta55 ta56 ta57 ta58 ta59 ta60 ta61 ta62 ta63 ta64 ta65 ta66 ta67 ta68 ta69 ta70 ta71 ta72 ta73 ta74 ta75 ta76 ta77 ta78 ta79 ta80
Table 5
Problem
la01–la05 la06–la10 la11–la15 la16–la20 la21–la25 la26–la30 la31–la35 la36–la40
continued
|J|x|M| LB 30×20 1,997
30×20 1,940 30×20 1,789
30×20 1,912 30×20 1,915 30×20 1,807 50×15 2,760 50×15 2,756 50×15 2,717 50×15 2,839 50×15 2,679 50×15 2,781 50×15 2,943 50×15 2,885 50×15 2,655 50×15 2,723 50×20 2,868 50×20 2,869 50×20 2,755 50×20 2,702 50×20 2,725 50×20 2,845 50×20 2,825 50×20 2,784 50×20 3,071 50×20 2,995 100×20 5,464 100×20 5,181 100×20 5,568 100×20 5,339 100×20 5,392 100×20 5,342 100×20 5,436 100×20 5,394 100×20 5,358 100×20 5,183
UB GES2
2,000 2,004 2,011 2,011 (2,016)
1,903 1,903 1,952 1,962 1,968 1,969 1,926 1,931 2,760 2,760 2,756 2,756 2,717 2,717 2,839 2,839 2,679 2,679 2,781 2,781 2,943 2,943 2,885 2,885 2,655 2,655 2,723 2,723 2,868 2,868 2,869 2,872 2,755 2,755 2,702 2,702 2,725 2,725 2,845 2,845 2,825 2,825 2,784 2,784 3,071 3,071 2,995 2,995 5,464 5,464 5,181 5,181 5,568 5,568 5,339 5,339 5,392 5,392 5,342 5,342 5,436 5,436 5,394 5,394 5,358 5,358 5,183 5,183
GES1 RTSAB
2,004 2,013 2,011 2,024
1,911 1,922 1,975 1,966 1,974 1,980 1,940 1,942 2,760 2,760 2,756 2,756 2,717 2,717 2,839 2,839 2,679 2,679 2,781 2,781 2,943 2,943 2,885 2,885 2,655 2,655 2,723 2,723 2,868 2,868 2,872 2,877 2,755 2,755 2,702 2,702 2,725 2,734 2,845 2,881 2,825 2,825 2,784 2,784 3,071 3,071 2,995 2,995 5,464 5,464 5,181 5,181 5,568 5,568 5,339 5,339 5,392 5,392 5,342 5,342 5,436 5,436 5,394 5,394 5,358 5,358 5,183 5,183
RIMP TSSB
2,016 2,021 2,036 2,047
1,931 1,938 1,978 1,996 1,988 2,013 1,945 1,975 2,760 2,760 2,756 2,756 2,717 2,717 2,839 2,839 2,679 2,684 2,781 2,781 2,943 2,943 2,885 2,885 2,655 2,655 2,723 2,723 2,868 2,868 2,875 2,942 2,755 2,755 2,702 2,702 2,725 2,725 2,845 2,845 2,825 2,865 2,784 2,784 3,071 3,071 2,995 2,995 5,464 5,464 5,181 5,181 5,568 5,568 5,339 5,339 5,392 5,392 5,342 5,342 5,436 5,436 5,394 5,394 5,358 5,358 5,183 5,183
BV
2,008 2,040
1,921 1,982 1,994 1,967 2,760 2,756 2,717 2,839 2,679 2,781 2,943 2,885 2,655 2,723 2,868 2,900 2,755 2,702 2,725 2,845 2,826 2,784 3,071 2,995 5,464 5,181 5,568 5,339 5,392 5,342 5,436 5,394 5,358 5,183
Tav
9.8 –
– 61. 5 115 105 – 141
Comparison with TSSB
|J|x|M|
GES1
TSSB
ARI
Tav ARI
– 0.00
– 0.00
– 0.00
– 0.00
17.2 0.11 15.4 0.28 – 0.00 109.2 0.58
10×5 0.00 15×5 0.00 20×5 0.00 10×10 0.00 15×10 0.00 20×10 0.02 30×10 0.00 15×15 0.07

Table 6
Prob
ta01–ta10 ta11–ta20 ta21–ta30 ta31–ta40 ta41–ta50 ta51–ta60 ta61–ta70 ta71–ta80
Comparison with other algorithms
GES1 RTSAB RIMP TSSB BV
ARI Tav ARI Tav
0.01 84.5 0.11 566.5 0.21 1154.6 0.55 1612.5 0.13 1668.8 0.31 2241.7 0.19 1460 0.46 1171.4 0.49 2111.4 0.78 3229.9 0.00 6.4 0.00 238.8 0.01 185.2 0.19 1155.2 0.00 38.1 0.00 452.5
ARI Tav ARI
0.01 190.0 0.45 0.60 2044.3 1.19 0.34 1561.6 1.01 0.62 2052.2 1.41 1.18 2205.6 1.92 0.00 7.5 0.02 0.02 188.6 0.40 0.00 38.2 0.00
Tav
2175
2526 34910 14133 11512
ARI Tav
0.17 1498 0.75 4559 0.61 6850
0.30 8491 1.11 16018
421 0.00 6342 0.11 231 0.00
196 2689 851
Table 7
Prob
Results for DMU1–DMU80
n m LB UB GES2
DMU1 20 15 DMU2 20 15 DMU3 20 15 DMU4 20 15 DMU5 20 15 DMU6 20 20 DMU7 20 20 DMU8 20 20 DMU9 20 20 DMU10 20 20 DMU11 30 15 DMU12 30 15 DMU13 30 15 DMU14 30 15 DMU15 30 15 DMU16 30 20 DMU17 30 20 DMU18 30 20 DMU19 30 20 DMU20 30 20 DMU21 40 15 DMU22 40 15 DMU23 40 15 DMU24 40 15 DMU25 40 15 DMU26 40 20 DMU27 40 20 DMU28 40 20 DMU29 40 20 DMU30 40 20 DMU31 50 15 DMU32 50 15 DMU33 50 15 DMU34 50 15 DMU35 50 15 DMU36 50 20 DMU37 50 20 DMU38 50 20
2,501 2,563
2,651 2,706
2,731 2,731
2,601 2,669
2,749 2,749
2,834 3,250(3,252) 2,677 3,053(3,063) 2,901 3,197(3,199) 2,739 3,092
2,716 2,984(2,985) 3,395 3,453(3,457) 3,481 3,518(3,519) 3,681 3,697(3,698) 3,394 3,394
3,332 3,343
3,726 3,781(3,787) 3,697 3,848(3,854) 3,844 3,849(3,852) 3,650 3,807(3,814) 3,604 3,739(3,740) 4,380 4,380
4,725 4,725
4,668 4,668
4,648 4,648
4,164 4,164
4,647 4,667(4,670) 4,848 4,848
4,692 4,692
4,691 4,691
4,732 4,732
5,640 5,640
5,927 5,927
5,728 5,728
5,385 5,385
5,635 5,635
5,621 5,621
5,851 5,851
5,713 5,713
2,566 2,706 2,731 2,669 2,749 3,250 3,053 3,197 3,092 2,984 3,453 3,518 3,697 3,394 3,343 3,781 3,848 3,849 3,807 3,739 4,380 4,725 4,668 4,648 4,164 4,667 4,848 4,692 4,691 4,732 5,640 5,927 5,728 5,385 5,635 5,621 5,851 5,713

Table 7 continued
Prob n m
DMU39 50 20 DMU40 50 20 DMU41 20 15 DMU42 20 15 DMU43 20 15
LB
5,747 5,577 2,839 3,066 3,121
LB
3,112 2,930 3,425 3,353 3,317 3,369 3,379 3,839 4,012 4,108 4,165 4,099 4,366 4,182 4,214 4,199 4,259 4,886 5,004 5,049 5,130 5,072 5,357 5,484 5,423 5,419 5,492 6,050 6,223 5,935 6,015 6,010 6,329 6,399 6,508 6,593 6,435
UB
5,747
5,577 3,267(3,275) 3,401(3,416) 3,443(3,455)
UB
3,489(3,501) 3,273 4,099(4,101) 3,972(3,973) 3,810(3,834) 3,754(3,765) 3,768(3,772) 4,247(4,252) 4,380(4,383) 4,450(4,454) 4,424(4,425) 4,331(4,332) 5,049 4,779(4,781) 4,829(4,834) 4,694(4,696) 4,888(4,890) 5,293(5,294) 5,342
5,437
5,367 5,269(5,271) 5,902(5,910) 6,012(6,016) 5,934(5,936) 5,891 6,072(6,076) 6,302
6,571
6,283
6,376 6,380(6,384) 6,974(6,975) 6,930
6,962 7,158(7,164) 6,824
GES2
5,747 5,577 3,267 3,401 3,443
GES2
3,489 3,273 4,099 3,972 3,810 3,754 3,768 4,247 4,380 4,450 4,424 4,331 5,051 4,779 4,829 4,694 4,888 5,293 5,354 5,439 5,388 5,269 5,902 6,012 5,934 6,002 6,072 6,333 6,589 6,291 6,376 6,380 6,974 7,006 6,988 7,158 6,843
Table 8
Problem
DMU44 DMU45 DMU46 DMU47 DMU48 DMU49 DMU50 DMU51 DMU52 DMU53 DMU54 DMU55 DMU56 DMU57 DMU58 DMU59 DMU60 DMU61 DMU62 DMU63 DMU64 DMU65 DMU66 DMU67 DMU68 DMU69 DMU70 DMU71 DMU72 DMU73 DMU74 DMU75 DMU76 DMU77 DMU78 DMU79 DMU80
Results for DMU1–DMU80
n m
20 15 20 15 20 20 20 20 20 20 20 20 20 20 30 15 30 15 30 15 30 15 30 15 30 20 30 20 30 20 30 20 30 20 40 15 40 15 40 15 40 15 40 15 40 20 40 20 40 20 40 20 40 20 50 15 50 15 50 15 50 15 50 15 50 20 50 20 50 20 50 20 50 20
Acknowledgements The authors would like to thank Dr. A. Vazacopoulos for helpful comments on an earlier draft of the paper.

References
Applegate D, Cook W (1991) A computational study of the job-shop scheduling problem. ORSA J Comput 3:149–156
Balas E, Vazacopoulos A (1995a) Guided local search with shifting bottleneck for job shop scheduling. MSSRR 609, GSIA, Carnegie Mellon University, Pittsburgh
Balas E, Vazacopoulos A (1995b) Guided local search with shifting bottleneck for job shop scheduling. Manage Sci 44:262–275
Binato S, Hery W, Loewenstern D, Resende M (2001) A GRASP for job shop scheduling, Essays and surveys on metaheuristics. Kluwer, Dordrecht, pp 59–79
Demirkol E, Mehta S, Uzsoy R (1997) Benchmarks for job shop scheduling problems. Eur J Operat Res 109:137–141
Grabowski J, Nowicki E, Smutnicki C (1988) Block algorithm for scheduling of operations in job-shop system (in Polish). Przeglad Statystyczny 35:67–80
Jain A, Meeran S (1999) Deterministic job shop scheduling: past, present and future. Eur J Operat Res 113:390–434
Jain A, Rangaswamy B, Meeran S (2000) New and stronger job shop neighborhoods: a focus on the method of Nowicki and Smutnicki (1996). J Heurist 6(4):457–480
Lawler EL, Lenstra JK, Rinnooy Kan AHG (1982) Recent developments in deterministic sequencing and scheduling. Reidel, Dordrecht, pp 35–73
Lawrence S (1984) Resource constrained project scheduling: an experimental investigation of heuristic scheduling techniques (supplement). Graduate School of Industrial Administration Carnegie-Mellon University, Pittsburgh, Pennsylvania.
Lee C-Y, Pinedo M (2002) Optimization and heuristic scheduling. In: Pardalos PM, Resende MGC (eds) Handbook of Applied Optimization. Oxford University Press, Oxford, pp 569–584
Nowicki E, Smutnicki C (1996) A fast tabu search algorithm for the job-shop problem. Manage Sci 42(6):797–813
Nowicki E, Smutnicki C (2002) Some new tools to solve the job shop problem. Technical report 60/2002, Institute of Engineering Cybernetics, Technical University of Wroclaw, Wroclaw, Poland
Pezzella F, Merelli E (2000) A tabu search method guided by shifting bottleneck for the job shop scheduling problem. Eur J Operat Res 120:297–310
Roy B, Sussmann B (1964) Les problemes d'ordonnancement avec contraintes disjonctives. Note
DS9 bis, SEMA, Paris (in French)
Shylo V (1999) A global equilibrium search method (Russian). Kybernetika i Systemniy Analys
1:74–80
Taillard E (1993) Benchmarks for basic scheduling problems. Eur J Operat Res, 64:278–285
Van Laarhoven P, Aarts E, Lenstra J (1992) Job shop scheduling by simulated annealing. Operat
Res 40:113–125
Watson J, Howe A, Whitley L (2003) An analysis of iterated local search for job-shop scheduling.
In: Fifth metaheuristics international Conference (MIC 2003)