
OR Spektrum (1989) 11:3–16

© Springer-Verlag 1989

A Survey of Priority Rule-Based Scheduling

R. Haupt

Lehrstuhl für Personalwirtschaft, Universität zu Köln, Albertus-Magnus-Platz, D-5000 Köln 41, Federal Republic of Germany

Received February 2, 1988 / Accepted June 30, 1988

Summary. In this paper, we survey the literature on
heuristic priority rule-based job shop scheduling. Priority
rules have been intensively investigated over the last 30
years by means of simulation experiments. They are
also used in Shop Floor Control software systems. We
present a classification, a characterization, and an
evaluation of elementary priority rules. Some priority
rule-related model extensions are discussed.

Zusammenfassung. This paper gives an overview of heuristic, priority rule-based job sequencing.
Priority rules have been investigated in depth over the last 30 years by means of simulation
experiments. They have also found their way into software systems for production planning and
control. The paper attempts a classification, characterization, and evaluation of elementary
priority rules. Finally, some priority rule-related model extensions are touched upon.

1. Introduction

Production scheduling is part of the shop management
decision system. Specifically, the day-to-day scheduling
and dispatching of jobs through the shop is embedded
in a Shop Floor Control (SFC) System.

The scheduling or sequencing problem is defined as
the determination of the order in which a set of jobs
(tasks) {i | i = 1, ..., n} is to be processed through a set
of machines (processors, work stations) {k | k = 1, ..., m}
[36, 10, 111, 59].

In real-life situations beyond the manufacturing
industries, jobs may just as well be interpreted as aircraft
queuing up to land, as patients waiting to be
treated by a consultant surgeon, or as bank customers
at a row of tellers' windows. Correspondingly,
one would identify machines with an airport
runway, with surgical facilities, or with bank tellers,
respectively [36, 59].

Job i is specified by a set of operations {j | j = 1, ..., m_i}
representing the processing requirements on various
machines. Depending on whether the job processing
order (routing) implies a predetermined sequence of
operations, we distinguish between the case of arbitrary
routings (open shop) and the case of given job routings,
which may be identical for all jobs (flow shop) or non-
identical (job shop). Unless otherwise specified, we
assume in what follows the case of individual but deter-
mined job processing orders (job shop).

A scheduling model usually involves the following
assumptions [36, p. 5; 220; 38; 111, p. 11; 59, p. 8],
the last four of which will be relaxed in Sect. 4:

(1) Each job is processed by one machine at a time (no
splitting, no overlapping).

(2) Each machine processes one job at a time (no use
of machining-centers).

(3) Each operation once started must be completed
without interruption (no preemption).

(4) Each machine is continuously available for produc-
tion (no breakdown).

(5) The only limiting resource is the machine (no lack
of operator, tool, or material).

(6) Processing of an operation comprises all technologically
determined times such as machining, setup, or
move times. Processing times are independent of the
schedule (no sequence-dependent setup-times).


(7) Jobs are strictly-ordered sequences of operations
(no assembly).

(8) A given operation can be performed by only one type of machine (no alternate routing).
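As an illustration of the model just described (not part of the original survey; the class and field names below are invented), a job shop instance satisfying assumptions (1)-(8) can be represented by a fixed routing of operations per job:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Operation:
    machine: int            # the one machine (type) that can perform this operation (assumption 8)
    processing_time: float  # includes setup and move times, sequence-independent (assumption 6)

@dataclass
class Job:
    job_id: int
    release_time: float     # r_i, the arrival of the job at the shop
    due_date: float         # d_i, the promised delivery date
    operations: List[Operation] = field(default_factory=list)  # strictly ordered routing (assumption 7)

# a small invented instance: one job visiting machine 0 and then machine 1
job = Job(job_id=1, release_time=0.0, due_date=25.0,
          operations=[Operation(machine=0, processing_time=4.0),
                      Operation(machine=1, processing_time=6.0)])
```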

In Sect. 2 we introduce the concept of the “priority
rule” by referring to the distinction between static and dynamic
scheduling. In Sect. 3 a state-of-the-art survey of dynamic
job shop scheduling research under the above-mentioned
assumptions is provided. In Sect. 4 we discuss some
priority rule-based scheduling extensions by relaxing
assumptions (5) to (8). In Sect. 5 we try to give an
outlook on future dynamic scheduling research.

2. Dynamic Job Shop Scheduling

One basic distinction in scheduling research refers to
the nature of the job arrivals in the shop [36, p. 7]. In
a static model jobs arrive simultaneously and are avail-
able to be scheduled at the same instant. According-
ly, their ready or release times ri are 0, i.e. the total set
of jobs is scheduled at time t = 0 (“all-at-once-schedul-
ing”). New entering jobs are not admitted to the shop
until the preceding scheduling cycle is finished. A
dynamic model allows for a continuous stream of
arriving orders in time that are intermittently released
to the shop and are included in the current scheduling
procedure. Reasonably, the distinction between simul-
taneous and intermittent job arrivals involves the one
between known and fixed job data on one hand and
stochastic data, in particular job interarrival times, on
the other hand. Hence, we distinguish a static/deter-
ministic scheduling problem from a dynamic/stochastic
one. Here, we are exclusively concerned with the latter
case.

Within the subset of dynamic/stochastic models
we deal with experimental, simulation-based approaches,
while ignoring the analytical procedures by means of
queuing theory systems. The vast majority of simula-
tion-based dynamic job shop scheduling literature as-
sumes a Poisson distribution of job arrivals and, cor-
respondingly, exponentially distributed interarrival times.
Where processing times are treated as random
variables, exponential as well as normal distributions
occur.
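To make this stochastic input concrete, the following sketch (not from the paper; all parameter values are invented) draws exponentially distributed interarrival times, which yields a Poisson stream of job arrivals, together with exponentially distributed processing times:

```python
import random

def generate_release_times(mean_interarrival: float, horizon: float, seed: int = 1):
    """Return the release times r_i of a Poisson arrival stream up to the horizon."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / mean_interarrival)  # exponential interarrival times
        if t > horizon:
            return times
        times.append(t)

# invented parameters: on average one job every 10 time units, 8 units of work per operation
release_times = generate_release_times(mean_interarrival=10.0, horizon=200.0)
rng = random.Random(2)
processing_times = [rng.expovariate(1.0 / 8.0) for _ in release_times]
```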

The evident advantage of a dynamic scheduling ap-
proach is due to the fact that it allows for an up-to-
date decision with respect to meanwhile entering (pos-
sibly rush) jobs, by loading a machine at the latest
possible moment, namely as that machine gets idle
[101]. Contrarily, a static model postpones the urgent
job to the subsequent scheduling cycle. Accordingly,
the dynamic property of a simulation model exhibits
the obvious disadvantage that each sequencing decision

is based on the constrained information horizon given
by the set of currently scheduleable jobs, which pro-
hibits the definition of an overall optimum: due to the
real-time capability of a given sequencing decision, only
the selection of the first job of the computed job se-
quence on that machine is actually performed, while
the remainder of the scheduled jobs is rescheduled and
possibly revised on the occasion of the subsequent
loading moment. Thus, rather than determining a
globally optimal sequencing policy, a dynamic job shop
scheduling simulation at best is able to provide a heu-
ristic optimum among alternative sequencing strategies
by which a given job file is scheduled through the shop
in successive simulation runs [25].

Such a policy, which defines a specific sequencing
decision each time a machine gets idle, is called a pri-
ority rule. A priority rule allows an idle machine to
select its next operation from among those available.
Primarily, “available” refers to currently waiting jobs
at the corresponding machine; but, as we show in
Sect. 3.2, the availability-property may also be extended
to jobs being currently in the queue or on the machine
of other work stations before proceeding to the queue
in question.
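Viewed operationally, a priority rule is a small piece of decision logic invoked whenever a machine falls idle. The sketch below is only an illustration (the queue layout and field names are invented); it follows the convention, used later in Table 2, that the smallest priority value wins:

```python
def dispatch(queue, now, priority_rule):
    """Select the next (job, op_index) pair from an idle machine's local queue.

    `priority_rule(job, op_index, now)` returns a number; the smallest
    value has the highest priority.  An empty queue leaves the machine idle.
    """
    if not queue:
        return None
    return min(queue, key=lambda jo: priority_rule(jo[0], jo[1], now))

# SPT needs only the imminent operation's processing time (local, time-independent)
def spt(job, op_index, now):
    return job["proc_times"][op_index]

queue = [({"proc_times": [4.0, 6.0], "due_date": 30.0}, 0),
         ({"proc_times": [2.0, 9.0], "due_date": 20.0}, 0)]
print(dispatch(queue, now=0.0, priority_rule=spt))   # picks the job with the 2.0-unit operation
```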

3. Elementary Approaches in Priority Rule-Based
Scheduling

3.1. Measures of Shop Performance

Basically, two groups of appropriate performance
criteria, namely flow-time-based and due-date-based
measures, are of importance in dynamic scheduling.
The following definitions and notations are introduced
[36, p. 11; 111, p. 18; 59, p. 10]:

p_{ij}: processing-time for operation j of job i

d_i: due-date, i.e. the promised delivery date of job i

a_i: allowance for job i, i.e. the allowed lead time that is assigned to job i at its arrival or release-time r_i; a_i = d_i - r_i

W_{ij}: waiting-time preceding operation j of job i

C_i: completion-time of job i; C_i = r_i + \sum_{j=1}^{m_i} (W_{ij} + p_{ij}), since, according to assumption (6) in Sect. 1, a job in the shop is either on a machine or in a queue

F_i: flow-time (shop time, manufacturing interval) of job i; F_i = C_i - r_i = \sum_{j=1}^{m_i} W_{ij} + \sum_{j=1}^{m_i} p_{ij}

L_i: lateness of job i; L_i = C_i - d_i = C_i - (a_i + r_i) = F_i - a_i

T_i: tardiness of job i; T_i = \max\{0, L_i\}
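For illustration only (the data and function below are invented, not taken from the survey), these definitions translate directly into code:

```python
def job_measures(release, due, waits, procs):
    """Return (C_i, F_i, L_i, T_i) for one job, given per-operation waiting and processing times."""
    completion = release + sum(w + p for w, p in zip(waits, procs))  # C_i
    flow = completion - release                                       # F_i
    lateness = completion - due                                       # L_i
    tardiness = max(0.0, lateness)                                    # T_i
    return completion, flow, lateness, tardiness

# invented example: released at t = 5, two operations, due date 20
print(job_measures(release=5.0, due=20.0, waits=[3.0, 2.0], procs=[4.0, 6.0]))
# -> (20.0, 15.0, 0.0, 0.0)
```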

Flow-time-oriented performance measures are

\sum_{i=1}^{n} F_i   or   \bar{F} = \frac{1}{n} \sum_{i=1}^{n} F_i,

respectively. By definition, \bar{F}, \bar{C}, \bar{W}, \sum_{i=1}^{n} F_i, \sum_{i=1}^{n} C_i, and
\sum_{i=1}^{n} \sum_{j=1}^{m_i} W_{ij} are equivalent performance criteria, i.e. a
schedule that optimizes \sum_{i=1}^{n} F_i or \bar{F} also optimizes

\sum_{i=1}^{n} \sum_{j=1}^{m_i} W_{ij}   or   \bar{W} = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{m_i} W_{ij},

as well as

\sum_{i=1}^{n} C_i   or   \bar{C} = \frac{1}{n} \sum_{i=1}^{n} C_i,

since both processing-times and release-dates are unaffected by the sequencing decision (see Assumption 6 in
Sect. 1) [111, p. 21].

Furthermore, \bar{F} is proportional to the mean in-process inventory, since the steady-state behaviour of
a waiting system implies [36, p. 19]: \bar{F} = \frac{1}{\lambda} \cdot \bar{N}, where
\lambda denotes the mean arrival rate (and, accordingly, 1/\lambda
denotes the mean time between two arrivals), and \bar{N}
denotes the mean number of jobs in the system.

Another flow-time-based measure of effectiveness is
the distribution of the individual flow-times, as expressed
e.g. by the flow-time variance \sigma^2(F) = \frac{1}{n} \sum_{i=1}^{n} (F_i - \bar{F})^2.
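As a purely illustrative numerical example (values invented): with a mean arrival rate of \lambda = 0.1 jobs per hour, i.e. one arrival every 10 hours on average, and a mean of \bar{N} = 4 jobs in the system, the relation gives a mean flow-time of \bar{F} = (1/\lambda) \cdot \bar{N} = 40 hours.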

In static problems, the measures F_{max} = \max\{F_i\}
and C_{max} = \max\{C_i\} denote the scheduling time or
make-span, since r_i = 0 for all i = 1, ..., n. In a dynamic
model (i.e. different r_i \geq 0 for the jobs), the make-span
is not a reasonable criterion. (Correspondingly, measures
of machine or shop utilization are unusual in a dynamic
scheduling environment.)

Due-date-based criteria are

\sum_{i=1}^{n} L_i   or   \bar{L} = \frac{1}{n} \sum_{i=1}^{n} L_i,

respectively (which is equivalent to \sum_{i=1}^{n} F_i or \bar{F}, respectively, since L_i and F_i differ from one another by the
sequence-independent allowances a_i), and

\sum_{i=1}^{n} T_i   or   \bar{T} = \frac{1}{n} \sum_{i=1}^{n} T_i,

respectively. In case early deliveries involve penalties
(which might be realistic under just-in-time management
conditions), the measures

\sum_{i=1}^{n} E_i   or   \bar{E} = \frac{1}{n} \sum_{i=1}^{n} E_i,

where E_i, the earliness of job i, is defined as E_i = \max\{-(C_i - d_i), 0\}, could be of interest.

Rather than defining L_{max} = \max\{L_i\} or T_{max} =
\max\{T_i\} (which are useful expressions in static approaches), one would prefer variance criteria for lateness
and tardiness,

\sigma^2(L) = \frac{1}{n} \sum_{i=1}^{n} (L_i - \bar{L})^2,   \sigma^2(T) = \frac{1}{n} \sum_{i=1}^{n} (T_i - \bar{T})^2,
or the fraction of jobs tardy f_t (0 \leq f_t \leq 1) in dynamic models.

Some priority rule-based investigations introduce cost criteria rather than time-oriented measures. Flow-time-oriented performance criteria might be replaced by in-process inventory cost, while tardiness-based measures account for contractual penalties for late deliveries, for customer badwill, for lost sales, and for rush shipping cost [122]. Through the introduction of cost-based performance expressions one succeeds in coping with the multiple objective problem.

3.2. Classification of Priority Rules

As the definition of a priority rule given in Sect. 2 points out, the concept of a priority rule-based scheduling approach considers the sequencing decision as a set of independent, decentralized one-machine problems [73, 101]. Hence, the idea of transferring algorithms which are optimal with respect to static one-machine problems to a dynamic job shop model environment seems intuitively appealing. For example, an order of jobs arranged according to non-decreasing processing-times, which minimizes \bar{F}, \bar{C} and \bar{W} in the one-machine case (Smith-rule) [36, p. 26], can be used as a reasonable heuristic flow-time-oriented sequencing principle ("shortest processing-time"-rule) in a dynamic job shop approach. Similarly, the application of the "due-date"-rule, i.e. a sequence according to non-decreasing job due-dates, which minimizes T_{max} and L_{max} in the one-machine problem (Jackson-rule) [36, p. 30], may serve as a tardiness-oriented "rule of thumb" in a simulative approach. As these two examples show, evident job attributes, like processing-times, due-dates, and other data of the job file, are typical arguments of a priority function.

A priority function based on properties of an individual job assigns a priority value to that job regardless of the urgency of other competing jobs. A basically different approach is one in which the priority of a job depends on the urgency of other jobs too. This could be achieved, for example, by checking whether the selection of a job makes its competitor jobs critical (e.g. in terms of their slack). In fact, such a rule estimates the effect, expressed by some performance measure, that a hypothetical job selection has on the total of scheduleable jobs, i.e. on other jobs and on itself. Thus, this procedure tries first to "optimize" and then to select. The "alternate operation" rule of Gere [60] provides an example of this approach.

Table 1. Classification scheme for priority rules. Computation of a job's priority referred to:
1. attributes only of the job in question:
   1.1. Local queue information:
      1.1.1. Time-independent priority computation:
         - Random attributes: RANDOM, FCFS, FASFS
         - Job processing attributes: SPT, LPT, LWKR, MWKR, FOPNR, GOPNR, TWORK
         - Job due-date attributes: DD, ODD
      1.1.2. Time-dependent priority computation: ALL, SL, CR, ALL/OPN, S/OPN, S/WKR, S/ALL, OSL, OCR
   1.2. Global shop information: NINQ, WINQ, XWINQ, LA
2. attributes both of the job in question and of other competing jobs: ALTOP

Priority rules on the basis of individual job information, without considering priority interactions with other competing jobs, can first be classified into time-independent and time-dependent functions. A processing-time-based rule can be regarded as time-independent, because the priority value, once computed at the entry of that operation in the corresponding queue, does not change over time, while time-dependent priorities, e.g. slack-based rules (see
Sect. 3.3), vary over time by definition and have to be (re)calculated, for reasons of real-time processing, at the last possible moment, i.e. as the machine gets idle and a loading decision is to be made.

Second, a distinction between local and global priority rules refers to the extent of the information status of rules. A local rule requires information only about jobs currently waiting at the machine in question, whereas global rules are based on information about jobs or about the shop beyond the corresponding queue. Since priority rules are typically applied in a decentralized sequencing procedure, the use of local queue information seems to be appropriate. But the more one is willing to consider a scheduling problem as a centralized approach, where interactions with parallel loading decisions at other machines are to be taken into account, the more the use of global shop information becomes adequate. Rather than regarding a machine as an island, one might expect scheduling improvements by extending the "myopic" information horizon to neighbor machines and subsequent sequencing decisions.

Similar to global queue information, the performance evaluation rationale, i.e. the priority interaction approach, equally tends to anticipate the future progress of a schedule, which, of course, in a dynamic model environment, is a rather limited "look ahead", since the expected performance of a job selection is no more than an approximate estimation of the objective function.

Summarizing, we present a classification scheme in Table 1. The rules to be defined in Sect. 3.3 are assigned to the appropriate cases. Though the heuristic scheduling literature is familiar with the two basic distinctions between time-independent and time-dependent rules and between local and global rules [77, 36, 91, 104], survey articles often ignore those classifications [38, 104, 24]. Moreover, class 2 ("... attributes both of the job in question and of other jobs") does not occur at all.

The following notation is used in Table 2:

C_{ij}: completion time of operation j of job i (C_{i0} = r_i = release time of job i, see above)
N_{ij}(t): number of waiting jobs at time t in the queue containing operation j of job i
P: set of jobs which are currently processed on machines preceding operation j
q: index of remaining operations (q = j, ..., m_i)
R: set of jobs in the queue of operation j
X_{ij}: particular value of a random variable, uniformly distributed between 0 and 1, assigned to operation j of job i
Y_{ij}(t): total work, i.e. the sum of the imminent-operation processing-times, of waiting jobs at time t in the queue containing operation j of job i
Y'_{ij}(t): total work (including work that will "soon" arrive) of waiting jobs at time t in the queue containing operation j of job i (a job is expected to arrive "soon" if, at time t, its preceding operation is being performed)
Z_i(t): priority value of job i at time t (i \in R for rules (1)-(24) and (26), i \in P \cup R for rule (25)); smallest values of Z_i(t) have greatest priority

Table 2. Formalization of basic priority rules: Z_i(t), followed in parentheses by the property of the job selected

(1) RANDOM: X_{ij} (the smallest value of a random priority)
(2) FCFS: C_{i,j-1} (arrived at the queue first; "first come, first served")
(3) FASFS: r_i (arrived at the shop first; "first arrival at shop, first served")
(4) SPT: p_{ij} (the shortest processing-time)
(5) LPT: -p_{ij} (the longest processing-time)
(6) LWKR: \sum_{q=j}^{m_i} p_{iq} (the least work remaining)
(7) MWKR: -\sum_{q=j}^{m_i} p_{iq} (the most work remaining)
(8) FOPNR: m_i - j + 1 (the fewest number of operations remaining)
(9) GOPNR: -(m_i - j + 1) (the greatest number of operations remaining)
(10) TWORK: -\sum_{j=1}^{m_i} p_{ij} (the greatest total work)
(11) DD: d_i (the earliest due-date)
(12) ALL: d_i - t (the smallest allowance)
(13) SL: d_i - t - \sum_{q=j}^{m_i} p_{iq} (the smallest slack)
(14) CR: (d_i - t) / \sum_{q=j}^{m_i} p_{iq} (the smallest critical ratio)
(15) ALL/OPN: (d_i - t) / (m_i - j + 1) (the smallest ratio of allowance per number of operations remaining)
(16) S/OPN: (d_i - t - \sum_{q=j}^{m_i} p_{iq}) / (m_i - j + 1) (the smallest ratio of slack per number of operations remaining)
(17) S/WKR: (d_i - t - \sum_{q=j}^{m_i} p_{iq}) / \sum_{q=j}^{m_i} p_{iq} (the smallest ratio of slack per work remaining)
(18) S/ALL: (d_i - t - \sum_{q=j}^{m_i} p_{iq}) / (d_i - t) (the smallest ratio of slack per allowance)
(19) ODD: d_{ij} (the earliest operation due-date)
(20) OSL: d_{ij} - t - p_{ij} (the smallest operation slack)
(21) OCR: (d_{ij} - t) / p_{ij} (the smallest operation critical ratio)
(22) NINQ: N_{i,j+1}(t) (the least number of jobs in the queue of its next operation)
(23) WINQ: Y_{i,j+1}(t) (the least total work in the queue of its next operation)
(24) XWINQ: Y'_{i,j+1}(t) (the least total work in the queue of its next operation, both present and expected)
(25) LA: Z_i, i \in P \cup R (the highest priority among the jobs in the corresponding queue and those which are currently processed on machines preceding the operation in question; "look-ahead")
(26) ALTOP: -\sum_{h \in R, h \neq i} \min\{d_h - (t + p_{ij}) - \sum_q p_{hq}, 0\} - \min\{d_i - (t + p_{ij}) - \sum_{q=j+1}^{m_i} p_{iq}, 0\} (the smallest sum of tardiness for all jobs in the queue due to the selected job; "alternate operation")
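To make the formalization concrete, a few of the rules in Table 2 can be written as priority functions over a common job description. The following sketch is illustrative only (the dictionary layout is invented; operations are indexed from 0, so m_i - j + 1 corresponds to len(p) - j):

```python
def spt(job, j, t):
    """(4) SPT: shortest imminent processing time."""
    return job["p"][j]

def lwkr(job, j, t):
    """(6) LWKR: least work remaining."""
    return sum(job["p"][j:])

def dd(job, j, t):
    """(11) DD: earliest job due-date."""
    return job["d"]

def sl(job, j, t):
    """(13) SL: smallest slack (allowance minus work remaining)."""
    return job["d"] - t - sum(job["p"][j:])

def s_opn(job, j, t):
    """(16) S/OPN: slack per number of operations remaining."""
    return sl(job, j, t) / (len(job["p"]) - j)

# each rule maps a job to a number; the waiting job with the smallest value is loaded next
job = {"p": [4.0, 6.0, 3.0], "d": 30.0}   # invented data, job about to start its second operation
print(spt(job, 1, t=10.0), sl(job, 1, t=10.0), s_opn(job, 1, t=10.0))   # 6.0, 11.0, 5.5
```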
3.3. Survey of Priority Rules

The rules presented in Table 2 and classified in Table 1 are basic ones, i.e. rules that are adequate to the elementary approaches under assumptions (1)-(8) in Sect. 1 (rules relating to more complex model extensions are introduced in Sect. 4). Moreover, the list in Tables 1 and 2 is limited to simple rules, since combinations of rules can be designed in great variety once a set of elementary simple rules has been defined (see later in this section). Finally, the survey is not exhaustive; we discuss related modifications of some rules in this section and in Sect. 3.4.

Rules with random attributes, i.e. (1) RANDOM, (2) FCFS, and (3) FASFS, serve as benchmarks in comparison to reasonable heuristics which one would expect to perform better.

Among the rules with job processing information, (4) SPT is the best known, the most applied, and yet one of the most efficient rules. In line with (5) LPT, it requires the lowest amount of information, since only operation data (not job data) from the local queue (not from other queues) are needed. While (6) LWKR and (7) MWKR refer to the work remaining, (8) FOPNR and (9) GOPNR are based on the number of operations remaining. Similarly, as an alternate version of (10) TWORK, a variant could be designed which favors the greatest number of total operations [24]. LWKR and FOPNR give preference to jobs whose work completed is rather advanced. Thus, they can be regarded as value-oriented rules selecting jobs with a high fraction of their value added or cumulative value relative to their total value, whereas TWORK accounts for a static value version. Explicit value approaches are investigated by e.g. [5, 6, 7, 74, 106, 119, 120, 72]. The intent of MWKR and GOPNR is to speed up jobs with large processing work, resulting in a well-balanced work progress of all jobs at the expense of a high volume of in-process inventory, while LWKR and FOPNR tend to reduce the number of jobs in the shop. Other modifications of job processing-oriented rules concern SPT or LPT multiplied with or divided by TWORK, respectively [125].

Among the due-date-related priority rules, regardless of whether they are time-independent (class 1.1.1) or time-dependent (class 1.1.2 in Table 1), (11) DD is the most important rule and the one that formalizes a natural, intuitive due-date behavior of everyday life. DD is equivalent to (12) ALL, i.e. job orders computed according to DD and ALL are identical, yet ALL would require actualized, time-dependent rescheduling. Therefore, DD clearly dominates ALL, which has no practical importance.

Among the total set of due-date-related rules (11)-(21), only DD and (19) ODD are time-independent. Concerning the time-dependent rules, some authors discuss the possibility of varying rescheduling frequencies [25, 99, 56, 4, 96], which points out a trade-off between improved performance and the cost, in terms of scheduling effort, of real-time-based rescheduling procedures.

While both (13) SL and (14) CR tend to reflect a job's urgency in a more appropriate manner than the simple allowance does, they modify DD (or ALL) in their own ways, SL referring to the difference and CR referring to the ratio of a job's allowance and its remaining work. CR might possibly be the more appealing version, because a "critical" job is defined as one with CR < 1 [11]. CR is an allowance-per-remaining-work expression. Hence, CR and (15) ALL/OPN differ from one another by their denominators, i.e. remaining work and remaining number of operations, respectively. They are explicitly compared with one another in [136, 88, 92]. CR and ALL/OPN belong to the class of time-dependent ratio-type rules, which moreover comprises (16) S/OPN, (17) S/WKR, and (18) S/ALL. Adam et al. [3] have pointed out an anomaly in the behavior of those approaches in the case of a negative numerator: for a negative slack, S/OPN, for example, does not reflect the urgency of a job as intended, since a job with many operations left is not considered as urgent as another job with the same negative slack but with only few remaining operations. Kanet [79] has given the correct solution to this dilemma by replacing the ratio with a product of slack and number of operations remaining in the case of a negative slack (see similar suggestions [60; 84; 70, p. 115; 121]).
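Stated in code, the correction reads as follows (an illustrative sketch; consult the original reference for the exact formulation). For non-negative slack the usual ratio is kept, while for negative slack the ratio is replaced by the product, so that a late job with many operations left is again ranked as the more urgent one:

```python
def s_opn_corrected(slack: float, ops_remaining: int) -> float:
    """S/OPN priority with the negative-slack correction attributed to Kanet [79].

    Smaller values mean higher priority.  With a negative slack the plain ratio
    shrinks in magnitude as ops_remaining grows, wrongly de-emphasizing jobs with
    many remaining operations; multiplying instead restores the intended ordering.
    """
    if slack >= 0:
        return slack / ops_remaining
    return slack * ops_remaining

# two equally late jobs: the one with more remaining operations now gets the smaller value
print(s_opn_corrected(-4.0, 5), s_opn_corrected(-4.0, 1))   # -20.0  -4.0
```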
S/ALL has been proposed by Miyazaki [88]. Baker et al. [15] show the "critical ratio"-logic of this rule: S/ALL = 1 - 1/CR.

The idea of internally set operation due-dates has been suggested by Conway et al. [36]. Operation due-dates serve as "milestones" that pace the work progress through the shop. (The question of how to set those milestones is treated in Sect. 3.4.) Baker [11] introduces an operation-based CR-version, i.e. (21) OCR, while ODD and (20) OSL were studied already earlier [36, p. 231; 85; 96].

The group of rules based on global shop information (class 1.2 in Table 1) extends the sequencing decision horizon by anticipating the machines the jobs in question will proceed to next, and by anticipating jobs that are expected to arrive soon at the machine in question. The first case comprises rules (22) NINQ, (23) WINQ, and (24) XWINQ, the latter case rule (25) LA. The idea of NINQ, WINQ, and XWINQ is to give preference to jobs that would move on to queues with the least backlog, rather than to speed up a job now that will afterwards be stopped in a congested queue [36, p. 223]. WINQ and NINQ are related to one another like LWKR to FOPNR or like MWKR to GOPNR, i.e. WINQ allows for the weights given by the processing times of waiting operations. XWINQ, at the cost of a complex implementation effort, estimates the next queue's backlog at the moment the job in question enters this next queue. The rationale of LA, on the other hand, takes into account a rush job currently being processed on its predecessor machine; the urgency of such a job may require booking a machine reservation in advance, even at the expense of introducing a deliberate idle period on that machine until then [60]. Of course, the "Insert" option allows for other jobs that can be fitted into the idle time gap.

A last global shop information-based approach reported in the literature is one which takes into account the total number of waiting jobs in all queues of the shop, which is performed by the "processing-time-factor" [85] (see also [36, p. 235]). This information is used to control the weight of the SPT-rule within an additive combination of several components.

Finally, we are aware of only one example of rules in class 2 of Table 1 ("... attributes both of the job in question and of other jobs"), i.e. (26) ALTOP by Gere [60]. While Gere is not explicit enough in formalizing this rule, the definition of Z_i(t) given in Table 2 assumes the objective function of minimizing the total tardiness of all jobs. Yet, one should point out that this evaluation approach is based at best on a rough estimate of the performance, since a job is considered as critical only if its slack has become irreversibly negative at the moment of the loading decision and, hence, its delay is unavoidable.

The use of value-based rules rather than time-based ones is suggested in some investigations that introduce cost-oriented performance measures [5, 6, 74, 106, 119, 120, 22]. Besides expressions like "highest value added" (see
above), sales-based rules like "the most profitable job" or "the job with the highest selling price" are mentioned [74, 106, 119, 120]. Furthermore, the "expected delay cost" of a job is used as a selection principle [106, 72], where the estimation of tardiness cost refers to a comparison between a regular, predetermined milestone-based work progress and the actual job status.

The idea of combining simple priority terms into more complex rules opens a field of great heuristic variety. In fact, it might be reasonable to link any of the introduced basic expressions, which would produce an immense number of possible priority functions. Rather than surveying specific combinations that are suggested in the literature, we shortly characterize the combination mechanisms. Two basic procedures are standard: the additive and the alternative combination. (Ratios or products of rules are classified under simple rules, e.g. S/OPN or CR, see above.)

The additive or weighted combination determines the priority by an expression Z_i(t) = \sum_{f=1}^{g} a_f Q_{fi}, where Q_{fi} denotes the priority value of the simple priority rule f (f = 1, ..., g) for job i and a_f denotes the weighting or coefficient of rule f (a_f \geq 0) [73]. The special case of 0/1 coefficients a_f corresponds to the alternative combination, in which one of the combined rules is selected depending on the decision situation:

– Second, the selection of a rule may depend on a comparison of the numerical priority values of both rules, e.g. [100]: Z_i(t) = \min\{SPT + r; S/OPN\} (-\infty \leq r \leq \infty).
– Third, one dominant rule is applied, but another rule is used to resolve ties; i.e. S/OPN might be used regularly, but if more than one job has the same highest priority, SPT works as tie-breaker.

A recent approach from O'Grady et al. [63], formalizing the priority rule concept, may conclude the survey. The importance of that article results on one hand from the fact that a generalized additive combination expression is suggested, covering a couple of simple rules with local queue information (class 1.1 in Table 1). The following notation is introduced:

c, b: coefficient vectors of the remaining processing times of a job
p_i: vector which contains the remaining processing times for job i
s: coefficient of the due-date of a job
y, z: coefficients of the number of remaining operations of a job

O'Grady et al. define the priority function Z_i(t) = (c \times p_i) + (s a_i) (where c \times p_i = \sum_{q=j}^{m_i} c_q p_{iq}). Several rules are contained as special cases within this general framework, e.g.:

SPT: c = (1, 0, ..., 0); s = 0;
ALL (or DD): c = (0, ..., 0); s = 1;
SL: c = (-1, ..., -1); s = 1;
LWKR: c = (1, ..., 1); s = 0.

If one extends the approach of O'Grady et al. to the following priority function,

Z_i(t) = [(c \times p_i) + (s a_i) + y(m_i - j + 1)] / [(b \times p_i) + z(m_i - j + 1)],

even a broader range of rules can be expressed, e.g.:

SPT: c = (1, 0, ..., 0); b = (0, ..., 0); s = y = 0; z = 1/(m_i - j + 1);
SL: c = (-1, ..., -1); b = (0, ..., 0); s = 1; y = 0; z = 1/(m_i - j + 1);
CR: c = (0, ..., 0); b = (1, ..., 1); s = 1; y = z = 0;
FOPNR: c = (0, ..., 0); b = (0, ..., 0); s = 0; y = 1; z = 1/(m_i - j + 1);
LWKR: c = (1, ..., 1); b = (0, ..., 0); s = y = 0; z = 1/(m_i - j + 1);
S/OPN: c = (-1, ..., -1); b = (0, ..., 0); s = z = 1; y = 0;
a SPT + (1 - a) S/OPN (0 \leq a \leq 1): c = ((m_i - j + 1)a - (1 - a), -(1 - a), ..., -(1 - a)); b = (0, ..., 0); s = 1 - a; y = 0; z = 1;

etc. (A similar version is the "earliest modified due-date" rule, which is based on the original due-date or the earliest finish-time, whichever is greater [13, 14, 11]; see moreover [81].)

The idea of O'Grady et al. is also of major importance for quite another reason. Since the components of such a general combination do not represent rules but job properties, arbitrary coefficients c, b, s, y, and z represent scheduling principles beyond heuristic priority rules. These principles belong to the area of "probabilistic dispatching" or Monte Carlo sampling procedures, the application of which has been restricted so far to the static case [57; 36, pp. 121-129; 10, pp. 196-210]. The results of the experimental design of a sequence of simulation runs that O'Grady et al. are concerned with are very encouraging. At the same time, one should point out that this approach is equally accessible to traditional priority rule-based job shop scheduling.
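The extended expression can be written out directly. The sketch below is an illustration under invented data (it is not O'Grady et al.'s code); it merely shows how SPT and SL drop out as special cases of the general form:

```python
def generalized_priority(p_rem, allowance, n_rem, c, b, s, y, z):
    """Extended additive priority of the form discussed above:

        Z = (c . p_rem + s * allowance + y * n_rem) / (b . p_rem + z * n_rem)

    p_rem holds the remaining processing times, n_rem the number of remaining operations.
    """
    cp = sum(ci * pi for ci, pi in zip(c, p_rem))
    bp = sum(bi * pi for bi, pi in zip(b, p_rem))
    return (cp + s * allowance + y * n_rem) / (bp + z * n_rem)

p_rem, allowance = [4.0, 6.0, 3.0], 12.0   # invented remaining work and remaining allowance
n = len(p_rem)

# SPT: c = (1, 0, ..., 0), b = 0, s = y = 0, z = 1/n  ->  the imminent processing time
spt_value = generalized_priority(p_rem, allowance, n, c=[1, 0, 0], b=[0, 0, 0], s=0, y=0, z=1.0 / n)

# SL: c = (-1, ..., -1), b = 0, s = 1, y = 0, z = 1/n  ->  the slack
sl_value = generalized_priority(p_rem, allowance, n, c=[-1, -1, -1], b=[0, 0, 0], s=1, y=0, z=1.0 / n)

print(spt_value, sl_value)   # 4.0  -1.0
```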
3.4. Major Results of Priority Rule-Based Scheduling

From the output of the numerous simulation-based studies concerning the effectiveness of priority rules, some main results, the representativeness of which seems to be confirmed through various investigations, are shortly discussed here. The basic research by Conway et al. [35, 33, 34, 37, 36], a pioneering and still today perhaps one of the most comprehensive experimental studies [38, p. 20], has dealt intensively with elementary processing and due-date attribute-oriented rules as well as with more sophisticated refinements. The majority of the following conclusions refers mainly to their work.

One major result is the remarkable efficiency of the SPT rule. Almost unanimously, the dominance of SPT with respect to the mean measures (\bar{F}, \bar{L}, \bar{T}) and with respect to f_t is pointed out. However, this advantage involves the evident disadvantage of prohibitively great flow-times and delays of individual jobs. Modifications of SPT intended to overcome its weak points have led to truncated versions of SPT, i.e. to alternative combinations with FCFS. On one hand, one has placed an upper bound on an operation's waiting-time. On the other hand, one has determined a lower bound for a queue's backlog. Thus, for jobs waiting for a long time and for queues with light load levels, SPT is replaced by FCFS. The corresponding simulation results exhibit obvious trade-offs between the variances and the means of flow-times and delays within the limits of the extreme cases (simple SPT rule, or simple FCFS rule, respectively) [36, p. 226].
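The truncation idea can be sketched as follows (illustrative only; the waiting-time bound w_max and the field names are invented): operations that have waited longer than the bound are served FCFS, all others by SPT:

```python
def truncated_spt(queue, now, w_max):
    """FCFS for operations waiting longer than w_max, plain SPT otherwise.

    `queue` holds dicts with 'arrival' (time the operation joined this queue)
    and 'p' (its processing time).
    """
    overdue = [op for op in queue if now - op["arrival"] > w_max]
    if overdue:
        return min(overdue, key=lambda op: op["arrival"])   # FCFS among the long waiters
    return min(queue, key=lambda op: op["p"])                # SPT otherwise

queue = [{"arrival": 2.0, "p": 9.0}, {"arrival": 14.0, "p": 1.0}]
print(truncated_spt(queue, now=20.0, w_max=10.0))   # the operation waiting 18 units wins despite p = 9
```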
While due-date-based rules, in particular SL and S/OPN, perform significantly worse than SPT with respect to the various means, they outperform SPT with respect to the tardiness and lateness variances and, under specific conditions, even with respect to the fraction of tardy jobs. An evident conclusion would be to combine both types of rules. While Conway et al. report good results for a linear combination of SPT and S/OPN [36, p. 233], Eilon et al. test an alternative combination of an SL-similar component (for tardy jobs) and SPT (for jobs ahead of schedule) [46]. Correspondingly, Oral et al. use an alternative combination version in the form Z_i(t) = \min\{SPT + r; S/OPN\} (-\infty \leq r \leq \infty), which involves, as expected, an increase of the tardiness variance and a decrease of \bar{T} and f_t with increasing SPT-weights (decreasing r) [100].

Another well-confirmed result is that SPT is significantly less sensitive to shop load level variations than slack-based approaches [36, p. 233; 100; 112]. Baker [11] explains this behaviour with the "anti-SPT" logic that is inherent in slack expressions: among two jobs with equal allowances, SL gives preference to the one with longer processing times left, which corresponds to an LPT idea (see also [129]). A strategy to cope with load fluctuations is to reinforce the SPT-weight during periods of high congestion and to weaken it under light load levels [36, p. 233; 85]. Carroll [30] has proposed a specific way to overcome the anti-SPT behaviour of slack-based rules. His COVERT or "c over t" rule favors jobs according to the risk of becoming tardy (slack intent); but among two jobs, both with zero slack, the one with smaller processing times is preferred (SPT intent). Recently, Vepsalainen et al. have modified the COVERT-rule with remarkable success [129].

The question of scheduling performance under various due-date conditions has been studied rather thoroughly. In particular, the due-date tightness, i.e. the mean allowance assigned to an arriving job, and the method of assigning due-dates are two crucial variables. Once more, SPT exhibits an evident robustness with respect to due-date tightness [36, 47]. Conway et al. were among the first to discuss the influence of various modes of due-date setting on sequencing effectiveness. They distinguish externally set dates (by means of random or constant allowances) from internally set ones (by means of job characteristics such as the total work content or the total number of operations of a job). Due-dates on the basis of the total work content exhibit the best scheduling results, whereas all rules, with the exception of SPT, worsen as the total work content method is replaced [36, p. 232].

Other authors deal with sequencing performance under current shop status-based due-dates, e.g. in terms of the workload [45, 130, 88, 12, 20, 15]. Additionally, Baker [11] introduces different ways to determine operation due-dates. Among the possible combinations of job and operation due-date setting modes, the total work content approach in both dimensions significantly outperforms all other methods. A consequence is that the operation-due-date-based priority rules (OCR, ODD, OSL, etc.) can demonstrate their efficiency if job due-dates and operation milestones are set reasonably, i.e. related to the contained total work of a job or an operation.

Recent shop management systems consider the releasing of jobs to the shop more crucial than the dispatching of jobs [81, 135, 134]. Those systems might be characterized as workload-oriented approaches which limit the job release for the purpose of flow-time and work-in-process inventory control. Under such a backlog reduction, different dispatching priority rules obviously do not contrast with one another in a significant way, while under an arrival-time-oriented, instantaneous job release the shop management efficiency depends mainly on the sequencing decision [82, 1, 2].

Summarizing, one has to point out once more the performance and, in particular, the robustness of SPT. Finally, its advantages are obviously confirmed even under uncertainty of processing time prediction. Errors of processing time estimates do not present a serious problem for SPT. Of course, its performance worsens slightly with increasing uncertainty, but not as much as is the case for other rules [36, p. 228; 96; 53; 54]. Hence, it seems that SPT remains a "mainstay" and a standard against which candidate procedures must demonstrate their virtue [38, 129].

4. Selected Extended Models of Dynamic Job Shop Scheduling

The relaxation of some of the assumptions (introduced in Sect. 1) concerning the job shop scheduling process leads to models for which more complex and more specific priority rule expressions might be adequate. Assumptions (6) ("no sequence-dependent setup-times") and (7) ("no assembly") describe conditions of the job and the manufacturing process, while assumptions (5) ("no lack of operator, tool, or material") and (8) ("no alternate routing") restrict the shop management decision system to sequencing policies. Dealing briefly with the removal of these assumptions, we first discuss models of more complex shop conditions (i.e. sequence-dependent rather than sequence-independent setup-times, and tree-structured jobs rather than strictly-ordered sequences of operations). Second, we consider the interaction of the sequencing decision with other shop managerial policies (i.e. machine- and labor- (dual resource) constrained shops rather than machine-limited systems, and alternate machine selection rather than fixed job routings).

The case of scheduling under sequence-dependent setup-times (including group technology-based characteristics of part family production [92]) was mainly studied by Baker [9] (see also [75, 56]). Typical priority rules under these conditions are the "shortest setup-time" rule (MINSEQ), which can be considered as a dynamic version of the static "closest-unvisited-city" heuristic, the "shortest sum of machine and setup-time" rule, and the classical SPT-rule, which has to be interpreted in this model environment as a "shortest machine-time" rule. All of these rules yield more or less the same remarkable results with respect to \bar{F}, light shop load levels given. Contrarily, Baker found significant relative improvements of \bar{F} under heavy load levels by applying the job sequence of the static optimal solution (FIXSEQ). Equally, under all levels of work congestion, FIXSEQ was clearly best concerning the flow-time variance.
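A MINSEQ-style choice can be sketched as follows (illustration only; the part-family encoding and all numbers are invented): the idle machine selects the waiting operation with the shortest changeover from its current setup state:

```python
def minseq(queue, current_family, setup):
    """Select the waiting operation with the shortest setup time from the machine's
    current part family; `setup[a][b]` is the changeover time from family a to family b."""
    return min(queue, key=lambda op: setup[current_family][op["family"]])

setup = [[0.0, 2.0, 5.0],
         [2.0, 0.0, 1.0],
         [4.0, 1.5, 0.0]]                                   # invented sequence-dependent setup matrix
queue = [{"family": 2, "p": 3.0}, {"family": 1, "p": 7.0}]  # two waiting operations
print(minseq(queue, current_family=0, setup=setup))         # family 1 needs only 2.0 units of setup
```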
Tree-structured jobs are characteristic of the assembly shop. The processing of the parts of an assembly-type job is restricted by precedence constraints which require that a given item of the job cannot be operated on before the completion of its partner item. Allowing for serial-parallel-routed shops (rather than serial-routed ones) means dealing with staging delays, which result from already finished parts waiting for the completion of their partners within the same assembly group.

A priority dispatching discipline focussing on minimizing these staging delays should try to reduce the differences of the completion-times of the various component parts which go into one assembly. This means that an optimal procedure with respect to \bar{F} or \bar{T} of the network-structured job, i.e. one allowing for both minimum queuing and minimum staging delays, has to look for completion of the branches of an assembly unit not only on time, but also at the same time. The rationale of such a decision rule demands the continual comparison of the work progress over the assembly components. This procedure is referred to as "synchronization", meaning to speed up parts with lagging remaining work.

While many articles deal with priority rule-based scheduling of assembly-type jobs [30, 128, 103, 105, 68, 69, 127, 17, 112, 61, 76, 23], only few authors investigate assembly-specific synchronizing rules, in particular Maxwell [84] and Maxwell et al. [85] (see also [123, 70, 71, 121]). A synchronizing rule selects the job with the largest lag or the smallest advance, respectively, of the corresponding part with respect to its other partner components, where "lag" and "advance" may refer to remaining work differences, or remaining number of operations differences [84, 85], or finally to slack differences [70, 121]. Synchronizing procedures, when used as single-factor rules, do not result in satisfactory performance, due to the fact that they focus on the relative work progress of a part rather than on the overall delay structure of the job. But when used with modest weight within an additive combination with basic due-date-related terms, they provide remarkable improvements.

The dispatching decision in an assembly shop is most similar to the resource-constrained project scheduling (RCPS) problem. A slack-difference-based synchronization rule can be shown to correspond to a "least total slack" heuristic in the RCPS case, if the RCPS critical path analysis is revised and updated at each activity completion [10, 121].

Dual resource constrained shops reflect a semi-automated real-world environment, where a given number of workers (l) operate a given number of machines (m) (l <