International Journal of Foundations of Computer Science
© World Scientific Publishing Company
APPROXIMATION ALGORITHMS FOR FLEXIBLE JOB SHOP PROBLEMS∗
KLAUS JANSEN†
Universität zu Kiel, Kiel, Germany
[email protected]

and

MONALDO MASTROLILLI
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, Manno, Switzerland
[email protected]

and

ROBERTO SOLIS-OBA‡
University of Western Ontario, London, Canada
[email protected]
Received (received date)
Revised (revised date)
Communicated by Editor's name

ABSTRACT

The Flexible Job Shop problem is a generalization of the classical job shop scheduling problem in which for every operation there is a group of machines that can process it. The problem is to assign operations to machines and to order the operations on the machines so that the operations can be processed in the smallest amount of time. This models a wide variety of problems encountered in real manufacturing systems. We present a linear time approximation scheme for the non-preemptive version of the problem when the number m of machines and the maximum number µ of operations per job are fixed. We also study the preemptive version of the problem when m and µ are fixed, and present a linear time approximation scheme for the problem without migration and a (2 + ε)-approximation algorithm for the problem with migration.
∗ A preliminary version of this paper appeared in the Proceedings of LATIN 2000.
† This research was done while the author was at IDSIA, Lugano, Switzerland.
‡ This research was done while the author was at MPII, Saarbrücken, Germany.

1. Introduction

The job shop scheduling problem is a classical problem in Operations Research [17] in which it is desired to process a set J = {J1, ..., Jn} of n jobs on a group M = {1, ..., m} of m machines in the smallest amount of time. Every job Jj
consists of a sequence of µ operations O1j, O2j, ..., Oµj which must be processed in the given order. Every operation Oij is assigned a unique machine mij ∈ M, which must process the operation without interruption during pij units of time, and a machine can process at most one operation at a time. In this paper we study a generalization of the job shop scheduling problem called the flexible job shop problem [18, 22], which models a wide variety of problems encountered in real manufacturing systems [4, 22]. In the flexible job shop problem an operation Oij can be processed by any machine from a given group Mij ⊆ M (the groups Mij are not necessarily disjoint). The processing time of operation Oij on machine u ∈ Mij is p^u_ij. The goal is to choose for each operation Oij an eligible machine and a starting time so that the maximum completion time Cmax over all jobs is minimized. Cmax is called the makespan or the length of the schedule. The flexible job shop problem is more complex than the job shop problem because of the additional need to determine the assignment of operations to machines. Following the three-field α|β|γ notation suggested by Vaessens [22] and based on the one given by Graham et al. [8], we denote our problem as m1m | chain, op ≤ µ | Cmax. In the first field, m specifies that the number of machines is a constant, 1 specifies that any operation requires at most one machine to be processed, and the second m gives an upper bound on the number of machines that can process an operation. The second field states the precedence constraints and the maximum number of operations per job, while the third field specifies the objective function. The following special cases of the problem are already NP-hard (see [22] for a survey): 212 | chain, n = 3 | Cmax; 312 | chain, n = 2 | Cmax; 212 | chain, op ≤ 2 | Cmax. The job shop scheduling problem has been extensively studied.
The problem is known to be strongly NP-hard even if each job has at most three operations and there are only two machines [17]. Williamson et al. [23] proved that when the number of machines, jobs, and operations per job are part of the input, there does not exist a polynomial time approximation algorithm with worst case bound smaller than 5/4 unless P = NP. Moreover, the preemptive version of the job shop scheduling problem is NP-complete in the strong sense even when m = 3 and µ = 3 [6]. The practical importance of NP-hard problems necessitates tractable relaxations. By tractable we mean efficient solvability, and polynomial time is a robust theoretical notion of efficiency. A very fruitful approach has been to relax the notion of optimality and settle for near-optimal solutions. A near-optimal solution is one whose objective function value is within some small multiplicative factor of the optimal value. Approximation algorithms are heuristics that in polynomial time provide provably good guarantees on the quality of the solutions they return. This approach was pioneered by the influential paper of Johnson [14], in which he showed the existence of good approximation algorithms for several NP-hard problems. He also remarked that optimization problems that are all indistinguishable in the theory of NP-completeness behave very differently when it comes to approximability (assuming that P ≠ NP). Remarkable work in the last three decades in both the design of approximation algorithms and the proving of inapproximability results has
validated Johnson's remarks. The book on approximation algorithms edited by Hochbaum [11] gives a good glimpse of the current knowledge on the subject. Approximation algorithms for several problems in scheduling have been developed in the last three decades. In fact, it is widely believed that the first optimization problem for which an approximation algorithm was formally designed and analyzed is the makespan minimization version of the identical parallel machines scheduling problem; this algorithm was designed by Graham in 1966 [7], preceding the development of the theory of NP-completeness. A Polynomial Time Approximation Scheme (PTAS for short) for an NP-hard optimization problem in minimization (or maximization) form is a polynomial time algorithm which, given any instance of the problem and a value ε > 0 (or 0 < ε < 1), returns a solution whose value is within a factor (1 + ε) (or (1 − ε)) of the optimal solution value for that instance. Jansen et al. [13] have designed a linear time approximation scheme for the job shop scheduling problem when m and µ are fixed. When m and µ are part of the input, the best known result [5] is an approximation algorithm with worst case bound O([log(mµ) log(min{mµ, pmax})/ log log(mµ)]²), where pmax is the largest processing time among all operations. The flexible job shop problem is equivalent to the problem of scheduling jobs with chain precedence constraints on unrelated parallel machines. For this latter problem, Shmoys et al. [21] have designed a polynomial-time randomized algorithm that, with high probability, finds a schedule of length at most O((log² n / log log n) C∗max), where C∗max is the optimal makespan. In this work we study the preemptive and non-preemptive versions of the flexible job shop scheduling problem when the number of machines m and the number of operations per job µ are fixed.
We generalize the techniques and results described in [13] for the job shop scheduling problem and design a linear time approximation scheme for the flexible job shop problem. Our algorithm also works for the case when each job Jj has a delivery time qj. If in some schedule job Jj completes its processing at time Cj, then its delivery completion time is equal to Cj + qj. The problem now is to find a schedule that minimizes the maximum delivery completion time; we denote the optimal value by L∗max. If in addition each job has a release time rj when it becomes available for processing, our algorithm finds in linear time a solution of length no more than (2 + ε) times the length of an optimum schedule. We also present a linear time approximation scheme for the preemptive version of the flexible job shop problem without migration. No migration means that each operation must be processed by a unique machine; hence, if an operation is preempted, its processing can only be resumed on the same machine on which it was being processed before the preemption. We also study the preemptive flexible job shop problem with migration, in which every job has both a release time and a delivery time, and describe a (2 + ε)-approximation algorithm for it. Both algorithms produce solutions with only a constant number of preemptions. The job shop problem with multi-purpose machines is a special instance of the flexible job shop problem in which the processing time of an operation is the same
regardless of the machine on which it is processed. We present a linear time approximation scheme for this problem that works for the case when each operation has a release time and a delivery time. At this point we should remark that even though our algorithms have linear running time, they are mainly of theoretical importance, since the constants associated with their running times are very large. Our main contribution is to prove that it is possible to design approximation schemes, or algorithms with approximation ratio 2 + ε, for the above problems. This knowledge, that polynomial time algorithms with good approximation ratios exist, might help in the design of near optimum algorithms with practical running times for these scheduling problems. The rest of the paper is organized in the following way. In Section 2 we introduce some notation and preliminary results. In Section 3 we describe a polynomial time approximation scheme for the non-preemptive version of the flexible job shop scheduling problem with delivery times. In Section 4 we consider the preemptive version of the problem without migration and, in Section 5, with migration. In Section 6 we consider the multi-purpose machines job shop problem, and we give our conclusions in Section 7.

2. Preliminaries

Consider an instance of the flexible job shop problem with release and delivery times. Let L∗max be the length of an optimum schedule. For every job Jj, let

    Pj = Σ_{i=1}^{µ} min_{s∈Mij} p^s_ij    (1)

denote its minimum processing time. Let P = Σ_{Jj∈J} Pj. Let rj be the release time of job Jj and qj be its delivery time. We define tj = rj + Pj + qj for all jobs Jj, and tmax = maxj tj.

Lemma 1

    max{P/m, tmax} ≤ L∗max ≤ P + tmax.    (2)

Proof. We start by observing that L∗max ≥ P/m, since P/m is the length of an "ideal" schedule with no idle times, in which every operation is processed on its fastest machine, and in which all release and delivery times are 0. Clearly, L∗max ≥ tmax. To show that L∗max ≤ P + tmax we describe a simple algorithm that finds a schedule of length L ≤ P + tmax. Assign each operation Oij of every job Jj to its fastest machine (or to any of its fastest machines, if there are several of them). Then, schedule the jobs one after another, taking them in non-decreasing order of release times: the first operation of the first job is placed on its fastest machine and starts as soon as possible; when that operation finishes, the second operation of the first job is immediately scheduled on its fastest machine, and so on. Let Ji be the job with the largest delivery completion time L in this schedule, i.e., L = si + Pi + qi, where si is the time when job Ji starts its processing. Since jobs are scheduled as soon as possible, on their fastest machines, and in non-decreasing order of their release times, si ≤ ri + P − Pi, and the claim follows since ri + qi ≤ tmax. □
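As a quick sanity check of Lemma 1, the following sketch computes P, tmax, and the sequential schedule from the proof. The data and the `jobs` encoding (each operation as a machine-to-processing-time map) are hypothetical, not part of the paper:

```python
# Hypothetical instance: each job has release time r, delivery time q, and a
# list of operations, each a dict machine -> processing time.
jobs = [
    {"r": 0, "q": 2, "ops": [{0: 3, 1: 5}, {1: 2}]},   # job 1
    {"r": 1, "q": 0, "ops": [{0: 4}, {0: 1, 1: 2}]},   # job 2
]
m = 2

# P_j = sum over operations of the fastest eligible processing time (eq. (1)).
P_j = [sum(min(op.values()) for op in job["ops"]) for job in jobs]
P = sum(P_j)
t_max = max(job["r"] + pj + job["q"] for job, pj in zip(jobs, P_j))

lower = max(P / m, t_max)          # L*_max >= max{P/m, t_max}

# Upper bound: schedule the jobs strictly one after another (so there are no
# machine conflicts), in non-decreasing order of release times.
finish = 0.0
L = 0.0
for job, pj in sorted(zip(jobs, P_j), key=lambda x: x[0]["r"]):
    start = max(finish, job["r"])
    finish = start + pj
    L = max(L, finish + job["q"])  # delivery completion time

assert lower <= L <= P + t_max     # the two bounds of Lemma 1
```

The sequential schedule deliberately wastes parallelism, which is exactly why its length is bounded by P + tmax.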
Let us divide all processing, release, and delivery times by max{P/m, tmax}; thus, by Lemma 1,

    1 ≤ L∗max ≤ m + 1, and tmax ≤ 1.    (3)
We observe that Lemma 1 also holds for the preemptive version of the problem, with or without migration.

3. Non-preemptive Problem

In this section we describe our linear time approximation scheme for the flexible job shop problem with delivery times. We assume that all release times are zero. The algorithm works as follows. First we show how to transform an instance of the flexible job shop problem into a variant of the problem without delivery times and in which we have a special non-bottleneck machine (described below). A solution for this problem can be easily translated into a solution for our original problem. To solve this new problem, we define a set of time intervals and assign operations to the intervals in such a way that operations from the same job that are assigned to different intervals appear in the correct order, and the total length of the intervals is no larger than the length of an optimum schedule. We perform this step by first fixing the position of the operations from a constant number of "long" jobs, and then using linear programming to determine the position of the remaining operations. When solving the above linear program we might obtain a (fractional) solution that splits some operations over several intervals. Furthermore, the solution of the linear program requires more than linear time. We deal with these problems by using the potential price directive decomposition method [9] to find in linear time a near optimal solution for the linear program. We shift around some of the fractional assignments in this solution to reduce the number of operations split across intervals to a constant. We show that these operations are so small that by moving them to the beginning of the schedule, we only slightly increase the length of the solution. Next we use an algorithm by Sevastianov [20] to find a feasible schedule for the operations within each interval.
Sevastianov's algorithm finds for each interval a schedule of length equal to the length of the interval plus µ³m·pmax, where pmax is the largest processing time of any operation in the interval. In order to keep this enlargement small, we remove from each interval a subset V of jobs with the largest operations before running Sevastianov's algorithm. Those operations are scheduled at the beginning of the solution, and by choosing the set of long jobs carefully we can show that the total length of the operations in V is very small compared to the overall length of the schedule.

Getting Rid of the Delivery Times. We can use a technique by Hall and Shmoys [10] to transform an instance of the flexible job shop problem into another instance with only a constant number of different delivery times. Any solution for this latter problem can be easily transformed into a solution for the former one, and this transformation increases the length of the solution by at most (ε/2)qmax. As we show below, having to deal with only a constant number of different delivery times greatly simplifies the problem.
Let qmax be the maximum delivery time and let ε > 0 be a constant value. The idea is to round each delivery time down to the nearest multiple of (ε/2)qmax to get at most 1 + 2/ε distinct delivery times. Next, apply a (1 + ε/2)-approximation algorithm for the flexible job shop problem that can handle 1 + 2/ε distinct delivery times (this algorithm is described below). Finally, add at most (ε/2)qmax to the completion time of each job; this increases the length of the solution by at most (ε/2)qmax. The resulting schedule is feasible for the original instance, so this is a (1 + ε)-approximation algorithm for the original problem. In the remainder of this paper, we shall restrict our attention to instances of the problem in which the delivery times q1, ..., qn can take only χ ≤ 1 + 2/ε distinct values, which we denote by δ1 > ... > δχ. The delivery time of a job can be interpreted as an additional delivery operation that must be processed on a non-bottleneck machine after the last operation of the job. A non-bottleneck machine is a machine that can process simultaneously any number of operations. Let D = {d1, ..., dχ} be a set of delivery operations; each operation di has processing time δi and must be processed by the non-bottleneck machine. Moreover, every feasible schedule for the jobs J can be transformed into another feasible schedule, in which all delivery operations finish at the same time, without increasing the schedule length: simply shift the delivery operations to the end of the schedule. Because of the above interpretation, for every instance of the flexible job shop problem we can get an equivalent instance without delivery times: just add to the set of machines a non-bottleneck machine, and add to every job a delivery operation of length equal to the delivery time of the job.
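The rounding step can be sketched as follows; the delivery times and the choice of `eps` are hypothetical:

```python
import math

eps = 0.5                       # accuracy parameter (assumed)
q = [7.0, 3.2, 6.9, 0.0, 5.1]   # hypothetical delivery times
q_max = max(q)

step = (eps / 2) * q_max        # round down to multiples of (eps/2) * q_max
q_rounded = [math.floor(qi / step) * step for qi in q]

distinct = sorted(set(q_rounded), reverse=True)   # delta_1 > ... > delta_chi
assert len(distinct) <= 1 + 2 / eps               # chi <= 1 + 2/eps
# Restoring feasibility costs at most (eps/2) * q_max per job:
assert all(qi - ri <= step for qi, ri in zip(q, q_rounded))
```

With eps = 0.5 the rounding leaves at most five distinct values, matching the 1 + 2/ε bound.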
For the rest of this section we assume that every job has a delivery operation, all delivery operations end at the same time, and there is a non-bottleneck machine to process the delivery operations. Let machine m + 1 be the non-bottleneck machine.

Relative Schedules. Assume that the jobs are indexed so that P1 ≥ P2 ≥ ... ≥ Pn. Let L ⊂ J be the set formed by the first k jobs, i.e., the k jobs with longest minimum processing time, where k is a constant to be determined later. We call L the set of long jobs. We note that if the number of jobs is smaller than k, then the problem can be solved in constant time by trying all possible schedules for the jobs and choosing the best one. Hence, we assume from now on that n is larger than k. An operation from a long job is called a long operation, regardless of its processing time. Let S = J \ L be the set of short jobs. We create a set JD of χ delivery jobs such that every job Jj ∈ JD consists of a single delivery operation dj. Consider any feasible schedule for the jobs in J. This schedule assigns a machine to every operation and it also defines a relative ordering for the starting and finishing times of the operations. A relative schedule R for L ∪ JD is an assignment of machines to long operations and a relative ordering of the starting and finishing times of the long and delivery operations, such that there is a feasible schedule for J that respects R. This means that for every relative schedule R there is a feasible schedule for J that: (i) assigns the same machines as R to the long operations, (ii) schedules the long operations in the same relative order as R, and (iii) ends all delivery operations at the same time.
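The number of relative schedules, bounded in Lemma 2 below, can be checked numerically for small parameters. The following sketch (with hypothetical values of m, µ, k, χ) compares the finer interval-by-interval count from the proof against the closed-form bound:

```python
from math import factorial

# Hypothetical small parameters: m machines, mu operations per job,
# k long jobs, chi distinct delivery times.
m, mu, k, chi = 2, 2, 1, 3

# Finer count from the proof of Lemma 2:
#   m^{k*mu} * (1 * 3^2 * 5^2 * ... * (2*mu*k - 1)^2) * (2*mu*k + 1)^chi
fine = m ** (k * mu)
for j in range(1, mu * k + 1):
    fine *= (2 * j - 1) ** 2          # the j-th long operation's placements
fine *= (2 * mu * k + 1) ** chi       # placements of the chi delivery starts

# Closed-form bound stated in the lemma: m^{k*mu} * (2*mu*k)! * (2*mu*k+1)^chi
coarse = m ** (k * mu) * factorial(2 * mu * k) * (2 * mu * k + 1) ** chi

# 1 * 3^2 * ... * (2*mu*k-1)^2 <= (2*mu*k)!, since pairing each odd factor
# (2j-1) with the even factor 2j only increases the product.
assert fine <= coarse
```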
Note that since there is a constant number of long and delivery jobs, there is only a constant number of different relative schedules.

Lemma 2 The number of relative schedules for L ∪ JD is at most

    m^{kµ} (2µk)! (2µk + 1)^χ.    (4)

Proof. Fix the position of a long operation. The start and finishing times of this operation divide the time into three intervals. Take a second long operation: there are at most 3² choices for the starting and finishing intervals of the operation. These two operations define five time intervals, and so for the next long operation there are at most 5² possibilities for choosing its starting and ending intervals, and so on. Moreover, for every long operation there are at most m machines that can process it, therefore the number of relative schedules is at most

    m^{kµ} (1 · 3² · 5² · · · (2µk − 1)²)(2µk + 1)^χ ≤ m^{kµ} (2µk)! (2µk + 1)^χ.    (5)
□

If we build all relative schedules for L ∪ JD, one of them must be equal to the relative schedule defined by some optimum solution. Since it is possible to build all relative schedules in constant time, we can run our algorithm for each one of these schedules, thus guaranteeing that it will be executed for a relative schedule R∗ such that some optimum schedule for J respects R∗. Fix a relative schedule R∗ as described above. The ordering of the starting and finishing times of the long operations divides the time into intervals that we call snapshots. We can view a relative schedule as a sequence of snapshots N(1), N(2), ..., N(g), where N(1) is the unbounded snapshot whose right boundary is the starting time of the first operation according to R∗, and N(g) is the snapshot with right boundary defined by the finishing time of the delivery operations. Note that the number of snapshots g satisfies

    g ≤ 2µk + 1 + χ ≤ (2µ + 1 + χ)k    (6)

for any k ≥ 1.

3.1. Scheduling the Small Jobs

Given a relative schedule R∗ as described above, to obtain a solution for the flexible job shop problem we need to schedule the small operations within the snapshots defined by R∗. We do this in two steps. First we use a linear program LP(R∗) to assign small operations to snapshots, and second, we find a feasible schedule for the small operations within every snapshot. To formulate the linear program we first need to define some variables. For each snapshot N(ℓ) we use a variable tℓ to denote its length. For each Jj ∈ S we consider all possible assignments of its operations to snapshots. This is done by defining for each Jj a set of variables x_{j,(i1,...,iµ),(s1,...,sµ)}, so that when x_{j,(i1,...,iµ),(s1,...,sµ)} = f, 0 ≤ f ≤ 1, an f fraction of the v-th operation of job Jj is scheduled in the iv-th snapshot on machine sv, for all v = 1, ..., µ. Note
that these variables only indicate the snapshots to which (fractions of) operations of a given job are assigned, but they might not define a valid schedule for them. Let αj be the snapshot where the delivery operation of job Jj starts. Note that because of the rounding of the delivery times, a delivery job might be associated to several jobs. For every variable x_{j,(i1,...,iµ),(s1,...,sµ)} we require 1 ≤ i1 ≤ i2 ≤ · · · ≤ iµ < αj to ensure that the operations of Jj are scheduled in the proper order. Let Aj = {(i, s) | i = (i1, ..., iµ), 1 ≤ i1 ≤ ... ≤ iµ < αj; s = (s1, ..., sµ), sv ∈ Mvj and no long operation is scheduled by R∗ at snapshot iv on machine sv, for all v = 1, ..., µ}. The load Lℓ,h on machine h in snapshot N(ℓ) is the total processing time of the operations from small jobs assigned to h during N(ℓ), i.e.,

    Lℓ,h = Σ_{Jj∈S} Σ_{(i,s)∈Aj} Σ_{v : iv=ℓ, sv=h} x_{jis} p^{sv}_{vj},    (7)
where iv and sv are the v-th components of tuples i and s, respectively. For every long operation Oij let αij and βij be the indices of the first and last snapshots where the operation is scheduled. Let pij be the processing time of long operation Oij according to the machine assignment defined by the relative schedule R∗. We are ready to describe the linear program LP(R∗) that assigns small operations to snapshots.

    Minimize Σ_{ℓ=1}^{g} tℓ
    s.t. (1) Σ_{ℓ=αij}^{βij} tℓ = pij,  for all Jj ∈ L, i = 1, ..., µ,
         (2) Σ_{ℓ=αj}^{g} tℓ = δj,  for all dj, j = 1, ..., χ,
         (3) Σ_{(i,s)∈Aj} x_{jis} = 1,  for all Jj ∈ S,
         (4) Lℓ,h ≤ tℓ,  for all ℓ = 1, ..., g, h = 1, ..., m,
         (5) tℓ ≥ 0,  for all ℓ = 1, ..., g,
         (6) x_{jis} ≥ 0,  for all Jj ∈ S, (i, s) ∈ Aj.

Lemma 3 An optimum solution of LP(R∗) has value no larger than the length of an optimum schedule S∗ that respects the relative schedule R∗.

Proof. To prove the lemma we only need to show how to build a feasible solution for LP(R∗) that schedules the jobs in exactly the same way as S∗. First we assign to each variable tℓ a value equal to the length of snapshot N(ℓ) in schedule S∗. Then, we assign values to the variables x_{jis} as follows.

1. For each operation Ouj, snapshot N(ℓ), ℓ = 1, ..., g, and machine h = 1, ..., m, initialize fuj(ℓ, h) to be the fraction of operation Ouj that is scheduled in snapshot N(ℓ) and on machine h according to S∗. Initialize the variables x_{jis} to 0.

2. For each job Jj and each (i, s) ∈ Aj:

3. set x_{jis} = f, where f := min{fuj(iu, su) | u = 1, ..., µ}, and

4. set fuj(iu, su) := fuj(iu, su) − f for each u = 1, ..., µ.
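The peeling procedure of steps 1-4 can be sketched for a single small job with µ = 2; the fractional schedule `f` below is a hypothetical stand-in for the fractions induced by S∗:

```python
# f[u][(snapshot, machine)] = fraction of operation u placed there by S*.
f = [
    {(1, 0): 0.4, (2, 0): 0.6},   # operation 1
    {(2, 1): 0.4, (3, 0): 0.6},   # operation 2
]

# A_j: tuples ((i1, i2), (s1, s2)) with i1 <= i2 (the order constraint); for
# this sketch we enumerate all placement combinations and keep feasible ones.
A_j = [((i1, i2), (s1, s2))
       for (i1, s1) in f[0]
       for (i2, s2) in f[1]
       if i1 <= i2]

x = {}
for (i, s) in A_j:                          # steps 2-4 of the proof of Lemma 3
    frac = min(f[u][(i[u], s[u])] for u in range(2))
    if frac > 0:
        x[(i, s)] = frac
        for u in range(2):                  # peel the fraction off each operation
            f[u][(i[u], s[u])] -= frac

assert abs(sum(x.values()) - 1.0) < 1e-9    # constraint (3): fractions sum to 1
```

Each pass zeroes at least one residual fraction fuj(iu, su), which is why the procedure terminates with the whole job assigned.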
First, note that for any given feasible schedule we have Σ_{ℓ=1}^{g} Σ_{h=1}^{m} fuj(ℓ, h) = 1, for each Jj ∈ J and 1 ≤ u ≤ µ. Each time steps (2), (3) and (4) are completed, the same fraction f of every operation of job Jj is assigned to the same snapshot and machine as in S∗; furthermore, for at least one operation Ouj the new value of fuj(iu, su) will be set to zero. Hence, Σ_{(i,s)∈Aj} x_{jis} ≤ 1. To show that at the end of the above procedure min{fuj(iu, su) | u = 1, ..., µ} = 0 for all ((i1, ..., iµ), (s1, ..., sµ)) ∈ Aj, suppose the contrary, i.e., that there is a fraction f̂ > 0 of an operation of job Jj that is not assigned to any snapshot and machine, i.e., Σ_{(i,s)∈Aj} x_{jis} < 1. But then, the same fraction f̂ of every operation of job Jj is not assigned by this procedure to any snapshot and to any machine, and therefore, there is at least one pair ((i1, ..., iµ), (s1, ..., sµ)) ∈ Aj for which min{fuj(iu, su) | u = 1, ..., µ} > 0, which is a contradiction. □

One can solve LP(R∗) optimally in polynomial time and get only a constant number of jobs with fractional assignments, since a basic feasible solution of LP(R∗) has at most kµ + χ + n − k + mg variables with positive value. This comes from the fact that a basic feasible solution has at most as many variables with positive value as the number of constraints in the linear program. For LP(R∗), there are kµ constraints of type (1), χ constraints of type (2), n − k constraints of type (3), and mg constraints of type (4). Moreover, by constraints (3) every small job has at least one positive variable associated with it, and so there are at most mg + kµ + χ jobs with fractional assignments in more than one snapshot. A drawback of this approach is that solving the linear program might take a very long time. Since we want to get an approximate solution to the flexible job shop problem, it is not necessary to find an optimum solution for the linear program.
An approximate solution for LP(R∗) suffices for our purposes. We can use an algorithm by Grigoriadis and Khachiyan [9] to find a (1 + ρ)-approximate solution for LP(R∗) in linear time, for any value ρ > 0. Furthermore, by using the rounding technique described in [13] it is possible to find an approximate solution for LP(R∗) in which only a small number of jobs receive fractional assignments in more than one snapshot, thus proving the following lemma.

Lemma 4 For any value ρ > 0, a (1 + ρ)-approximate solution for LP(R∗) can be computed in linear time in which the set F of jobs that have fractional assignments in more than one snapshot has size |F| ≤ mg.

3.2. Generating a Feasible Schedule

To complete our solution, we need to compute a feasible schedule for the small operations that have been assigned to each snapshot. We first get rid of the operations that have been split across snapshots. Then, we use Sevastianov's algorithm [20] to schedule the operations inside each snapshot. To ensure that the schedule produced by Sevastianov's algorithm "almost" fits in the snapshots we need to eliminate the largest small operations first. We give the details below. We assume, without loss of generality, that our approximate solution for LP(R∗) has value no larger than m + 1; otherwise, the solution obtained by scheduling the
jobs sequentially on the machines with the smallest processing times has a better makespan in the worst case. To get a feasible schedule from the solution of the linear program, let us first deal with the jobs F that received fractional assignments. Note that if we schedule the jobs in F at the beginning of our solution and append to this schedule a feasible schedule for the rest of the jobs, we get a valid schedule for the entire set of jobs. So, let us just assign each operation Oij of every job Jj ∈ F to its fastest machine (or to one of its fastest machines, if there are several of them). Then, we schedule the jobs one after another: the first operation of the first job is placed on its fastest machine; when that operation finishes the second operation of the first job is scheduled on its fastest machine, and so on. For every operation of the remaining small jobs (not in F) consider its processing time according to the machine selected for it by the solution of the linear program. Let V be the set formed by all operations of small jobs with processing time larger than τ = ε/(8µ³mg). Note that

    |V| ≤ m(m + 1)/τ = 8µ³m²(m + 1)g/ε.    (8)
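The selection of V can be sketched as follows; the operation list and the parameter values are hypothetical:

```python
# Hypothetical assigned operations: (job, op, machine, processing time) after
# the LP has fixed one machine per small-job operation.
assigned = [(1, 1, 0, 0.02), (1, 2, 1, 0.30), (2, 1, 0, 0.01), (2, 2, 0, 0.45)]

m, mu, g, eps = 2, 2, 9, 0.5
tau = eps / (8 * mu**3 * m * g)             # threshold from Section 3.2

V = [op for op in assigned if op[3] > tau]  # the "lengthy" small operations
# Since the total processing time is at most m*(m+1) after normalization,
# |V| <= m*(m+1)/tau = 8*mu^3*m^2*(m+1)*g/eps   (inequality (8)).
bound = 8 * mu**3 * m**2 * (m + 1) * g / eps
assert len(V) <= bound
```

With realistic values of g the threshold τ is tiny, but the counting argument only needs the total-processing-time budget, not the individual values.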
We remove from the snapshots all operations in V and place them at the beginning of the schedule, as we did for F. We need to get rid of these lengthy operations V in order to be able to produce a near optimum schedule for the rest of the small operations. Let O(ℓ) be the set of operations from small jobs that remain in snapshot N(ℓ). Let pmax(ℓ) be the maximum processing time among the operations in O(ℓ). Observe that the set of jobs in every snapshot N(ℓ) defines an instance of the job shop problem, since the linear program assigns a unique machine to every operation of those jobs. Hence we can use Sevastianov's algorithm [20] to find in O(n²µ²m²) time a feasible schedule for the operations O(ℓ); this schedule has length at most t̄ℓ = tℓ + µ³m·pmax(ℓ). We must further increase the length of every snapshot N(ℓ) to t̄ℓ to accommodate the schedule produced by Sevastianov's algorithm. Summing up all these enlargements, we get:

Lemma 5

    Σ_{ℓ=1}^{g} µ³m·pmax(ℓ) ≤ µ³mgτ = ε/8 ≤ (ε/8)L∗max.    (9)
Note that the total length of the snapshots N(αij), ..., N(βij) containing a long operation Oij might be larger than pij. This creates some idle times on machine mij. We start each operation Oij of a long job Jj ∈ L at the beginning of the enlarged snapshot N(αij). The resulting schedule is clearly feasible. Let S(J′) = Σ_{Jj∈J′} Pj be the total processing time of all jobs in some set J′ ⊆ J when the operations of those jobs are assigned to their fastest machines.

Lemma 6 A feasible schedule for the jobs J of length at most (1 + (3/8)ε)L∗max + S(F ∪ V) can be found in O(n²) time.

Proof. The small jobs F that received fractional assignments and the jobs in V are scheduled sequentially at the beginning of the schedule on their fastest machines. By Lemmas 4 (choosing ρ = ε/4) and 5 the claim follows. □
Now we show that we can choose the number k of long jobs so that S(F ∪ V) ≤ (ε/8)L∗max.
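A constructive reading of Lemma 7 below: try the candidate cutoffs 1, q+1, (q+1)², ...; the corresponding blocks of values are disjoint, so since the values sum to at most m, at most ⌈1/α⌉ blocks can exceed αm and one of the first few candidates must work. A sketch with hypothetical data:

```python
import math

def find_cutoff(d, m, q, alpha):
    """Find k' as in Lemma 7: d is sorted non-increasing with sum(d) <= m."""
    k = 1
    while True:
        if sum(d[k : k + q * k]) <= alpha * m:   # block d_{k+1} .. d_{k+qk}
            return k
        k *= (q + 1)   # candidate cutoffs 1, q+1, (q+1)^2, ... give disjoint blocks

# Hypothetical instance: a geometric sequence summing to less than m = 2.
d = [2 * 0.5 ** (i + 1) for i in range(40)]
m, q, alpha = 2, 2, 0.5
k = find_cutoff(d, m, q, alpha)
assert k <= (q + 1) ** math.ceil(1 / alpha)      # k' <= (q+1)^ceil(1/alpha)
assert sum(d[k : k + q * k]) <= alpha * m
```

The loop always terminates: once k exceeds the number of values, the block is empty and its sum is 0.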
Lemma 7 [12] Let {d1, d2, ..., dn} be positive values with Σ_{j=1}^{n} dj ≤ m. Let q be a nonnegative integer, α > 0, and n ≥ (q + 1)^⌈1/α⌉. There exists an integer k′ ≥ 1 such that d_{k′+1} + ... + d_{k′+qk′} ≤ αm and k′ ≤ (q + 1)^⌈1/α⌉.

Let us choose α = ε/(8m) and q = (8µ³m(m + 1)/ε + 1) m(2µ + 1 + χ). By Lemma 4 and inequalities (6) and (8), |F ∪ V| ≤ mg + 8µ³m²(m + 1)g/ε ≤ qk. By Lemma 7 it is possible to choose a value k ≤ (q + 1)^⌈1/α⌉ so that the total processing time of the jobs in F ∪ V is at most ε/8 ≤ (ε/8)L∗max. This value of k can clearly be computed in constant time. We select the set L of long jobs as the set consisting of the k jobs with largest minimum processing times Pj = Σ_{i=1}^{µ} min_{s∈Mij} p^s_ij.

Lemma 8

    S(F ∪ V) ≤ (ε/8)L∗max.    (10)

Theorem 1 For any fixed m and µ, there is a linear-time approximation scheme for the flexible job shop scheduling problem that computes, for any value ε > 0, a feasible schedule with maximum delivery completion time at most (1 + ε)L∗max in O(n) time.

Proof. By Lemmas 6 and 8, the above algorithm finds in O(n²) time a schedule of length at most (1 + (1/2)ε)L∗max. This algorithm can handle 1 + 2/ε distinct delivery times. By the discussion at the beginning of Section 3 it is possible to modify the algorithm so that it handles arbitrary delivery times and yields a schedule of length at most (1 + ε)L∗max. For every fixed m and ε, all computations can be carried out in O(n) time, with the exception of the algorithm of Sevastianov, which runs in O(n²) time. The latter can be sped up to get linear time by "merging" pairs of small jobs together as described in [13]. □

We note that our algorithm is also a (2 + ε)-approximation algorithm when jobs have release and delivery times. Indeed, by adding release times to jobs in the schedule found by the algorithm, the maximum delivery completion time cannot increase by more than rmax, where rmax is the maximum release time. Hence the length of the schedule is at most (2 + ε) times the optimal value, since rmax ≤ L∗max.

4. Preemptive Problem without Migration

In the preemptive flexible job shop problem without migration the processing of an operation may be interrupted and resumed later on the same machine. We describe our algorithm only for the case when all release times are zero.
As in the non-preemptive case we divide the set of jobs J into long jobs L and short jobs S, with L containing a constant number of jobs. One can associate a relative schedule with each preemptive schedule of L by assigning a machine to each long operation and by fixing a relative order for the starting and finishing times of the operations from L ∪ JD. The only difference from the non-preemptive case is that some relative schedules that are infeasible in the non-preemptive setting are now
feasible. Namely, consider two long operations a and b assigned to the same machine according to a relative schedule R. Now R is feasible even if a starts before b but ends after b.

By looking at every time in the schedule when an operation from L ∪ JD starts or ends, we define a set of time intervals similar to those defined in the non-preemptive case by the snapshots. For convenience we also call these time intervals snapshots. Since L ∪ JD has a constant number of operations (and hence there is a constant number of snapshots), we can build all relative schedules for L ∪ JD in constant time. Thus, by trying all relative schedules we guarantee that at least one of them, say R∗, schedules the long operations within the same snapshots as in an optimum schedule. An operation of a long job is scheduled in consecutive snapshots i, i+1, ..., i+t, but only a fraction (possibly equal to zero) of the operation need be scheduled in any one of these snapshots. However, and this is crucial for the analysis, in every snapshot there can be at most one operation from any given long job.

Now we define a linear program as in the case of the non-preemptive flexible job shop. For each long operation Oij we define variables yij` for every αij ≤ ` ≤ βij, where relative schedule R∗ places operation Oij within the snapshots αij to βij. Variable yij` denotes the fraction of operation Oij that is scheduled in snapshot `. Let g be the number of snapshots, t` be the length of the `-th snapshot, and let Oh denote the set of long operations processed by machine h according to the relative schedule R∗. Let L`,h be defined as in Section 3.1 and let L′`,h denote the total processing time of operations from long jobs that are executed by machine h during snapshot `, i.e., L′`,h = Σ_{Oij∈Oh : αij≤`≤βij} yij`·p^h_ij. The linear program to assign operations to snapshots is the following.

Minimize Σ_{`=1}^{g} t`
s.t.
(1′) Σ_{`=αij}^{βij} yij` = 1, for all Jj ∈ L, i = 1, ..., µ,
(2) Σ_{`=αj}^{g} t` = δj, for all delivery times dj, j = 1, ..., χ,
(3) Σ_{(i,s)∈Aj} xjis = 1, for all Jj ∈ S,
(4′) L`,h + L′`,h ≤ t`, for all ` = 1, ..., g, h = 1, ..., m,
(5) t` ≥ 0, for all ` = 1, ..., g,
(6) xjis ≥ 0, for all Jj ∈ S, (i,s) ∈ Aj,
(7) yij` ≥ 0, for all 1 ≤ i ≤ µ, Jj ∈ L, αij ≤ ` ≤ βij.

Lemma 9 An optimum solution of this linear program has value no larger than the length of an optimum schedule S∗ that respects the relative schedule R∗.
Proof. The proof is similar to that of Lemma 3. □

Note that in any solution of this linear program the schedule for the long jobs is always feasible, since there is at most one operation of a given job in any snapshot. We can find an approximate solution for the linear program as described in the previous section. In this approximate solution there are at most mg small jobs that are preempted (Lemma 4). These jobs are placed at the beginning of the schedule, as before. Let τ = ε/(8µ³mg). As in the non-preemptive case, consider the set V of small
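Constraints (1′) and (4′) are easy to sanity-check on a candidate solution: every long operation's fractions must sum to one, and in every snapshot each machine's long-job work plus its small-job load must fit into the snapshot length. A small illustrative checker (the data layout and names are our own assumptions, not part of the paper):

```python
def check_snapshot_lp(y, p, machine_of, t, extra_load=None):
    """Verify constraints (1') and (4') of the snapshot LP.

    y[(i, j)]          : dict snapshot -> fraction of operation O_ij there
    p[(i, j)]          : processing time of O_ij on its assigned machine
    machine_of[(i, j)] : machine h processing O_ij (fixed by R*)
    t[l]               : length of snapshot l
    extra_load[(l, h)] : load L_{l,h} coming from the small jobs (optional)
    """
    extra_load = extra_load or {}
    # (1'): the fractions of each long operation sum to 1
    for op, fracs in y.items():
        if abs(sum(fracs.values()) - 1.0) > 1e-9:
            return False
    # (4'): in every snapshot the total work of each machine fits in t_l
    load = {}
    for op, fracs in y.items():
        h = machine_of[op]
        for l, f in fracs.items():
            load[(l, h)] = load.get((l, h), 0.0) + f * p[op]
    for (l, h), w in load.items():
        if w + extra_load.get((l, h), 0.0) > t[l] + 1e-9:
            return False
    return True
```

For example, an operation of length 2 split evenly over two snapshots of length 1 on the same machine is feasible; shrinking the second snapshot to 0.5 violates (4′).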
jobs containing at least one operation with processing time larger than τ according to the machine assigned to it by the linear program. The cardinality of this set is bounded by m(m+1)/τ. Remove all jobs in V, so that the processing times of the remaining operations of small jobs are not larger than τ. The jobs in V are placed sequentially at the beginning of the schedule. Next we find a feasible schedule for the jobs in every snapshot using Sevastianov's algorithm. Since the length of the schedule produced by this algorithm depends on the value of the largest processing time, it might be necessary to split operations from long jobs into small pieces. So we split each long operation Oij into pieces of size at most τ and then apply Sevastianov's algorithm. This yields a feasible schedule for the snapshot because there is at most one operation of each long job in it. In this schedule the number of preemptions is a constant equal to the number of times that long operations are preempted, and this number is at most

Σ_{Jj∈L} Σ_{i=1}^{µ} Σ_{`=αij}^{βij} ⌈yij`·p^h_ij / τ⌉ ≤ Σ_{Jj∈L} Σ_{i=1}^{µ} Σ_{`=αij}^{βij} (yij`·p^h_ij / τ + 1) ≤ m(m+1)/τ + kg,
since in each snapshot ` there are at most k different operations from long jobs, and the sum of the processing times yij`·p^h_ij cannot be more than m(m+1) (see Lemma 1). Hence, our solution has O(1) preemptions. Choosing the size of L as we did for the non-preemptive case ensures that the length of the schedule is at most 1+ε times the value of an optimum schedule. This algorithm runs in linear time.

Theorem 2 For any fixed m and µ, there is a linear-time approximation scheme for the preemptive flexible job shop scheduling problem without migration that computes, for any fixed ε > 0, a feasible schedule with maximum delivery completion time at most (1+ε)·L∗max.

Note that the algorithm described is also a (2+ε)-approximation algorithm for the flexible job shop problem with release and delivery times.

5. Preemptive Problem with Migration

In the preemptive flexible job shop problem with migration the processing of an operation may be interrupted and resumed later on any eligible machine. We consider the problem when each job Jj has a release time rj and a delivery time qj. In this section we describe a linear-time (2+ε)-approximation algorithm for the problem when the number of machines and operations per job are fixed.

Finding a Schedule When Release Times are Zero. In the first step we use linear programming to assign operations to machines and to snapshots, assuming that all release times are zero. We wish to make this assignment in such a way that the total length of the solution is minimized. We also assume that we have reduced the number of different delivery times to a constant χ as in the non-preemptive
case. Let t denote the value of the objective function of the linear program (defined below). Define χ snapshots as follows: [0, t−δ1], [t−δ1, t−δ2], ..., [t−δχ−1, t−δχ], where δ1 > δ2 > ... > δχ are the delivery times. For each job Jj we use a set of decision variables xjis ∈ [0,1] for tuples (i,s) ∈ Aj. The meaning of these variables is that xjis = 1 if and only if, for each 1 ≤ u ≤ µ, operation Ouj is scheduled on machine su, and in snapshot [0, t−δ1] if iu = 1, or in snapshot [t−δiu−1, t−δiu] if 1 < iu ≤ χ. Let the load L`,h on machine h in time interval ` be defined as the total processing time of all operations that are executed by machine h during time interval `. The linear program LP is as follows.

Minimize t
s.t.
(1) Σ_{(i,s)∈Aj} xjis = 1, for all Jj ∈ J,
(2) L1,h ≤ t − δ1, for all h = 1, ..., m,
(3) L`,h ≤ δ`−1 − δ`, for all h = 1, ..., m, ` = 2, ..., χ,
(4) xjis ≥ 0, for all Jj ∈ J, (i,s) ∈ Aj.
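Building the χ snapshots from the guessed makespan t and the sorted delivery times is a one-liner; a sketch (the function name and layout are ours):

```python
def delivery_snapshots(t, deltas):
    """Given makespan t and delivery times d1 > d2 > ... > d_chi, return
    the intervals [0, t-d1], [t-d1, t-d2], ..., [t-d_{chi-1}, t-d_chi]."""
    bounds = [0.0] + [t - d for d in deltas]
    return list(zip(bounds, bounds[1:]))
```

For instance, delivery_snapshots(10, [4, 2, 0]) yields [(0.0, 6), (6, 8), (8, 10)]: a job with delivery time δi that completes within snapshot i is delivered by time t.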
Lemma 10 An optimum solution of LP has value no larger than the maximum delivery completion time of an optimum schedule for J.
Proof. Consider an optimum schedule S∗ for J. We only need to show that for any job Jj ∈ J there is a feasible solution of the linear program that schedules all operations of Jj in the same positions as S∗. The proof is similar to that of Lemma 3. □

We guess the value s of an optimum schedule. Let LP(s, λ) be the linear program with objective function: Minimize λ, and constraints (1) and (4) from the above linear program plus the following two:

(2′) L1,h / (s − δ1) ≤ λ, for all 1 ≤ h ≤ m,
(3′) L`,h / (δ`−1 − δ`) ≤ λ, for all 1 ≤ h ≤ m, ` = 2, 3, ..., χ.

The linear program LP(s, λ) has the structure required by the algorithm of Grigoriadis and Khachiyan [9]. Using the algorithm in [9] and binary search on the interval [1, 1+m], we can find in linear time a value s ≤ (1+ε/8)t∗, where t∗ is the value of an optimum solution of LP, such that LP(s, 1+ρ) is feasible for ρ = ε/(8+ε). For this value s, a solution of LP(s, 1+ρ) has L1,h ≤ (s−δ1)(1+ρ) and L`,h ≤ (δ`−1−δ`)(1+ρ). Since ρ = ε/(8+ε), the total enlargement of all snapshots is at most (s−δ1)ρ + Σ_{`=2}^{χ} (δ`−1−δ`)ρ = (s−δχ)ρ ≤ sρ ≤ ρ(1+ε/8)t∗ = (ε/8)t∗, so the resulting schedule has length at most (1+ε/4)t∗.

Lemma 11 A solution for LP(s, 1+ρ), with s ≤ (1+ε/8)t∗ and ρ = ε/(8+ε), of value at most (1+ε/4)t∗ can be found in linear time.

Using the rounding procedure of [13] it is possible to modify any feasible solution of LP(s, 1+ρ) to get a new feasible solution with at most mχ variables xjis with fractional values. Moreover, we can do this rounding step in linear time.

Lemma 12 Any feasible solution of LP(s, 1+ρ) can be transformed in linear time into another feasible solution in which the set F of jobs that receive fractional assignments has size |F| ≤ mχ.

Let P denote the set of jobs from J \ F for which at least one operation, according to the machine assignment computed in step 1, has processing time greater than εs/(4χµ³m(1+ε/8)).
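The binary search for s described above can be sketched generically: given a monotone feasibility oracle (here, whether LP(s, 1+ρ) is feasible), halve the search interval until the relative gap drops below ε/8. This is our own schematic of the standard search, not the implementation of [9]:

```python
def guess_makespan(feasible, lo, hi, eps):
    """Binary search for a near-minimal s with feasible(s) True, assuming
    feasibility is monotone in s. Stops once hi <= lo * (1 + eps / 8), so
    the returned s exceeds the optimum by at most a factor 1 + eps/8."""
    while hi > lo * (1 + eps / 8):
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

With a toy oracle that is feasible exactly for s ≥ 1.7, searching [1, 3] with ε = 0.8 returns a value between 1.7 and 1.7·(1 + 0.1).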
Let L = F ∪ P and S = J \ L. We remove from the snapshots the jobs in L and then use Sevastianov's algorithm to find a feasible schedule σS for S. The enlargement in the length of the schedule caused by Sevastianov's algorithm is at most χµ³m·pmax, where pmax is the maximum processing time of operations in S. Since pmax ≤ εs/(4χµ³m(1+ε/8)), the length of the resulting schedule is at most (1+ε/4)t∗ + χµ³m·pmax ≤ (1+ε/4)t∗ + εs/(4(1+ε/8)) ≤ (1+ε/2)t∗ ≤ (1+ε/2)L∗max.

Note that by considering release times different from zero this algorithm yields a schedule for S that is at most (2+ε/2) times the length of an optimal one, since the maximum release time cannot be more than L∗max. The algorithm of Sevastianov takes O(n²) time, but it can be sped up to linear time by "merging" pairs of jobs as described in [13]. The cardinality of the set L is bounded by a constant, since

|P| < m(1+ε/4)t∗ · 4χµ³m(1+ε/8)/(εs) = O(µ³m²/ε²),

and so |L| = |F| + |P| = O(µ³m²/ε²).
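Extracting the set P from the machine assignment takes a single pass over the operations; a sketch with our own illustrative data layout:

```python
def large_job_set(jobs, assigned_machine, proc_time, threshold):
    """Return the set P of jobs having at least one operation whose
    processing time on its assigned machine exceeds the threshold
    eps*s / (4*chi*mu^3*m*(1 + eps/8)).

    jobs[j]                  : number of operations of job j
    assigned_machine[(i, j)] : machine chosen for operation O_ij
    proc_time[(i, j, h)]     : processing time of O_ij on machine h
    """
    P = set()
    for j, num_ops in jobs.items():
        for i in range(num_ops):
            if proc_time[(i, j, assigned_machine[(i, j)])] > threshold:
                P.add(j)
                break  # one large operation suffices to put j into P
    return P
```

On a two-job toy instance, only the job owning an operation above the threshold ends up in P.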
Scheduling the Jobs in L. Now we ignore the delivery times and consider only the release times. In the following we show how to compute a schedule σL that minimizes the makespan for the jobs from L. As for the delivery times, the release time of a job can be interpreted as an additional operation of the job that it has to process on a non-bottleneck machine before any of its other operations. Because of this interpretation, we can add to the set OL of operations from the jobs in L a set R = {O0j | Jj ∈ L} of release operations. The processing time of operation O0j is rj. Let us define a snapshot as a subset of operations from OL ∪ R such that two different operations of the same job do not belong to the same snapshot. A relative order of L is an ordered sequence of snapshots which defines for every operation Oij ∈ OL ∪ R the first snapshot αij and the last snapshot βij in which operation Oij can start and finish, respectively. Let g be the number of snapshots. A relative order is feasible if 1 ≤ αij ≤ βij ≤ g for every operation Oij, and for two consecutive operations Oij and Oi+1,j of the same job Jj ∈ L we have βij + 1 = αi+1,j. Furthermore, every release operation starts in the first snapshot, i.e., α0j = 1 for every Jj ∈ L. We observe that g can be bounded by (µ+1)|L|, and therefore g = O(µ⁴m²/ε²). Note that a relative order is defined without assigning operations of long jobs to machines. Since L has a constant number of jobs (and hence there is a constant number of snapshots), we can consider all relative orders for L. Any operation is scheduled in consecutive snapshots i, i+1, ..., i+t, but only a fraction (possibly equal to zero) of that operation need be scheduled in any of these snapshots. In every snapshot there can be at most one operation from any given job of L. Now we define a linear program. For each operation Oij we define variables xij`h for every αij ≤ ` ≤ βij and every h ∈ Mij.
Variable xij`h denotes the fraction of operation Oij that is scheduled in snapshot ` on machine h. Let t` be the length of snapshot `. The load on machine h in snapshot ` is equal to the total processing time of the operations of jobs in L, i.e., L`,h = Σ_{Jj∈L} Σ_{i=1,...,µ : αij≤`≤βij, h∈Mij} xij`h·p^h_ij. The linear program for a given relative order R is the following.

Minimize Σ_{`=1}^{g} t`
s.t.
(1) t` ≥ 0, for all 1 ≤ ` ≤ g,
(2) Σ_{h∈Mij} Σ_{`=αij}^{βij} xij`h = 1, for all 1 ≤ i ≤ µ, Jj ∈ L,
(3) L`,h ≤ t`, for all 1 ≤ ` ≤ g, 1 ≤ h ≤ m,
(4) Σ_{h∈Mij} xij`h·p^h_ij ≤ t`, for all 1 ≤ i ≤ µ, Jj ∈ L, αij ≤ ` ≤ βij,
(5) xij`h ≥ 0, for all 1 ≤ i ≤ µ, Jj ∈ L, αij ≤ ` ≤ βij, h ∈ Mij,
(6) Σ_{`=1}^{β0j} t` = rj, for all Jj ∈ L.
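A candidate solution of this LP can be spot-checked against constraints (2), (3) and (6) directly. A small illustrative checker (the data layout and names are our own assumptions, not the paper's):

```python
def check_relative_order_lp(x, p, t, window, beta0, r, m):
    """Spot-check constraints (2), (3) and (6) of the relative-order LP.

    x[(i, j, l, h)] : fraction of O_ij run on machine h in snapshot l
    p[(i, j, h)]    : processing time of O_ij on machine h
    t               : list of snapshot lengths t_l
    window[(i, j)]  : (alpha_ij, beta_ij) snapshot window of O_ij
    beta0[j], r[j]  : last snapshot and length of the release operation
    m               : number of machines
    """
    eps = 1e-9
    # (2): each operation is fully assigned within its snapshot window
    for (i, j), (a, b) in window.items():
        total = sum(x.get((i, j, l, h), 0.0)
                    for l in range(a, b + 1) for h in range(m))
        if abs(total - 1.0) > eps:
            return False
    # (3): the load of every machine fits into every snapshot
    for l in range(len(t)):
        for h in range(m):
            load = sum(f * p[(i, j, h)]
                       for (i, j, ll, hh), f in x.items()
                       if ll == l and hh == h)
            if load > t[l] + eps:
                return False
    # (6): the release operation of job j fills the first beta0[j] snapshots
    return all(abs(sum(t[:beta0[j]]) - r[j]) <= eps for j in beta0)
```

A one-machine, one-job example: the release operation occupies snapshot 0 of length r = 1, and the single real operation fills snapshot 1; shrinking snapshot 1 below the operation's length violates (3).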
Consider any feasible schedule of L according to a relative order R, let xij`h·p^h_ij be the total amount of time that machine h works on operation Oij in snapshot `, and let t` denote the length of snapshot `. It is clear that this assignment of values to t` and xij`h constitutes a feasible solution of the above linear program. We show now that for any feasible solution of the linear program there is a feasible schedule with the same values of t` and xij`h. It is sufficient to note that any feasible solution of the linear program defines in each snapshot an instance of the preemptive open shop scheduling problem. Hence, from results in [6, 16] the claim follows. Furthermore, by using the same procedure described in [16] it is possible to construct a schedule with no more than O(µ⁴m⁴/ε²) preemptions.

For each relative order R we solve the linear program above, and then select the solution with the smallest length. All the algorithms used in this step require time polynomial in the size of their input, and therefore the overall time is O(1) since this input size is O(1). Note that if we consider non-zero delivery times then the maximum delivery completion time of the resulting schedule σL is at most 2L∗max.

Combining the Schedules. The output schedule is obtained by appending σS after σL. The length of the solution is at most (2+ε/2)·L∗max, since the maximum completion time of schedule σL is not greater than L∗max and the length of the schedule σS is at most (1+ε/2)·L∗max.

Theorem 3 For any fixed m and µ, there is a polynomial-time approximation algorithm for the preemptive flexible job shop scheduling problem with migration that computes, for any fixed ε > 0, a feasible schedule with maximum delivery completion time at most (2+ε)·L∗max in O(n) time.

6.
Multi-Purpose Machines Job Shop

The job shop scheduling problem with multi-purpose machines [1] is a special case of the flexible job shop problem in which the processing time pij of each operation no longer depends on the machine on which it is processed. For the non-preemptive version of this problem, our techniques can also handle the case in which each job Jj has a release time rj and a delivery time qj. The main difference from
the previous algorithm is that we put each job Jj from the set F ∪ V (this set is defined as before) after its corresponding release operation and before its delivery operation. More precisely, the algorithm works as follows. We use the technique of Hall and Shmoys [10] to obtain a problem instance with only a constant number of delivery and release times. We divide the set of jobs J into long jobs L and short jobs S as before. Then we compute all the relative schedules of the long jobs; a relative schedule is defined as before, but we also add a constant number of release operations. For each relative schedule we define a linear program as in the non-preemptive case. We solve and round the solution of the linear program to get a solution with a constant number of fractional variables. The solution of the linear program has a constant number of jobs with fractional assignments (the set F), and a constant number of jobs with at least one operation with large processing time (the set V). We now use a very simple rounding procedure to obtain an integral (and possibly infeasible) solution for the linear program: if job Jj has more than one nonzero variable associated with it, we set one of them to one and the others to zero in an arbitrary manner. The final schedule is built as follows. Assign jobs from F ∪ V to machines and snapshots according to the new integral solution. Build a left-justified (that is, without leaving unnecessary idle times) feasible schedule for the jobs from F ∪ V. Assign the remaining small jobs according to the linear programming solution, and for each snapshot N(`), ` ∈ {1, ..., g}, schedule the operations assigned to N(`) after the maximum finishing time in N(`) of operations of jobs from F ∪ V (if any), by using Sevastianov's algorithm.
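The "very simple rounding procedure" amounts to keeping, for every job, a single one of its nonzero variables. A sketch (the tie-breaking rule and names are ours; any choice of nonzero variable works):

```python
def round_assignment(x):
    """x: job -> {assignment tuple: fractional value}. For each job keep
    one nonzero variable (here the largest, but the choice is arbitrary)
    and set it to 1; all other variables become 0."""
    rounded = {}
    for job, fracs in x.items():
        nonzero = {a: v for a, v in fracs.items() if v > 0}
        chosen = max(nonzero, key=nonzero.get)
        rounded[job] = {a: (1.0 if a == chosen else 0.0) for a in fracs}
    return rounded
```

The resulting integral solution may be infeasible, which is exactly why the jobs of F ∪ V are afterwards scheduled explicitly and left-justified.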
Again, by Lemma 7 it is possible to choose the number of long jobs such that the enlargement due to the jobs from F ∪ V is "small" enough, and by arguments similar to those used before it is possible to show that the described algorithm is a polynomial-time approximation scheme that runs in linear time.

Theorem 4 For any fixed m and µ, there is a polynomial-time approximation scheme for the MPM job shop scheduling problem with release and delivery times that computes, for any fixed ε > 0, a feasible schedule with maximum delivery completion time at most (1+ε)·L∗max in O(n) time.

By using similar arguments it is possible to obtain a linear-time approximation scheme also when preemption without migration is allowed.

Theorem 5 For any fixed m and µ, there is a polynomial-time approximation scheme for the preemptive MPM job shop scheduling problem without migration with release and delivery times that computes, for any fixed ε > 0, a feasible schedule with maximum delivery completion time at most (1+ε)·L∗max in O(n) time.

7. Conclusions

In this paper we present polynomial-time approximation schemes that run in linear time for the non-preemptive flexible job shop scheduling problem when the number of machines and the maximum number of operations per job are constant. Our algorithm can also handle delivery times for the jobs. We also consider the preemptive version
of the problem, and we design a polynomial-time approximation scheme for the case without migration and a (2+ε)-approximation algorithm for the case with migration, for any ε > 0. We also consider a special case of the flexible job shop scheduling problem in which the processing time of an operation does not depend on the machine that processes it. For this case our algorithm can handle release times as well.

Even though our algorithms have O(n) running time, where n is the number of jobs, the hidden constant in the big-Oh notation is extremely large. This makes our results important mainly in a theoretical sense, since the running time of the algorithms is too high to be useful in practice. Hence, the main contribution of this work is to prove the existence of polynomial-time approximation schemes and (2+ε)-approximation algorithms for the above problems. Proving the existence of polynomial-time algorithms with good approximation ratios for these complex scheduling problems also opens up the possibility of designing approximation algorithms with practical running times and similar approximation ratios. For many problems, the discovery of inefficient (approximation) algorithms has motivated additional research activity that led to the discovery of efficient (approximation) algorithms [2, 3, 15, 19]. We leave it as an open question to determine whether a polynomial-time approximation scheme with a practical running time exists for the flexible job shop scheduling problem.

Acknowledgements

This research was supported by the Swiss National Science Foundation project 200021-104017/1, "Power Aware Computing", and by the Swiss National Science Foundation project 200021-100539/1, "Approximation Algorithms for Machine Scheduling Through Theory and Experiments".

References

1. P. Brucker, B. Jurisch, and A. Kramer. Complexity of scheduling problems with multi-purpose machines. Annals of Operations Research, 70:57–73, 1997.
2. M. Charikar, S.
Guha, E. Tardos, and D. Shmoys. A constant-factor approximation algorithm for the k-median problem. In Proceedings of the 31st Annual ACM Symposium on Theory of Computing, 1–10, 1999.
3. L.R. Ford and D.R. Fulkerson. Maximal flow through a network. Canadian Journal of Mathematics, 8:399–404, 1956.
4. L. Gambardella, M. Mastrolilli, A. Rizzoli, and M. Zaffalon. An optimization methodology for intermodal terminal management. Journal of Intelligent Manufacturing, 12:521–534, 2001.
5. L. Goldberg, M. Paterson, A. Srinivasan, and E. Sweedyk. Better approximation guarantees for job-shop scheduling. SIAM Journal on Discrete Mathematics, 14(1):67–92, 2001.
6. T. Gonzalez and S. Sahni. Flowshop and jobshop schedules: complexity and approximation. Operations Research, 26:36–52, 1978.
7. R. Graham. Bounds for certain multiprocessing anomalies. Bell System Technical Journal (BSTJ), 45:1563–1581, 1966.
8. R. Graham, E. Lawler, J. Lenstra, and A. Rinnooy Kan. Optimization and approximation in deterministic sequencing and scheduling: A survey. In Annals of Discrete Mathematics, volume 5, 287–326. North-Holland, 1979.
9. M. D. Grigoriadis and L. G. Khachiyan. Coordination complexity of parallel price-directive decomposition. Mathematics of Operations Research, 21:321–340, 1996.
10. L. Hall and D. Shmoys. Approximation algorithms for constrained scheduling problems. In Proceedings of the 30th IEEE Symposium on Foundations of Computer Science, 134–139, 1989.
11. D. Hochbaum, editor. Approximation Algorithms for NP-hard Problems. PWS Publishing Company, Boston, 1995.
12. K. Jansen and L. Porkolab. Linear-time approximation schemes for scheduling malleable parallel tasks. In Proceedings of the 10th Annual ACM-SIAM Symposium on Discrete Algorithms, 490–498, 1999.
13. K. Jansen, R. Solis-Oba, and M. Sviridenko. A linear time approximation scheme for the job shop scheduling problem. In Proceedings of APPROX'99, LNCS 1671, 177–188, 1999.
14. D. Johnson. Approximate algorithms for combinatorial problems. Journal of Computer and Systems Sciences, 9:256–278, 1974.
15. M. Klein. A primal method for minimal cost flows with application to the assignment and transportation problems. Management Science, 14:205–220, 1967.
16. E. Lawler and J. Labetoulle. On preemptive scheduling of unrelated parallel processors by linear programming. Journal of the ACM, 25:612–619, 1978.
17. E. Lawler, J. Lenstra, A. Rinnooy Kan, and D. Shmoys. Sequencing and scheduling: Algorithms and complexity. Handbooks in Operations Research and Management Science, 4:445–522, 1993.
18. M. Mastrolilli and L. M. Gambardella. Effective neighbourhood functions for the flexible job shop problem. Journal of Scheduling, 3:3–20, 2000.
19. S. Sahni. Approximate algorithms for the 0/1 knapsack problem. Journal of the ACM, 22:115–124, 1975.
20. S. Sevastianov. Bounding algorithms for the routing problem with arbitrary paths and alternative servers. Cybernetics (in Russian), 22:773–780, 1986.
21. D. Shmoys, C. Stein, and J. Wein. Improved approximation algorithms for shop scheduling problems. SIAM Journal on Computing, 23:617–632, 1994.
22. R. Vaessens. Generalized Job Shop Scheduling: Complexity and Local Search. PhD thesis, Eindhoven University of Technology, 1995.
23. D. Williamson, L. Hall, J. Hoogeveen, C. Hurkens, J. Lenstra, S. Sevastianov, and D. Shmoys. Short shop schedules. Operations Research, 45:288–294, 1997.