Better Approximation Guarantees for Job-shop Scheduling∗

Leslie Ann Goldberg†   Mike Paterson‡   Aravind Srinivasan§   Elizabeth Sweedyk¶

Abstract

Job-shop scheduling is a classical NP-hard problem. Shmoys, Stein & Wein presented the first polynomial-time approximation algorithm for this problem that has a good (polylogarithmic) approximation guarantee. We improve the approximation guarantee of their work, and present further improvements for some important NP-hard special cases of this problem (e.g., in the preemptive case, where machines can suspend work on operations and later resume). We also present NC algorithms with improved approximation guarantees for some NP-hard special cases.

1 Introduction

Job-shop scheduling is a classical NP-hard minimization problem [9]. We improve the approximation guarantees for this problem and for some of its important special cases, both in the sequential and parallel algorithmic domains; the improvements are over the current-best algorithms of Leighton, Maggs & Rao [10] and Shmoys, Stein & Wein [17]. In job-shop scheduling, we have $n$ jobs and $m$ machines. A job consists of a sequence of operations, each of which is to be processed on a specific machine for a specified integral amount of time; a job can have more than one operation on a given machine. The operations of a job must be processed in the given sequence, and a machine can process at most one operation at any given time. The problem is to schedule the jobs so that the makespan, the time when all jobs have been completed, is minimized. An important special case of this problem is preemptive scheduling, wherein machines can suspend work on

∗ A preliminary version of this work appears in Proc. ACM-SIAM Symposium on Discrete Algorithms, pages 599-608, 1997.
† Dept. of Computer Science, University of Warwick, Coventry CV4 7AL, UK. [email protected]. Part of this work was performed at Sandia National Laboratories and was supported by the U.S. Department of Energy under contract DE-AC04-76AL85000. Part of this work was supported by ESPRIT LTR Project no. 20244 (ALCOM-IT) and ESPRIT Project no. 21726 (RAND-II).
‡ Dept. of Computer Science, University of Warwick, Coventry CV4 7AL, UK. [email protected]. Part of this work was supported by ESPRIT LTR Project no. 20244 (ALCOM-IT).
§ Dept. of Information Systems and Computer Science, National University of Singapore, Singapore 119260. [email protected]. Parts of this work were done at (i) the National University of Singapore, supported in part by National University of Singapore Research Grants RP950662 and RP960620; (ii) Cornell University, Ithaca, NY, USA, supported by an IBM Graduate Fellowship; (iii) the School of Mathematics, Institute for Advanced Study, Princeton, NJ, USA, supported by grant 93-6-6 of the Alfred P. Sloan Foundation to the Institute for Advanced Study; (iv) DIMACS (NSF Center for Discrete Mathematics and Theoretical Computer Science), supported by NSF grant NSF-STC91-19999 to DIMACS and by support to DIMACS from the New Jersey Commission on Science and Technology; (v) the Sandia National Laboratories, New Mexico, USA; (vi) the Department of Computer Science, University of Warwick, Coventry, UK, supported in part by the ESPRIT Basic Research Action Programme of the EC under contract No. 7141 (project ALCOM II); and (vii) the Department of Computer Science, University of Melbourne, Victoria, Australia, sponsored by a "Travel Grants for Young Asian Scholars" scheme of the University of Melbourne; this part of the work was done while on study leave.
¶ Department of Computer and Information Sciences, University of Pennsylvania. [email protected]. Supported by an NSF Research Training Grant. Part of this work was done while visiting Sandia National Laboratories.


operations, switch to other operations, and later resume the suspended operations (if this is not allowed, we have the non-preemptive scenario, which we take as the default); in such a case, all operation lengths may be taken to be one. Even this special case with $n = m = 3$ is NP-hard, as long as the input is encoded concisely [13, 18]. We present further improved approximation factors for preemptive scheduling and related special cases of job-shop scheduling.

Formally, a job-shop scheduling instance consists of jobs $J_1, J_2, \ldots, J_n$, machines $M_1, M_2, \ldots, M_m$, and, for each job $J_j$, a sequence of $\mu_j$ operations $(M_{j,1}, t_{j,1}), (M_{j,2}, t_{j,2}), \ldots, (M_{j,\mu_j}, t_{j,\mu_j})$. Each operation is a (machine, processing time) pair: each $M_{j,k}$ represents some machine $M_i$, and the pair $(M_{j,k}, t_{j,k})$ signifies that the corresponding operation of job $J_j$ must be processed on machine $M_{j,k}$ for an uninterrupted integral amount of time $t_{j,k}$. The problem that we focus on throughout is to come up with a schedule that has a small makespan, for general job-shop scheduling and for some of its important special cases.

1.1 Earlier work


As described earlier, even very restricted special cases of job-shop scheduling are NP-hard. Furthermore, the problem seems quite intractable in practice, even for relatively small instances. Call a job-shop instance acyclic if no job has more than one operation that needs to run on any given machine. A single instance of acyclic job-shop scheduling consisting of 10 jobs, 10 machines and 100 operations resisted attempts at exact solution for 22 years, until its resolution by Carlier & Pinson [6]. More such exact solutions for certain instances (with no more than 20 jobs or machines) were computationally provided by Applegate & Cook, who also left open the exact solution of certain acyclic problems, e.g., some with 15 jobs, 15 machines, and 225 operations [3]. Thus, efficient exact solution of all instances with, say, 30 jobs, 30 machines, and 900 operations seems quite out of our reach at this point; an obvious next question is to look at efficient approximability. Define a $\rho$-approximation algorithm as a polynomial-time algorithm that always outputs a feasible schedule with a makespan of at most $\rho$ times optimal; $\rho$ is called the approximation guarantee. A negative result is known: if there is a $\rho$-approximation algorithm for job-shop scheduling with $\rho < 5/4$, then P = NP [19].

There are two simple lower bounds on the makespan of any feasible schedule: $P_{\max}$, the maximum total processing time needed for any job, and $\Pi_{\max}$, the maximum total amount of time for which any machine has to process operations. For the NP-hard special case of acyclic job-shop scheduling wherein all operations have unit length, a breakthrough was achieved in [10], showing that a schedule of makespan $O(P_{\max} + \Pi_{\max})$ always exists! Such a schedule can also be computed in polynomial time [11]. However, if we drop either of the two above assumptions (unit operation lengths and acyclicity), it is not known whether such a good bound holds.

What about upper bounds for general job-shop scheduling? It is not hard to see that a simple greedy algorithm, which always schedules available operations on machines, delivers a schedule of makespan at most $P_{\max} \cdot \Pi_{\max}$; one would, however, like to aim for much better. (A short code sketch of these two quantities follows Proposition 1.1 below.) Let $\mu = \max_j \mu_j$ denote the maximum number of operations per job, and let $p_{\max}$ be the maximum processing time of any operation. By invoking ideas from [10, 15, 16] and by introducing some new techniques, good approximation algorithms were developed in [17]. Their deterministic approximation bounds were slightly improved in [14], yielding the following proposition. (To avoid problems with small positive numbers, henceforth let $\log x$ and $\log\log x$ denote $\log_2(x+2)$ and $\log_2\log_2(x+4)$ respectively, for $x \ge 0$.)

Proposition 1.1 ([17, 14]) There is a deterministic polynomial-time algorithm that delivers a


schedule of makespan
$$O\left((P_{\max} + \Pi_{\max}) \cdot \frac{\log(m\mu)}{\log\log(m\mu)} \cdot \log(\min\{m\mu,\, p_{\max}\})\right)$$
for general job-shop scheduling. If we replace $m$ by $n$ in this bound, then such a schedule can also be computed in RNC. This is a $\rho$-approximation algorithm with $\rho = O(\log(m\mu)\log(\min\{m\mu, p_{\max}\})/\log\log(m\mu))$. See [17, 8] for further results on approximating some special cases of shop scheduling that are not discussed here.
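To make the parameters above concrete, here is a minimal sketch (ours, not from [17] or [14]; the instance encoding is an assumption) that computes $P_{\max}$ and $\Pi_{\max}$. Any greedy schedule has makespan at most $P_{\max} \cdot \Pi_{\max}$, while the optimum is at least $\max\{P_{\max}, \Pi_{\max}\}$.

```python
from collections import defaultdict

def lower_bounds(jobs):
    """Compute P_max and Pi_max for a job-shop instance.

    jobs: one list per job of (machine, processing_time) operations,
    in the order in which the job must be processed.
    """
    # P_max: the largest total processing time of any single job
    p_bound = max(sum(t for _, t in ops) for ops in jobs)
    # Pi_max: the largest total load placed on any single machine
    load = defaultdict(int)
    for ops in jobs:
        for machine, t in ops:
            load[machine] += t
    return p_bound, max(load.values())

# Example: 2 jobs, 2 machines; P_max = 5, Pi_max = 6.
jobs = [[(0, 2), (1, 3)], [(1, 3), (0, 1)]]
print(lower_bounds(jobs))  # (5, 6)
```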

1.2 Our results

Our first result improves Proposition 1.1 by a doubly logarithmic factor and provides further improvements for important special cases.

Theorem 1 There are the following deterministic algorithms for general job-shop scheduling, delivering schedules of makespan $O((P_{\max} + \Pi_{\max}) \cdot \rho)$:

(a) a polynomial-time algorithm, with
$$\rho = \frac{\log(m\mu)}{\log\log(m\mu)} \cdot \left\lceil \frac{\log(\min\{m\mu,\, p_{\max}\})}{\log\log(m\mu)} \right\rceil,$$
and if we replace $m$ by $n$ in this bound, then such a schedule can also be computed in NC;

(b) a polynomial-time algorithm, with
$$\rho = \frac{\log m}{\log\log m} \cdot \log(\min\{m,\, p_{\max}\});$$

(c) an NC algorithm, with
$$\rho = \frac{\log m}{\log\log m} \cdot \log(\min\{n,\, p_{\max}\}).$$

Thus, part (a) improves on the previous approximation bound by a doubly logarithmic factor. The impact of parts (b) and (c) is best seen for preemptive scheduling, wherein $p_{\max} = 1$, and for the related situations where $p_{\max}$ is "small". Our motivation for focusing on these cases is twofold. First, preemptability is known to be a powerful primitive in various scheduling models; see, e.g., [4]. Second, the above result of [10] shows that preemptability is powerful for acyclic job-shops. It is a major open question whether there is a schedule of makespan $O(P_{\max} + \Pi_{\max})$ for general job-shop scheduling and, if so, in what cases it can be found efficiently. In view of the above result of [10], one way to attack this question is to study (algorithmically) the problem parametrized by $p_{\max}$, focusing on the case of "small" $p_{\max}$. Recall that even the case of $n = m = 3$ with $p_{\max} = 1$ is NP-hard. Parts (b) and (c) above show that, as long as the number of machines is small or fixed, we get very good approximations. (It is trivial to get an approximation factor of $m$; our approximation ratio is $O(\log m/\log\log m)$ if $p_{\max}$ is fixed.) Note that for the case in which $p_{\max}$ is small, part (c) is both a derandomization and an improvement of the previous-best parallel algorithm for job-shop scheduling (see Proposition 1.1).

We further explore the issue of when good approximations are possible, once again with a view to generalizing the above key result of [10]; this is done by the somewhat-technical Theorem 2. We take high probability to mean a probability of $1 - \epsilon$, where $\epsilon$ is a fixed positive constant as small as we please. This can be amplified by repetition to give an $\epsilon$ which tends to zero exponentially in the size of the problem instance. Theorem 2 shows that if (a) no job requires too much of any given machine for processing, or if (b) repeated uses of the same machine by a given job are well-separated in time, then good approximations are possible. Say that a job-shop instance is $w$-separated if every distinct pair $((M_{j,\ell}, t_{j,\ell}), (M_{j,r}, t_{j,r}))$ of operations of the same job with the same machine (i.e., every pair such that $M_{j,\ell} = M_{j,r}$) has $|\ell - r| \ge w$.

Theorem 2 There is a randomized polynomial-time algorithm for job-shop scheduling that, with high probability, delivers a schedule of makespan $O((P_{\max} + \Pi_{\max}) \cdot \rho)$, where:

(a) if every job needs at most $u$ time units on each machine, then
$$\rho = \frac{\log u}{\log\log u} \cdot \left\lceil \frac{\log(\min\{m\mu,\, p_{\max}\})}{\log\log u} \right\rceil;$$

(b) if the job-shop instance is $w$-separated and $p_{\max} = 1$, then
$$\rho = \begin{cases} 1 & \text{if } w \ge \log(P_{\max} + \Pi_{\max})/2, \\[4pt] \dfrac{\log(P_{\max} + \Pi_{\max})}{w \log(\log(P_{\max} + \Pi_{\max})/w)} & \text{otherwise.} \end{cases}$$

Most of our results rely on probabilistic ideas; in particular, we exploit a "random delays" technique due to [10]. We make four main contributions, which we first sketch in general terms. The rough idea behind the "random delays" technique is as follows. We give each job a delay chosen randomly from a suitable range and independently of the other jobs, and imagine each job waiting out this delay and then running without interruption; next we argue that, with high probability, not too many jobs contend for any given machine at the same time [10, 17]. We then resolve contentions by "expanding" the above "schedule"; the "low contention" property is invoked to argue that a small amount of such expansion suffices. Our first main contribution is a better combinatorial solution to the above expansion problem, leading to a smaller expansion than does [17]. The second main contribution is in showing that a relaxed notion of "low contention" suffices: we do not require that the contention on machines be low at each time step. The first contribution helps prove Theorem 1(a); parts (b) and (c) of Theorem 1 make use of the second contribution. We de-randomize the sequential formulations using a technique of [2] and then parallelize. A simple but crucial ingredient of Theorem 1 is a new way to structure the operations of jobs in an initial (infeasible) schedule; we call this well-structuredness, and present it in Section 2. This notion is our third main contribution. Finally, Theorem 2 comes about by introducing random delays and by using the Lovász Local Lemma (LLL) [7], as is also done in [10]; our improvements arise from a study of the correlations involved and from using Theorem 1(a). This study of correlations is our fourth main contribution.

We present an improved approximation algorithm for general job-shop scheduling (Theorem 1(a)) and show further improvements for certain NP-hard special cases. In particular, parts (b) and (c) of Theorem 1 show the power of preemptability (or of small operation lengths) when $m \ll \mu$. Theorem 2 generalizes the result in [10] showing the existence of an $O(P_{\max} + \Pi_{\max})$ makespan schedule. Its part (a) shows quantitatively the advantages of having multiple copies of each machine; in such a case, we can try to spread out the operations of a job somewhat equitably among the various copies. Part (b) of Theorem 2 shows that if we have some (limited) flexibility in rearranging the operation sequence of a job, then it may pay to spread out multiple usages of the same machine.

The rest of this paper is organized as follows. Section 2 sets up some preliminary notions, Section 3 presents the proof of Theorem 1, and Theorem 2 is proved in Section 4.

2 Preliminaries

For any non-negative integer $k$, we let $[k]$ denote the set $\{1, 2, \ldots, k\}$. The base of the natural logarithm is denoted by $e$ as usual and, for convenience, we may use $\exp(x)$ to denote $e^x$. As in [17], we assume throughout that all operation lengths are powers of two. This can be achieved by multiplying each operation length by at most two, and this assumption affects our approximation factor and running time by only a constant factor. Thus, $P_{\max}$, $\Pi_{\max}$ and $p_{\max}$ should be replaced by some $P'_{\max} \le 2P_{\max}$, $\Pi'_{\max} \le 2\Pi_{\max}$ and $p'_{\max} \le 2p_{\max}$ respectively, in the sequel. We have avoided using such new notation, to retain simplicity.
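A minimal sketch of this rounding (our own helper; the paper only asserts the at-most-doubling):

```python
def round_up_to_power_of_two(t):
    """Round an integral operation length t >= 1 up to the next power of
    two; each length at most doubles, so P_max, Pi_max and p_max at most
    double as well."""
    return 1 << (t - 1).bit_length()

assert [round_up_to_power_of_two(t) for t in (1, 2, 3, 5, 8)] == [1, 2, 4, 8, 8]
```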

2.1 Convexity

Recall that a function $f : \mathbb{R} \to \mathbb{R}$ is convex in the interval $[a, b]$ if and only if, for all $a'$ and $b'$ such that $a \le a' \le b' \le b$ and for all $q \in [0, 1]$, $f(qa' + (1-q)b') \le qf(a') + (1-q)f(b')$.

Lemma 2.1 If $f$ is any convex function in the interval $[0, b]$ and $X$ is a random variable taking values only in $[0, b]$, then
$$\mathbf{E}[f(X)] \le f(0) + \frac{f(b) - f(0)}{b} \cdot \mathbf{E}[X].$$

Proof. Since $f$ is convex in $[0, b]$, we have $f(x) \le (1 - \frac{x}{b})f(0) + \frac{x}{b}f(b)$ for any $x \in [0, b]$. Thus, since $X$ is distributed in $[0, b]$, we have
$$f(X) \le f(0) + \frac{f(b) - f(0)}{b} \cdot X. \qquad (1)$$

The proof is completed by taking expectations on both sides of (1). □

Corollary 2.2 Let $f$ be any convex function in the interval $[0, b]$, and suppose $x_1, x_2, \ldots, x_\ell$ are such that $x_i \in [0, b]$ for each $i$. Then
$$\sum_i f(x_i) \le \ell f(0) + \frac{f(b) - f(0)}{b} \sum_i x_i.$$

Proof. Use Lemma 2.1 with $X$ distributed uniformly over the multiset $\{x_1, x_2, \ldots, x_\ell\}$. □
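As a quick sanity check of Corollary 2.2 (our own toy numbers, not from the paper), take $f(x) = x^2$, $b = 2$, $\ell = 3$ and $(x_1, x_2, x_3) = (0, 1, 2)$:
$$\sum_i f(x_i) = 0 + 1 + 4 = 5 \;\le\; \ell f(0) + \frac{f(b) - f(0)}{b} \sum_i x_i = 0 + \frac{4 - 0}{2} \cdot 3 = 6.$$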

2.2 Reductions

It is shown in [17] that, in deterministic polynomial time, we can reduce the general shop-scheduling problem to the case where (i) $p_{\max} \le n$, and (ii) $n \le \mathrm{poly}(m, \mu)$, while incurring an additive $O(P_{\max} + \Pi_{\max})$ term in the makespan of the schedule produced. Reduction (i) also works in NC. Thus, for our sequential algorithms we assume that $n \le \mathrm{poly}(m, \mu)$ and that $p_{\max} \le \mathrm{poly}(m, \mu)$, while for our NC algorithms we assume only that $p_{\max} \le n$.

2.3 Bounds

We use the following bounds on the expectations and tails of distributions.

Fact 2.3 (Hoeffding) Let $X_1, X_2, \ldots, X_\ell \in [0, 1]$ be independent random variables with $X = \sum_i X_i$. Then, for any $\delta > 0$, $\mathbf{E}[(1+\delta)^X] \le e^{\delta \mathbf{E}[X]}$.


Proof. By Lemma 2.1 and the convexity of $f(x) = (1+\delta)^x$, $\mathbf{E}[(1+\delta)^{X_i}] \le 1 + \delta\,\mathbf{E}[X_i] \le e^{\delta \mathbf{E}[X_i]}$. The fact then follows from the independence of $X_1, \ldots, X_\ell$. □

We define $G(\mu, \delta) \doteq (e^\delta/(1+\delta)^{1+\delta})^\mu$. Using Markov's inequality and Fact 2.3, we obtain the Chernoff-Hoeffding bounds on the tails of the binomial distribution (see [12]).

Fact 2.4 (Chernoff, Hoeffding) Let $X_1, X_2, \ldots, X_\ell \in [0, 1]$ be independent random variables with $X = \sum_i X_i$ and $\mathbf{E}[X] = \mu$. Then, for any $\delta > 0$, $\Pr[X \ge (1+\delta)\mu] \le G(\mu, \delta)$.

2.4 Random delays

Our algorithms use the random initial delays that were developed in [10] and used in [17]. A $B$-delayed schedule of a job-shop instance is constructed as follows. Each job $J_j$ is assigned a delay $d_j$ in $\{0, 1, \ldots, B-1\}$; in the resulting $B$-delayed schedule, the operations of $J_j$ are scheduled consecutively, starting at time $d_j$. A random $B$-delayed schedule is a $B$-delayed schedule in which the delays have been chosen independently and uniformly at random from $\{0, 1, \ldots, B-1\}$. Our algorithms schedule a job-shop instance by choosing a random $B$-delayed schedule for some suitable $B$, and then expanding this schedule to resolve conflicts between operations that use the same machine at the same time.

For a $B$-delayed schedule $S$, the contention $C(M_i, t)$ is the number of operations scheduled on machine $M_i$ in the time interval $[t, t+1)$. (Recall that operation lengths are integral.) For any job $J_j$, define the random variable $X_{i,j,t}$ to be 1 if some operation of $J_j$ is scheduled on $M_i$ in the time interval $[t, t+1)$ by $S$, and 0 otherwise. Since no two operations of $J_j$ contend for $M_i$ simultaneously, $C(M_i, t) = \sum_j X_{i,j,t}$. If the delays are chosen uniformly at random and $B \ge \Pi_{\max}$, then $\mathbf{E}[X_{i,j,t}]$ is at most the total processing time of $J_j$ on $M_i$ divided by $\Pi_{\max}$. Thus, $\mathbf{E}[C(M_i, t)] = \sum_j \mathbf{E}[X_{i,j,t}] \le \Pi_{\max}/\Pi_{\max} = 1$. We also note that the random variables $\{X_{i,j,t} \mid j \in [n]\}$ are mutually independent, for any given $i$ and $t$. We record all this as follows.

Fact 2.5 If $B \ge \Pi_{\max}$ and $S$ is a random $B$-delayed schedule, then for any machine $M_i$ and any time $t$, $C(M_i, t) = \sum_j X_{i,j,t}$, where the 0-1 random variables $\{X_{i,j,t} \mid j \in [n]\}$ are mutually independent. Also, $\mathbf{E}[C(M_i, t)] \le 1$.
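The following sketch (in Python; the instance encoding and names are ours) builds a random $B$-delayed schedule and tabulates the contentions $C(M_i, t)$ just defined:

```python
import random
from collections import defaultdict

def random_delayed_schedule(jobs, B, rng=random):
    """Pick independent uniform delays in {0, ..., B-1} and lay each job's
    operations out consecutively after its delay; return the delays and the
    contention C(M_i, t) of the resulting (infeasible) delayed schedule.

    jobs: one list per job of (machine, processing_time) operations.
    """
    delays = [rng.randrange(B) for _ in jobs]
    contention = defaultdict(int)           # (machine, t) -> C(M_i, t)
    for d, ops in zip(delays, jobs):
        t = d
        for machine, length in ops:
            for u in range(t, t + length):  # operation occupies [t, t+length)
                contention[(machine, u)] += 1
            t += length
    return delays, contention
```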

2.5 Well-structuredness

Recall that all operation lengths are assumed to be powers of two. We say that a delayed schedule $S$ is well-structured if, for each $k$, all operations of length $2^k$ begin in $S$ at a time that is an integral multiple of $2^k$. We shall use the following simple way of constructing such schedules from randomly delayed schedules. First create a new job-shop instance by replacing each operation $(M_{j,\ell}, t_{j,\ell})$ by the operation $(M_{j,\ell}, 2t_{j,\ell})$. Suppose $S$ is a random $B$-delayed schedule for this modified instance, for some $B$; we will call $S$ a padded random $B$-delayed schedule. From $S$, we can construct a well-structured delayed schedule $S'$ for the original job-shop instance: simply insert $(M_{j,\ell}, t_{j,\ell})$, aligned to the correct boundary, within the slot assigned to $(M_{j,\ell}, 2t_{j,\ell})$ by $S$. $S'$ will be called a well-structured random $B$-delayed schedule for the original job-shop instance.
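A sketch of the alignment step (our own arithmetic; `slot_start` is where the padded operation $(M_{j,\ell}, 2t_{j,\ell})$ begins under $S$): rounding the start up to the next multiple of the power-of-two length always fits inside the doubled slot, which is why $S'$ is well-structured.

```python
def aligned_start(slot_start, length):
    """Place an operation of power-of-two `length` inside its doubled slot
    [slot_start, slot_start + 2*length) so that it starts at a multiple of
    its length."""
    start = -(-slot_start // length) * length   # round up to a multiple
    assert slot_start <= start and start + length <= slot_start + 2 * length
    return start
```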

3 Proof of Theorem 1

In this section we prove Theorem 1. In Section 3.1 we give a randomized polynomial-time algorithm that proves part (b) of the theorem. In Section 3.2 we improve the algorithm to prove part (a). Finally, we discuss the derandomization and parallelization of these algorithms in Section 3.3. Throughout, we shall assume upper bounds on $n$ and $p_{\max}$ (i.e., $p_{\max} \le n$, and perhaps $n \le \mathrm{poly}(m, \mu)$ and $p_{\max} \le \mathrm{poly}(m, \mu)$) as described earlier; this explains terms such as $\log(\min\{m\mu, p_{\max}\})$ in the bounds of Theorem 1. Given a delayed schedule $S$, define $C(t) \doteq \max_i C(M_i, t)$.

Lemma 3.1 There is a randomized polynomial-time algorithm that takes a job-shop instance and produces a well-structured delayed schedule with makespan $L \le 2(P_{\max} + \Pi_{\max})$. With high probability, this schedule satisfies:
(a) $\forall i \in [m]\ \forall t \in \{0, 1, \ldots, L-1\}$, $C(M_i, t) \le \alpha$; and
(b) $\sum_{t=0}^{L-1} C(t) \le \beta(P_{\max} + \Pi_{\max})$,
where $\alpha = c_1 \log(m\mu)/\log\log(m\mu)$ and $\beta = c_2 \log m/\log\log m$, for sufficiently large constants $c_1, c_2 > 0$.

Proof. Let $B = 2\Pi_{\max}$ and let $S$ be a padded random $B$-delayed schedule of the modified instance. $S$ has a makespan of at most $2(P_{\max} + \Pi_{\max})$. Let $S'$ be the well-structured random $B$-delayed schedule for the original instance that can be constructed from $S$, as described in Section 2. The contention on any machine at any time under $S'$ is clearly no more than under $S$. Thus $S'$ satisfies (a) and (b) with high probability since, by the following, $S$ does.

Part (a). The following proof is based on that of [17]. Fix any positive integer $k$ and any $M_i$. For any set $U = \{u_1, u_2, \ldots, u_k\}$ of $k$ units of processing that need to be done on $M_i$, let $\mathrm{Collide}(U)$ be the event that all these $k$ units get scheduled at the same unit of time on $M_i$. It is not hard to see that, even conditioned on $u_1$ getting scheduled on $M_i$ at any given time $t_0$, the conditional probability of $\mathrm{Collide}(U)$ is at most $(1/(2\Pi_{\max}))^{k-1}$; thus, $\Pr[\mathrm{Collide}(U)] \le (1/(2\Pi_{\max}))^{k-1}$. Since there are at most $\binom{2\Pi_{\max}}{k}$ ways of choosing $U$, we get

$$\Pr[\exists t : C(M_i, t) \ge k] \le \binom{2\Pi_{\max}}{k} \left(\frac{1}{2\Pi_{\max}}\right)^{k-1},$$
and so $\Pr[\exists t : C(M_i, t) \ge k] \le 2\Pi_{\max}/k!$. Thus,
$$\Pr[\exists t\ \exists i : C(M_i, t) \ge k] \le 2m\Pi_{\max}/k!.$$
But $\Pi_{\max} \le np_{\max}$, which by our assumptions is $\mathrm{poly}(m, \mu)$ (recall that the reductions of Section 2 ensure that $n$ and $p_{\max}$ are both at most $\mathrm{poly}(m, \mu)$). Since $\lceil \alpha \rceil! > (m\mu)^{c_1/2}$ for sufficiently large $m$ or $\mu$, we can satisfy (a) with high probability if we choose $c_1$ sufficiently large.

Part (b). Let $\beta' = \epsilon\beta/2$, where $\epsilon$ is the desired constant in the probability bound. Let the constant $c_2$ in the definition of $\beta$ be sufficiently large so that $\beta' > 1$. Fix any $M_i$ and $t$, and let $\mu = \mathbf{E}[C(M_i, t)]$. (By Fact 2.5, $\mu \le 1$.) By Fact 2.3, with $1 + \delta = \beta'$,
$$\mathbf{E}[(\beta')^{C(M_i, t)}] \le e^{(\beta'-1)\mu} \le e^{\beta'-1}.$$
Hence, for any given $t$,
$$\mathbf{E}[(\beta')^{C(t)}] \le \sum_{i \in [m]} \mathbf{E}[(\beta')^{C(M_i, t)}] \le m e^{\beta'-1}. \qquad (2)$$
Since the function $x \mapsto (\beta')^x$ is convex, by Jensen's inequality we get that $\mathbf{E}[(\beta')^{C(t)}] \ge (\beta')^{\mathbf{E}[C(t)]}$. If we choose $c_2$ sufficiently large then $(\beta')^{\beta'} \ge m e^{\beta'-1}$ and so, by (2), $\mathbf{E}[C(t)] \le \beta'$. By linearity of expectation, $\mathbf{E}[\sum_t C(t)] \le 2\beta'(P_{\max} + \Pi_{\max})$ and finally, by Markov's inequality, we have
$$\Pr\Big[\sum_t C(t) > \beta(P_{\max} + \Pi_{\max})\Big] \le 2\beta'/\beta = \epsilon. \qquad \Box$$

3.1 Proof of Theorem 1(b)

Recall that our goal is a polynomial-time algorithm which delivers a schedule with makespan $O((P_{\max} + \Pi_{\max}) \cdot \frac{\log m}{\log\log m} \cdot \log(\min\{m, p_{\max}\}))$. Assume $S$ is a delayed schedule satisfying the conditions of Lemma 3.1, with makespan $L = O(P_{\max} + \Pi_{\max})$. We begin by partitioning the schedule into frames, i.e., the time intervals $\{[ip_{\max}, (i+1)p_{\max}) : i = 0, 1, \ldots, \lceil L/p_{\max} \rceil - 1\}$. By the definition of $p_{\max}$ and the fact that $S$ is well-structured, no operation straddles a frame. We construct a feasible schedule for the operations performed under schedule $S$ for each frame; concatenating these schedules yields a feasible schedule for the original problem.

We now give the frame-scheduling algorithm where, without loss of generality, we assume that its input is the first frame. Let $T$ be a rooted complete binary tree with $p_{\max}$ leaves labelled, from left to right, $0, 1, \ldots, p_{\max} - 1$. For a node $u$ of $T$, let $l(u)$ and $r(u)$ be the labels, respectively, of the leftmost and rightmost leaves of the subtree rooted at $u$. We shall associate the operations scheduled during the frame with the nodes of $T$ in a natural way. For $i = 1, \ldots, m$ we define $S_i(u)$ to be those operations $O$ that are scheduled on $M_i$ by $S$ for precisely the time interval $[l(u), r(u) + 1)$; each $O$ scheduled by $S$ in the first frame is in exactly one $S_i(u)$. Let $p(u) = (r(u) - l(u) + 1) \cdot \max_i \|S_i(u)\|$, where $\|S_i(u)\|$ denotes the cardinality of $S_i(u)$; $p(u)$ is an upper bound on the time needed to perform the operations $\bigcup_i S_i(u)$ associated with $u$. Let the nodes of $T$ be numbered $u_1, u_2, \ldots$ in the preorder traversal of $T$. Define $f(u_1) = 0$ and, for $j \ge 2$, $f(u_j) = \sum_{k < j} p(u_k)$. The algorithm simply schedules the operations in $S_i(u)$ on machine $M_i$ consecutively, beginning at time $f(u)$ and concluding no later than $f(u) + p(u)$. Let $S'$ be the resulting schedule; a code sketch follows the proof of the next lemma. Part (b) of Theorem 1 follows from Lemma 3.1 and the following lemma.

Lemma 3.2 $S'$ is feasible and has makespan at most $\sum_{u \in T} p(u)$, which is at most $(1 + \log_2 p_{\max}) \cdot \sum_{j=0}^{p_{\max}-1} C(j)$, where $C(t)$ is the maximum contention at time $t$ under schedule $S$.

Proof. By construction, no machine performs more than one operation at a time. Suppose $O_1$ and $O_2$ are distinct operations of job $J$ scheduled in the first frame, with $O_1 \in S_i(u)$ and $O_2 \in S_j(v)$, where possibly $i = j$. Assume $O_1$ concludes before $O_2$ begins under $S$; then $u$ and $v$ are roots of disjoint subtrees of $T$, and $u$ precedes $v$ in the preorder traversal of $T$. Thus $O_1$ concludes before $O_2$ begins in $S'$, and the new schedule is feasible.

Clearly the makespan of $S'$ is at most $\sum_{u \in T} p(u)$. Fix a node $u$ at some height $k$ in $T$. (We take leaves to have height 0.) Then $p(u) = 2^k \max_i \|S_i(u)\|$. Since the maximum number of operations scheduled at any time $t$ on any machine under $S$ is $C(t)$, we get that $\max_i \|S_i(u)\| \le C(t)$ for all $t \in \{l(u), \ldots, r(u)\}$. Thus,
$$p(u) \le 2^k \max_i \|S_i(u)\| \le \sum_{t \in \{l(u), \ldots, r(u)\}} C(t).$$

Since each leaf of $T$ has $(1 + \log_2 p_{\max})$ ancestors (including itself), the makespan of $S'$ is at most
$$\sum_{u \in T} p(u) \le \sum_{u \in T}\ \sum_{t \in \{l(u), \ldots, r(u)\}} C(t) = (1 + \log_2 p_{\max}) \cdot \sum_{t=0}^{p_{\max}-1} C(t). \qquad \Box$$
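The promised sketch of the frame-scheduling algorithm (ours; representing a node of $T$ by its interval length and block index is an implementation choice). Operations are assumed well-structured: each has a power-of-two length, starts at a multiple of its length, and lies inside $[0, p_{\max})$.

```python
from collections import defaultdict

def schedule_frame(ops, p_max):
    """Expand one frame of a well-structured delayed schedule S into a
    feasible schedule S' (the algorithm of Section 3.1).

    ops: list of (job, machine, start, length) tuples; p_max: a power of 2.
    Returns (schedule, makespan), where schedule lists
    (job, machine, new_start, length).
    """
    # S_i(u): node u is identified by (interval length, block index).
    node_ops = defaultdict(lambda: defaultdict(list))
    for job, machine, start, length in ops:
        node_ops[(length, start // length)][machine].append((job, length))

    # Preorder traversal of the complete binary tree with p_max leaves.
    def preorder(length, idx, out):
        out.append((length, idx))
        if length > 1:
            preorder(length // 2, 2 * idx, out)
            preorder(length // 2, 2 * idx + 1, out)
    order = []
    preorder(p_max, 0, order)

    schedule, f = [], 0                      # f plays the role of f(u)
    for u in order:
        per_machine = node_ops.get(u, {})
        if not per_machine:
            continue
        p_u = u[0] * max(len(v) for v in per_machine.values())   # p(u)
        for machine, ops_u in per_machine.items():
            start = f
            for job, length in ops_u:        # run S_i(u) consecutively
                schedule.append((job, machine, start, length))
                start += length
        f += p_u
    return schedule, f
```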

3.2 Proof of Theorem 1(a)

Recall that our goal is a polynomial-time algorithm which delivers a schedule with makespan
$$O\left((P_{\max} + \Pi_{\max}) \cdot \frac{\log(m\mu)}{\log\log(m\mu)} \cdot \left\lceil \frac{\log(\min\{m\mu,\, p_{\max}\})}{\log\log(m\mu)} \right\rceil\right).$$
We give a slightly different frame-scheduling algorithm and show that the feasible schedule for each frame has makespan $O(\alpha\, p_{\max} \lceil \log(p_{\max})/\log \alpha \rceil)$.

(The parameter $\alpha$ is from Lemma 3.1, and is assumed to be a power of two without loss of generality.) Thus, under the assumption that $p_{\max} \le \mathrm{poly}(m, \mu)$, the final schedule satisfies the bounds of Theorem 1(a).

The difficulty with the algorithm given in Section 3.1 is that the operations may be badly distributed to the nodes of $T$ by $S$, so that $S'$ is inefficient. To clarify, consider the following situation. Suppose that $u$ has left child $v$, $p(u)$ is determined by $S_i(u)$, and $p(v)$ is determined by $S_j(v)$. The troubling case is when $i \ne j$. If, for instance, $S_j(u) = \emptyset$ and $S_i(v) = \emptyset$, then $M_i$ and $M_j$ will have idle periods of $p(v)$ and $p(u)$, respectively. We can reduce the idle time by pushing some of the operations in $S_i(u)$ down to $v$. We now give a push-down algorithm that associates operations $S'_i(u)$ for machine $M_i$ with node $u$; a code sketch follows the proof of Lemma 3.3.

We begin by partitioning $T$ into subtrees. Mark a node $u$ if its height in $T$ is $\equiv 0 \pmod{\log \alpha}$. Eliminating the edges between each marked node and its children partitions $T$ into a collection of subtrees, each of height $\log \alpha$, except possibly the one rooted at the root of $T$, which may have height less than $\log \alpha$. The push-down algorithm redistributes operations within each of these subtrees independently, as follows. Let $T'$ be one of the subtrees of the partition. Initially $S'_i(u)$ is empty for all $u \in T'$. Let $v$ be a node in $T'$. Assume $v$ has height $k$ in $T'$ and that $\|S_i(v)\| = 2^\ell$, padding with dummy operations if necessary. If $k \ge \ell$, the algorithm distributes one operation of $S_i(v)$ to each $S'_i(w)$, where $w$ ranges over the descendants of $v$ at distance $\ell$ below $v$. Otherwise, it distributes $2^{\ell-k}$ operations to $S'_i(w)$, for each $w$ that is a leaf of $T'$ and a descendant of $v$. The algorithm repeats this procedure for each $i = 1, \ldots, m$ and for each $v$ in $T'$.

We now view $T$ once again as one complete binary tree. For each node $u$ of $T$, let $p(u)$ and $f(u)$ be defined as before, but relative to the $S'_i(u)$, $i = 1, \ldots, m$. Run the scheduling algorithm described above to produce a schedule $S'$.

Lemma 3.3 $S'$ is a feasible schedule with makespan at most $O(\alpha\, p_{\max} \lceil \log p_{\max}/\log \alpha \rceil)$.

Proof. The proof that $S'$ is feasible follows exactly as before. The makespan of $S'$ is no more than $\sum_{u \in T} p(u)$. Consider a subtree $T'$ of the partition, and assume the leaves of $T'$ are at height $j$ in $T$. Let $w$ be a node in $T'$ and let $V$ be the subset of nodes of $T'$ consisting of $w$ and its ancestors in $T'$.

First suppose $w$ is a leaf of $T'$. Let $v$ be a node in $V$, and assume that $v$ has height $k$ in $T'$ with $\|S_i(v)\| = 2^\ell$. Then $v$ contributes at most $2^{\ell-k}$ operations to $S'_i(w)$, each of length $2^{j+k}$; the time needed to perform these operations is $2^{\ell-k} \cdot 2^{j+k} = 2^j 2^\ell$. By Lemma 3.1(a), $\sum_{v \in V} \|S_i(v)\| \le 2\alpha$. (The factor of 2 arises from the possible padding of the $S_i(v)$ with dummy operations.) Thus $p(w) \le 2^{j+1}\alpha$.

Now suppose $w$ is at height $r > 0$ in $T'$. A node $v \in V$ at height $r + k$ in $T'$ contributes at most one operation to $S'_i(w)$, of length $2^{j+k+r}$. Thus $p(w) \le \sum_{k=0}^{\log\alpha - r} 2^{j+k+r} \le 2^{j+1}\alpha$.

Thus, if node $w$ is at height $r + j$ in $T$ and is in the layer of the partition containing $T'$, then $p(w) \le 2^{j+1}\alpha$; also, there are $p_{\max}/2^{r+j}$ nodes at this height in $T$. The sum of these $p(w)$'s is thus at most $2\alpha p_{\max}/2^r$. Each layer therefore contributes at most $4\alpha p_{\max}$, and there are $\lceil(\log p_{\max})/(\log \alpha)\rceil$ layers. Thus $\sum_{v \in T} p(v)$ satisfies the bound of the lemma. □
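The promised sketch of the push-down step, for a single machine $M_i$ within one subtree $T'$ of the partition (our own data layout: the node at height $k$ and left-to-right position $x$ in $T'$ has the $2^k$ leaves $x2^k, \ldots, (x+1)2^k - 1$ of $T'$ below it):

```python
def push_down(S, height):
    """Redistribute one machine's operation sets within a subtree T' of
    the given height (= log alpha).  S[k][x] is the list S_i(v) for the
    node v at height k and position x in T'; returns the sets S'_i(u),
    indexed the same way.  Dummy operations are represented by None.
    """
    Sp = {k: {x: [] for x in range(2 ** (height - k))}
          for k in range(height + 1)}
    for k in range(height + 1):
        for x in range(2 ** (height - k)):
            ops = list(S.get(k, {}).get(x, []))
            if not ops:
                continue
            size = 1 << (len(ops) - 1).bit_length()   # pad to 2**ell
            ops += [None] * (size - len(ops))
            ell = size.bit_length() - 1
            if k >= ell:
                # one operation to each descendant at distance ell below v
                for j, op in enumerate(ops):
                    Sp[k - ell][x * (2 ** ell) + j].append(op)
            else:
                # 2**(ell - k) operations to each leaf descendant of v
                per_leaf = 2 ** (ell - k)
                for j in range(2 ** k):
                    Sp[0][x * (2 ** k) + j].extend(
                        ops[j * per_leaf:(j + 1) * per_leaf])
    return Sp
```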

3.3 Derandomization and parallelization

Note that all portions of our algorithm are deterministic (and can be implemented in NC), except for the setting of the initial random delays, which we show how to derandomize now. The method of conditional probabilities could be applied to give the sequential derandomization; however, that result will follow from the NC algorithm that we present. As said before, we assume without loss of generality that $p_{\max}$ is at most $\mathrm{poly}(m, \mu)$ and at most $\mathrm{poly}(n, \mu)$, for our sequential and parallel algorithms respectively. We begin with a technical lemma.

Lemma 3.4 Let $x_1, x_2, \ldots, x_\ell$ be non-negative integers such that $\sum_i x_i = \ell a$, for some $a \ge 1$. Let $k \le a$ be any positive integer. Then $\sum_{i=1}^{\ell} \binom{x_i}{k} \ge \ell \binom{a}{k}$.

Proof. For real $x$, we define, as usual, $\binom{x}{k} \doteq (x(x-1)\cdots(x-k+1))/k!$. We first verify that the function $f(x) = \binom{x}{k}$ is non-decreasing and convex for $x \ge k$, by a simple check that the first and second derivatives of $f$ are non-negative for $x \ge k$. Think of minimizing $\sum_i \binom{x_i}{k}$ subject to the given constraints. If $x_i \le k - 1$ for some $i$, then there must be an index $j$ such that $x_j \ge k + 1$, since $\sum_i x_i \ge \ell k$. Thus, we can only decrease the objective function by simultaneously setting $x_i := x_i + 1$ and $x_j := x_j - 1$. Hence we may assume that all the integers $x_i$ are at least $k$. By the convexity of $f$ for $x \ge k$, the objective function is then at least $\ell \binom{a}{k}$. □

Define, for $z = (z_1, z_2, \ldots, z_n) \in \mathbb{R}^n$, the family of symmetric polynomials $S_j(z)$, $j = 0, 1, \ldots, n$, where $S_0(z) \equiv 1$ and, for $1 \le j \le n$,
$$S_j(z) \doteq \sum_{1 \le i_1 < i_2 < \cdots < i_j \le n} z_{i_1} z_{i_2} \cdots z_{i_j}.$$

[...]

... then for any $\delta > 0$, $\Pr[X \ge (1 + \delta)\mu] \le G(\mu, \delta)$ (this is the negative-correlation bound invoked below as Lemma 4.7).

We now prove Theorem 2(b). Suppose that we have a $w$-separated job-shop scheduling instance with $p_{\max} = 1$. Let $B = \Pi_{\max}$, and partition the random $\Pi_{\max}$-delayed schedule $S$ into frames of length $w$. Fix a machine $M_i$ and a frame $F_k$. For each operation $O$ that needs to be done on machine $M_i$, introduce an indicator variable $X_O$ for the event that this operation is scheduled in $F_k$ on $M_i$. Thus, the contention for $M_i$ in $F_k$ is $C_{i,k} = \sum_O X_O$. Note that $\mathbf{E}[C_{i,k}] \le w$. If $O$ and $O'$ are from the same job, then the probability that they are both scheduled on $M_i$ in $F_k$ is zero, by our given assumption on $w$. Furthermore, operations from different jobs are independent. Thus, the variables $X_O$ are negatively correlated in the sense of Lemma 4.7; hence $\Pr[C_{i,k} > w(1+\delta)] \le G(w, \delta)$, for any $\delta > 0$. Note that, for a suitably large constant $c''$, we can choose (i) $\delta = c''$ if $w \ge \log(P_{\max} + \Pi_{\max})/2$, and (ii) $\delta = c'' \log(P_{\max} + \Pi_{\max})/(w \log(\log(P_{\max} + \Pi_{\max})/w))$ if $w < \log(P_{\max} + \Pi_{\max})/2$, to ensure that $G(w, \delta) \le (e B P_{\max}^2 \Pi_{\max})^{-1}$. Thus, by Lemma 4.2, there is a $B$-delayed schedule for the instance in which each frame has maximum machine contention at most $w(1 + \delta)$. Choose such a setting, and focus on any frame. Note that every job has at most one operation on any machine in this frame, and that all operations are of length one. Thus, by invoking the main result of [10], each frame can be made feasible with makespan $O(w + w(1 + \delta)) = O(w(1 + \delta))$. We finally concatenate all the feasible schedules, concluding the proof.
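To illustrate the quantities in this proof, here is a sketch (ours) that tabulates the per-frame contentions $C_{i,k}$ of a delayed schedule of a unit-operation instance, with frames of length $w$; $w$-separation guarantees that no two indicators $X_O$, $X_{O'}$ of the same job can both be 1 for the same $(i, k)$.

```python
from collections import defaultdict

def frame_contentions(jobs, delays, w):
    """C_{i,k} for a p_max = 1 instance under the given delays.

    jobs: one list per job of the machines of its unit-length operations,
    in order; frame F_k is the time interval [k*w, (k+1)*w).
    """
    C = defaultdict(int)                  # (machine, frame) -> C_{i,k}
    for d, machines in zip(delays, jobs):
        for pos, machine in enumerate(machines):
            t = d + pos                   # unit operations run back-to-back
            C[(machine, t // w)] += 1
    return C
```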

4.3 Constructive versions

We start with some observations. Using reduction (i) from Section 2.2, we may assume that $P_{\max}$ and $\Pi_{\max}$ are both bounded by $\mathrm{poly}(n, \mu)$, which is polynomial in the input size. Let $P = m(P_{\max} + \Pi_{\max})$. Then it will suffice for the constructive versions of our randomized algorithms to run in time $\mathrm{poly}(n, \mu, P)$ and succeed with probability at least $1 - P^{-\gamma}$, for any given constant $\gamma > 0$.

We start by observing that the only non-constructive part of the algorithms that we have described comes from the use of Lemma 4.2 (which we proved using the Lovász Local Lemma). In this section we will prove the following constructive version of Lemma 4.2, where $c_3$ is taken to be a sufficiently large positive constant.

Lemma 4.2′ Suppose that $B \le 2(P_{\max} + \Pi_{\max})$ and that $\eta$ is an integer. Suppose that in the random $B$-delayed schedule for a job-shop instance, every machine $M_i$ and frame $F_k$ satisfies
$$\Pr[C_{i,k} \ge \eta] \le (P_{\max} + \Pi_{\max})^{-c_3}.$$
Then there is a polynomial-time randomized algorithm which assigns a delay in the range $\{0, 1, \ldots, B-1\}$ to each job so that, with probability at least $1 - P^{-\gamma}$, the resulting schedule has $C_{i,k} \le 3\eta$ for every machine $M_i$ and every frame $F_k$.

Lemma 4.2′ immediately implies the following constructive versions of Corollaries 4.3 and 4.4.

Corollary 4.3′ Suppose that $B \le 2(P_{\max} + \Pi_{\max})$ and that $\eta$ is an integer. Let $I$ be a job-shop instance with associated values $P_{\max}$ and $\Pi_{\max}$, and let $I'$ be the modified instance formed by replacing each operation $(M_{j,\ell}, t_{j,\ell})$ with the operation $(M_{j,\ell}, 2t_{j,\ell})$. For a multiple $\ell$ of $p_{\max}$, suppose that in a random $B$-delayed schedule for $I'$ we have, for all machines $M_i$ and all length-$\ell$ frames $F_k$,
$$\Pr[C_{i,k} \ge \eta] \le (P_{\max} + \Pi_{\max})^{-c_3}.$$

Then there is a polynomial-time randomized algorithm which, with probability at least $1 - P^{-\gamma}$, produces a well-structured schedule $S''$ for $I$ with makespan $B + 2P_{\max}$ such that, when $S''$ is broken into frames of length $\ell$, we have $C_{i,k} \le 3\eta$ for all $M_i$ and $F_k$.

Corollary 4.4′ There is a polynomial-time randomized algorithm which takes an instance of general job-shop scheduling and, with probability at least $1 - P^{-\gamma}$, produces a schedule with makespan
$$O\left((P_{\max} + \Pi_{\max}) \cdot \frac{\log(P_{\max} + \Pi_{\max})}{\log\log(P_{\max} + \Pi_{\max})} \cdot \left\lceil \frac{\log(\min\{m\mu,\, p_{\max}\})}{\log\log(P_{\max} + \Pi_{\max})} \right\rceil\right).$$

We obtain the constructive version of Theorem 2 by going through the proof presented earlier, and replacing the uses of Lemma 4.2 and Corollaries 4.3 and 4.4 with Lemma 4.2′ and Corollaries 4.3′ and 4.4′. We need only check that (i) each time we use Lemma 4.2 or Corollary 4.3, we can actually satisfy the stronger condition in Lemma 4.2′ or Corollary 4.3′, and (ii) each time we use Lemma 4.2 or Corollary 4.3, we can actually make do with the weaker guarantee in Lemma 4.2′ or Corollary 4.3′. (The guarantee is weaker because of the factor of 3 in "$3\eta$".) We make these checks here:

(i) In the proof of Corollary 4.4, we already establish the stronger condition stated in Corollary 4.4′. Furthermore, the extra factor of 3 makes no difference because the guarantee of the corollary is asymptotic.

(ii) Where Corollary 4.3 is applied in the "base case" of our proof of Theorem 2(a), we can establish the stronger condition in Corollary 4.3′ by taking $c$ to be a larger constant than before. In the resulting schedule, the contention on each machine in each frame is at most $6v$, so we get the same result as before.

(iii) Where Corollary 4.3 is applied in the "recursive case" of our proof of Theorem 2(a), we can establish the stronger condition in Corollary 4.3′ by taking $c$ to be a larger constant than before. In the resulting schedule, the contention on each machine in each frame is at most 3 times as large as it was in the original version of the proof, so we get the same asymptotic answer when we analyse the makespan of the final schedule.

(iv) Where Lemma 4.2 is used in the proof of Theorem 2(b), we can establish the stronger requirement of Lemma 4.2′ by appropriately increasing the value of $c''$. The contention in each frame is at most 3 times as large as it was before, so we get the result.

The remainder of this section is a proof of Lemma 4.2′. We will not specify the precise value of $c_3$ that we require, but our proof will require in various places that $c_3$ be larger than certain constants that do not involve $c'$, $c$ or $c''$. We may take such a large enough $c_3$, and then choose $c'$, $c$ and $c''$ suitably as described in items (i)-(iv) above; we will not discuss the actual choice of these constants further. We start by describing some useful ideas from earlier constructivizations of the Lovász Local Lemma.

4.3.1 Basic ideas from earlier constructivizations of the LLL

This section is based on the work of [5, 1, 11]. Given an undirected graph $G = (V, E)$, recall that a set $C \subseteq V$ is a dominating set of $G$ if and only if all vertices in $V - C$ have some neighbour in $C$. For any positive integer $\ell$, we define $G^\ell$ to be the graph on the same vertex set $V$, with two vertices adjacent if and only if there is a path of length at most $\ell$ that connects them in $G$. We let $\Delta(G)$ denote the maximum degree of the vertices in $G$. Also, suppose $R$ is some random process and that each vertex in $V$ represents some event related to $R$. We say that $G$ is a dependency graph for $R$ if and only if any independent set of vertices in $G$ corresponds to a set of stochastically independent events in $R$; thus, if $I$ is any independent set of vertices in $G$, then the probability that all of the corresponding events happen in $R$ equals the product of the probabilities of occurrence of the individual events involved.

Lemma 4.8 Given an undirected graph $G_1 = (V, E)$ with a dominating set $C$, let $G_2$ be the subgraph of $G_1^3$ that is induced by $C$. Pick an arbitrary maximal independent set $I$ in $G_2$, and let $G_3$ be the subgraph of $G_2^3$ induced by $I$. Suppose $G_1$ has a connected component with $N$ vertices. Then $G_3$ has a connected component with at least $N/((\Delta(G_1) + 1)(\Delta(G_1))^3)$ vertices.

Proof. Let $C_1 = (U, E')$ be a connected component of $G_1$ with $N$ vertices. Then the vertices in $C \cap U$ are connected in $G_2$, which is seen as follows. Suppose $v_1, u_1, u_2, \ldots, u_t, v_t$ is a path in $C_1$, where $v_1$ and $v_t$ are in $C \cap U$, and $u_1, \ldots, u_t$ are all in $U - (C \cap U)$. Then, since $C \cap U$ is a dominating set in $C_1$, each $u_i$ with $1 < i < t$ must have some neighbour $v_i \in C \cap U$. Hence there are paths $v_i, u_i, u_{i+1}, v_{i+1}$ for $1 \le i < t$, so $v_1$ and $v_t$ are connected in $G_2$. Thus, all of the vertices in $C \cap U$ are connected in $G_2$. Since $C \cap U$ is a dominating set in $C_1$, it is also easy to check that $|C \cap U| \ge N/(\Delta(C_1) + 1) \ge N/(\Delta(G_1) + 1)$. Thus, $C \cap U$ yields a connected component $C_2$ in $G_2$ that has at least $N/(\Delta(G_1) + 1)$ vertices. Since $\Delta(C_2) \le (\Delta(G_1))^3 - 1$, one can similarly show that $I \cap (C \cap U)$ yields a connected component $C_3$ in $G_3$ that has at least

$$\frac{|C \cap U|}{\Delta(C_2) + 1} \ge \frac{|C \cap U|}{(\Delta(G_1))^3} \ge \frac{N}{(\Delta(G_1) + 1)(\Delta(G_1))^3}$$

vertices. □

We present a key ingredient of [5, 1, 11]:

Theorem 3 Let a graph $G = (V, E)$ be a dependency graph for a random process $R$, with the probability of occurrence of the event represented by any vertex of $G$ being at most $r$. Run the process $R$, and let $C \subseteq V$ be the vertices of $G$ that represent the events (among the elements of $V$) that occurred during the run. (Thus, $C$ is a random subset of $V$ with some distribution.) Let $G_1$ be the subgraph of $G$ induced by $C \cup C'$, where $C'$ is the set of vertices of $G$ that have at least one neighbour in $C$. Then, for any $x \ge 1$, the probability that $G_1$ has a connected component with at least $x(\Delta(G) + 1)(\Delta(G))^3$ vertices is at most $|V| \Delta(G)^{-18} \sum_{y \ge x} (\Delta(G)^{18} r)^y$.

Proof. Observe that, by construction, $C$ is a dominating set for $G_1$. Construct $G_2$, $I$, and $G_3$ as in the statement of Lemma 4.8. Note that, deterministically, $\Delta(G_1) \le \Delta(G)$. Thus, by Lemma 4.8, we just need to bound the probability of $G_3$ having a connected component with $x$ or more vertices. Suppose that a size-$y$ set $S$ of vertices of $G$ forms a connected component in $G_3$. Then there is a subtree $T$ of $G_3$ which spans the vertices in $S$. $T$ can be represented by a list $L$ which lists all of the vertices that are visited in a depth-first traversal of $T$. Each vertex in $T$ (except the root) is visited both before its children and after each child (the root is only visited after each child), so each vertex appears on $L$ once for each edge adjacent to it in $T$. Thus, the length of $L$ is $2(y - 1)$. If two vertices are adjacent on $L$ then they are adjacent in $G_3$, which implies that the distance between them in $G$ is at most 9. Thus, given $G$, the number of possible sets $S$ is at most the number of possible lists $L$, which is at most $|V|$ (the number of choices for the first vertex on $L$) times $(\Delta(G)^9)^{2(y-1)}$ (the number of choices for the rest of $L$). Thus, the number of sets $S$ which could possibly correspond to size-$y$ connected components in $G_3$ is at most $|V| \Delta(G)^{-18} \Delta(G)^{18y}$.

The definition of $I$ implies that the vertices in $G_3$ form an independent set in $G$. Furthermore, given any independent set $S$ of size $y$ in $G$, the probability that its vertices are all in $G_3$ is at most $r^y$. Thus, the probability that $G_3$ has a connected component of size $y$ is at most $|V| \Delta(G)^{-18} (\Delta(G)^{18} r)^y$. □

4.3.2 Proof of Lemma 4.2′

Suppose that $B \le 2(P_{\max} + \Pi_{\max})$ and that $\eta$ is an integer. Suppose that in the random $B$-delayed schedule for a job-shop instance, every machine $M_i$ and frame $F_k$ satisfies
$$\Pr[C_{i,k} \ge \eta] \le (P_{\max} + \Pi_{\max})^{-c_3}.$$

In this section we describe a polynomial-time randomized algorithm which assigns a delay in the range $\{0, 1, \ldots, B-1\}$ to each job so that, with probability at least $1 - P^{-\gamma}$, the resulting schedule has $C_{i,k} \le 3\eta$ for every machine $M_i$ and every frame $F_k$.

The algorithm starts by ordering the jobs arbitrarily and processing the jobs in that order. When it is job $J_j$'s turn, we give it a random delay from $\{0, 1, \ldots, B-1\}$, and check whether this makes $C_{i,k} > \eta$ for any pair $(i, k)$ ($C_{i,k}$ here is, of course, measured with respect to the jobs that have been processed so far). If so, we temporarily set aside $J_j$ and all yet-unprocessed jobs that use machine $M_i$. Let $\mathcal{J}_1$ denote the set of jobs which do get assigned a delay by this process; a code sketch of this first phase appears below. We shall basically show that the problem of assigning delays to the jobs not in $\mathcal{J}_1$ decomposes into a set of much smaller subproblems, with high probability.

To this end, we first set up some notation in order to apply Theorem 3. Let $E_{i,k}$ denote the event that $C_{i,k} > \eta$ became true at some point in the above process. Construct an undirected graph $G$ with the events $E_{i,k}$ as nodes, and with an edge between two distinct nodes $E_{i,k}$ and $E_{i',k'}$ if and only if either (i) $i = i'$, or (ii) there is some job that uses both of the machines $M_i$ and $M_{i'}$. It is easy to check that $G$ is a valid dependency graph for the events $E_{i,k}$. Since there are at most $P_{\max} + B \le 3P_{\max} + 2\Pi_{\max}$ frames, the total number of vertices in $G$ is at most $m(3P_{\max} + 2\Pi_{\max})$. Note that at most $\Pi_{\max}$ jobs use any given machine and that each such job uses at most $P_{\max} - 1$ other machines. Thus, in the notation above, each node can have at most $P_{\max} + B - 1$ neighbours of type (i), and at most $\Pi_{\max}(P_{\max} - 1)(P_{\max} + B)$ neighbours of type (ii). So $\Delta(G)$ is at most $P_{\max} + B - 1 + \Pi_{\max}(P_{\max} - 1)(P_{\max} + B)$, i.e., at most
$$\Pi_{\max} P_{\max}(3P_{\max} + 2\Pi_{\max}) - 1 \le \frac{(P_{\max} + \Pi_{\max})^2}{4} \cdot 3(P_{\max} + \Pi_{\max}) - 1 \le (P_{\max} + \Pi_{\max})^3 - 1.$$

Run the above random process of ordering, randomly scheduling, and setting aside (if necessary) the jobs. Let $C$ be the set of events $E_{i,k}$ that actually happened. Since $C_{i,k}$ can never decrease as more and more jobs get initial delays, we see from the condition in Lemma 4.2′ that, for any given $i$ and $k$, $\Pr[E_{i,k}] \le (P_{\max} + \Pi_{\max})^{-c_3}$. Let $C'$ be the set of nodes of $G$ that have at least one neighbour in $C$, and let $G_1$ be the subgraph of $G$ that is induced by $C \cup C'$. Thus, by applying Theorem 3 with $|V| \le m(3P_{\max} + 2\Pi_{\max})$, $\Delta(G) \le (P_{\max} + \Pi_{\max})^3 - 1$, $x = \log m$ and $r \le (P_{\max} + \Pi_{\max})^{-c_3}$, we see that
$$\Pr[G_1 \text{ has a connected component with at least } (P_{\max} + \Pi_{\max})^{12} \log m \text{ nodes}] \le P^{-\gamma}/3, \qquad (7)$$
by taking $c_3$ appropriately large.
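The promised sketch of the first phase (ours; `eta` is the threshold $\eta$, frames have length $w$, and operations are encoded as (machine, time) pairs). Jobs that keep their delay form $\mathcal{J}_1$; the rest are set aside for the later phases.

```python
import random
from collections import defaultdict

def first_phase(jobs, B, w, eta, rng=random):
    """Process jobs in order, give each a random delay, and set a job aside
    (together with all unprocessed jobs sharing the violated machine) as
    soon as some contention C_{i,k} would exceed eta."""
    C = defaultdict(int)                     # (machine, frame) -> C_{i,k}
    delays, set_aside = {}, set()
    for j, ops in enumerate(jobs):
        if j in set_aside:
            continue
        d = rng.randrange(B)
        t, touched, bad_machine = d, [], None
        for machine, length in ops:
            for u in range(t, t + length):
                key = (machine, u // w)
                C[key] += 1
                touched.append(key)
                if C[key] > eta and bad_machine is None:
                    bad_machine = machine
            t += length
        if bad_machine is None:
            delays[j] = d                    # j joins J_1
        else:
            for key in touched:              # undo j's tentative contribution
                C[key] -= 1
            set_aside.add(j)
            for j2 in range(j + 1, len(jobs)):
                if any(m == bad_machine for m, _ in jobs[j2]):
                    set_aside.add(j2)
    return delays, set_aside
```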

Why does this help? Let us first give all the jobs in $\mathcal{J}_1$ their assigned delays, and remove them from consideration. By the definition of $\mathcal{J}_1$, this imposes a total contention of at most $\eta$ on any machine $M_i$ in any frame $F_k$. The key observation is as follows. Fix any remaining job $J_j$. Then for no two machines $M_i$ and $M_{i'}$ that are both used by $J_j$ can we have two nodes $E_{i,k}$ and $E_{i',k'}$ in different connected components of $G_1$. This is because $E_{i,k}$ and $E_{i',k'}$ are neighbours in $G$. Thus, the problem in each connected component of $G_1$ can be solved completely independently of the other connected components.

Assuming that all connected components of $G_1$ have at most $(P_{\max} + \Pi_{\max})^{12} \log m$ nodes (they will, with high probability; see (7)), suppose we repeat the above process on each connected component $CC_j$ of $G_1$ separately. Let $\mathcal{J}_2$ be the total set of jobs which do get assigned a delay, in analogy with $\mathcal{J}_1$. As above, we give the jobs in $\mathcal{J}_2$ their assigned delays and remove them from consideration; this imposes an additional contention of at most $\eta$ on any machine $M_i$ in any frame $F_k$. For any given $CC_j$, we again apply Theorem 3 to bound the probability that it in turn leads to at least one "large" connected component: we take $|V| \le (P_{\max} + \Pi_{\max})^{12} \log m$, $\Delta(G) \le (P_{\max} + \Pi_{\max})^3 - 1$, $x = \log((P_{\max} + \Pi_{\max})^{12} \log m)$ and $r \le (P_{\max} + \Pi_{\max})^{-c_3}$ in applying Theorem 3. We see that the probability that $CC_j$ leads to a connected component with more than $(P_{\max} + \Pi_{\max})^{12} \log((P_{\max} + \Pi_{\max})^{12} \log m)$ nodes is at most $((P_{\max} + \Pi_{\max}) \log m)^{-\Omega(1)}$. This probability can be brought down to $P^{-\gamma-1}/9$ by repeating this process $O(\log P)$ times. Now, $G$ has at most $m(3P_{\max} + 2\Pi_{\max}) < 3P$ nodes, and hence $G_1$ has at most $3P$ connected components; thus, the probability that at least one $CC_j$ leads in turn to a connected component with more than $12(P_{\max} + \Pi_{\max})^{12} \log(P_{\max} + \Pi_{\max}) + (P_{\max} + \Pi_{\max})^{12} \log\log m$ nodes is at most
$$(3P) \cdot P^{-\gamma-1}/9 = P^{-\gamma}/3. \qquad (8)$$

Suppose all remaining connected components have at most $12(P_{\max} + \Pi_{\max})^{12} \log(P_{\max} + \Pi_{\max}) + (P_{\max} + \Pi_{\max})^{12} \log\log m$ nodes. Once again, note that all these components can be handled independently; we focus on any one such component $CC'_j$. There are two cases:

Case I: $\log\log m \le P_{\max} + \Pi_{\max}$. In this case, the number of nodes in $CC'_j$ is $O((P_{\max} + \Pi_{\max})^{13})$. Thus, if we start with a random $B$-delayed schedule for the jobs associated with $CC'_j$, the probability that at least one "bad" event associated with $CC'_j$ (i.e., at least one node of $CC'_j$) happens is at most $O((P_{\max} + \Pi_{\max})^{13-c_3})$, by the condition in Lemma 4.2′. By taking $c_3$ sufficiently large and by repeating this $O(\log P)$ times, this probability can be driven down to $P^{-\gamma-1}/9$. Once again, since the total remaining number of connected components (such as $CC'_j$) is at most $3P$, the probability that any remaining bad event occurs is at most $(3P) \cdot P^{-\gamma-1}/9 = P^{-\gamma}/3$. Adding this to (7) and (8), we see that we have avoided all the bad events with probability at least $1 - P^{-\gamma}$: for any $i$ and $k$, $C_{i,k}$ is finally at most $3\eta$ (at most $\eta$ each due to $\mathcal{J}_1$, $\mathcal{J}_2$, and this final phase).

Case II: $\log\log m > P_{\max} + \Pi_{\max}$. Here, the number of nodes in $CC'_j$ is $O((\log\log m)^{13})$. Thus, the number of machines associated with $CC'_j$ is also at most $O((\log\log m)^{13})$, and hence the number of jobs associated with $CC'_j$ is at most $O((\log\log m)^{13} \Pi_{\max})$, i.e., $O((\log\log m)^{14})$. By Lemma 4.2, taking $c_3$ sufficiently large will ensure the existence of a $B$-delayed schedule for the jobs associated with $CC'_j$ such that none of the bad events in $CC'_j$ happens. But now there are at most $O((\log\log m)^{14})$ jobs associated with $CC'_j$, and each has only $B \le 2(P_{\max} + \Pi_{\max}) = O(\log\log m)$ possible initial delays! Thus, exhaustive search can be applied to find a "good" $B$-delayed schedule that we know to exist: the time needed for $CC'_j$ is at most
$$(\log\log m)^{O((\log\log m)^{14})} = m^{o(1)}.$$
Thus, in this case, (7) and (8) imply that for any $i$ and $k$, $C_{i,k}$ is finally at most $3\eta$ with probability at least $1 - P^{-\gamma}$. This concludes the proof.

Acknowledgements. Aravind Srinivasan thanks David Shmoys for introducing him to this area, for sharing many of his insights, and for his several suggestions. He also thanks Cliff Stein and Joel

Wein for their suggestions and many helpful discussions. We thank Uri Feige for bringing the work of [13] and [18] to our attention, and Bruce Maggs and Andrea Richa for sending us an updated version of [11].

References

[1] N. Alon, A parallel algorithmic version of the Local Lemma, Random Structures and Algorithms, 2 (1991), pp. 367-378.
[2] N. Alon and A. Srinivasan, Improved parallel approximation of a class of integer programming problems, in Proc. International Colloquium on Automata, Languages and Programming, 1996, pp. 562-573. Full version to appear in Algorithmica.
[3] D. Applegate and W. Cook, A computational study of the job-shop scheduling problem, ORSA Journal on Computing, 3 (1991), pp. 149-156.
[4] A. Bar-Noy, R. Canetti, S. Kutten, Y. Mansour, and B. Schieber, Bandwidth allocation with preemption, in Proc. ACM Symposium on Theory of Computing, 1995, pp. 616-625.
[5] J. Beck, An algorithmic approach to the Lovász Local Lemma, Random Structures and Algorithms, 2 (1991), pp. 343-365.
[6] J. Carlier and E. Pinson, An algorithm for solving the job-shop problem, Management Science, 35 (1989), pp. 164-176.
[7] P. Erdős and L. Lovász, Problems and results on 3-chromatic hypergraphs and some related questions, in Infinite and Finite Sets, A. Hajnal et al., editors, Colloq. Math. Soc. J. Bolyai 11, North-Holland, Amsterdam, 1975, pp. 609-627.
[8] L. A. Hall, Approximability of flow shop scheduling, in Proc. IEEE Symposium on Foundations of Computer Science, 1995, pp. 82-91.
[9] E. L. Lawler, J. K. Lenstra, A. H. G. Rinnooy Kan, and D. B. Shmoys, Sequencing and scheduling: algorithms and complexity, in Handbooks in Operations Research and Management Science, Volume 4: Logistics of Production and Inventory, S. C. Graves et al., editors, Elsevier, 1993, pp. 445-522.
[10] F. T. Leighton, B. Maggs, and S. Rao, Packet routing and job-shop scheduling in O(congestion + dilation) steps, Combinatorica, 14 (1994), pp. 167-186.
[11] F. T. Leighton, B. Maggs, and A. Richa, Fast algorithms for finding O(congestion + dilation) packet routing schedules, Technical Report CMU-CS-96-152, School of Computer Science, Carnegie Mellon University, 1996. To appear in Combinatorica.
[12] R. Motwani and P. Raghavan, Randomized Algorithms, Cambridge University Press, 1995.
[13] G. Rayzman, Approximation techniques for job-shop scheduling problems, MSc Thesis, Department of Applied Mathematics and Computer Science, The Weizmann Institute of Science, Rehovot 76100, Israel, July 1996.
[14] J. P. Schmidt, A. Siegel, and A. Srinivasan, Chernoff-Hoeffding bounds for applications with limited independence, SIAM J. Discrete Math., 8 (1995), pp. 223-250.

[15] S. V. Sevast'yanov, Efficient construction of schedules close to optimal for the cases of arbitrary and alternative routes of parts, Soviet Math. Dokl., 29 (1984), pp. 447-450.
[16] S. V. Sevast'yanov, Bounding algorithm for the routing problem with arbitrary paths and alternative servers, Kibernetika, 22 (1986), pp. 74-79 (translation in Cybernetics, 22, pp. 773-780).
[17] D. B. Shmoys, C. Stein, and J. Wein, Improved approximation algorithms for shop scheduling problems, SIAM J. Comput., 23 (1994), pp. 617-632.
[18] Yu. N. Sotskov and N. V. Shaklevich, NP-hardness of scheduling problems with three jobs, Vesti Akad. Navuk BSSR Ser. Fiz.-Mat. Navuk, 4 (1990), pp. 96-101 (in Russian).
[19] D. P. Williamson, L. A. Hall, J. A. Hoogeveen, C. A. J. Hurkens, J. K. Lenstra, S. V. Sevast'yanov, and D. B. Shmoys, Short shop schedules, to appear in Operations Research, March/April 1997.
