J Sched (2007) 10: 365–373 DOI 10.1007/s10951-007-0042-8
List scheduling for jobs with arbitrary release times and similar lengths Rongheng Li · Huei-Chuen Huang
Published online: 20 October 2007 © Springer Science+Business Media, LLC 2007
Abstract This paper considers the problem of on-line scheduling of a list of independent jobs on m parallel identical machines, where each job has an arbitrary release time and a length in [1, r] with r ≥ 1. For the list scheduling (LS) algorithm, we give an upper bound on the competitive ratio for any m ≥ 1 and show that this upper bound is tight when m = 1. When m = 2, we present a tight bound for r ≥ 4. For r < 4, we give a lower bound and show that 2 provides an upper bound.

Keywords On-line scheduling · List scheduling · Competitive ratio · Makespan
1 Introduction

Since the classical on-line scheduling problem on m parallel identical machines was proposed by Graham (1969), on-line scheduling has been studied extensively in many variants and from many viewpoints. In the classical on-line scheduling problem, whenever a job arrives, it must be scheduled immediately on one of the machines without knowledge of
any future jobs. The objective is to minimize the makespan, i.e., the maximum completion time over all jobs in a schedule. The simplest algorithm for on-line scheduling of jobs on identical machines is the List Scheduling (LS) algorithm, introduced by Graham (1969). A generalization of Graham's classical on-line scheduling problem on m identical machines (Graham 1969) was proposed by Li and Huang (2004): the requests of all jobs appear in the form of orders at time zero, and for each order the scheduler is informed of the release time and processing time of the job. The problem can be formally defined as follows. A sequence of jobs is to be scheduled on m parallel identical machines, {M_1, M_2, ..., M_m}. We assume that the orders appear on-line. Whenever the request of an order is made for job J_j with release time r_j and processing time p_j, the scheduler has to assign a machine and a processing slot for J_j irrevocably, without knowledge of any future job orders. In this on-line situation, the jobs' release times may be arbitrary, whereas in the existing literature the jobs' release times are normally non-decreasing. This problem is a generalization of Graham's classical on-line scheduling problem (Graham 1969), where all jobs' release times are zero. Thus, we refer to it as a generalized on-line scheduling problem, or an on-line scheduling problem for jobs with arbitrary release times. The quality of a heuristic algorithm A is measured by its competitive ratio

R(m, A) = sup_L C^A_max(L) / C^OPT_max(L),

where C^A_max(L) and C^OPT_max(L) denote the makespans of the schedule produced by algorithm A and by an optimal off-line algorithm, respectively.

Li's work was supported in part by the National Natural Science Foundation of China (No. 10771060).
R. Li, Department of Mathematics, Hunan Normal University, Changsha 410081, Hunan Province, China. e-mail: [email protected]
H.-C. Huang, Department of Industrial and Systems Engineering, National University of Singapore, 1 Engineering Drive 2, Singapore 117576, Singapore. e-mail: [email protected]
For the general model where job processing times are arbitrary, the following results were shown in Li and Huang (2004):
• The competitive ratio of algorithm LS is bounded by 3 − 1/m, and this bound is tight.
• A modified algorithm MLS is proposed which is better than LS for any m ≥ 2.
• The competitive ratio of MLS is bounded by 2.9392 for any m ≥ 2.
• No heuristic algorithm has a competitive ratio better than 2.
From the results of Li and Huang (2004), algorithm LS is the best on-line algorithm for this general model when m = 1. It is known that, for the classical on-line problem, no algorithm can be better than LS for m ≤ 3. This motivates the question of whether the performance can be improved with the knowledge of some additional information, which leads to semi-online scheduling. In the semi-online version, the conditions to be considered on-line are somehow relaxed, or some partial additional information about the jobs is known in advance, and one wishes to improve upon the performance of the optimal algorithm for the classical on-line version. Different ways of relaxing the conditions give rise to different semi-online versions (Kellerer et al. 1997). Several types of partial information have been proposed to obtain algorithms with better performance: the total length of all jobs is known in advance (Kellerer et al. 1997); the largest job length is known in advance (He and Zhang 1999); the lengths of all jobs are known to lie in [p, rp], where p > 0 and r ≥ 1 (He and Zhang 1999; Kellerer 1991); all jobs arrive in non-increasing order of their lengths (Liu et al. 1996; He and Zhang 1999); etc. More recent publications on semi-online scheduling can be found in Azar and Regev (2001), Dósa and He (2004), Seiden et al. (2000), and Tan and He (2002). The problem of on-line scheduling with job lengths in [p, rp] is called on-line scheduling for jobs with similar lengths.
In this paper we consider on-line scheduling for jobs with arbitrary release times and similar lengths on m identical machines. For simplicity of presentation, we assume that the job length is in [1, r] with r ≥ 1, i.e., we assume p = 1. In Sect. 2, we define the LS algorithm. In Sect. 3, we study the performance of algorithm LS. We first give an upper bound of the competitive ratio for general m and show that the tight bound for m = 1 is 1 + r/(1 + r). When m = 2, we present a tight bound of the competitive ratio, (5r + 4)/(2(r + 2)), for r ≥ 4. For r < 4, we give a lower bound and show that 2 provides an upper bound for the competitive ratio. It is interesting to note that the value 2 also serves as a lower bound for the competitive ratio of the optimal algorithm with respect to the pure on-line version, because no heuristic algorithm can have a competitive ratio better than 2
for the pure on-line version (Li and Huang 2004). In Sect. 4, we conclude the paper and suggest some problems for further research.

2 Algorithm LS

Definition Suppose that J_j is the current job with release time r_j and processing time p_j. We say that machine M_i has an idle time interval for job J_j if there exists a time interval [T_1, T_2] satisfying the following two conditions:
1. Machine M_i is currently idle in the interval [T_1, T_2] and a job has been assigned on M_i to start processing at T_2.
2. T_2 − max{T_1, r_j} ≥ p_j.
It is obvious that if machine M_i has an idle time interval [T_1, T_2] for job J_j, then we can assign J_j to machine M_i within that idle interval. Now we describe algorithm LS, which is defined in Li and Huang (2004). Essentially, the algorithm assigns a job to be processed as early as possible when its order arrives.

Algorithm LS
Step 1. Assume that L_i is the scheduled completion time of machine M_i (i = 1, 2, ..., m). Reorder the machines so that L_1 ≤ L_2 ≤ ··· ≤ L_m, and let J_n be a new job given to the algorithm with release time r_n and running time p_n. Let s = max{r_n, L_1}.
Step 2. If there exist machines which have idle intervals for job J_n, select a machine M_i which has an idle interval [T_1, T_2] for job J_n with minimal T_1, and start job J_n on machine M_i at time max{T_1, r_n} in that idle interval. Otherwise, assign job J_n to machine M_1 to start processing at time s.

In the following we let M_i^A = (J_{i1}, J_{i2}, ..., J_{iq}) denote the job list assigned to machine M_i by algorithm A, where J_{is} ∈ {J_1, J_2, ..., J_n} (s = 1, 2, ..., q).

3 Main results

In the following, we always assume that the processing times of all jobs are confined to the interval [1, r], where r ≥ 1. Let L be the job list with n jobs, and let L_i be the completion time of machine M_i before job J_n is assigned, where the machines are reordered so that L_1 ≤ L_2 ≤ ··· ≤ L_m. Let u_{i1}, ..., u_{ik_i} denote all the idle time intervals of machine M_i (i = 1, 2, ..., m) just before J_n is assigned. The job started right after u_{ij} is denoted by J_{ij}, with release time r_{ij} and processing time p_{ij}. By the definitions of u_{ij} and r_{ij}, it is easy to know that r_{ij} is the
end point of u_{ij}. To simplify the presentation, we abuse the notation and use u_{ij} for the length of that interval as well. The following simple inequalities will be referred to later on:

p_n ≤ C^OPT_max(L),   m C^OPT_max(L) ≥ Σ_{i=1}^{n} p_i + U,   (1)

C^OPT_max(L) ≥ Σ_{j=1}^{k_i} (u_{ij} + p_{ij}),   i = 1, 2, ..., m,   (2)

(L_1 + p_n)/C^OPT_max(L) ≤ Σ_{i=1}^{m} (L_i + p_n)/(m C^OPT_max(L)) = (Σ_{i=1}^{n} p_i + Σ_{i=1}^{k_1} u_{1i} + ··· + Σ_{i=1}^{k_m} u_{mi} + (m − 1)p_n)/(m C^OPT_max(L)),   (3)

where U is the total idle time in the optimal schedule.

Theorem 1 For any m ≥ 2, we have

R(m, LS) ≤ 3 − 1/m − 1/r   if r ≥ m/(m − 1),
R(m, LS) ≤ 1 + 2r/(1 + 2r) + (m − 1)r/(m(1 + 2r))   if 1 ≤ r < m/(m − 1),   (4)

and R(1, LS) = 1 + r/(1 + r).

We will prove this theorem by examining a minimal counter-example of (4). A job list L = {J_1, J_2, ..., J_n} is called a minimal counter-example of (4) if (4) does not hold for L but holds for any job list L′ with |L′| < |L|. In the following discussion, let L be a minimal counter-example of (4). It is obvious that, for a minimal counter-example L, the makespan is the completion time of the last job J_n, i.e., L_1 + p_n. Hence we have

C^LS_max(L)/C^OPT_max(L) = (L_1 + p_n)/C^OPT_max(L).

We first establish Observation 3.1 and Lemma 3.1 for such a minimal counter-example.

Observation 3.1 If one of the machines has an idle interval [0, T] with T > r, then we can assume that at least one of the machines is scheduled to start processing at time zero.

Proof If no machine starts processing at time zero, let δ be the earliest starting time over all machines and t_0 = min{δ, T − r}. It is not difficult to see that every job's release time is at least t_0. We can therefore alter the problem instance by decreasing the release times of all jobs by t_0. After the alteration, the makespans of both the LS schedule and the optimal schedule decrease by t_0, and correspondingly the ratio of the two makespans increases. Hence the altered instance is again a counter-example satisfying our assumption.

Lemma 3.1 There is no idle time interval of length greater than 2r when m ≥ 2, and no idle time interval of length greater than r when m = 1.

Proof For m ≥ 2, suppose the conclusion is not true and let [T_1, T_2] be such an interval with T_2 − T_1 > 2r. Let L_0 be the job set consisting of all jobs that are scheduled to start at or before time T_1. By Observation 3.1, L_0 is not empty. Let L̄ = L \ L_0. Then L̄ is a counter-example too, because L̄ has the same LS makespan as L while the optimal makespan of L̄ is not larger than that of L. This contradicts the minimality of L. For m = 1, the same argument applies.

Now we are ready to prove Theorem 1.

Proof Let αr be the largest length of all the idle intervals. If α ≤ (r − 1)/r, then by (1), (2) and (3) we have

C^LS_max(L)/C^OPT_max(L) ≤ 1 + Σ_{i=1}^{k_1} u_{1i}/(m Σ_{i=1}^{k_1} (u_{1i} + p_{1i})) + ··· + Σ_{i=1}^{k_m} u_{mi}/(m Σ_{i=1}^{k_m} (u_{mi} + p_{mi})) + (m − 1)p_n/(m C^OPT_max(L))
≤ 1 + Σ_{i=1}^{k_1} u_{1i}/(m Σ_{i=1}^{k_1} (u_{1i} + 1)) + ··· + Σ_{i=1}^{k_m} u_{mi}/(m Σ_{i=1}^{k_m} (u_{mi} + 1)) + (m − 1)/m
≤ 1 + k_1 αr/(m(k_1 + k_1 αr)) + ··· + k_m αr/(m(k_m + k_m αr)) + (m − 1)/m
= 2 − 1/m + αr/(1 + αr) ≤ 3 − 1/m − 1/r,

where the last step uses αr ≤ r − 1 and the monotonicity of x/(1 + x), so that αr/(1 + αr) ≤ (r − 1)/r = 1 − 1/r.
Next, using C^OPT_max(L) ≥ 1 + αr instead of C^OPT_max(L) ≥ p_n, and observing that p_n ≤ r, we have

C^LS_max(L)/C^OPT_max(L) ≤ 1 + k_1 αr/(m(k_1 + k_1 αr)) + ··· + k_m αr/(m(k_m + k_m αr)) + (m − 1)r/(m(1 + αr))
= 1 + αr/(1 + αr) + (m − 1)r/(m(1 + αr)).
So if m ≥ 2, r ≥ m/(m − 1) and α ≥ (r − 1)/r, we have

C^LS_max(L)/C^OPT_max(L) ≤ [1 + αr/(1 + αr) + (m − 1)r/(m(1 + αr))]_{α = (r−1)/r} = 3 − 1/m − 1/r,

because 1 + αr/(1 + αr) + (m − 1)r/(m(1 + αr)) is a decreasing function of α in this case. Hence the conclusion for m ≥ 2 and r ≥ m/(m − 1) is proved. If m ≥ 2 and r < m/(m − 1), we have

C^LS_max(L)/C^OPT_max(L) ≤ 1 + 2r/(1 + 2r) + (m − 1)r/(m(1 + 2r)),

because α ≤ 2 by Lemma 3.1 and 1 + αr/(1 + αr) + (m − 1)r/(m(1 + αr)) is now an increasing function of α. Hence the conclusion for m ≥ 2 is proved. For m = 1 we have

C^LS_max(L)/C^OPT_max(L) ≤ 1 + αr/(1 + αr) ≤ 1 + r/(1 + r),

because α ≤ 1 by Lemma 3.1. To see that this bound is tight for m = 1, consider L = {J_1, J_2} with r_1 = r − ε, p_1 = 1, r_2 = 0, p_2 = r, and let ε tend to zero.

Fig. 1 The LS schedule and optimal schedule for the job list L(1)

Theorem 2 For the algorithm LS, we have

R(2, LS) ≥ (3r + 3)/(r + 3)   for 1 ≤ r < 2,
R(2, LS) ≥ (7r + 4)/(3r + 4)   for 2 ≤ r < 4,   (5)
R(2, LS) ≥ (5r + 4)/(2(r + 2))   for r ≥ 4.
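As a quick arithmetic check (ours, not part of the paper), the three expressions in (5) can be evaluated exactly with Python's fractions module; they agree at the breakpoints r = 2 and r = 4, so the lower bound is continuous in r:

```python
from fractions import Fraction

def lb(r):
    """Piecewise lower bound on R(2, LS) from Theorem 2, in exact arithmetic."""
    r = Fraction(r)
    if r < 2:
        return (3 * r + 3) / (r + 3)
    if r < 4:
        return (7 * r + 4) / (3 * r + 4)
    return (5 * r + 4) / (2 * (r + 2))

# The neighboring expressions agree at the breakpoints, so the bound is continuous:
assert (3 * Fraction(2) + 3) / (Fraction(2) + 3) == (7 * Fraction(2) + 4) / (3 * Fraction(2) + 4)  # 9/5
assert (7 * Fraction(4) + 4) / (3 * Fraction(4) + 4) == (5 * Fraction(4) + 4) / (2 * (Fraction(4) + 2))  # 2
```

At r = 4 the bound reaches 2, which is exactly where Theorem 3 below supplies a matching upper bound.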
Fig. 2 The LS schedule and optimal schedule for the job list L(2)
Proof To prove (5), we consider the following three job lists according to the value of r.
For 1 ≤ r < 2, we consider the job list L(1) = {J_1, J_2, ..., J_8} with

r_1 = r_2 = r − ε,   r_3 = r_4 = r + 2 − 2ε,   r_5 = r_8 = 0,   r_6 = r_7 = r + 1 − ε,
p_1 = p_2 = p_3 = p_4 = p_6 = p_7 = 1,   p_5 = p_8 = r.

The LS schedule and an optimal schedule can be constructed as follows (see Fig. 1):

M_1^LS = (J_1, J_3, J_6, J_7),   M_2^LS = (J_2, J_4, J_5, J_8),
M_1^OPT = (J_5, J_1, J_6, J_3),   M_2^OPT = (J_8, J_2, J_7, J_4).

Thus we have C^LS_max(L(1)) = 3r + 3 − 2ε and C^OPT_max(L(1)) = r + 3. Letting ε tend to zero, we get

R(2, LS) ≥ (3r + 3)/(r + 3).

For 2 ≤ r < 4, we consider the job list L(2) = {J_1, J_2, ..., J_8} with

r_1 = r_2 = r − ε,   r_3 = r_4 = 3r/2 + 1 − 2ε,   r_5 = r_8 = 0,   r_6 = r_7 = r + 1 − ε,
p_1 = p_2 = p_3 = p_4 = 1,   p_5 = p_8 = 2p_6 = 2p_7 = r.

The LS schedule and an optimal schedule can be constructed as follows (see Fig. 2):

M_1^LS = (J_1, J_3, J_6, J_7, J_8),   M_2^LS = (J_2, J_4, J_5),
M_1^OPT = (J_5, J_1, J_6, J_3),   M_2^OPT = (J_8, J_2, J_7, J_4).

Hence we have

C^LS_max(L(2)) = 7r/2 + 2 − 2ε,   C^OPT_max(L(2)) = 3r/2 + 2.

So R(2, LS) ≥ (7r + 4 − 4ε)/(3r + 4). Letting ε tend to zero, we get the conclusion for 2 ≤ r < 4.
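To make the lower-bound constructions concrete, the following is a small sketch of the LS rule of Sect. 2 (our reading of Steps 1 and 2; the tie-breaking rule between equally loaded machines is not fixed by the paper, and the makespan below does not depend on it), replayed on the instance L(1) with r = 1.5 and ε = 0.01:

```python
def ls_schedule(jobs, m=2):
    """List scheduling with arbitrary release times (Sect. 2).

    jobs: list of (release, length) pairs in order of arrival.
    Returns per-machine lists of (start, finish) intervals.
    """
    mach = [[] for _ in range(m)]  # assigned intervals per machine, sorted by start
    for r, p in jobs:
        # Step 2: look for an idle interval [T1, T2] with T2 - max(T1, r) >= p,
        # taking the one with minimal T1 over all machines.
        best = None  # (T1, machine, start)
        for i in range(m):
            prev_end = 0.0
            for s, e in mach[i]:
                if s - max(prev_end, r) >= p:  # the job fits in the gap [prev_end, s]
                    if best is None or prev_end < best[0]:
                        best = (prev_end, i, max(prev_end, r))
                    break
                prev_end = e
        if best is not None:
            _, i, start = best
        else:
            # Step 1: otherwise start at s = max(r_n, L_1) on a least-loaded machine.
            i = min(range(m), key=lambda k: mach[k][-1][1] if mach[k] else 0.0)
            load = mach[i][-1][1] if mach[i] else 0.0
            start = max(r, load)
        mach[i].append((start, start + p))
        mach[i].sort()
    return mach

r, eps = 1.5, 0.01
jobs_L1 = [  # the instance L(1) used above for 1 <= r < 2
    (r - eps, 1.0), (r - eps, 1.0),                   # J1, J2
    (r + 2 - 2 * eps, 1.0), (r + 2 - 2 * eps, 1.0),   # J3, J4
    (0.0, r),                                         # J5
    (r + 1 - eps, 1.0), (r + 1 - eps, 1.0),           # J6, J7
    (0.0, r),                                         # J8
]
makespan = max(m[-1][1] for m in ls_schedule(jobs_L1))
print(makespan)  # 3r + 3 - 2*eps = 7.48
```

With ε tending to zero, the ratio against the optimal makespan r + 3 tends to (3r + 3)/(r + 3), matching the first case of (5); the instances L(2) and L(3) can be replayed in the same way.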
Next we consider the job list L(3) = {J_1, J_2, ..., J_7} for r ≥ 4 with

p_7 = 2p_5 = 2p_6 = r,   p_1 = p_2 = p_3 = p_4 = 1,
r_1 = r_2 = r/2 − ε,   r_3 = r_4 = r + 1 − 2ε,   r_5 = r_6 = r_7 = 0.

The LS schedule and an optimal schedule can be constructed as follows (see Fig. 3):

M_1^LS = (J_1, J_3, J_5, J_7),   M_2^LS = (J_2, J_4, J_6),
M_1^OPT = (J_5, J_6, J_1, J_3),   M_2^OPT = (J_7, J_2, J_4).

Fig. 3 The LS schedule and optimal schedule for the job list L(3)

Hence we can conclude that

R(2, LS) ≥ (5r + 4)/(2(r + 2)).

Theorem 3 For the algorithm LS and m = 2 we have

R(2, LS) = (5r + 4)/(2(r + 2)),   r ≥ 4.   (6)

Proof By Theorem 2, we only need to prove that

C^LS_max(L)/C^OPT_max(L) ≤ (5r + 4)/(2(r + 2))   (7)

for any job list L when m = 2. If (7) does not hold, then there exists a minimal counter-example. In the following, let r ≥ 4, let L = {J_1, J_2, ..., J_n} be a minimal counter-example, and let u_f be the end point of the idle interval with the maximum length. All other notation remains the same as before. By the minimality of L, we can assume that C^LS_max(L) = L_1 + p_n.
For the convenience of the discussion later on, we define that, in the LS schedule, if a job is started at its release time then the machine on which the job is assigned is said to have an idle time interval corresponding to the job. Under this definition, an idle time interval may have length zero, and each machine has an idle time interval corresponding to the first job assigned on it in the LS schedule. Because u_{i1}, ..., u_{ik_i} denote all the idle time intervals of machine M_i just before J_n is assigned in the LS schedule, we have k_i ≥ 1, and machine M_i is idle in the interval [0, u_{i1}] (i = 1, 2). Intuitively, the smaller the total length of the idle intervals on a machine in the LS schedule, the better the competitive ratio. For this reason we set

α_i = C^OPT_max(L) − Σ_{j=1}^{k_i} u_{ij},   i = 1, 2.   (8)

By (2), we have C^OPT_max(L) ≥ Σ_{j=1}^{k_i} u_{ij} + k_i. Hence we have

α_i ≥ k_i.   (9)

Lemma 3.2 We have (7) if one of the following two conditions holds:
(a) α_1 + α_2 ≥ 5 and p_n + 1 ≤ C^OPT_max(L) < p_n + 2;
(b) α_1 + α_2 ≥ 6 and C^OPT_max(L) < p_n + 1.

Proof (a) We have Σ_{j=1}^{k_i} u_{ij} + α_i = C^OPT_max(L) < p_n + 2 ≤ r + 2 (i = 1, 2). By (1), (2), and (3), we have

C^LS_max(L)/C^OPT_max(L) ≤ 1 + Σ_{j=1}^{k_1} u_{1j}/(2(Σ_{j=1}^{k_1} u_{1j} + α_1)) + Σ_{j=1}^{k_2} u_{2j}/(2(Σ_{j=1}^{k_2} u_{2j} + α_2)) + p_n/(2(p_n + 1))
≤ 1 + (r + 2 − α_1)/(2(r + 2)) + (r + 2 − α_2)/(2(r + 2)) + r/(2(r + 1))
≤ 1 + (2r − 1)/(2(r + 2)) + r/(2(r + 1)) = (5r + 4)/(2(r + 2)) − 1/(2(r + 1)(r + 2)) < (5r + 4)/(2(r + 2)),

where the second inequality uses Σ_{j=1}^{k_i} u_{ij} = C^OPT_max(L) − α_i < r + 2 − α_i together with p_n ≤ r, and the third uses α_1 + α_2 ≥ 5.
(b) The proof is similar to case (a), just using C^OPT_max(L) ≥ p_n.

In the following we refer to a job that is assigned to be processed after job J_{1k_1} or job J_{2k_2} in the LS schedule as a last idle later job (see Fig. 4).

Note 1 In practice, we do not know exactly how big α_i is, but we know α_i ≥ k_i (i = 1, 2) by (9). In addition, the following provides three ways to estimate α_i.
Fig. 4 Last idle later jobs on a machine in the LS schedule

(I) If k jobs are assigned to be processed after J_{ik_i} on one machine in an optimal schedule, then we can conclude that α_i ≥ k_i + k.
(II) Suppose J_{i(k_i−1)} is the unique job between the intervals u_{i(k_i−1)} and u_{ik_i} in the LS schedule. If a last idle later job J is assigned to be processed after J_{i(k_i−1)}, and k other jobs are assigned to be processed after J on one machine in an optimal schedule, then we can conclude that α_i ≥ k_i + k − 1 or α_1 + α_2 ≥ k_1 + k_2 + k + 1. This is because of the following two reasons: (a) If job J cannot be finished before the release time of job J_{ik_i}, then we have α_i ≥ k_i + k − 1. (b) If job J can be finished before the release time of job J_{ik_i}, then the finish time of J_{i′k_{i′}} must be less than the finish time of J_{i(k_i−1)} in the LS schedule by the rules of LS, where i′ ≠ i and J is assigned on M_{i′} in the LS schedule. Hence α_{i′} ≥ k_{i′} + k + 1 holds. Because only two machines are considered, we have α_1 + α_2 ≥ k_1 + k_2 + k + 1.
(III) If there are k jobs other than J_{ij} (j = 1, 2, ..., k_i − 1) which are assigned to be processed before job J_{ik_i} in the LS schedule, then α_i ≥ k_i + k.

Note 2 If there is no last idle later job on one of the two machines, say M_1, before J_n is assigned in the LS schedule, we have

C^LS_max(L)/C^OPT_max(L) ≤ (r_{1k_1} + p_{1k_1} + p_n)/max{r_{1k_1} + p_{1k_1}, p_n} ≤ 2 ≤ (5r + 4)/(2(r + 2)).

Similarly, it is not difficult to argue the case when no last idle later job is assigned on M_2. Hence, in the following discussion we can assume that there are at least three last idle later jobs and that, before J_n is assigned in the LS schedule, there is at least one last idle later job on each machine.

Note 3 Suppose k_2 ≥ 2. By the minimality of L, we have u_{11} < u_{21} + Σ_{J_j ∈ S} p_j + u_{22} = r_{22}, where S is the set of jobs assigned between u_{21} and u_{22} in the LS schedule. This is because if u_{11} ≥ r_{22} we could obtain another counter-example by removing the jobs of S from the job list L.

Lemma 3.3 If k_1 = k_2 = 1 and r ≥ 4, then (7) holds.

Proof Let r_{j_0} = min{r_j | j = 1, 2, ..., n − 1}, i.e., job J_{j_0} has the least release time among the jobs in L \ {J_n}. Then r_{j_0} + p_{j_0} ≥ u_{i1} (i = 1, 2) by the rules of LS and, in any optimal schedule, at least one machine is idle in the interval [0, r_{j_0}]. Suppose that job J_{j_0} is not assigned on machine M_ī in the LS schedule and M_ī^LS = (J_{j_1}, J_{j_2}, ..., J_{j_{n(ī)}}) before J_n is assigned. Then

C^LS_max(L) ≤ u_{ī1} + Σ_{s=1}^{n(ī)} p_{j_s} + p_n,
2C^OPT_max(L) ≥ r_{j_0} + Σ_{s=1}^{n} p_s ≥ r_{j_0} + p_{j_0} + Σ_{s=1}^{n(ī)} p_{j_s} + p_n ≥ u_{ī1} + Σ_{s=1}^{n(ī)} p_{j_s} + p_n.

Hence we have

C^LS_max(L)/C^OPT_max(L) ≤ 2 ≤ (5r + 4)/(2(r + 2)).
Lemma 3.4 For k_1 + k_2 ≥ 3 and r ≥ 4, we have (7) if one of the following two conditions holds:
(i) C^OPT_max(L) ≥ p_n + 2 and u_f ≤ r + 1;
(ii) u_f ≥ r + 1.
Proof It is obvious that Σ_{j=1}^{k_i} u_{ij} + k_i ≤ u_f + 1 (i = 1, 2). We will give the proof in two cases.
Case 1. k_1 + k_2 ≥ 4.

C^LS_max(L)/C^OPT_max(L) ≤ 1 + (u_{11} + ··· + u_{1k_1})/(2(u_{11} + ··· + u_{1k_1} + k_1)) + (u_{21} + ··· + u_{2k_2})/(2(u_{21} + ··· + u_{2k_2} + k_2)) + p_n/(2C^OPT_max(L))
≤ 1 + (2(u_f + 1) − k_1 − k_2)/(2(u_f + 1)) + p_n/(2C^OPT_max(L)).

Thus when (i) holds, we have

C^LS_max(L)/C^OPT_max(L) ≤ 1 + (2u_f − 2)/(2(u_f + 1)) + p_n/(2(p_n + 2)) ≤ 1 + 2r/(2(r + 2)) + r/(2(r + 2)) = (5r + 4)/(2(r + 2)).

When (ii) holds, we have

C^LS_max(L)/C^OPT_max(L) ≤ 1 + (2u_f − 2)/(2(u_f + 1)) + p_n/(2(u_f + 1)) ≤ 1 + (2u_f − 2 + r)/(2(u_f + 1)) ≤ (5r + 4)/(2(r + 2)),

where the last inequality results from the fact that (2u_f − 2 + r)/(2(u_f + 1)) is a decreasing function of u_f and u_f ≥ r + 1.
Case 2. k_1 = 1, k_2 = 2 (or, symmetrically, k_1 = 2 and k_2 = 1).
First we show that C^OPT_max(L) ≥ u_{11} + 2. Because there are at least three last idle later jobs, at least two of them, say J̄_1 and J̄_2, must be assigned to the same machine in the optimal schedule. We consider the following two situations: (a) J_{11} and J_{22} are assigned on the same machine in the optimal schedule; (b) J_{11} and J_{22} are assigned on different machines in the optimal schedule. In case (a), J_{11} can be assigned to be processed before J_{22} by Note 3. Hence we have C^OPT_max(L) ≥ u_{11} + 2 by Note 1 (I). In case (b), either J_{11} or J_{22} must be assigned on the same machine with J̄_1 and J̄_2 in the optimal schedule. Hence we also have the same conclusion, because none of J_{11}, J_{22}, J̄_1 and J̄_2 can be finished before u_{11} by the rules of LS.
If u_{11} > r, then u_{21} = 0 by Observation 3.1, and one machine has an idle time interval [0, t] with t ≥ u_{11} − r in the optimal schedule, because only job J_{21} can have release time less than t. Set t(u_{11}) = u_{11} − r if u_{11} > r, and t(u_{11}) = 0 otherwise. By (1) we have 2C^OPT_max(L) ≥ Σ_{i=1}^{n} p_i + t(u_{11}), and it is easy to show that (u_{11} − t(u_{11}))/(2(u_{11} + 2)) ≤ r/(2(r + 2)) by the definition of t(u_{11}). So by (1)–(3) we have

C^LS_max(L)/C^OPT_max(L) ≤ (Σ_{i=1}^{n} p_i + t(u_{11}) + u_{11} − t(u_{11}) + u_{21} + u_{22} + p_n)/(2C^OPT_max(L))
≤ 1 + (u_{11} − t(u_{11}))/(2(u_{11} + 2)) + (u_f − 1)/(2(u_f + 1)) + p_n/(2C^OPT_max(L)).

Thus, when (i) holds, we have

C^LS_max(L)/C^OPT_max(L) ≤ 1 + r/(2(r + 2)) + (u_f − 1)/(2(u_f + 1)) + p_n/(2(p_n + 2)) ≤ 1 + 3r/(2(r + 2)) = (5r + 4)/(2(r + 2)).

When (ii) holds, we have

C^LS_max(L)/C^OPT_max(L) ≤ 1 + r/(2(r + 2)) + (u_f − 1)/(2(u_f + 1)) + p_n/(2(u_f + 1)) ≤ 1 + r/(2(r + 2)) + (u_f − 1 + r)/(2(u_f + 1)) ≤ (5r + 4)/(2(r + 2)),

where the last inequality results from the fact that (u_f − 1 + r)/(2(u_f + 1)) is a decreasing function of u_f.

Lemma 3.5 If k_1 + k_2 ≥ 3, r ≥ 4, p_n + 1 ≤ C^OPT_max(L) < p_n + 2 and u_f ≤ r + 1, then (7) holds.

Proof By C^OPT_max(L) < p_n + 2, job J_n occupies one machine with at most one other job in the optimal schedule. We will give the proof of this lemma according to the following three cases.
Case 1. k_1 + k_2 ≥ 5. In this case it is easy to see that Lemma 3.2 (a) holds.
Case 2. k_1 = k_2 = 2. If there are at least two jobs between two neighboring idle intervals on one of the machines in the LS schedule, then (7) holds by Note 1 (III) and Lemma 3.2 (a). Hence from now on, we simply assume that there is only one job between any two neighboring idle intervals in the LS schedule. If at least one job is assigned to be processed after job J_{12} or J_{22} on one machine in the optimal schedule, say one job is assigned to be processed after J_{12}, then we have α_1 ≥ k_1 + 1 = 3 by Note 1 (I). Together with α_2 ≥ k_2 = 2, we have that Lemma 3.2 (a) holds. So we can assume that jobs J_{12} and J_{22} are the last jobs on machines M_1 and M_2, respectively, in the optimal schedule. Without loss of generality, we assume that jobs J_n and J_{22} are assigned to be processed on M_2. By Note 2, J_{11}, J_{12}, J_{21}, and at least two last idle later jobs are assigned to M_1. Let J̄ be a last idle later job which is assigned on M_1 in the optimal schedule. We have the following two conclusions:
Conclusion 1. If the optimal schedule on M_1 is M_1^OPT = (..., J̄, J, ..., J_{11}, ...) (or M_1^OPT = (..., J̄, J, ..., J_{21}, ...)), then the schedule M_1^OPT = (..., J̄, J_{11}, J, ...) (or M_1^OPT = (..., J̄, J_{21}, J, ...)) is also an optimal schedule, because J̄ cannot be finished before the release time of J_{11} (or J_{21}) by the rules of LS.
Conclusion 2. If the optimal schedule on M_1 is M_1^OPT = (..., J_{11}, J̄, J, J′, ...) (or M_1^OPT = (..., J_{21}, J̄, J, J′, ...)), then we have α_1 + α_2 ≥ 5 by Note 1 (II). This means that Lemma 3.2 (a) holds.
By Conclusion 1, we only need to consider the case that at most one last idle later job is assigned to be processed before J_{11} or J_{21} on M_1 in the optimal schedule. Together with Conclusion 2, we conclude that we only need to consider the optimal schedule on M_1 of the form M_1^OPT = (J̄_1, J_{11}, J_{21}, J̄_2, J_{12}) or M_1^OPT = (J̄_2, J_{11}, J_{21}, J̄_1, J_{12}), where J̄_1 and J̄_2 are two last idle later jobs on M_1 and M_2, respectively. We only discuss the case M_1^OPT = (J̄_1, J_{11}, J_{21}, J̄_2, J_{12}) in detail. The
other case M_1^OPT = (J̄_2, J_{11}, J_{21}, J̄_1, J_{12}) can be discussed in a similar way.
J̄_2 cannot be finished before or at the end point of the idle interval u_{12} on M_1 in the optimal schedule, because it is assigned after job J_{21}. Let x be the processing time of J̄_1 that is required after the end point of the idle interval u_{11}, and let y be the processing time of J̄_2 that is required after the end point of the idle interval u_{12} in the optimal schedule. Define z and t as follows:

z = |(u_{21} + p_{21} + u_{22} + p_{22} + p̄_2) − (u_{11} + p_{11} + u_{12} + p_{12} + p̄_1)|,
t = (u_{21} + p_{21} + u_{22} + p_{22}) − (u_{11} + p_{11} + u_{12} + p_{12}),

i.e., z is the absolute value of the difference between the scheduled finish times of M_1 and M_2 before job J_n is assigned, and t is the difference between the finish time of job J_{22} and the finish time of job J_{12} in the LS schedule. By the definitions and our assumptions, it is easy to get

u_{11} = r̄_1 + p̄_1 − x,   u_{12} = x + p_{21} + p̄_2 − y,   u_{11} + u_{12} = r̄_1 + p̄_1 + p_{21} + p̄_2 − y.

And p̄_1 = t + p̄_2 − z or p̄_1 = t + p̄_2 + z, so in either case we have p̄_1 ≤ t + p̄_2 + z. By the rule of the algorithm LS, we have u_{21} < r̄_1 + p̄_1. If u_{22} ≥ p̄_1, then r̄_1 + p̄_1 > u_{21} + p_{21} + u_{22} by the rules of the algorithm LS. This means α_2 ≥ 5 from M_1^OPT = (J̄_1, J_{11}, J_{21}, J̄_2, J_{12}). Hence we can assume that u_{22} < p̄_1 ≤ t + p̄_2 + z. So u_{21} + u_{22} < r̄_1 + p̄_1 + p̄_2 + t + z holds. By definition, it is easy to get the following two equations:

u_{21} + p_{21} + u_{22} + p_{22} = (u_{11} + p_{11} + u_{12} + p_{12}) + t = r̄_1 + p̄_1 + p̄_2 + p_{21} + p_{11} + p_{12} + t − y,
u_{11} + p_{11} + u_{12} + p_{12} + y = r̄_1 + p̄_1 + p̄_2 + p_{21} + p_{11} + p_{12}.

Hence we have

C^OPT_max(L) ≥ r̄_1 + p̄_1 + p̄_2 + p_{21} + p_{11} + p_{12} + t − y   if t > y,
C^OPT_max(L) ≥ r̄_1 + p̄_1 + p̄_2 + p_{21} + p_{11} + p_{12}   if t ≤ y.

When t ≤ y, let C^OPT_max(L) = r̄_1 + p̄_1 + p̄_2 + p_{21} + p_{11} + p_{12} + x*, where x* ≥ 0. Because C^OPT_max(L) ≥ p_n + 1,

r̄_1 + p̄_1 + p̄_2 ≥ p_n + 1 − (p_{11} + p_{12} + p_{21} + x*)

holds. Furthermore, we have

C^LS_max(L)/C^OPT_max(L) = (L_1 + L_2 + 2p_n − z)/(2C^OPT_max(L)) = (Σ_{i=1}^{n} p_i + u_{11} + u_{12} + u_{21} + u_{22} + p_n − z)/(2C^OPT_max(L))
≤ 1 + (u_{11} + u_{12} + u_{21} + u_{22} + p_n − z)/(2C^OPT_max(L))
≤ 1 + (2(r̄_1 + p̄_1 + p̄_2) + p_{21} + t − y + p_n)/(2(r̄_1 + p̄_1 + p̄_2 + p_{21} + p_{11} + p_{12} + x*)).

Hence if (p_{21} + t − y + p_n)/(2(p_{21} + p_{11} + p_{12} + x*)) ≤ 1, we have

C^LS_max(L)/C^OPT_max(L) ≤ 2 ≤ (5r + 4)/(2(r + 2));

otherwise we have

C^LS_max(L)/C^OPT_max(L) ≤ 1 + (2(p_n + 1 − p_{11} − p_{12} − p_{21} − x*) + p_{21} + t − y + p_n)/(2(p_n + 1)) ≤ 1 + (3p_n − 3)/(2(p_n + 1)) < (5r + 4)/(2(r + 2)),

because (2x + p_{21} + t − y + p_n)/(2(x + p_{21} + p_{11} + p_{12} + x*)) is a decreasing function of x in this case. For t > y, using C^OPT_max(L) = r̄_1 + p̄_1 + p̄_2 + p_{21} + p_{11} + p_{12} + t − y + x* instead of C^OPT_max(L) = r̄_1 + p̄_1 + p̄_2 + p_{21} + p_{11} + p_{12} + x*, where x* ≥ 0, we can get the same conclusion.
Case 3. k_1 = 1, k_2 ≥ 2 (or, symmetrically, k_1 ≥ 2, k_2 = 1). Suppose J_n is assigned on machine M_2 in the optimal schedule. By Note 2 and C^OPT_max(L) < p_n + 2, at least four jobs are assigned on machine M_1 in the optimal schedule. By the rules of our algorithm, only job J_{21} can be finished before u_{11} in any optimal schedule. Thus we have C^OPT_max(L) ≥ u_{11} + 3 if J_{21} is not the first job on M_1 or the second job does not start before u_{11} on M_1 in the optimal schedule. Hence α_1 ≥ 3. Together with α_2 ≥ k_2 ≥ 2, we have that Lemma 3.2 (a) holds. Otherwise, we have C^OPT_max(L) ≥ u_{11} + 2. Furthermore, the second job cannot be finished before u_{22} on M_1 in the optimal schedule, because only job J_{11} can be finished before u_{22} by the rules of LS. Hence we have α_2 ≥ 3. Thus, Lemma 3.2 (a) holds.

Lemma 3.6 If k_1 + k_2 ≥ 3, r ≥ 4, C^OPT_max(L) < p_n + 1 and u_f ≤ r + 1, then (7) holds.
Proof In this case, one of the machines processes only the job J_n, and all other jobs occupy the other machine in the optimal schedule. We will give the proof of this lemma according to the following three cases.
Case 1. k_1 + k_2 ≥ 5. Because job J_{1k_1} and job J_{2k_2} must be assigned to the same machine, we have α_1 ≥ k_1 + 1 or α_2 ≥ k_2 + 1 by Note 1 (I). Therefore, Lemma 3.2 (b) holds.
Case 2. k_1 = k_2 = 2. If there are at least two jobs between two neighboring idle intervals on one of the machines in the LS schedule, this case can be handled like Case 1 by Note 1 (III). Hence we can assume that there is only one job between any two neighboring idle intervals in the LS schedule. Because job J_n occupies a machine exclusively in the optimal schedule, we can assume that job J_{1k_1} is assigned to be processed before job J_{2k_2} on machine M_1 in the optimal schedule. If at least one job besides J_{2k_2} is assigned to be processed after J_{1k_1} in the optimal schedule, then we have α_1 ≥ k_1 + 2 = 4 by Note 1 (I). Together with α_2 ≥ 2, we have that Lemma 3.2 (b) holds. If only J_{2k_2} is assigned to be processed after J_{1k_1} in the optimal schedule, then we can assume that the optimal schedule on machine M_1 is M_1^OPT = (..., J_{21}, ..., J̄, ..., J_{1k_1}, J_{2k_2}), where J̄ is a last idle later job. Hence α_2 ≥ 3 holds by Note 1 (II). Together with α_1 ≥ k_1 + 1 = 3, we have that Lemma 3.2 (b) holds.
Case 3. k_1 = 1, k_2 ≥ 2 (or, symmetrically, k_1 ≥ 2, k_2 = 1). In this case, Lemma 3.2 (b) can be shown to hold by the argument used in Case 3 of Lemma 3.5, noting that the number of jobs assigned on machine M_1 in the optimal schedule is now at least five instead of four.
Hence we can conclude that (6) holds for r ≥ 4 by Theorem 2 and Lemmata 3.3–3.6, i.e., Theorem 3 is proved.

Theorem 4 For r ≤ 4, we have R(2, LS) ≤ 2.

Proof By Theorem 3, we have R(2, LS) = 2 when r = 4. Hence R(2, LS) ≤ 2 for r ≤ 4, because R(2, LS) is a nondecreasing function of r.
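As a numerical sanity check (ours, not the paper's), exact arithmetic confirms that the tight ratio of Theorem 3 equals 2 at r = 4 and increases with r, while the lower bounds of Theorem 2 stay strictly below 2 for r < 4, consistent with Theorem 4:

```python
from fractions import Fraction

def theorem3_ratio(r):
    """Tight competitive ratio (5r + 4) / (2(r + 2)) from Theorem 3, exactly."""
    r = Fraction(r)
    return (5 * r + 4) / (2 * (r + 2))

# Equals 2 at r = 4, and is increasing in r (it rewrites as 5/2 - 3/(r + 2)):
assert theorem3_ratio(4) == 2
samples = [4, 5, 10, 100]
assert all(theorem3_ratio(a) < theorem3_ratio(b) for a, b in zip(samples, samples[1:]))
# The r < 4 lower bounds of Theorem 2 stay strictly below 2:
assert all((7 * Fraction(r) + 4) / (3 * Fraction(r) + 4) < 2 for r in (2, 3, Fraction(39, 10)))
```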
4 Conclusion and future research

In this paper, some competitive ratios of the algorithm LS have been derived, but a few interesting questions remain open. For Theorem 3, we only give a tight bound for the case r ≥ 4. For 2 ≤ r ≤ 4 we conjecture that the tight bound is (7r + 4)/(3r + 4). However, for 1 ≤ r < 2, the same lower bound does not appear to be tight, and we do not have a good guess of its magnitude.
References

Azar, Y., & Regev, O. (2001). On-line bin stretching. Theoretical Computer Science, 268, 17–41.
Dósa, G., & He, Y. (2004). Semi-online algorithms for parallel machine scheduling problems. Computing, 72, 355–363.
Graham, R. L. (1969). Bounds on multiprocessing timing anomalies. SIAM Journal on Applied Mathematics, 17, 416–429.
He, Y., & Zhang, G. (1999). Semi on-line scheduling on two identical machines. Computing, 62, 179–187.
Kellerer, H. (1991). Bounds for non-preemptive scheduling jobs with similar processing times on multiprocessor systems using LPT-algorithm. Computing, 46, 183–191.
Kellerer, H., Kotov, V., Speranza, M. G., & Tuza, Z. (1997). Semi on-line algorithms for the partition problem. Operations Research Letters, 21, 235–242.
Li, R., & Huang, H. C. (2004). On-line scheduling for jobs with arbitrary release times. Computing, 73, 79–97.
Liu, W. P., Sidney, J. B., & Vliet, A. (1996). Ordinal algorithm for parallel machine scheduling. Operations Research Letters, 18, 223–232.
Seiden, S., Sgall, J., & Woeginger, G. J. (2000). Semi-online scheduling with decreasing job sizes. Operations Research Letters, 27, 215–227.
Tan, Z. Y., & He, Y. (2002). Semi-online problem on two identical machines with combined partial information. Operations Research Letters, 30, 408–414.