(Probabilistic) Recurrence Relations Revisited
Shiva Chaudhuri¹* and Devdatt Dubhashi²**

¹ Max-Planck-Institut für Informatik, Im Stadtwald, 66123 Saarbrücken, Germany
² BRICS***, Department of Computer Science, University of Aarhus, Ny Munkegade, DK-8000 Aarhus C, Denmark
Abstract. The performance attributes of a broad class of randomised algorithms can be described by a recurrence relation of the form T(x) = a(x) + T(H(x)), where a is a function and H(x) is a random variable. For instance, T(x) may describe the running time of such an algorithm on a problem of size x. Then T(x) is a random variable, whose distribution depends on the distribution of H(x). To give high probability guarantees on the performance of such randomised algorithms, it suffices to obtain bounds on the tail of the distribution of T(x). Karp derived tight bounds on this tail distribution when the distribution of H(x) satisfies certain restrictions. However, his proof is quite difficult to understand. In this paper, we derive bounds similar to Karp's using standard tools from elementary probability theory, such as Markov's inequality, stochastic dominance and a variant of Chernoff bounds applicable to unbounded variables. Further, we extend the results, showing that similar bounds hold under weaker restrictions on H(x). As an application, we derive performance bounds for an interesting class of algorithms that was outside the scope of the previous results.
1 Introduction and Motivation

Consider a randomised algorithm that works as follows: on an input of size x, it performs a(x) work to generate a subproblem of size H(x) (where H(x) is a random variable taking values in [0, x], whose distribution depends on the algorithm) and then solves the subproblem recursively. Then, the running time of the algorithm may be described by the (probabilistic) recurrence relation

T(x) = a(x) + T(H(x)).   (1)

Hence, T(x) is a random variable whose distribution depends on the distribution of H(x). The performance of the randomised algorithm can be described in

* [email protected]. Supported by the ESPRIT Basic Research Actions Program of the EC under contract No. 7141 (project ALCOM II).
** [email protected]. Work done while at the Max-Planck-Institut für Informatik, supported by the ESPRIT Basic Research Actions Program of the EC under contract No. 7141 (project ALCOM II).
*** Basic Research in Computer Science, Centre of the Danish National Research Foundation.
terms of certain statements on the distribution of this random variable. For instance, one may compute the expected running time, or one may give more precise information on the tail of the distribution of this random variable. Such a recursion also describes succinctly the size or structure of certain randomly generated combinatorial structures, for instance the structure of random permutations of objects or the sizes of cliques generated by a random greedy process.

In the literature, the analyses of many randomised algorithms fit this framework (see § 2 below for some typical examples, or the numerous ones exhibited in [1]). However, their analyses are frequently carried out by disparate ad hoc techniques. Karp [1] recognised that all these algorithms can be analysed uniformly in the above framework, and gave general theorems which could be applied in the fashion of a "cook-book" substitution to give the desired performance guarantees on the algorithms.

To state the hypothesis and results of Karp, we introduce some notations and definitions. In the following, T(x) satisfies equation (1), where a is a fixed function, H(x) is a random variable taking values in [0, x], and E[H(x)] ≤ m(x) for a fixed function m satisfying 0 ≤ m(x) ≤ x. Also, a and m are non-decreasing functions. The equation

τ(x) = a(x) + τ(m(x))   (2)

can be regarded as the deterministic counterpart of the probabilistic recurrence (1). Intuitively, it is an equation governing the expected values. Whenever this equation has a solution, it has a unique least non-negative solution u(x), given by u(x) = Σ_{i≥0} a(m^{(i)}(x)), where we define m^{(0)}(x) := x and m^{(i+1)}(x) := m(m^{(i)}(x)) for i ≥ 0. Karp proved
Theorem 1 (Karp [1]). Suppose m(x) and a(x) are continuous functions satisfying (1) m(x)/x is non-decreasing and (2) a(x) is strictly increasing on {x | a(x) > 0}. Then for every positive real x and every positive integer w,

Pr[T(x) ≥ u(x) + w·a(x)] ≤ (m(x)/x)^w.

This theorem gives very precise bounds on the performance attributes of algorithms. It also admits a fine-tuned tradeoff between the relaxation permitted in the running time and the high probability guarantee. However, the method used to prove the result, while ingenious, offers no intuition about why the result holds, and the proofs are difficult to follow. Further, the conditions (1) and (2) in the theorem are technical artifices introduced by the methods of proof. In particular, for weaker conditions on m(x)/x, very similar bounds hold, as shown in Theorem 2 below. Specifically, condition (1) prevents the application of Theorem 1 whenever m(x)/x is decreasing, i.e. whenever m(x) grows more slowly than x. For instance, it prevents a direct application of Karp's results to an interesting class of randomised algorithms based on a probabilistic strategy called the Rödl Nibble [3]. We give an alternative analysis that yields comparable, although somewhat weaker, bounds. We essentially reduce the problem to the analysis of waiting
times between successes in a sequence of Bernoulli trials. The reduction is obtained using essentially three components: Markov's inequality, a "Folklore Lemma" on stochastic majorisation, and a variant of the Chernoff bound applicable to unbounded random variables. The structure of the proof is thus strongly intuitive, reflecting the behaviour of the randomised process. It is also quite general, in that when m(x)/x is non-decreasing, it yields bounds comparable to Theorem 1, and when m(x)/x satisfies a weaker condition, the same proof yields exponentially decreasing bounds. In particular, it covers the case of the Rödl Nibble algorithms mentioned above. Our results, by comparison with Theorem 1 above, are:
Theorem 2. Let α = α(x) := max_{b≤y≤x} (m(y)/y), where b is the terminating point of the recurrence. Then, for sufficiently large positive integers k,

Pr[T(x) ≥ k·u(x)] ≤ α^{k/2−1}.

In the case that m(x)/x is non-decreasing, α(x) = m(x)/x and we get the following bounds to compare with those of Theorem 1:

Pr[T(x) ≥ k·u(x)] ≤ (m(x)/x)^{k/2−1}.

Our bounds are not quite as precise as Karp's, nor do they admit as fine a tradeoff as Karp's between the running time and the probability guarantee. However, in situations where they are applied, the results obtained are often comparable.
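To make the quantities concrete, the following sketch (our own illustration, not from the paper) computes u(x) by iterating m, and then checks the bound of Theorem 2 by simulation for the instance a(x) = 1 and H(x) uniform on [0, x], so that m(x) = x/2 and α = 1/2. The boundary value b and all function choices here are illustrative assumptions.

```python
import random

def u(x, a, m, b=1.0):
    """Least non-negative solution of tau(x) = a(x) + tau(m(x)):
    u(x) = sum of a(m^(i)(x)) over all i with m^(i)(x) > b."""
    total = 0.0
    while x > b:
        total += a(x)
        x = m(x)
    return total

def sample_T(x, b=1.0, rng=random):
    """One sample of T(x) = 1 + T(H(x)) with H(x) uniform on [0, x]."""
    steps = 0
    while x > b:
        steps += 1
        x = rng.uniform(0.0, x)
    return steps

rng = random.Random(42)
x, k, alpha = 1000.0, 4, 0.5
threshold = k * u(x, lambda y: 1.0, lambda y: y / 2)   # k * u(x) = 4 * 10
tail = sum(sample_T(x, rng=rng) >= threshold for _ in range(5000)) / 5000
print(tail <= alpha ** (k / 2 - 1))   # Theorem 2 bound is 0.5; prints True
```

With a(x) = x instead, the same routine returns u(1024) = 2046, i.e. roughly 2x, matching the deterministic recurrence for expected halving.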
2 Example Applications

In this section, we give some illustrations of how our theorems can be applied in a "cook-book" fashion to yield high-probability statements about the running time of randomised algorithms, or about the size and structure of randomly generated combinatorial structures. Example 2.1 is typical of the analysis of many randomised algorithms in which the problem size is reduced, in expectation, by a constant fraction at each iteration. Example 2.2 concerns a combinatorial structure generated by a random process which also has an expected constant-fraction decrease in problem size at each iteration. Example 2.3 is an instance where the expected decrease is not by a constant fraction. All these examples are also used to illustrate the technique in Karp's paper [1]. We give the probability bounds obtained from our analysis, for comparison with the bounds of Karp. In Example 2.4 we give an example where Karp's theorem does not apply, but ours does.
2.1 Maximal Independent Set

Luby [2] gives a randomised parallel algorithm for constructing a maximal independent set in a graph. The algorithm proceeds in stages: at each stage, the algorithm deletes some of the edges of the current graph, and it continues until all edges have been deleted. The work at each stage is (proportional to) the number of edges in the current graph. Luby shows that at each stage, the expected fraction of edges deleted is at least 1/8. Let T(G) and T'(G) denote, respectively, the number of iterations and the total amount of work executed by Luby's algorithm applied to a graph G. Then for any (sufficiently large) positive integer k, and for any graph G with m edges,

– Pr[T(G) > k·ln m / ln(8/7)] < (7/8)^{k/2−1}.
– Pr[T'(G) > 8km] < (7/8)^{k/2−1}.
2.2 Greedy Clique Finding

The following is a greedy algorithm to find a maximal clique in a graph. Starting with the empty set of vertices, iteratively select a random vertex v of the current graph, add it to the current clique, and delete all vertices not adjacent to v, until all vertices are deleted. Consider the behaviour of the algorithm on a random graph G_{n,p} on n vertices in which each edge is present with probability p. At a step when the vertex set of the current graph has size m, the expected number of vertices that do not get deleted is p(m − 1). Let T(n) denote the size of the clique obtained when the algorithm is applied to G_{n,p}, for any fixed p. Then T(n) = 1 + T(H(n)) for n ≥ 1, where E[H(n)] = p(n − 1) ≤ pn. Let T'(n) denote the number of adjacency comparisons of the algorithm; then T'(n) = n − 1 + T'(H(n)) (for the same H). Hence we get the bounds, for any sufficiently large integer k,

– Pr[T(n) > k·ln n / ln(1/p)] < p^{k/2−1}.
– Pr[T'(n) > kn/(1 − p)] < p^{k/2−1}.
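A direct simulation of this greedy process (a sketch of our own; the function name and lazy edge-drawing are illustrative choices) shows the clique size concentrating around log_{1/p} n, in line with the first bound:

```python
import random

def greedy_clique(n, p, seed=0):
    """Greedy clique on G(n,p): pick a random surviving vertex, add it to
    the clique, and keep only the vertices adjacent to it (edges are drawn
    lazily, each present with probability p).  Returns (size, comparisons)."""
    rng = random.Random(seed)
    alive = list(range(n))
    size = comparisons = 0
    while alive:
        v = rng.choice(alive)
        alive.remove(v)
        size += 1
        comparisons += len(alive)           # one adjacency test per survivor
        alive = [u for u in alive if rng.random() < p]
    return size, comparisons

size, work = greedy_clique(1000, 0.5, seed=1)
print(size, work)   # clique size is typically close to log2(1000), about 10
```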
2.3 Abstract Independence Systems

In [6], two randomised parallel algorithms are presented for finding a maximal independent set in an abstract independence system. If T_1(n) and T_2(n) are, respectively, the number of iterations required by the two algorithms when applied to an n-element independence system, the analysis in [6] shows the following recurrences hold:

T_i(n) = 1 + T_i(H_i(n)), i = 1, 2,

where the m_i(n) := E[H_i(n)], i = 1, 2, satisfy

m_1(n) ≤ n − c√n for a constant c > 0, and m_2(n) ≤ n − n/H_n,

where H_n := 1 + 1/2 + ⋯ + 1/n is the nth harmonic number. Applied to these recurrences, our bounds give, for a sufficiently large integer k:

Pr[T_1(n) > (2/c)·k·√n] < (1 − c/√n)^{k/2−1} ≤ exp(−c(k/2 − 1)n^{−1/2}),

and for any value of d > 1/2 and n, k sufficiently large,

Pr[T_2(n) > d·k·(ln n)^2] < (1 − 1/H_n)^{k/2−1} ≤ exp(−(k/2 − 1)/H_n).
2.4 Edge Colouring of Graphs

In [3], a randomised distributed edge-colouring algorithm is described, based on a probabilistic strategy called the Rödl Nibble. The algorithm proceeds in stages. At each stage, each vertex has available to it a palette of colours. Each vertex then chooses a small subset ("nibble") of incident edges to colour, and tentatively assigns them a colour chosen uniformly and independently at random from its current palette. The colour becomes final if it is admissible at the other endpoint and there are no other edges whose tentative colours conflict with it. The edges which are successfully coloured are then deleted, and the palettes are correspondingly updated. It is shown in [3] that the palette sizes (and hence the vertex degrees) obey the following decay law: if Δ_k denotes the (expected) palette size at stage k and Δ is the maximum degree of the input graph, then

Δ_{k+1} ≤ Δ_k·exp(−Δ_k/Δ).

Hence, for the number of rounds of the distributed protocol, we have a recurrence of the form T(n) = 1 + T(H(n)) with E[H(n)] ≤ e^{−cn}·n for a constant c > 0. In this example, the function m(x)/x = e^{−cx} (for some constant c > 0) is a decreasing function, and hence Theorem 1 is inapplicable. Applying our theorem, and stopping the recurrence when Δ_k = εΔ, as is needed in the algorithm in [3], we get the tail probability bound:

Pr[T(Δ) > (1/c)·k·(1/ε)·ln(1/ε)] < exp(−c(k/2 − 1)).
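The decay law can be iterated numerically. The sketch below (our own illustration; the stopping fraction eps is an assumed parameter) counts the stages needed for the palette bound to fall to εΔ, and shows that the count depends only on ε, not on Δ, since only the ratio d_k/Δ enters the recursion:

```python
import math

def nibble_stages(Delta, eps=0.1):
    """Iterate the decay law d_{k+1} = d_k * exp(-d_k / Delta)
    until d_k <= eps * Delta; return the number of stages."""
    d, stages = float(Delta), 0
    while d > eps * Delta:
        d *= math.exp(-d / Delta)
        stages += 1
    return stages

print(nibble_stages(100), nibble_stages(10**6))  # same count for any Delta
```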
3 Some Probabilistic Lemmas

Notation: In the rest of the paper we use the following notational convention: if A is a random variable, and B is a set of random variables, then Pr[A = a | exc(B)] is the conditional probability that A = a, given the values of all the other relevant random variables, with the exception of those in B. We use the following "Folklore Lemma" repeatedly:

Lemma 3. For a positive integer n, let X_1, …, X_n and Y_1, …, Y_n be discrete random variables such that the r.v.s X_i stochastically dominate the r.v.s Y_i, in the sense that for each i ∈ [n] and for each a ∈ R,

Pr[Y_i ≥ a | exc({X_i, Y_i})] ≤ Pr[X_i ≥ a | exc({X_i, Y_i})].

Let X := Σ_i X_i and Y := Σ_i Y_i. Then for each real a,

Pr[Y ≥ a] ≤ Pr[X ≥ a].

The usefulness of this lemma comes from the fact that we do not need any assumptions of independence. It can be used to derive tail bounds for the sum of weakly or negatively correlated random variables, by letting the Y_i be the correlated variables and defining independent variables X_i that stochastically dominate the Y_i. For instance, it can be used to derive the following result of Sanjay Jain, cited in [7]: let a_1, …, a_n be n random trials (not necessarily independent) such that the probability that trial a_i succeeds is bounded above by a constant p, regardless of the outcomes of the other trials. Then, if X is the r.v. representing the number of successes in these n trials and Y is the binomial variable with parameters (n, p), then Pr[X ≥ k] ≤ Pr[Y ≥ k] for 0 ≤ k ≤ n. It can also be used to derive the results of [8] giving Chernoff-Hoeffding bounds for the so-called λ-correlated variables. Lemma 3 is an instance of stochastic majorisation; a proof and various generalisations can be found in [4], especially § 17C. We give a proof here for completeness. We will use the following

Lemma 4. Let Z be a discrete random variable, and suppose the r.v. U stochastically dominates the r.v. V in that, for any a ∈ R and any z in the range of Z,

Pr[V ≥ a | Z = z] ≤ Pr[U ≥ a | Z = z].

Then, for any a ∈ R,
Pr[Z + V ≥ a] ≤ Pr[Z + U ≥ a].

Proof. We compute:

Pr[Z + V ≥ a] = Σ_z Pr[Z = z]·Pr[V ≥ a − z | Z = z]
             ≤ Σ_z Pr[Z = z]·Pr[U ≥ a − z | Z = z]
             = Pr[Z + U ≥ a].

Proof (of Lemma 3). Apply Lemma 4 with U := X_i, V := Y_i, and Z := Y_1 + ⋯ + Y_{i−1} + X_{i+1} + ⋯ + X_n, for each 1 ≤ i ≤ n. (The notation implies that when i = 1, Z = X_2 + ⋯ + X_n, and when i = n, Z = Y_1 + ⋯ + Y_{n−1}.) Then, under the assumptions on the variables X_i, Y_i, the conditions of the lemma are satisfied; in fact, we only require

Pr[Y_i ≥ a | Y_1 + ⋯ + Y_{i−1} + X_{i+1} + ⋯ + X_n = z] ≤ Pr[X_i ≥ a | Y_1 + ⋯ + Y_{i−1} + X_{i+1} + ⋯ + X_n = z]

for any a, z in the appropriate ranges, and we obtain

Pr[Y_1 + ⋯ + Y_i + X_{i+1} + ⋯ + X_n ≥ a] ≤ Pr[Y_1 + ⋯ + Y_{i−1} + X_i + ⋯ + X_n ≥ a]

for each 1 ≤ i ≤ n and for each a ∈ R. Taken together, these inequalities yield Lemma 3.

The following lemma, which gives Chernoff-like bounds for the sum of random variables with a geometric distribution, was proved in [5]. Before we extract the bounds that are actually useful to us in this paper, we give a direct simple proof of exact bounds on a somewhat more general version that may be useful in other applications.

Lemma 5. Let Z := (Z_1, …, Z_n) be a collection of independent random variables which are geometrically distributed in the following way: for each i, 1 ≤ i ≤ n, there exists a non-negative real z_i such that for any positive integer l, Pr[Z_i = l·z_i] = (1 − p)·p^{l−1}, for a real p, 0 < p < 1. Then, letting Z := Z_1 + ⋯ + Z_n and z := z_1 + ⋯ + z_n:

1. If the z_i are all equal, say z_i = z̄ for each i, then for any t ≥ 0,

Pr[Z ≥ t] = Σ_{k: k·z̄ ≥ t} (k−1 choose n−1)·(1 − p)^n·p^{k−n}.

2. If z_1 > z_i for i > 1, then for any t ≥ 0,

Pr[Z ≥ t] ≤ F(p, n, z_1, …, z_n)·p^{t/z_1 − 1},

where F(p, n, z_1, …, z_n) := (1 − p)^{n−1}·Π_{i>1} p^{−z_i/z_1}/(1 − p^{1−z_i/z_1}).

Proof. For the first part, note that Z/z̄ = l_1 + ⋯ + l_n is a sum of n independent geometric variables, so that Pr[Z = k·z̄] = (k−1 choose n−1)(1 − p)^n p^{k−n} is a negative binomial probability; summing over k·z̄ ≥ t gives the claim. For the second part, writing Z_i = l_i·z_i with each l_i ≥ 1, we have

Pr[Z ≥ t] = (1 − p)^n·p^{−n}·Σ_{l_1 z_1 + ⋯ + l_n z_n ≥ t} p^{l_1 + ⋯ + l_n},   (3)

and for the sum,

Σ_{l_1 z_1 + ⋯ + l_n z_n ≥ t} p^{l_1 + ⋯ + l_n}
  ≤ Σ_{l_n ≥ 1} ⋯ Σ_{l_2 ≥ 1} Σ_{l_1 ≥ (t − (l_2 z_2 + ⋯ + l_n z_n))/z_1} p^{l_1 + ⋯ + l_n}
  = Σ_{l_n ≥ 1} ⋯ Σ_{l_2 ≥ 1} Σ_{m ≥ 0} p^{m + (t − (l_2 z_2 + ⋯ + l_n z_n))/z_1 + l_2 + ⋯ + l_n}
  = p^{t/z_1}·(Π_{i=2}^{n} Σ_{l_i ≥ 1} p^{(1 − z_i/z_1) l_i})·Σ_{m ≥ 0} p^m
  = p^{t/z_1}·(Π_{i=2}^{n} p^{1 − z_i/z_1}/(1 − p^{1 − z_i/z_1}))·1/(1 − p).

Substituting into equation (3) and simplifying gives the second part.

The form actually useful to us here is obtained by substituting t := l·z into the second part and noting that, for sufficiently large positive integers l, p^{(l−1) z_i/z_1}/(1 − p^{1 − z_i/z_1}) ≤ 1.

Corollary 6. For the variables Z, Z_i with z_1 > z_i for i > 1 as above, and for sufficiently large positive integers l,

Pr[Z ≥ l·z] ≤ p^{l−1}.
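A quick empirical check of the geometric-sum tail (our own sketch): sample sums of scaled geometric variables and compare the tail at t = l·z with p^{l−1}. Strictly, Corollary 6 assumes z_1 > z_i; we use the equal-z_i case for simplicity, which obeys the same bound for large l.

```python
import math
import random

def scaled_geometric(z_i, p, rng):
    """Sample on support {z_i, 2*z_i, ...} with Pr[l * z_i] = (1-p)*p**(l-1),
    by inverting the geometric CDF on a uniform draw in (0, 1]."""
    k = 1 + int(math.log(1.0 - rng.random()) / math.log(p))
    return k * z_i

rng = random.Random(7)
n, p, l, trials = 3, 0.5, 4, 20000
z = n * 1.0                                    # z_i = 1 for each i
tail = sum(sum(scaled_geometric(1.0, p, rng) for _ in range(n)) >= l * z
           for _ in range(trials)) / trials
print(tail, p ** (l - 1))   # the empirical tail sits comfortably below 0.125
```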
4 Preliminaries

Given a function m: R → R such that m(x) ≤ x, define an auxiliary function m̂ as follows. First, define α as in Theorem 2:

α(x) := max_{b≤y≤x} (m(y)/y),

and then set

m̂(x) := m(x)/√α(x).

The function m̂ interpolates between the values x and m(x) in such a way that both of the following properties hold: (1) applied twice, m̂ drops below m, and (2) there is a finite probability for the event "X_{i+1} ≤ m̂(X_i)", via Markov's inequality. The following propositions establish these claims.
Proposition 7. For all x ≥ 0, m̂(m̂(x)) ≤ m(x).

Proof. We compute:

m̂(m̂(x)) = m(m̂(x))/√α(m̂(x))
         ≤ √α(m̂(x))·m̂(x),   as m(y) ≤ α(y)·y
         = √α(m̂(x))·m(x)/√α(x),   by definition of m̂
         ≤ m(x),   as α(z) is non-decreasing.
Proposition 8. For all x > 0, let H = H(x) be a random variable with mean at most m(x). Then,

Pr[H(x) > m̂(x)] ≤ √α(x).

Proof. This is simply an application of Markov's inequality:

Pr[H(x) > m̂(x)] ≤ m(x)/m̂(x) = √α(x).
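The two propositions are easy to check numerically. This sketch (our own, with a grid approximation of the maximum in α) does so for the illustrative choice m(x) = x/2, for which m̂(m̂(x)) = m(x) holds with equality:

```python
import math

def alpha(x, m, b=1.0, grid=1000):
    """alpha(x) = max over b <= y <= x of m(y)/y, approximated on a grid."""
    ys = [b + i * (x - b) / grid for i in range(grid + 1)]
    return max(m(y) / y for y in ys)

def m_hat(x, m, b=1.0):
    """Auxiliary function m_hat(x) = m(x) / sqrt(alpha(x))."""
    return m(x) / math.sqrt(alpha(x, m, b))

m = lambda y: y / 2                  # alpha = 1/2 everywhere
x = 16.0
print(m_hat(m_hat(x, m), m), m(x))   # both are ~8: Proposition 7 is tight here
```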
5 The Generic Bound

In this section we obtain a generic bound of the form Pr[T(x) ≥ A] ≤ B. The idea of the proof is as follows. At stage i of the recurrence, let X_i denote the current value of the random variable. The work done at stage i is a(X_i). A random experiment determines the value of X_{i+1}, given the value of X_i. Call the experiment a success if X_{i+1} ≤ m̂(X_i), and denote the probability of success by p. Previewing some notation from below, define y_0 := x and y_{i+1} := m̂(y_i), i ≥ 0. Divide the process into phases, where phase i consists of those stages j at which the random variable X_j lies between y_i and y_{i+1}, i ≥ 0. If phase i lasts k stages, we have that the work done in phase i is S_i := Σ_{y_{i+1} < X_j ≤ y_i} a(X_j) ≤ k·a(y_i). If the experiments were independent, we would have Pr[k ≥ l] ≤ (1 − p)^{l−1}, hence also Pr[S_i ≥ l·a(y_i)] ≤ (1 − p)^{l−1}. Once again assuming that the experiments in different phases are independent, we can bound the probability that the sum of the S_i exceeds any given value by employing Corollary 6, which gives probability bounds for the sum of geometrically distributed random variables, to get the required tail probability bounds. In fact, the experiments are not independent. However, below, we show that the assumption of independence is unnecessary, and that one can obtain the same conclusions by repeatedly invoking Lemma 3.

Define the random variables X_0 = x and X_i = H(X_{i−1}), i > 0. Let x_0 = x, x_i = m(x_{i−1}), i > 0, and y_0 = x, y_i = m̂(y_{i−1}), i > 0, and define s(x) = max{j : m^{(j)}(x) > b} and r(x) = max{j : m̂^{(j)}(x) > b}, where b is the boundary value beyond which the recurrence stops. Observe that u(x) = Σ_{i=0}^{s(x)} a(x_i). Let the random variable s denote the largest index such that X_s > b. Then T(x) = Σ_{i=0}^{s} a(X_i), and we wish to upper bound Pr[Σ_{i=0}^{s} a(X_i) ≥ A]. Define the random variables Y_j as follows: Y_j = 1 if X_{j+1} ≤ m̂(X_j) and Y_j = 0 otherwise, and define U_i = a(y_j) if j ≤ Σ_{k=0}^{i} Y_k ≤ j + 1, i.e. if the jth success in the sequence occurred before the ith trial but the (j+1)th occurs on or after the ith trial. Observe that for each i, U_i ≥ a(X_i), since if j successes occur before i trials, X_i ≤ y_j. Hence

Pr[T(x) ≥ A] = Pr[Σ_{i=0}^{s} a(X_i) ≥ A] ≤ Pr[Σ_{i=0}^{s} U_i ≥ A].   (4)

We will later find a value p such that Pr[Y_i = 1 | exc({Y_i})] ≥ p (the notation exc({Y_i}) is as defined in Section 3). For the moment, suppose that we have such a value p. Define independent random variables Z_i with Pr[Z_i = 1] = p, and define V_i = a(y_j) if j ≤ Σ_{k=0}^{i} Z_k ≤ j + 1.

Claim:

Pr[Σ_{i=0}^{s} U_i ≥ A] ≤ Pr[Σ_{i=0}^{s} V_i ≥ A].   (5)

Proof. Define A_i = Σ_{j=0}^{i} Y_j and B_i = Σ_{j=0}^{i} Z_j. Since Pr[Y_j = 1 | exc({Y_j})] ≥ p and Pr[Z_j = 1] = p, we may apply Lemma 3 to conclude that Pr[A_i ≥ a] ≥ Pr[B_i ≥ a], for any a. Now, Pr[U_i ≥ a(y_k)] = Pr[A_i ≤ k] ≤ Pr[B_i ≤ k] = Pr[V_i ≥ a(y_k)], for any k. Since this holds for each i, we may apply Lemma 3 again to obtain the claim.

Define, for i = 0, 1, …, r(x), S_i = Σ_j a(y_i), where the sum is taken over all indices j such that V_j = a(y_i). Note that the random variables S_i are independent, and

Σ_{i=0}^{s} V_i ≤ Σ_{j=0}^{r(x)} S_j.   (6)

Then, Pr[S_i = c·a(y_i)] = p(1 − p)^{c−1}, since this is just the probability that the next success occurs after c trials. Writing v(x) = Σ_{j=0}^{r(x)} a(y_j), and applying Corollary 6, we obtain

Pr[Σ_{i=0}^{r(x)} S_i ≥ l·v(x)] ≤ (1 − p)^{l−1}   (7)

for sufficiently large positive integers l. Since m̂(m̂(x)) ≤ m(x), it follows that y_{2i+1} ≤ y_{2i} ≤ x_i for each i, and that r(x) ≤ 2s(x). Then

v(x) ≤ Σ_{j=0}^{⌈r(x)/2⌉} [a(y_{2j}) + a(y_{2j+1})] ≤ Σ_{j=0}^{s(x)} 2a(x_j) ≤ 2u(x).   (8)

Hence, combining inequalities (4)-(8), we obtain Pr[T(x) ≥ 2l·u(x)] ≤ (1 − p)^{l−1}, from which it follows that

Pr[T(x) ≥ k·u(x)] ≤ (1 − p)^{k/2−1}.   (9)
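The key step (5) replaces the dependent success indicators by independent ones via stochastic domination. The following sketch (our own illustration; the uniform H and all parameters are assumptions) makes this concrete: with H(x) uniform on [0, x] and m(x) = x/2, Proposition 8 guarantees each trial succeeds with probability at least p = 1 − √α = 1 − 1/√2, so the number of successes A_i in the first i trials should satisfy Pr[A_i ≤ k] ≤ Pr[B_i ≤ k] for B_i binomial with parameters (i, p).

```python
import random
from math import comb

def successes(x, steps, rng):
    """Run X_{i+1} = Uniform(0, X_i) for `steps` trials; count the successes
    X_{i+1} <= m_hat(X_i), where m_hat(y) = (y/2)/sqrt(1/2) = y/sqrt(2)."""
    count = 0
    for _ in range(steps):
        nxt = rng.uniform(0.0, x)
        if nxt <= x / 2 ** 0.5:
            count += 1
        x = nxt
    return count

rng = random.Random(3)
p = 1 - 1 / 2 ** 0.5                      # guaranteed success probability
n, steps, k = 4000, 10, 2
emp = sum(successes(1e12, steps, rng) <= k for _ in range(n)) / n
binom = sum(comb(steps, j) * p**j * (1 - p)**(steps - j) for j in range(k + 1))
print(emp <= binom)                       # Pr[A <= k] <= Pr[B <= k]; prints True
```

In this example the actual per-trial success probability is 1/√2, well above the guaranteed p, so the domination is visible with a wide margin.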
5.1 The proofs

We will now obtain the value p referred to in Section 5, such that Pr[Y_i = 1 | exc({Y_i})] ≥ p. Recall that X_i = H(X_{i−1}), so that E[X_i] ≤ m(X_{i−1}), and Y_i = 1 iff X_i ≤ m̂(X_{i−1}), so that

Pr[Y_i = 1 | exc({Y_i})] = Pr[X_i ≤ m̂(X_{i−1}) | exc({X_i})].

By Proposition 8 we have

Pr[X_i ≤ m̂(X_{i−1}) | exc({X_i})] ≥ 1 − √α(x).

Then, substituting into (9) gives

Theorem 2. Let α = α(x) := max_{b≤y≤x} (m(y)/y), where b is the terminating point of the recurrence (1). Then, for sufficiently large k,

Pr[T(x) ≥ k·u(x)] ≤ α^{k/2−1}.

Remark 1: The theorem (and also Karp's theorem) is essentially tight (up to constants) under the weak hypothesis on only the expectation, as the following example demonstrates. Let a(x) := x, and suppose the r.v. H has the "two-point" distribution Pr[H(X) = 0] = 1/2 = Pr[H(X) = X]. So E[H(X) | X] = X/2. One can easily compute that Pr[T(x) = lx] = 2^{−l} for any positive integer l, hence Pr[T(x) ≥ lx] = 2^{−(l−1)}. Our theorem gives Pr[T(x) ≥ lx] = Pr[T(x) ≥ (l/2)·2x] ≤ 2^{−(l/4−1)}. Of course, with more information on the distribution, one can improve the probability bound, as the trivial example Pr[H(x) = m(x)] = 1 indicates.

Remark 2: Normally, one would like to see a large deviation result of the form Pr[T(x) > E[T(x)] + t] < ε. So the natural question is: how is the solution to the deterministic equation (2) related to E[T(x)]? We can give the following partial answer:
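The two-point example in Remark 1 is easy to reproduce empirically (a sketch of our own): T(x)/x is geometric with parameter 1/2, so Pr[T(x) ≥ lx] = 2^{−(l−1)}.

```python
import random

def two_point_T(x, rng):
    """T(x) = x + T(H(x)), where H(x) is 0 or x, each with probability 1/2."""
    total = 0
    while x > 0:
        total += x
        if rng.random() < 0.5:    # H(x) = 0: the recurrence stops
            x = 0
    return total

rng = random.Random(0)
samples = [two_point_T(1, rng) for _ in range(20000)]
for l in (1, 2, 3):
    frac = sum(t >= l for t in samples) / len(samples)
    print(l, frac)    # frac is close to 2**-(l-1): exactly 1.0, then ~0.5, ~0.25
```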
Proposition 9. Let a and m both be concave functions. Then E[T(x)] ≤ u(x).

Proof. First observe that the stochastic process described by the probabilistic recurrence (1) determines a sequence of non-increasing random variables x =: X_0, X_1, …, X_i, … such that

E[X_{i+1} | X_i] ≤ m(X_i) for each i ≥ 0.   (10)

Hence we have

E[X_{i+1}] = E[E[X_{i+1} | X_i]]
           ≤ E[m(X_i)],   using (10)
           ≤ m(E[X_i]),   since m is concave.

By induction then,

E[X_i] ≤ m^{(i)}(x) for each i ≥ 0.   (11)

Finally then, since T(x) = Σ_{i≥0} a(X_i), we have

E[T(x)] = Σ_{i≥0} E[a(X_i)]
        ≤ Σ_{i≥0} a(E[X_i]),   since a is concave
        ≤ Σ_{i≥0} a(m^{(i)}(x)),   using (11)
        = u(x).

Hence in this situation (a, m concave), Theorem 1 yields the large deviation bounds in the usual form. However, it would be nice to replace these conditions on a, m by more natural ones, or perhaps to remove them altogether. We note that Proposition 9 was independently observed by Prabhakar Ragde.
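A Monte Carlo check of Proposition 9 (our own sketch, with illustrative choices): with the concave functions a(x) = √x and m(x) = x/2, and H(x) uniform on [0, x] so that E[H(x)] = m(x), the sample mean of T(x) stays below u(x).

```python
import math
import random

def sample_T(x, b=1.0, rng=random):
    """One sample of T(x) = sqrt(x) + T(H(x)), with H(x) uniform on [0, x]."""
    total = 0.0
    while x > b:
        total += math.sqrt(x)
        x = rng.uniform(0.0, x)
    return total

def u(x, b=1.0):
    """u(x) = sum of sqrt(m^(i)(x)) over i with m^(i)(x) > b, for m(x) = x/2."""
    total = 0.0
    while x > b:
        total += math.sqrt(x)
        x /= 2
    return total

rng = random.Random(5)
x = 256.0
mean_T = sum(sample_T(x, rng=rng) for _ in range(20000)) / 20000
print(mean_T <= u(x))   # Proposition 9: E[T(x)] <= u(x); prints True
```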
6 Conclusion

We have shown that by applying standard tools from probability theory, namely Markov's inequality, stochastic dominance and a Chernoff bound for unbounded variables, we can obtain tail probability bounds on the performance of randomised algorithms comparable to those derived by Karp. In many situations, our results differ from those of Karp only by constant factors. We have made no attempt to optimise these constant factors. It is likely that the same techniques can be applied to probabilistic recurrence relations describing algorithms that generate more than one subproblem, and to versions of the recurrences describing the performance of parallel algorithms. Work on this is in progress.
7 Acknowledgement We thank Torben Hagerup for pointing us to the work in [5].
References

1. Richard M. Karp, "Probabilistic Recurrence Relations", Proc. 23rd ACM Symp. on the Theory of Computing, pp. 190-197, 1991.
2. Mike Luby, "A Simple Parallel Algorithm for the Maximal Independent Set Problem", SIAM J. Comput., Vol. 15, pp. 1036-1053, 1986.
3. Devdatt Dubhashi and Alessandro Panconesi, "Near Optimal Distributed Edge Colouring", unpublished manuscript, 1993.
4. A.W. Marshall and I. Olkin, Inequalities: Theory of Majorization and its Applications, Academic Press, New York, 1979.
5. Martin Dietzfelbinger and Friedhelm Meyer auf der Heide, "A New Universal Class of Hash Functions and Dynamic Hashing in Real Time", Proc. 17th ICALP, LNCS Vol. 443, pp. 6-19, 1990.
6. R.M. Karp, E. Upfal and A. Wigderson, "The Complexity of Parallel Search", J. Comput. System Sci., Vol. 36(2), pp. 225-253, 1988.
7. R. Raman, "The Power of Collision: Randomized Parallel Algorithms for Chaining and Integer Sorting", Proc. 10th Annual FST&TCS Conference, LNCS Vol. 472, Springer-Verlag, Berlin, pp. 161-175, 1990.
8. A. Panconesi and A. Srinivasan, "Fast Randomized Algorithms for Distributed Edge Coloring", Proc. ACM Symposium on Principles of Distributed Computing, pp. 251-262, 1992.
This article was processed using the LATEX macro package with LLNCS style