
A New Algorithmic Approach to the General Lovász Local Lemma with Applications to Scheduling and Satisfiability Problems* [Extended Abstract]†

Christian Scheideler, Dept. of Mathematics & Computer Science and Heinz Nixdorf Institute, Paderborn University, 33095 Paderborn, Germany, chrsch@uni-paderborn.de

Artur Czumaj‡, Dept. of Computer and Information Science, New Jersey Institute of Technology, University Heights, Newark, NJ 07102-1982, USA, czumaj@cis.njit.edu


ABSTRACT
The Lovász Local Lemma (LLL) is a powerful tool that is increasingly playing a valuable role in computer science. It has led to solutions for numerous problems in many different areas, ranging from problems in pure combinatorics to problems in routing, scheduling, and approximation theory. However, since the original lemma is non-constructive, many of these solutions were at first purely existential. A breakthrough result by Beck and its generalizations have led to polynomial-time algorithms for many of these problems. However, these methods can only be applied to a simple, symmetric form of the LLL. In this paper we provide a novel approach to designing polynomial-time algorithms for problems that require the LLL in its general form. We apply our techniques to find good approximate solutions to a large class of NP-hard problems called minimax integer programs (MIPs). Our method finds approximate solutions that are, especially for problems of non-uniform character, significantly better than those obtained by all previously known methods. To demonstrate the applicability of our approach, we apply it to transform important results in the area of job shop scheduling that have so far been only existential (because the general LLL was used) into algorithms that find the predicted solutions (with only a small loss) in polynomial time. Furthermore, we demonstrate how our results can be used to solve satisfiability problems.

1. INTRODUCTION

The probabilistic method is used to prove the existence of objects with desirable properties by showing that a randomly chosen object from an appropriate probability distribution has the desired properties with positive probability. In most applications, this probability is not only positive but actually high, and it frequently tends to 1 as the parameters of the problem tend to infinity. In such cases, the proof usually supplies an efficient randomized algorithm for producing a structure of the desired type. There are, however, certain examples where one can prove the existence of the required combinatorial structure by probabilistic arguments that deal with rare events: events that hold with a positive probability that is exponentially small in the size of the input. This happens often when using the Lovász Local Lemma (LLL) [10]. We will use this lemma in its general form.

LEMMA 1 (LOVÁSZ LOCAL LEMMA). Let A_1, ..., A_n be a set of "bad" events in an arbitrary probability space and let G be a dependency graph for the events A_1, ..., A_n. (That is, A_i is independent of any subset of events A_j with (i, j) ∉ G.) Assume there exist x_i ∈ [0, 1) for all 1 ≤ i ≤ n with

‡Work partly done while the author was with the Heinz Nixdorf Institute and the Department of Mathematics and Computer Science at Paderborn University, Germany.
*Research partially supported by DFG-Sonderforschungsbereich 376 "Massive Parallelität: Algorithmen, Entwurfsmethoden, Anwendungen."
†See www.upb.de/cs/chrsch.html for a full version of this paper.

Pr[A_i] ≤ x_i · ∏_{(i,j)∈G} (1 − x_j)

for all i. Then with positive probability no bad event occurs.

In its symmetric form, the LLL reads as follows.

LEMMA 2 (SYMMETRIC LLL). Let A_1, ..., A_n be a set of "bad" events with Pr[A_i] ≤ p for all i. If each A_i is mutually independent of all but at most d of the other events A_j and e·p·(d + 1) ≤ 1, then with positive probability no bad event occurs.
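To make the symmetric condition concrete, the following sketch (our own illustration, not from the paper) treats each clause of a k-CNF formula as the bad event "clause unsatisfied by a uniform random assignment", so p = 2^{−k}, and checks e·p·(d + 1) ≤ 1:

```python
import math

def symmetric_lll_applies(clauses):
    """Check e*p*(d+1) <= 1 for a CNF formula given as sets of literals.

    A literal is an int: i stands for x_i, -i for its negation. The bad
    event for a clause of size k has probability p = 2^(-k) under a
    uniform random assignment; two events depend on each other iff
    their clauses share a variable.
    """
    k = min(len(c) for c in clauses)
    p = 2.0 ** (-k)
    d = 0  # max number of other clauses sharing a variable
    for i, c in enumerate(clauses):
        vars_c = {abs(l) for l in c}
        deg = sum(1 for j, c2 in enumerate(clauses)
                  if j != i and vars_c & {abs(l) for l in c2})
        d = max(d, deg)
    return math.e * p * (d + 1) <= 1

# A sparse 3-CNF: every clause meets at most one other clause, so the
# symmetric LLL already guarantees a satisfying assignment exists.
print(symmetric_lll_applies([{1, 2, 3}, {4, 5, 6}, {7, 8, -1}]))  # True
```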


Many applications of the LLL can be found in the literature (see, e.g., [2; 3; 4; 8; 10; 11; 12; 17; 19; 20; 21; 22; 24; 26; 29; 30]). Turning proofs that use the LLL into efficient algorithms, even randomized ones, has proved difficult for many of these


applications. In a breakthrough paper [7], Beck presented a method for converting some applications of the LLL into polynomial-time algorithms (with some sacrifices in the constants of the original application). Alon [1] provided a parallel variant of the algorithm and simplified the arguments used. His method was further generalized by Molloy and Reed [23] to yield efficient algorithms for a number of applications of the symmetric form of the LLL. There have been only very few cases in which polynomial-time algorithms for applications of the general LLL have been found without requiring a reduction to the symmetric LLL. Molloy and Reed [23] found methods for the problems of β-frugal coloring and acyclic edge coloring that could possibly be applied to problems that require the general LLL, but as the authors pointed out, they may require proving some (possibly difficult) concentration-like properties for each problem under consideration. Recently, the authors [9] were able to design and analyze a polynomial-time algorithm for the problem of 2-coloring non-uniform hypergraphs and related coloring problems. In Section 2 of this paper, we will present a constructive form of the general LLL that cannot be proved (without significant modifications) by the method in [9]. We will demonstrate in Section 3 how to use this result to construct efficient approximation algorithms for so-called minimax integer programs (MIPs). For every k ∈ ℕ, let [k] denote the set {1, ..., k}.

Definition 1. A MIP has variables {x_{i,j} : i ∈ [n], j ∈ [ℓ_i]}, for some integers ℓ_i. Let N = Σ_{i∈[n]} ℓ_i and let x denote the N-dimensional vector of the variables x_{i,j}. A MIP seeks to minimize a real y subject to:

by Feige and Scheideler [12]. Their solution (which heavily uses the general LLL) allowed them to prove upper bounds on the makespan of job shop schedules that significantly improved the previously best results (see Section 4.1). Solving these problems with the techniques given in [21; 22] gives an approximation ratio as large as Θ(log n / log log n) (where n is the problem size), whereas an approximation ratio partly as low as 1 + o(1) is required for the proofs in [12] to be constructivized. The method presented in this paper achieves this goal.

1.1 New results

Our main technical contribution is a novel approach to designing polynomial-time algorithms for problems that require the general LLL (see Section 2). We will apply this approach to find good approximation algorithms for MIPs. Consider any MIP. Our strategy is to start with an optimal solution {x*_{i,j} : i ∈ [n], j ∈ [ℓ_i]} to the LP relaxation of the MIP (i.e., the integrality constraints in Definition 1 (3) are removed). The resulting LP optimum y* is clearly a lower bound for y, the optimum of the (integral form of the) MIP. We will present a randomized rounding algorithm that exploits the dependencies among the rows of the matrix A to find a good approximate solution for y. These dependencies are defined as follows.

Definition 2. Given a MIP instance I as defined above, let the dependency graph G_I of I consist of the node set [m] and an edge set that contains the edge (r, s) if and only if there are i, j, j' so that a_{r,(i,j)} > 0 and a_{s,(i,j')} > 0, and r ≠ s or j ≠ j'. Note that G_I may contain self-loops. This simplifies our proofs; however, it would also be possible to avoid them.

Based on our LLL techniques, we will prove the following main result.
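As a concrete reading of Definition 2, the sketch below builds the edge set of G_I from a row-wise sparse representation of A (the data layout is our own choice, not the paper's):

```python
def dependency_graph(A, ell):
    """Edge set of G_I (Definition 2), self-loops included.

    A[r] maps a column index (i, j) to the coefficient a_{r,(i,j)};
    ell[i] is the number of values j available to variable group i.
    Rows r and s are adjacent iff there are i, j, j' with
    a_{r,(i,j)} > 0 and a_{s,(i,j')} > 0, and r != s or j != j'.
    """
    m = len(A)
    return {(r, s)
            for r in range(m)
            for s in range(m)
            if any(A[r].get((i, j), 0) > 0 and A[s].get((i, jp), 0) > 0
                   and (r != s or j != jp)
                   for i in range(len(ell))
                   for j in range(ell[i])
                   for jp in range(ell[i]))}

# Two rows touching the same variable group become adjacent; no row here
# has two positive entries in one group, so no self-loops arise.
A = [{(0, 0): 1.0}, {(0, 1): 0.5}]
print(dependency_graph(A, ell=[2]))  # {(0, 1), (1, 0)}
```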

1. a system of linear inequalities Ax ≤ y·1, where A ∈ [0,1]^{m×N} and 1 is the all-one vector;
2. for all i ∈ [n], Σ_{j∈[ℓ_i]} x_{i,j} = 1; and
3. for all i and j, x_{i,j} ∈ {0, 1}.

1. with probability at least 1 − 1/m^α for any constant α > 0, the number of trials covered by any 1-component after Step 2 is at most O((α · ln m)^{1/ξ}), and


• (A, B) ∈ E_2 for the basic events A, B ∈ B_T ∪ D_T if and only if the trial sets of A and B overlap, A and B do not have core events in a single 1-component, and one of the following two cases holds:

2. for any 2-component C after Step 2 of size O(ln^{1/ξ} m), with probability at least 1 − 1/ln^α m for any constant α > 0, the number of trials covered by any 2-component in C after applying Step 2 to C is at most O(((α + 1/c) · ln ln m)^{1/ξ}).

1. A ∈ B_T and B ∈ D_T, and A is the event of smallest index among the basic events that had a core event in D before B was discovered to be dangerous, or
2. A ∈ D_T and B ∈ B_T, and B is the event of smallest index in its 1-component, among those intersecting A, that was added to D due to A.

In order to prove the lemma, we intend to bound the expected number of possibilities of choosing sets of core events that are able to establish a 2-component. We will do this by counting all possible sets of basic events that could represent the core events of a 2-component. There are two main problems that have to be solved for the counting:

The edge set E_1 is called the set of 1-component edges, and the edge set E_2 is called the set of 2-component edges.

1. First of all, the counting has to provide an ordering for these basic events. The ordering must uniquely determine how the 2-component is constructed by the algorithm.
2. A unique way of establishing a 2-component still does not uniquely determine the core events, since there may be a core event in this 2-component whose basic event has trials that are also covered by other 2-components.

Due to the property that every basic event can have only one core event (see Remark 1 (3)), 1- and 2-component witnesses are always guaranteed to be trees. Obviously, every 2-component has a unique 2-component witness. On the other hand, every 2-component witness T implies a 2-component C that is unique concerning

These two problems are the main reasons why the analysis used in [9] does not apply here. There, the problem under consideration did not require an ordering of the basic events, and a basic event corresponding to a core event could not have trials belonging to two different 2-components. This made the proof significantly easier. In order to solve the problems above, we introduce the following structures.

1. its decomposition into 1-components,
2. the order in which events are added to each of its 1-components in Build_1-Component,
3. the order in which its 1-components are constructed, and
4. the round in which each 1-component is added to C in Build_2-Component.

The 1-component witness

Item (1) follows from the fact that the 1-component witnesses in T are separated via 2-component edges. Furthermore, item (2) holds because Build_1-Component ensures that the order in which new events are added to a 1-component in one round is determined by their indices. Item (3) is true since Step 2.1 ensures that the order in which the 1-components are constructed is determined by the indices of their initial events. Finally, item (4) holds because Step 2.2 ensures that the initial 1-component of C is represented by the 1-component witness in T with the initial event of minimum index, and the round in which every other 1-component C' is added to C is determined by the number of 1-component witnesses that have to be passed in T from the initial 1-component witness to the 1-component witness representing C'. However, a 2-component witness may not result in a 2-component with uniquely determined trial sets for the core events. The reason is that a basic event can be a core event in one 2-component (say, C) and overlap with the trials of another 2-component (say, C'). This can happen if C' is constructed before C. Hence, in this case, a single 2-component witness is not able to uniquely specify the trials covered by the core events of a 2-component. Therefore, we also introduce 3-component witnesses.

Assume we are given a 1-component C with a set of core events B_C = ∪_{i=0}^{d} E_i, where E_0 consists of the initial core event of C and each E_i with i ≥ 1 represents the set of core events added to C in round i of Build_1-Component. A graph T = (B_T, E) is called a 1-component witness of C if T is a directed tree with the property that
• the node set B_T of T is the set consisting of the basic events of all core events in B_C, and
• (A, B) ∈ E for the basic events A and B if and only if
  – there is an i ≥ 0 such that, for the core events A' and B' of A and B, A' ∈ E_i and B' ∈ E_{i+1}, and
  – A is the event of smallest index with a core event in E_i that overlaps with B.

The 2-component witness
Furthermore, assume we are given a 2-component D consisting of a set of 1-components S_D = ∪_{i=0}^{d} S_i, where S_0 consists of the initial 1-component of D and each S_i with i ≥ 1 represents the set of 1-components added to D in round i of Build_2-Component. A graph T = (B_T, D_T, E_1, E_2) is called a 2-component witness of D if T is a tree of directed edges with the property that
• the node set B_T contains the basic events of all core events in D,
• the node set D_T of T contains the basic events of all dangerous events considered for D (see (*) of Build_2-Component), which may include some of the basic events in B_T,
• (A, B) ∈ E_1 for the basic events A, B ∈ B_T if and only if (A, B) is an edge of the 1-component witness of a 1-component in D, and

The 3-component witness
Given a 2-component D, the 3-component witness of D is a directed tree T = (B_T, D_T, E_1, E_2, E_3) that is iteratively constructed as follows: Initially, T is equal to the 2-component witness of D. As long as there is some 2-component C not already in T that


is in the neighborhood of an event in B_T, we add a 3-component edge (A, B) to E_3 for an arbitrary pair of intersecting basic events A in T and B having a core event in C, and we add the 2-component witness of C to T. Given a 3-component witness T, let V_T denote the set of all trials covered by the basic events in B_T. 3-component witnesses have the following important property.

In order to count 3-component witnesses T, we start with the basic event that represents the initial event of the initial 1-component of T. Given any set of basic events, we count the possibilities of forming 1-, 2-, and 3-component edges from the basic events to neighboring basic events. That is, we count 3-component witnesses level-wise. Given a set of basic events forming level i, we call the set of basic events forming level i + 1 its witness extension. Recall that the trial sets covered by the core events of a 2-component are disjoint. Therefore, the probability of a basic event to be true restricted to the trial set of its core event is independent of the other events. This allows us to bound the expected number of witness extensions of E of order w by at most e^{−w/4} · e^{w/8}, using the following two vital claims that follow from the conditions of Theorem 2.

LEMMA 5. Every 3-component witness uniquely determines the 2-component of its initial 2-component witness.

PROOF. As noted above, a 2-component witness already completely specifies a 2-component up to the trial sets covered by its core events. Since a 3-component witness provides a complete time ordering in which events are added to the 2-components it is witnessing,
• each time a core event is added to a 1-component, all trials of its basic event are taken that are not already covered by other 1-components, and
• there are no more 2-components that overlap with events in T,

CLAIM 1. For every event A and every k > 0 it holds that

|{B ∈ N(A) : λ_B ∈ {1, ..., k}}| ≤ e^{5·k^ξ·λ_A}/4.

CLAIM 2. Every event A fulfills λ_A ≥ (1/6)^{1/ξ}.

It can then be shown via standard calculations that the expected number of 3-component witnesses T of w-order w is at most e^{−w/48}. On the other hand, we know that

a 3-component witness ensures the unique determination of the trials covered by the core events of the 2-component corresponding to its initial 2-component witness (and all other witnessed 2-components). □

w_T = Σ_{A∈B_T∪D_T} λ_A ≥ Σ_{A∈B_T} λ_A = Σ_{A∈B_T} |T_A|^ξ.

This implies the following result.



Since Σ_{A∈B_T} |T_A|^ξ ≥ (Σ_{A∈B_T} |T_A|)^ξ and Σ_{A∈B_T} |T_A| ≥ |V_T|/e, it follows that |V_T| ≤ e·w_T^{1/ξ}. Together with Lemma 6, these facts can be used to prove Lemma 4.

COROLLARY 2. For any 2-component C constructed by the algorithm, there is a 3-component witness that uniquely determines all core events in C.


Corollary 2 implies that if there is no 3-component witness including a given set of basic events B, then B cannot form the core of a 2-component. This significantly helps to bound the size of 2-components, since instead of counting combinatorial structures based on core events, we are allowed to count combinatorial structures based on basic events, which is much easier. Before we do this, we introduce some notation. For every basic event A, let λ_A = |T_A|^ξ be called the order of A and any of its reduced events. Recall that the probability of an event A or any of its reduced events to be true is at most p_A. Since the w-order of a 3-component witness T is at least λ_{B_T}, which is at least the sum of the orders of the core events of the 2-component represented by its initial 2-component witness, it holds:

3. THE MIP ALGORITHM

In this section we present an algorithm that fulfills Theorem 1. First we give a high-level description of the algorithm. Given an arbitrary MIP, we start by finding an optimal solution {x*_{i,j} : i ∈ [n], j ∈ [ℓ_i]} to the LP relaxation of the MIP (i.e., the integrality constraints in Definition 1 (3) are removed). The resulting LP optimum y* is clearly a lower bound for y, the optimum of the (integral form of the) MIP. Afterwards, our algorithm works in three steps. Its structure is similar to the algorithm in [21], but the techniques used here are completely different. Step 1 rounds the x*_{i,j}'s to multiples of some number in Ω(1/ln m). Step 2, which contains the main novel contribution of our paper, rounds the x_{i,j}'s until all of them are either 0 or above some constant γ < 1. Step 3 finally rounds the x_{i,j}'s to values in {0, 1}.
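In outline, the control flow we extract from this description is (a schematic sketch; the helper names are ours and stand for the procedures detailed below):

```python
def approximate_mip(A, ell, eps):
    """Schematic three-step rounding for a MIP, following the text above."""
    x_star, y_star = solve_lp_relaxation(A, ell)  # LP optimum; y* <= y
    x1 = initial_rounding(A, x_star)     # Step 1: multiples of Omega(1/ln m)
    x3 = lll_rounding(A, x1, eps)        # Step 2: values 0 or gamma < 1
    return final_rounding(A, x3)         # Step 3: values in {0, 1}
```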

Step 1: Initial Rounding
For every r ∈ {1, ..., m}, let w_r = max_c a_{r,c}. First, we scale A to a matrix A^{(1)} = (a^{(1)}_{r,c}) by multiplying row r of A with w_r^{−1} for all r. This ensures that max_c a^{(1)}_{r,c} = 1 for all r. Let the vector y^{(1)} be equal to A^{(1)}x*. For the remaining steps we will assume that the approximation vector α fulfills the following property for some 0 < ε < 1 (see Theorem 1):

LEMMA 6. For any 2-component C, its corresponding 3-component witness T ensures that w_T ≥ λ_{B_C}.

α_r ≥ σ_r with

σ_r = max[(y_r^{(1)})^{−1}, (y_r^{(1)})^{−(1−ε)/2}]


Let μ = max_r ⌈(6 ln m)/(min[α_r, α_r²]·y_r^{(1)})⌉. Since min[α_r, α_r²]·y_r^{(1)} ≥ 1 for every r, μ ≤ ⌈6 ln m⌉. If μ = 1, then we simply use the randomized rounding strategy by Raghavan and


of all events it is not connected with in G. Assume that the approximation vector α is chosen such that inequality (1) in Theorem 1 holds. As will be shown in the next subsection, in this case Theorem 2 predicts that a vector x' can be provided in polynomial time so that for all r,

Thompson [25] to transform x* into an integral vector x^{(1)}. That is, for every i ∈ [n] we set the variable x^{(1)}_{i,j} to 1 (and the other x^{(1)}_{i,j'}'s to 0) with probability x*_{i,j}. In this case, Pr[(A^{(1)}x^{(1)})_r > (1 + α_r)·y_r^{(1)}]
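The probability bound that closes this display is the standard Chernoff bound for a sum of independent random variables with values in [0, 1] (our rendering; note that it matches the definition of p_r used later):

```latex
\[
  \Pr\bigl[(A^{(1)}x^{(1)})_r > (1+\alpha_r)\,y_r^{(1)}\bigr]
  \;\le\; e^{-\min[\alpha_r,\,\alpha_r^2]\,y_r^{(1)}/3} \;=\; p_r,
\]
% valid because, after the scaling of Step 1, every coefficient
% a^{(1)}_{r,c} lies in [0,1] and E[(A^{(1)}x^{(1)})_r] = y_r^{(1)}.
```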
y_{r,k}. We define the event E_{r,k} to be true if and only if

Step 2: LLL-based rounding

This is the first step that requires the LLL (and the consideration of a dependency graph that may have already been significantly reduced by the previous step). Before we perform any further rounding, we first transform x^{(1)} into a vector x^{(2)} that has only values in {0, 1/μ}. This is simply done by expanding every x^{(1)}_{i,j} ≥ 1/μ into μ·x^{(1)}_{i,j} many x^{(2)}_{i,j'}'s. Let the matrix A^{(1)} be expanded to A^{(2)} in a similar way so that (A^{(2)}x^{(2)})_r = (A^{(1)}x^{(1)})_r for all r ∈ [m]. The aim of Step 2 is to round the x^{(2)}_{i,j}'s from values in {0, 1/μ} to values in {0, γ} for some constant γ < 1. This will be done in several phases, starting with phase 1. Initially, we set z^{(0)} equal to x^{(2)} and μ_0 equal to μ. The outcome vector of phase φ ≥ 1 is defined by z^{(φ)}. The final vector z^{(φ*)} of Step 2 is set to x^{(3)}. The task of phase φ is to round a vector z^{(φ−1)} of values in {0, 1/μ_{φ−1}} to a vector z^{(φ)} of values in {0, 1/μ_φ} with μ_φ = ⌈μ_{φ−1}/ln μ_{φ−1}⌉. This is done as follows. For simplicity, set w = z^{(φ−1)}, x' = z^{(φ)}, μ = μ_{φ−1}, μ' = μ_φ, and A = A^{(2)}. The random experiment for applying the LLL is to select a vector S = (S_i)_{i∈[n]} of sets of size μ' uniformly at random out of J_1 × ... × J_n, where J_i contains all j with w_{i,j} > 0. Given a vector S, set x'_{i,j} = 1/μ' for all i ∈ [n] with j ∈ S_i and the rest to 0. Consider a system of bad events E_1, ..., E_m with E_r being true if and only if

(Ax')_r > (1 + δ_φ)·(Aw)_r for a suitable deviation parameter δ_φ of phase φ.
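A sketch of one phase's random experiment as just described (our own illustration; the bad-event system and the LLL machinery around it are omitted):

```python
import math
import random

def phase_experiment(J, mu):
    """One phase of Step 2: for every i, pick S_i, a uniformly random
    mu'-subset of J_i, and spread the mass of group i evenly over S_i.

    J[i] lists the indices j with w_{i,j} > 0 (so |J[i]| = mu); the new
    granularity is mu' = ceil(mu / ln(mu)). Returns x' with
    x'_{i,j} = 1/mu' for j in S_i and 0 otherwise.
    """
    mu_prime = math.ceil(mu / math.log(mu))
    x_prime = {}
    for i, J_i in enumerate(J):
        for j in random.sample(J_i, mu_prime):  # uniform mu'-subset
            x_prime[(i, j)] = 1.0 / mu_prime
    return x_prime

# One variable group with mu = 6 nonzero entries; mu' = ceil(6/ln 6) = 4.
x_prime = phase_experiment(J=[[0, 1, 2, 3, 4, 5]], mu=6)
```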

(B_k x')_r > y_{r,k} + 6·min[α_r, 1]·y_r/√(2^{(1−ε)k}·ln μ). Assume that all events E_{r,k} were false. Then it follows that

as desired. By our choice of B_k and x', every factor b^{(k)}_{r,c}·(x')_c has a value of at most (μ')^{−1}·2^{−k}. Thus, according to the Chernoff bounds, and since y_{r,k} ≥ |M_{r,k}|^ξ·2^{−k},


Clearly, the same inequality holds for ξ > 1. Thus, we have Pr[E_{r,k}] ≤ e^{−|M_{r,k}|}. On the other hand, the minimum number of nonzero x'_{i,j}'s necessary to obtain a value of 6·min[α_r, 1]·y_r/√(2^{(1−ε)k}·ln μ) is at least 6·2^{εk}·μ'·min[α_r, 1]·y_r, which is more than |M_{r,k}|^ξ. Next we show how to transform inequality (1) in Theorem 1 into a condition that matches the requirements of Theorem 2. First, we define the dependency graph. Suppose that we are dealing with a MIP instance I. Then we define the dependency graph G'_I to contain the edge ((r, k), (s, ℓ)) if and only if (r, s) ∈ G_I and either r ≠ s or k ≠ ℓ. (Here we use the fact that G_I may have self-loops.) It is easy to check that G'_I covers all dependencies among the E_{r,k} events. Let p_r, q_r, and z_r be defined as in Theorem 1. That is,

p_r = e^{−min[α_r, α_r²]·y_r^{(1)}/3},  q_r = e^{−ln^ε p_r^{−1}},  and  z_r = q_r^ε.
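The blown-up dependency graph G'_I can be generated mechanically from G_I; a small sketch (our reading of the construction, with K an upper bound on the index k):

```python
def blow_up_dependency_graph(G, K):
    """Edges ((r, k), (s, l)) of G'_I: present iff (r, s) is an edge of
    G_I (self-loops included) and r != s or k != l."""
    return {((r, k), (s, l))
            for (r, s) in G
            for k in range(K)
            for l in range(K)
            if r != s or k != l}
```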



Set p_{r,k} = exp(−2^{εk+1}·min[α_r, 1]·y_r). Since we assumed at the end of Step 1 that y_r ≥ (1 + α_r)·y_r^{(1)}, it holds that p_{r,k} ≤ p_r^{2e·2^{εk}}. Let q_{r,k} = e^{−ln^ε p_{r,k}^{−1}} and z_{r,k} = q_{r,k}^ε. Then we have

q_{r,k} = e^{−ln^ε p_{r,k}^{−1}} ≤ e^{−2^{ε²k}·ln^ε p_r^{−1}} = q_r^{2^{ε²k}}.



Step 3: Final rounding
Recall that x^{(3)} is the vector resulting from x^{(1)} in Step 2. Let B_k, y_{r,k}, and y_r be defined as in the previous step. Step 3 works like a phase in Step 2, with the difference that now the random experiment simply consists of rounding x^{(3)} to an integral vector x in such a way that for every i, x_{i,j} is set to 1 (and the other x_{i,j'}'s to 0) with probability x^{(3)}_{i,j}. Assuming
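A minimal sketch of this final rounding experiment, assuming the values x^{(3)}_{i,j} of every group i sum to 1 (the function name and data layout are ours):

```python
import random

def final_rounding(x3, n):
    """Step 3: for every group i, set exactly one x_{i,j} to 1, choosing
    index j with probability x3[(i, j)]; all other entries stay 0."""
    x = {}
    for i in range(n):
        entries = [(j, p) for (gi, j), p in x3.items() if gi == i and p > 0]
        js, ps = zip(*entries)
        j0 = random.choices(js, weights=ps, k=1)[0]  # one weighted draw
        x[(i, j0)] = 1
    return x

# gamma = 1/2 after Step 2: group 0 keeps two candidates of weight 1/2.
print(final_rounding({(0, 3): 0.5, (0, 7): 0.5}, n=1))
```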

Since z_r ≤ 1/3 by Theorem 1,

Σ_{k≥0} z_{r,k} ≤ (3/ε²)·z_r.  (2)

an event E_{r,k}|_S is found bad in the LLL algorithm, then all of the x_{i,j}'s that have some x'_{i,j'} counting for E_{r,k}|_S have to be added to the 1-component. Hence, with regard to the insertion of an event, the set of trials that has to be considered is T_{r,k} instead of M_{r,k}. This changes the number of trials covered by a 2-component C from at most Σ_{(r,k)∈B_C} |M_{r,k}| to at most Σ_{(r,k)∈B_C} |T_{r,k}| ≤ μ·Σ_{(r,k)∈B_C} |M_{r,k}|. Thus, the size of the final 2-components changes to O(μ·(ln ln m)^{1/ξ}). If μ > √(log m), it may no longer be possible to find a final solution via exhaustive search. However, every event participating in this 2-component has a probability of at most p_{r,k} ≤ e^{−μ} to be true. Furthermore, it follows from Claim 1 that a 2-component of q nodes can have at most q·e^{5k^ξ} events of order at most k, and each of these events has a probability of at most e^{−k} to be true. Hence, an expected constant number of random experiments suffices for each 2-component to obtain a final solution. Thus, Step 2 of the MIP algorithm can be done in polynomial time.

∏_{k≥0} (1 − z_{r,k}) ≥ (1 − z_r)^{3/ε²}.

that y_r ≥ (1 + α_r)·y_r^{(1)} for all r, it holds that

On the other hand, 1 − z_{r,k} ≥ e^{−(4/3)·z_{r,k}} for all z_{r,k}, since e^{−(4/3)x} ≤ 1 − 4x/3 + 8x²/9 ≤ 1 − x for all x ≤ 1/3. Hence,


∏_{k≥0} (1 − z_{r,k}) ≥ (1 − z_r)^{2/ε²}. Furthermore, due to (2) it holds for all (r, k) that


q_{r,k}|_S = q_{r,k}, where the event E_{r,k}|_S is true if and only if (B_k x)_r|_S > y_{r,k}|_S + 6·min[α_r, 1]·y_r/√(2^{(1−ε)k}·ln μ). This ensures that, under the assumption that an event E_{r,k} can be decomposed into at most some constant number of events E_{r,k}|_S that are false (as claimed by Theorem 2), we obtain the desired asymptotic approximation ratio. We already showed that if |S| > |M_{r,k}|^ξ then the above definition yields Pr[E_{r,k}|_S] ≤ p_{r,k}. Otherwise, we can set Pr[E_{r,k}|_S] = 0, since more than |M_{r,k}|^ξ trials are needed to violate our deviation bound. There is one aspect of our application that does not directly match the requirements of Theorem 2. We demand that if


for all r, then there is an algorithm that finds a vector x in polynomial time so that (Ax)_r ≥ (1 − O(α_r))·y_r for all r.

THEOREM 3. In polynomial time, acyclic job shop schedules can be computed with the following upper bounds on the makespan, for any constant ε > 0:

PROOF. The proof of the theorem is almost identical to the proof of Theorem 1. Since the random variables considered for the Chernoff bounds in Section 3 are weakly negatively correlated, techniques based on [27] have to be applied. Furthermore, we have to change to the rule that if at most |M_{r,k}|^ξ many x_{i,j}'s in some set S are set to 0 (instead of 1/μ' for MIPs), then it does not affect the deviation bound for E_{r,k}|_S. □

1. For any acyclic JSS problem,

L = O( (C + D·(log log p)^{1+ε}) · log p / log(min[C/m + (log log p)^{1+ε}, p]) ),

implying that L = O(lb · log lb · (log log lb)^{1+ε}). Observe that if C ≥ D·p^δ for some constant δ > 0, then L = O(C).

2. If operation lengths depend only on the machine on which the operation is performed, then L = O(C + D·(log log p)^{1+ε}).

An instance of MAX SAT is defined by (𝒞, w), where 𝒞 is a set of m boolean clauses such that each clause C ∈ 𝒞 is a disjunction of literals and has a positive weight w(C). Let X = {x_1, ..., x_n} be the set of boolean variables in the clauses of 𝒞. For every i, x_i = 1 means x_i is true and x_i = 0 means x_i is false. A literal is a variable x ∈ X or its negation x̄ = 1 − x. For each x_i, we define x_{i,0} = x̄_i and x_{i,1} = x_i. We assume that no two literals with the same variable appear in a clause in 𝒞. Clause j in 𝒞 is denoted by

3. With preemption, L = O(C + D·(log log p)^{1+ε}).

In order to prove Theorem 3, we have to constructivize the existence proof given for an intermediate schedule in [12]. We do this with the help of Theorem 1, by refining subschedules of length a multiple of d = log^{1/ε} D instead of log D. The rest of the proof is omitted in this extended abstract.

4.2 Satisfiability

Many approaches have already been presented that provide good approximation algorithms for MAX SAT problems. This line of work was pioneered by Johnson [18] and Raghavan and Thompson [25] and further improved by Yannakakis [31], Goemans and Williamson [14; 15], and Asano et al. [5; 6]. Many of these approaches use a simple randomized rounding strategy at the end. Our approach may help to improve this last step. Consider any boolean formula in CNF. We will give an example of how to express the MAX SAT problem for this formula as a reverse form of a MIP, called a maximin integer program, that is defined as follows.

C_j = x_{i_{j,1},l_{j,1}} ∨ x_{i_{j,2},l_{j,2}} ∨ ... ∨ x_{i_{j,k_j},l_{j,k_j}}



We would like to find an optimal solution to the following integer program: Maximize Σ_{j=1}^{m} w_j·y_j subject to

Σ_{s=1}^{k_j} x_{i_{j,s},l_{j,s}} ≥ y_j   for all j,   (4)

x_{i,l}, y_j ∈ {0, 1}.   (5)

If we replace the conditions in (5) by 0 ≤ x_{i,l}, y_j ≤ 1, we obtain the LP relaxation of the program.
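To make the last step concrete, here is a hedged sketch that applies plain randomized rounding to a fractional solution of the relaxed program (4)-(5) (the LP solver itself is left abstract, and the function names are ours):

```python
import random

def round_max_sat(clauses, x_frac):
    """Round a fractional MAX SAT solution of the relaxed program (4)-(5).

    clauses[j] is a list of literals (int i for x_i, -i for its negation);
    x_frac[i] in [0, 1] is the LP value of variable x_i. Sets x_i = 1 with
    probability x_frac[i] and returns the indices of satisfied clauses.
    """
    assign = {i: random.random() < x_frac[i] for i in x_frac}
    satisfied = set()
    for j, clause in enumerate(clauses):
        # Literal l is true iff the assignment matches its sign; this is
        # x_{i,1} = x_i and x_{i,0} = 1 - x_i in the notation above.
        if any(assign[abs(l)] == (l > 0) for l in clause):
            satisfied.add(j)
    return satisfied

print(round_max_sat([[1, -2], [2, 3]], {1: 0.7, 2: 0.3, 3: 0.9}))
```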
Recommend Documents