Graph Expansion and the Unique Games Conjecture

Prasad Raghavendra∗
David Steurer†
December 15, 2010
Abstract

In this work, we investigate the connection between Graph Expansion and the Unique Games Conjecture. The edge expansion of a subset of vertices S ⊆ V in a graph G measures the fraction of edges that leave S. In a d-regular graph, the edge expansion/conductance Φ(S) of a subset S ⊆ V is defined as Φ(S) = |E(S, V \ S)| / (d|S|). Approximating the conductance of small linear-sized sets (size δn) is a natural optimization question that is a variant of the well-studied Sparsest Cut problem. However, there are no known algorithms to even distinguish between almost complete edge expansion (Φ(S) = 1 − ε) and close to 0 expansion.

– We show that a simple decision version of the problem of approximating small set expansion reduces to Unique Games. Thus if approximating edge expansion of small sets is hard, then Unique Games is hard. Alternatively, a refutation of the Unique Games Conjecture will yield better algorithms to approximate edge expansion in graphs. This is the first non-trivial “reverse” reduction from a natural optimization problem to Unique Games.
– Under a stronger variant of the Unique Games Conjecture that assumes mild expansion of small sets, we show that it is UG-hard to approximate small set expansion.
– On instances with sufficiently good expansion of small sets, we show that Unique Games is easy by extending the techniques of [AKK+08].
∗ Georgia Institute of Technology, Atlanta, GA. Research done while at Microsoft Research New England.
† Microsoft Research New England, Cambridge, MA. Research done while at Princeton University and while at Microsoft Research New England as a student intern. Supported by NSF grants 0830673, 0832797, 528414.
Contents

1 Introduction
  1.1 Results
    1.1.1 Towards an Equivalence
  1.2 Unique Games with Small Set Expansion are Easy
  1.3 Subsequent Work
2 Preliminaries
  2.1 Techniques and Proof Overview
3 Reduction from Graph Expansion to Partial Unique Games
  3.1 Completeness
  3.2 Soundness
4 Partial 2-Prover Games
5 Unique Games with Small Set Expansion
  5.1 Influences and Noise Stability
  5.2 Hardness Reduction
6 Putting it Together
7 Unique Games with Sufficient Local Expansion are Easy
  7.1 Rounding via Correlations
  7.2 Correlations and Small-Set Expansion
References
1 Introduction
The Unique Games Conjecture of Khot [Kho02] is among the central open problems in hardness of approximation, and has fuelled many developments in the area in recent years. Roughly speaking, the conjecture asserts that a certain constraint satisfaction problem called Unique Games is NP-hard to approximate in a strong sense. An instance of Unique Games consists of a graph Υ = (V, E), a finite set of labels {1, . . . , R}, and a permutation πv←w of the label set for each edge (v, w). A labelling F : V → [R] of the vertices of the graph is said to satisfy an edge (v, w) if πv←w(F(w)) = F(v). The objective is to find a labelling that satisfies the maximum number of edges. The Unique Games Conjecture asserts that if the label set is large enough, then even if the input instance has a labelling satisfying almost all edges, it is NP-hard to find a labelling that satisfies a non-negligible fraction of edges.

The significance of the Unique Games Conjecture stems from the surprising implications it has on the approximability of fundamental combinatorial optimization problems. While the Unique Games Conjecture has been shown to imply optimal inapproximability results for classic problems like Max Cut [KKMO07], Vertex Cover [KR08] and Sparsest Cut [KV05], more recent work has demonstrated that the conjecture yields tight hardness of approximation results for entire classes of problems like constraint satisfaction problems [Rag08].

While the implications of the conjecture are well understood, there has been much slower progress towards its resolution. Results supporting the truth of the Unique Games Conjecture have been especially difficult to show. In particular, it is unknown whether any of the implications of the Unique Games Conjecture is equivalent to the conjecture itself. In other words, it is entirely consistent with existing literature that all the implications of the Unique Games Conjecture on problems like Max Cut and Vertex Cover hold, but the conjecture itself is false.
More precisely, although the Unique Games problem is known to efficiently reduce to classic problems like Max Cut and Vertex Cover, there are no known “reverse” reductions from these problems back to Unique Games. One of the appealing features of the theory of NP-completeness is the computational equivalence of all NP-complete problems: a polynomial time algorithm for any NP-complete problem would yield an efficient algorithm for all NP-complete problems. In contrast, Unique Games based hardness results do not have this property, precisely due to the lack of reverse reductions to Unique Games.

The only reverse reduction towards which there is any literature is the reduction from Max Cut to Unique Games. Note that the Max Cut problem is a special case of Unique Games over the binary alphabet {0, 1}. Hence a Max Cut instance is readily reduced to a Unique Games instance by using parallel repetition. With a sufficiently strong parallel repetition theorem (conjectured by Feige et al. [FKO07]), this would yield a reverse reduction from Max Cut to Unique Games. Unfortunately, a strong parallel repetition theorem of this nature was shown not to hold by Raz [Raz08]. Subsequent work by Barak et al. [BHH+08] almost entirely ruled out this approach to reducing Max Cut to Unique Games.

In this work, we obtain a reduction from a natural problem related to approximating edge expansion in graphs to Unique Games.

Graph Expansion. The phenomenon of vertex and edge expansion in graphs has been a subject of intense study with applications pervading almost all branches of theoretical computer science. From an algorithmic standpoint, approximating expansion or lack thereof
(finding good cuts or separators) is a fundamental optimization problem with numerous applications. Yet, the computational complexity of detecting and approximating expansion in graphs is not very well understood. Among the two notions of expansion, this work is concerned mostly with edge expansion.

For simplicity, let us first consider the case of a d-regular graph G = (V, E). The edge expansion of a subset of vertices S ⊆ V measures the fraction of edges that leave S. Formally, the edge expansion Φ(S) of a subset S ⊆ V is defined as

    Φ(S) = |E(S, V \ S)| / (d|S|) ,

where E(S, V \ S) is the set of edges with one endpoint in S and the other endpoint in V \ S. The conductance or Cheeger constant associated with the graph G is the minimum of Φ(S) over all sets of at most half the vertices, i.e.,

    ΦG = min_{|S| ≤ n/2} Φ(S) .
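To make the definition concrete, here is a minimal sketch (our illustration, not from the paper) that evaluates Φ(S) for a d-regular graph given as a dictionary of neighbour sets; the cycle graph in the example is an arbitrary illustrative choice.

```python
def conductance(adj, S):
    """Edge expansion Phi(S) = |E(S, V \\ S)| / (d |S|) of a set S
    in a d-regular graph given as a dict of neighbour sets."""
    d = len(next(iter(adj.values())))          # regularity degree d
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    return cut / (d * len(S))

# Example: the 20-cycle (2-regular). A contiguous arc of k vertices has
# exactly 2 boundary edges, so Phi(S) = 2 / (2k) = 1/k.
n = 20
cycle = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
print(conductance(cycle, set(range(5))))  # 0.2
```
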
The definitions of the conductance of a set Φ(S) and of the graph ΦG can be extended naturally to non-regular graphs, and finally to arbitrary weighted graphs (see Section 2). Henceforth in this section, we will use the notation µ(S) to denote the normalized set size µ(S) = |S|/n.

The problem of approximating ΦG, also referred to as uniform Sparsest Cut, is among the fundamental problems in approximation algorithms. Efforts towards approximating ΦG have led to a rich body of work with strong connections to spectral techniques and metric embeddings. The first approximation for the conductance was obtained by discrete analogues of the Cheeger inequality [Che70] shown by Alon-Milman [AM85] and Alon [Alo86]. Specifically, they show the following:

Theorem 1.1 (Cheeger’s Inequality). If λ2 denotes the second largest eigenvalue of the suitably normalized adjacency matrix of a graph G, then

    (1 − λ2)/2 ≤ ΦG ≤ √(2(1 − λ2)) .

Since the second eigenvalue λ2 can be efficiently computed, Cheeger’s inequality yields an approximation algorithm for ΦG, indeed one that is used heavily in practice for graph partitioning. However, the approximation for ΦG obtained via Cheeger’s inequality is poor in terms of an approximation factor, especially when the value of ΦG is small (λ2 is close to 1). An O(log n) approximation algorithm for ΦG was obtained by Leighton and Rao [LR99]. Later work by Linial et al. [LLR95] and Aumann and Rabani [AR98] established a strong connection between the Sparsest Cut problem and the theory of metric spaces, in turn spurring a large and rich body of work. More recently, in a breakthrough result, Arora et al. [ARV04] obtained an O(√log n) approximation for the problem using semidefinite programming techniques.

Small Set Expansion. Note that ΦG is a fairly coarse measure of edge expansion, in that it is the worst case edge expansion over sets S of all sizes.
In a typical graph (say a random d-regular graph), smaller sets of vertices expand to a larger extent than sets with half the vertices. For instance, all sets S of δn vertices in a random d-regular graph have Φ(S) ≈ 1 − 2/d with very high probability, while the conductance ΦG of the entire graph is roughly 1/2. Moreover, the stronger expansion exhibited by small sets has numerous applications in graph theory.

A natural finer measure of the edge expansion of a graph is its expansion profile. Specifically, for a graph G the expansion profile is given by the curve

    ΦG(δ) = min_{µ(S) ≤ δ} Φ(S)    for all δ ∈ [0, 1/2] .
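Since ΦG(δ) minimizes over exponentially many sets, the definition can only be evaluated directly on tiny graphs. The brute-force sketch below (our illustration, not an algorithm from the paper) computes the expansion profile of a small d-regular graph straight from the definition.

```python
from itertools import combinations

def phi(adj, d, S):
    """Phi(S) = |E(S, V \\ S)| / (d |S|) in a d-regular graph."""
    S = set(S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    return cut / (d * len(S))

def expansion_profile(adj, d, delta):
    """Phi_G(delta) = min over nonempty S with mu(S) <= delta of Phi(S),
    computed by exhaustive search (exponential in n)."""
    n = len(adj)
    sizes = range(1, int(delta * n) + 1)
    return min(phi(adj, d, S) for k in sizes for S in combinations(adj, k))

# On the 12-cycle with delta = 1/4, the minimiser is a contiguous arc of
# 3 vertices: Phi_G(1/4) = 2 / (2 * 3) = 1/3.
n = 12
cycle = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
print(expansion_profile(cycle, 2, 0.25))
```
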
The problem of approximating the expansion profile is seemingly far less tractable than approximating ΦG itself. For instance, there is no known algorithm for the following easily stated decision problem concerning the expansion profile:

Problem 1.2 (Gap-Small-Set Expansion (η, δ)). Given a regular graph G = (V, E) on n vertices, distinguish whether
– (Expansion close to zero) There exists a set of δn vertices S ⊆ V such that Φ(S) < η.
– (Expansion close to one) For every subset S ⊆ V of at most 2δn vertices, Φ(S) > 1 − η.

Spectral techniques fail in approximating the expansion of small sets in graphs. On one hand, even with the largest possible spectral gap, Cheeger’s inequality cannot yield a lower bound greater than 1/2 for the conductance ΦG(δ). More importantly, there exist graphs such as the hypercube where there are sets S of half the vertices with small conductance (Φ(S) < ε), yet every sufficiently small set satisfies Φ(S) > 1 − ε. This implies that ΦG (and the second eigenvalue λ2) do not yield any information about ΦG(δ).

Expansion and Unique Games. Vertex and edge expansion in graphs appear to be closely tied to hard instances for linear and semidefinite programming relaxations. Many integrality gap instances have been constructed for problems like Vertex Cover or Max Cut against linear programming hierarchies such as the Lovász-Schrijver and Sherali-Adams hierarchies (see [CMM09, STT07] and the references therein). Not only do most of these instances consist of expanding graphs, but the arguments rely crucially on either vertex or edge expansion. The situation is a little more subtle in the case of semidefinite programming. Semidefinite programs can approximate Max Cut well on instances that have very good conductance (spectral gap). Hence, the known SDP integrality gap instances are graphs where small sets expand well, while the larger sets do not.
Indeed, SDP integrality gap constructions for Max Cut [FS02, KV05, RS09b], Vertex Cover [GMPT07], Unique Games [KV05, RS09a] and Sparsest Cut [KV05] all have near-perfect edge expansion for small sets. In the case of Unique Games, not only do all known integrality gap instances have near-perfect edge expansion of small sets, the analysis also relies directly on this property. Furthermore, it is known that the best possible soundness for Unique Games with label size R and completeness 1 − ε is a constant η(R, ε) which is roughly 1/R^(ε/2). The constant 1 − η is exactly equal to the expansion of sets of size 1/R in a certain graph defined over the Gaussian space. While this suggests that Unique Games is closely tied to expansion of small sets in graphs, somewhat contrastingly, Arora et al. [AKK+08] show that Unique Games is easy when the constraint graph involved is a good spectral expander, i.e., has a non-trivial spectral gap for the Laplacian.

Motivated by the above reasons, we investigate the connection between graph expansion and Unique Games in this work.
1.1 Results
The main result of this work is a reduction from the problem of approximating expansion of small sets to the Unique Games problem. The main implication of the result can be succinctly stated as follows. Let us consider the following hardness assumption about the complexity of the Gap-Small-Set Expansion problem.

Hypothesis 1.3 (Gap-Small-Set Expansion Hypothesis). For every η > 0, there exists δ such that Gap-Small-Set Expansion (η, δ) is NP-hard.

Then, an immediate consequence of the reduction presented in this work is:

Theorem 1.4. The Gap-Small-Set Expansion hypothesis implies the Unique Games Conjecture.

To the best of our knowledge, this is the first non-trivial “reverse” reduction from a natural combinatorial optimization problem to Unique Games. On the one hand, it connects the somewhat non-standard problem of Unique Games to the well-studied problem of approximating graph expansion. Furthermore, the result accounts for the conspicuous presence of small set expansion in SDP integrality gaps for Unique Games and related problems.

While a confirmation of the Unique Games Conjecture was known to imply optimal inapproximability results for fundamental problems, a refutation of the Unique Games Conjecture was not known to formally imply any new algorithmic technique. An important implication of the above result is that a refutation of the Unique Games Conjecture would yield an algorithm for approximating edge expansion in graphs (albeit only in a certain regime) – a classic optimization problem. This implication and slightly stronger versions of it are stated formally in Section 6.

1.1.1 Towards an Equivalence
A natural question that arises from Theorem 1.4 is whether the Unique Games Conjecture is equivalent to the Gap-Small-Set Expansion hypothesis. More specifically, could it be true that the Unique Games Conjecture implies the Gap-Small-Set Expansion hypothesis? Showing a result of this nature amounts to obtaining a reduction from Unique Games to Gap-Small-Set Expansion.

Despite the plenitude of UG reductions and a fairly thorough understanding of how to construct reductions from Unique Games, obtaining reductions to graph expansion problems is often problematic. The main issue is that hardness reductions via local gadgets do not alter the global structure of the graph. For example, if the Unique Games instance is disconnected, the resulting graph produced by a gadget reduction is also disconnected. Hence, to show a UG-hardness result for a global property such as expansion, it often seems necessary to assume the corresponding global property on the constraint graph of the Unique Games instance. More specifically, in our case we require that the Unique Games instance has good local expansion, in that sufficiently small sets have conductance close to 1. The formal statement of the modified Unique Games Conjecture with mild expansion of small sets is given below:

Conjecture 1.5 (Unique Games with Small Set Expansion Conjecture). For every ε > 0, there exists δ such that for all η > 0 the following problem is NP-hard for a sufficiently large R = R(ε, η):
Given an instance Υ of Unique Games with alphabet size R, distinguish between the following cases:
– (Completeness) There exists a labelling that satisfies at least a 1 − η fraction of edges.
– (Soundness) Every labelling satisfies at most an η fraction of edges, and the constraint graph (V, E) of Υ satisfies the following expansion property:
  – (Local Expansion Property) For every set S ⊂ V with µ(S) ∈ [δ/2, 2δ], Φ(S) > 1 − ε.

The above conjecture assumes fairly mild expansion, in that sufficiently small sets have conductance close to 1. In fact, existing SDP integrality gap instances for Unique Games (see [KV05, RS09a]) satisfy the above local expansion property. Under this stronger Unique Games Conjecture, we show the following hardness result for Gap-Small-Set Expansion.

Theorem 1.6. The Unique Games with Small Set Expansion conjecture implies the Gap-Small-Set Expansion hypothesis.
1.2 Unique Games with Small Set Expansion are Easy
We further explore the connection between Unique Games and small set expansion by studying the effect of small set expansion in the instance on the Unique Games problem. As already mentioned, Arora et al. [AKK+08] showed that the Unique Games problem can be efficiently solved when the associated constraint graph is a good spectral expander. Here, we generalize their result and show that Unique Games is easy even when the instance merely has sufficiently high expansion of small sets. Formally, we show:

Theorem 1.7. Let U be a regular unique game with semidefinite value sdp(U) ≥ 1 − ε and constraint graph G. Then, for every δ > 0 satisfying

    ΦG(δ) ≥ O(√(ε log(1/δ))) ,    (1.1)

we can compute an assignment for U with value at least Ω(δ²) − O(√ε).

The algorithm uses the natural semidefinite program for Unique Games, and shows that if the constraint graph is a small-set expander, then the SDP vector solution can be rounded. The approach is similar to the work of Arora et al. [AKK+08] for Unique Games on expanders. Specifically, the SDP vectors are correlated locally along the edges of the constraint graph. Since the constraint graph has expansion, this local correlation implies a global correlation across all the vertices in the graph. Global correlation of the SDP vectors associated with the vertices of the UG instance makes the task of rounding them easy.

We wish to point out that the above algorithm does not contradict the Unique Games with Small Set Expansion conjecture (Conjecture 1.5). To see this, observe that the expansion ΦG(δ) required by the algorithm in Theorem 1.7 is a growing function of 1/δ. However, in Conjecture 1.5, the set size δ is chosen to be sufficiently small depending on the expansion (ε) required. Arora et al. [AIMS10] also exhibited an algorithm for unique games when the constraint graph has good local expansion.
1.3 Subsequent Work
In subsequent work with Tulsiani [RST10], the authors show an equivalence between the Gap-Small-Set Expansion hypothesis and Unique Games with Small Set Expansion (Conjecture 1.5). To this end, that work exploits the additional structure present in the unique games instances generated by the reduction in Theorem 1.4. Consequently, the Gap-Small-Set Expansion hypothesis is shown to imply inapproximability results for problems such as Minimum Linear Arrangement and Balanced Separator. Showing inapproximability results for these problems required Unique Games with an additional expansion assumption on the constraint graphs. Therefore, the Gap-Small-Set Expansion hypothesis serves as a natural hardness assumption that yields all the implications of the Unique Games Conjecture, along with inapproximability results for other problems such as Minimum Linear Arrangement. The Gap-Small-Set Expansion hypothesis is a natural way to introduce expansion into unique games instances.

The Gap-Small-Set Expansion hypothesis as stated in this work is qualitative, in that there is no prescribed quantitative dependence between the parameters η and δ. It was subsequently shown in [RST10] that the hypothesis is equivalent to a stronger hypothesis with a specific quantitative dependence of the set size δ on the completeness 1 − η. This is analogous to the result of Khot et al. [KKMO07] showing the equivalence of the qualitative version of Unique Games to a stronger quantitative version.

Arora et al. [SAS10] obtained a sub-exponential time algorithm for Unique Games. As a first step, that work obtained a sub-exponential time algorithm for the Gap-Small-Set Expansion problem. Furthermore, their algorithm for Unique Games proceeds by decomposing the constraint graph into pieces, each of which is a small set expander.
2 Preliminaries
Notation. Let G = (V, E) be an undirected d-regular graph. We will use v ∼ V to denote a vertex sampled uniformly from V. Similarly, the notation e ∼ E will denote an edge sampled uniformly from E. We denote by E(A, B) the set of edges with one endpoint in A and the other endpoint in B.

Definition 2.1. For a vertex subset S ⊆ V, we define

    ∂(S) := |E(S, V \ S)| / |E|    and    µ(S) := |S| / n .

Definition 2.2 (Conductance). The conductance/Cheeger constant associated with a subset S ⊆ V is given by

    Φ(S) = ∂(S) / µ(S) .

The conductance/Cheeger constant for the graph G is ΦG = min_{µ(S) ≤ 1/2} Φ(S).
Remark 2.3. The definitions of conductance and expansion can be appropriately extended to weighted graphs. Let G = (V, E) be an undirected weighted graph with edge weights {W_{u,v}}_{u,v ∈ V}. The degree of a vertex v is given by deg(v) = Σ_{u ∈ V} W_{u,v}. By suitable normalization, we always assume that

    Σ_{v ∈ V} deg(v) = 1    and    Σ_e W_e = 1/2 .
The weighted graph G can be thought of as a reversible Markov chain. Then deg(v) is proportional to the probability of occurrence of v in the stationary distribution associated with G. For a vertex subset S ⊆ V, the quantity ∂(S)/µ(S) is the following conditional probability: given that the random walk is at a vertex v inside the set S, the probability that it exits S in one step. Formally,

    ∂(S)/µ(S) = Pr_{v∼V, w∼N(v)}[ w ∉ S | v ∈ S ] .

All the results in this work can be extended to weighted graphs with minor modifications.

2-Prover Games, Partial Strategies.

Definition 2.4. A general 2-prover game Γ is specified by sets of vertices VA, VB, an edge set E ⊆ VA × VB, an alphabet Σ, and a collection of predicates Π_{u,v} : Σ × Σ → {0, 1} indexed by vertex pairs (u, v) ∈ E. A strategy for Γ is specified by two assignments A : VA → Σ and B : VB → Σ. The value of the game Γ is defined as the maximum success probability

    Pr_{uv∼E}[ (A(u), B(v)) ∈ Π_{u,v} ] ,
over all strategies A : VA → Σ and B : VB → Σ. As usual, the notation u ∼ VA means that u is sampled with probability proportional to its degree.

A partial strategy for a prover is a partial assignment to the vertices in the game. Equivalently, in a partial strategy, the provers refuse to answer some questions of the verifier. The partial value models a game where the provers are allowed to refuse to answer a question (i.e., they win either if their answers satisfy the predicate or if both refuse to answer). To avoid trivial strategies where the provers always refuse to answer, we require that they answer with probability at least α. The precise definitions of these notions are as follows.

Definition 2.5. An α-partial strategy for a 2-prover game Γ is a pair of assignments A : VA → Σ ∪ {⊥} and B : VB → Σ ∪ {⊥} such that

    Pr_{u∼VA}[ A(u) ≠ ⊥ ] ≥ α    and    Pr_{v∼VB}[ B(v) ≠ ⊥ ] ≥ α .

Here, ⊥ is a designated symbol that is not a member of Σ. The α-partial value of the game is defined as the maximum success probability

    Pr_{uv∼E}[ (A(u), B(v)) ∈ Π_{u,v} | A(u) ∈ Σ ∨ B(v) ∈ Σ ] ,
over all α-partial strategies A and B.

Unique Games Conjecture. For the sake of concreteness, we formally state the Unique Games Conjecture here. The Unique Games problem is defined as follows:

Definition 2.6. An instance of Unique Games, represented as Υ = (V, E, Π, [R]), consists of a graph over vertex set V with edges E between them. Also part of the instance is a set of labels [R] = {1, . . . , R}, and a permutation πv←w : [R] → [R] for each edge e = (w, v) ∈ E. An assignment A of labels to vertices is said to satisfy an edge e = (w, v) if πv←w(A(w)) = A(v). The objective is to find an assignment A of labels that satisfies the maximum number of edges.
As is customary in hardness of approximation, one defines a gap version of the Unique Games problem as follows:

Problem 2.7 (Unique Games (R, 1 − ε, η)). Given a Unique Games instance Υ = (V, E, Π = {πv←w : [R] → [R] | e = (w, v) ∈ E}, [R]) with R labels, distinguish between the following two cases:
– ((1 − ε)-satisfiable instances) There exists an assignment A of labels that satisfies at least a 1 − ε fraction of edges.
– (Instances that are not η-satisfiable) No assignment satisfies more than an η fraction of the edges E.

The Unique Games Conjecture asserts that the above decision problem is NP-hard when the number of labels is large enough. Formally:

Conjecture 2.8 (Unique Games Conjecture [Kho02]). For all constants ε, η > 0, there exists a large enough constant R such that Unique Games (R, 1 − ε, η) is NP-hard.
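Looking back at Definition 2.5, the α-partial value of a very small game can be computed by brute force over all partial assignments (exponential time; our illustration, with None playing the role of ⊥). The toy game below shows how refusing can help: one of its two constraints is unsatisfiable, so the (total) value is 1/2, but a 1/2-partial strategy that refuses on the bad vertex pair achieves value 1.

```python
from itertools import product

def partial_value(nA, nB, edges, Sigma, pred, alpha):
    """alpha-partial value: max over partial assignments (None = refuse)
    that answer on >= alpha fraction of each side, of
    Pr[answers satisfy pred | at least one prover answers]."""
    labels = list(Sigma) + [None]
    best = 0.0
    for A in product(labels, repeat=nA):
        if sum(a is not None for a in A) < alpha * nA:
            continue
        for B in product(labels, repeat=nB):
            if sum(b is not None for b in B) < alpha * nB:
                continue
            active = [(u, v) for (u, v) in edges
                      if A[u] is not None or B[v] is not None]
            if not active:
                continue
            wins = sum(1 for (u, v) in active
                       if A[u] is not None and B[v] is not None
                       and pred(u, v, A[u], B[v]))
            best = max(best, wins / len(active))
    return best

# Edge (0, 0) wants equal labels; edge (1, 1) is unsatisfiable.
def pred(u, v, a, b):
    return u == 0 and a == b

print(partial_value(2, 2, [(0, 0), (1, 1)], {0, 1}, pred, 0.5))  # 1.0
```
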
2.1 Techniques and Proof Overview
The reduction from Gap-Small-Set Expansion to Unique Games and its analysis are surprisingly simple. For conceptual clarity, we subdivide the reduction into two parts:

From Gap-Small-Set Expansion to Partial Unique Games. First we reduce the Gap-Small-Set Expansion problem to a 2-prover partial unique game. Given a 2-prover game Γ, a partial game is one in which the provers are permitted to refuse to answer the question. The provers win only if both of them refuse to answer the question, or the answers given by them are correct as per the game Γ. To avoid the trivial strategy that always refuses to answer questions, we require that the provers answer at least an α fraction of the questions.

Let G = (V, E) be an instance of the Gap-Small-Set Expansion (η, δ) problem. For the sake of simplicity, let G be a d-regular unweighted graph. We define the following unique game:
– Fix R = ⌈1/δ⌉. The referee/verifier picks R edges M = {(u1, w1), . . . , (uR, wR)} uniformly at random from G.
– The referee sends a random permutation of the tuple U = (u1, . . . , uR) to the first prover, and a random permutation of the tuple W = (w1, . . . , wR) to the second prover.
– The provers are to pick one of the vertices of the tuple. More precisely, the provers are to pick an index of a vertex in the tuple given to them.
– The provers win if they pick two vertices (ui, wi) corresponding to some edge in the set M.

Let us suppose there exists a set S of δn vertices such that Φ(S) ≤ ε. In this case, the provers can use the following simple strategy: if exactly one vertex from S appears in the tuple, return that vertex, else refuse to answer the question. The set S is of size δn, and a question U ∈ V^R has roughly 1/δ vertices. Therefore, the above strategy is a valid partial strategy in that, with constant probability (at least 0.01),
exactly one of the vertices from S appears in the question U, and the provers answer the question. Observe that if the set S has small conductance Φ(S) ≤ ε, a random edge incident on S is with high probability completely contained within S. Thus if ui ∈ S, then with very high probability its neighbour wi also belongs to S. This implies that whenever the first prover decides to answer the vertex ui, the second prover also answers wi with very high probability. Thus a small non-expanding set S translates into a strategy for the unique game.

Let A, B : V^R → [R] be strategies of the two provers that succeed with probability at least 1/2. For a vertex sequence U ∈ V^(R−1), let U +_i x ∈ V^R denote the vertex sequence obtained by inserting x at index i. For every vertex sequence U ∈ V^(R−1) and an index i, the strategy A defines a subset of vertices as follows:

    A_U(x) = 1 if A(U +_i x) = i, and A_U(x) = 0 otherwise.

Specifically, A_U is the indicator function of the set of vertices that the strategy decides to pick when inserted at the i-th location in U. For the strategy proposed in the completeness case, for every setting of U, the set A_U is either the non-expanding set S or the empty set. Extrapolating from here, it is natural to look for non-expanding sets by picking a random U ∈ V^(R−1) and index i, and checking the set A_U. This is the basic intuition behind the soundness analysis presented in Section 3.2.

Over a random choice of U ∈ V^(R−1), the expected size of A_U is indeed roughly Θ(1/R) = Θ(δ). This is easy to see, since the provers pick exactly one of the R vertices given to them. However, it could be possible that with very high probability over the choice of U, the set A_U is either too large or too small. To eliminate this possibility, we modify the game by introducing random noise into the questions. More specifically, the referee adds εR random vertices independently to the tuples U and W.
It is easy to check that in the completeness case, the success probability of the strategy changes only by O(ε) due to the random noise. Observe that for every fixing of R − 1 vertices in the tuple U, the remaining εR + 1 vertices in the tuple are identically distributed. In other words, even for a fixed U, the provers pick one vertex out of the remaining εR + 1 vertices in the question. This can be used to show that the sizes of the sets A_U are bounded by 1/(εR). On the downside, the introduction of random noise requires us to deal with fuzzy sets a_U : V → [0, 1].

In Lemma 3.2, we abstract the properties of the distribution over sets ({A_U} for random U) that are sufficient to extract a small non-expanding set. In Lemma 3.4, we show how to extract a non-expanding set from functions that satisfy these properties. The details of the reduction and the proof of the following theorem are outlined in Section 3.

Theorem 2.9. For every δ, ε, η > 0, let r = ⌈1/δ⌉ and R = r/(1 + ε). Given a graph G = (V, E), the Unique Games instance Υ_{R,ε} generated by the reduction is such that:
– (Completeness) If Φ(S) ≤ η for some S with µ(S) = δ, then there exists a 1/4-partial strategy of value at least 1 − ε − η.
– (Soundness close to 1) If there exists an α-partial strategy with value 1 − η for the Unique Games instance Υ, then there exists a set S ⊆ V with Φ(S) ≤ 5η and µ(S) ∈ [αεδ/4, 8δ/ε].
– (Soundness close to 0) If there exists an α-partial strategy with value η for the Unique Games instance Υ, then there exists a set S ⊆ V with Φ(S) ≤ 1 − η/8 and µ(S) ∈ [αεδ/4, 8δ/ε].
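The completeness strategy above (answer the index of the unique S-vertex, refuse otherwise) is easy to simulate. In the sketch below (our illustration; the noise vertices and the permutation step are omitted for simplicity), the set S has Φ(S) = 0 because no edge leaves it, so whenever the provers answer at all, they answer matching indices.

```python
import random

def prover_answer(S, question):
    """Answer the index of the unique vertex of the tuple lying in S;
    refuse (None) otherwise."""
    hits = [i for i, u in enumerate(question) if u in S]
    return hits[0] if len(hits) == 1 else None

# Two disjoint 6-cycles; S = the first cycle, so no edge leaves S.
n = 6
edges = [((i, c), ((i + 1) % n, c)) for c in (0, 1) for i in range(n)]
S = {(i, 0) for i in range(n)}

rng = random.Random(0)
R, disagreements = 4, 0
for _ in range(1000):
    M = [rng.choice(edges) for _ in range(R)]
    a = prover_answer(S, [u for (u, w) in M])
    b = prover_answer(S, [w for (u, w) in M])
    if (a is not None or b is not None) and a != b:
        disagreements += 1
print(disagreements)  # 0: since Phi(S) = 0, answered rounds always agree
```
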
From Partial Unique Games to Unique Games. We show a general reduction from a partial 2-prover game Γ to a corresponding 2-prover game Γ′. Furthermore, this reduction preserves the uniqueness of the game. The formal statement of the result that we show in Section 4 is as follows:

Theorem 2.10. For all positive integers c, given a 2-prover game Γ with n vertices, there is a reduction to another 2-prover game Γ′ running in time n^O(c) such that:
– (Completeness) If the α-partial value of Γ is at least 1 − ε, then the value of Γ′ is at least 1 − ε − e^(−α·c).
– (Soundness close to 1) If the value of Γ′ is at least 1 − η, then the 1/2c-partial value of Γ is at least 1 − 4η.
– (Soundness close to 0) If the value of Γ′ is at least η, then the 1/2c-partial value of Γ is at least η/4.
Furthermore, the reduction preserves the uniqueness property of the games.

Recall that in the partial game Γ, the provers are allowed to refuse to answer questions. However, in the game Γ′ no such choice is available. To achieve this, the referee gives multiple questions from Γ simultaneously, and the provers answer one question of their choice. Specifically, a question in Γ′ consists of a sequence of c questions from the game Γ. The provers are required to choose one of the c questions, and return an answer to it. The provers win only if they pick the corresponding pair of questions (say the i-th question), and also answer the question correctly as per the game Γ.

In the completeness case, suppose S is an α-partial strategy for Γ. The two provers can use the following strategy for Γ′: answer the first question in the sequence for which S does not refuse but outputs a valid answer. If the strategy S refuses to answer all c questions in the sequence, the provers answer arbitrarily. For a sufficiently large choice of c, with very high probability there will be one question in the sequence that S does not refuse to answer.
Hence, for a sufficiently large c, the resulting strategy S′ achieves a value close to that achieved by S. In the soundness analysis, starting from a good strategy for Γ′, we construct a partial strategy for Γ. To this end, given a question for Γ, the provers use shared randomness to sample the remaining c − 1 questions, embedding the original question at a random index. If the strategy for Γ′ returns an answer to the embedded question, then the provers return it as their answer for Γ; otherwise they return ⊥. Eliminating the shared randomness is slightly more involved, since the definition of partial strategies requires controlling two quantities simultaneously: the number of questions answered and the fraction of these questions answered correctly.
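The shared-randomness embedding described above can be sketched in code. The following is a minimal illustration, not the paper's construction verbatim: `total_strategy`, `question_pool`, and the tuple encoding of bundled questions are all assumptions made for the sketch, and ⊥ is modeled as `None`.

```python
import random

def partial_from_total(total_strategy, question_pool, c, rng):
    """Soundness-direction sketch: derive a partial strategy for the game
    from a strategy for the bundled game. `total_strategy` maps a c-tuple
    of questions to a pair (chosen index, answer)."""
    def partial(q):
        i = rng.randrange(c)                              # random embedding index
        others = [rng.choice(question_pool) for _ in range(c - 1)]
        bundle = tuple(others[:i]) + (q,) + tuple(others[i:])
        j, answer = total_strategy(bundle)
        return answer if j == i else None                 # refuse unless our question was chosen
    return partial
```

If the total strategy always answers some coordinate, the derived partial strategy answers a 1/c fraction of questions in expectation, matching the counting step behind the 1/2c-partial value in Theorem 2.10.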
3
Reduction from Graph Expansion to Partial Unique Games
In this section, we present the reduction from the Gap-Small-Set Expansion problem to Partial Unique Games. Let G = (V, E) be an instance of the Gap-Small-Set Expansion problem. Let R ∈ ℕ and r = ⌈(1 + ε)R⌉. For a permutation π : [R] → [R] and a sequence A ∈ V^R, we write A′ = π.A to denote the permutation of A according to π, i.e., A′_{π(i)} = A_i for all i ∈ [R]. We define a Unique Games instance Υ = Υ_{R,ε}(G) with vertex set V^r and alphabet Σ = [r]. The edge constraints in Υ are given by the tests performed by the following verifier.
SSE-to-UG Reduction

Let A : V^r → Σ and B : V^r → Σ denote the strategies of the two provers on the unique game Υ. The value of the assignment is the success probability of the following test:
1. Sample R random edges {e_i = (u_i, w_i) | i ∈ [R]} independently from E, and let U = (u_1, …, u_R), W = (w_1, …, w_R) and M := (e_1, …, e_R).
2. (Noise Vertices) Sample t = ⌈εR⌉ pairs {(u_ℓ, w_ℓ) | R + 1 ≤ ℓ ≤ R + t} independently at random from V × V. Let Ũ = (u_1, …, u_{R+t}) and W̃ = (w_1, …, w_{R+t}). Let us denote U* = (u_{R+1}, …, u_{R+t}) and W* = (w_{R+1}, …, w_{R+t}).
3. Sample two permutations π_1, π_2 : [r] → [r].
4. Output Success if

    π_1^{−1}(A(π_1.Ũ)) = π_2^{−1}(B(π_2.W̃)) .    (3.1)
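One round of the verifier above can be simulated directly. The following sketch uses toy stand-ins (a self-contained graph encoding and argmax-style prover strategies in the test); only the sampling pattern and the consistency check mirror the reduction.

```python
import math
import random

def apply_perm(X, pi):
    """Return pi.X, where (pi.X)[pi[i]] = X[i]."""
    out = [None] * len(X)
    for i, x in enumerate(X):
        out[pi[i]] = x
    return out

def verifier_round(edges, vertices, R, eps, A, B, rng):
    """One round of the SSE-to-UG test: R random edges, t = ceil(eps*R)
    noise pairs, two random permutations, and the consistency check
    pi1^{-1}(A(pi1.U)) == pi2^{-1}(B(pi2.W))."""
    es = [rng.choice(edges) for _ in range(R)]
    U = [u for (u, _) in es]
    W = [w for (_, w) in es]
    t = math.ceil(eps * R)
    for _ in range(t):                        # independent noise vertices
        U.append(rng.choice(vertices))
        W.append(rng.choice(vertices))
    r = R + t
    pi1 = list(range(r)); rng.shuffle(pi1)
    pi2 = list(range(r)); rng.shuffle(pi2)
    a = A(apply_perm(U, pi1))                 # a coordinate in range(r)
    b = B(apply_perm(W, pi2))
    return pi1.index(a) == pi2.index(b)       # pi^{-1} via list lookup
```

For strategies that point at a distinguished coordinate shared by both tuples, the acceptance probability degrades only by the noise coordinates, in line with the completeness discussion.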
In the following sections, we prove Theorem 2.9 about this reduction. The proof is an immediate consequence of Lemma 3.1, Lemma 3.5 and Lemma 3.6.
3.1
Completeness
Lemma 3.1. For every set of vertices S ⊆ V with µ(S) = δ and Φ(S) ≤ η, there exists an α-partial strategy A, B : V^r → Σ ∪ {⊥} with value at least 1 − 2rδ(ε + η), for α = (1 − rδ)rδ.

Proof. We may assume r < 1/δ, because the lemma is trivial otherwise. Consider the following partial strategy A : V^r → [r] ∪ {⊥} for Υ:

    A(x_1, …, x_r) = i if {i} = { j ∈ [r] | x_j ∈ S } , and A(x_1, …, x_r) = ⊥ otherwise.

The strategy of the other prover is identical, i.e., B(x_1, …, x_r) = A(x_1, …, x_r). First, let us verify that A, B form an α-partial strategy for α = (1 − rδ)rδ. This is easy to see, since

    Pr_{U ∼ V^r}[ A(U) ≠ ⊥ ] = r(1 − δ)^{r−1}δ ≥ (1 − rδ)rδ .

The value of the partial strategy is given by:

    Pr_{Ũ, W̃, π_1, π_2}[ π_1^{−1}(A(π_1.Ũ)) = π_2^{−1}(B(π_2.W̃)) | A(π_1.Ũ) ∈ [r] ∨ B(π_2.W̃) ∈ [r] ] .    (3.2)

Notice that the strategies A and B are equivariant under permutation of coordinates, i.e., A(π.X) = π(A(X)) for all permutations π and all X ∈ V^r with A(X) ≠ ⊥. Hence, the value of the partial strategy can be rewritten as:

    Pr_{Ũ, W̃}[ A(Ũ) = B(W̃) | A(Ũ) ∈ [r] ∨ B(W̃) ∈ [r] ] .    (3.3)

Let us suppose A(Ũ) ≠ ⊥. In this case, there exists exactly one vertex u_i in Ũ that belongs to the set S. Further, let us suppose that none of the noise vertices belongs to S, i.e., U* ∩ S = W* ∩ S = ∅. This implies that the vertex u_i is a non-noise vertex, u_i ∈ U. Furthermore, we will have W̃ ∩ S = {w_i} unless one of the edges in M crosses the cut (S, V \ S). Hence, we will have A(Ũ) = B(W̃) if U* ∩ S = W* ∩ S = ∅ and M ∩ E[S, V \ S] = ∅.

Define the event E to be:

    E = (U* ∩ S = ∅) ∧ (W* ∩ S = ∅) ∧ (M ∩ E[S, V \ S] = ∅) .

The probability that U* ∩ S ≠ ∅ or W* ∩ S ≠ ∅ is less than 2tδ. Since µ(S) = δ and Φ(S) ≤ η, the probability that a random edge e ∈ E belongs to the cut E[S, V \ S] is at most ηδ. Therefore, the probability that at least one of the R edges in M belongs to E[S, V \ S] is at most Rηδ. Summing up, we have Pr[E] ≥ 1 − 2tδ − Rηδ. Consequently, we can write

    Pr_{Ũ, W̃}[ A(Ũ) = B(W̃) | A(Ũ) ∈ [r] ∨ B(W̃) ∈ [r] ] ≥ Pr_{Ũ, W̃}[ A(Ũ) = B(W̃) | A(Ũ) ∈ [r] ∨ B(W̃) ∈ [r], E ] − Pr[¬E]
        ≥ 1 − 2tδ − Rηδ = 1 − 2εRδ − Rηδ ≥ 1 − 2rδ(ε + η) .    (3.4)
3.2
Soundness
Let A, B : V^r → [r] ∪ {⊥} be a partial strategy for the unique game Υ. For U ∈ V^{r−1} and x ∈ V, we let a(U, x) ∈ [0, 1] denote the probability that A selects the coordinate of x after we place it at a random position of U and permute the sequence randomly, i.e.,

    a(U, x) := Pr_{i ∈ [r], π}[ A(π.(U +_i x)) = π(i) ] .    (3.5)

Here, U +_i x denotes the vertex sequence in V^r obtained from U by inserting x as the i-th coordinate (the original coordinates i, …, r − 1 of U are moved by one to the right). Note that the above probability does not change if we instead place x at the end of U, since π is a random permutation. Similarly, define b(W, x) starting from B for W ∈ V^{r−1}.

Let M = (e_1, …, e_{R−1}) ∈ E^{R−1} be a tuple of R − 1 edges. Furthermore, let U = (u_1, …, u_{R−1}) and W = (w_1, …, w_{R−1}), where u_i, w_i are the endpoints of the edge e_i. For M ∈ E^{R−1}, we define a function a_M : V → [0, 1] as follows:

    a_M(x) := E_{U* ∼ V^t}[ a(U + U*, x) ] .    (3.6)
Here U + U* denotes the concatenation of the tuples U and U*. Along similar lines, let us define the function b_M : V → [0, 1]. Intuitively, the functions a_M and b_M are to be thought of as indicator functions of subsets. In fact, it is easy to verify that for the strategy exhibited in the completeness argument, the functions a_M, b_M are exactly the indicator functions of the non-expanding set S. In the rest of the section, we will use the functions a_M, b_M to reconstruct a non-expanding set in the graph. To this end, we begin by showing some properties of the functions a_M, b_M.

Lemma 3.2. If the strategy A, B is an α-partial strategy with value 1 − η for the game Υ, then:
– The typical L1 norm of a_M, b_M over a random choice of M ∈ E^{R−1} is at least α/r:

    E_{M ∼ E^{R−1}}[ ‖a_M‖_1 ] , E_{M ∼ E^{R−1}}[ ‖b_M‖_1 ] ≥ α/r .

– For every M ∈ E^{R−1}, the L1 norms of a_M, b_M satisfy ‖a_M‖_1, ‖b_M‖_1 ≤ 1/εR.
– The functions a_M, b_M are correlated on the edges of G for a random choice of M. Formally, if G denotes the normalized adjacency matrix of the graph G, then the functions a_M, b_M satisfy

    E_{M ∼ E^{R−1}}[ ⟨a_M, G b_M⟩ ] ≥ (1 + ε)Υ(A, B) · max( E_{M ∼ E^{R−1}}[‖a_M‖_1], E_{M ∼ E^{R−1}}[‖b_M‖_1] ) − 1/εRr .    (3.7)

Here Υ(A, B) denotes the value of the partial strategy A, B.

Proof. Item 1: The typical L1 norm evaluates to
    E_{M ∼ E^{R−1}}[ ‖a_M‖_1 ] = Pr_{M ∈ E^{R−1}, U* ∈ V^t, x ∈ V, π, i ∈ [r]}[ A(π.(U + U* +_i x)) = π(i) ]
        = Pr_{Ũ ∼ V^r, π, i ∈ [r]}[ A(π.Ũ) = π(i) ] = (1/r) · Pr_{Ũ ∼ V^r}[ A(Ũ) ≠ ⊥ ] .

The second step uses that the joint distribution of U + U* +_i x is the same as Ũ ∼ V^r. The last step uses that the distribution of π(i) is uniformly random over [r], even for a fixed Ũ and π. The conclusion follows by observing that, since A, B is an α-partial strategy, Pr[A(Ũ) ≠ ⊥] is at least α. The same argument also applies to the functions b_M.

Item 2: For fixed M ∈ E^{R−1}, the L1 norm of a_M evaluates to

    ‖a_M‖_1 = Pr_{U* ∈ V^t, x ∈ V, π, i ∈ {R, …, R+t}}[ A(π.(U + U* +_i x)) = π(i) ]
        = Pr_{Z ∈ V^{t+1}, π, i ∈ {R, …, R+t}}[ A(π.(U + Z)) = π(i) ]
        = (1/(t+1)) · Pr_{Z ∈ V^{t+1}, π}[ π^{−1}(A(π.(U + Z))) ∈ {R, …, R+t} ]
        ≤ 1/(t+1) ≤ 1/εR .
In contrast to the proof of Item 1, here we inserted x in a random coordinate among {R, …, R+t} (as opposed to a completely random coordinate). The experiment as a whole does not change, because π is a random permutation. The second step uses the fact that (i, U + U* +_i x) has the same distribution as (i, U + Z). Again, an identical proof shows a similar bound for the functions b_M.

Item 3: Let us denote by E_success the event that the provers answer the question correctly. Since A, B is an α-partial strategy with value 1 − η, the probability of E_success is at least α(1 − η). Rewriting the success probability of the partial strategy A, B, we get

    Pr[E_success] = Σ_{i ∈ [r]} Pr_{Ũ, W̃, π_1, π_2}[ A(π_1.Ũ) = π_1(i) ∧ B(π_2.W̃) = π_2(i) ]
        = Σ_{i ∈ [r]} E_{U, W} Pr_{U*, W*, π_1, π_2}[ A(π_1.(U + U*)) = π_1(i) ∧ B(π_2.(W + W*)) = π_2(i) ] .

For each fixing of U, W, the noise vertices U*, W* and the permutations π_1, π_2 are independent of each other. Hence, we can rewrite the probability of success as

    Pr[E_success] = Σ_{i ∈ [r]} E_{U, W}[ Pr_{U*, π_1}[ A(π_1.(U + U*)) = π_1(i) ] · Pr_{W*, π_2}[ B(π_2.(W + W*)) = π_2(i) ] ] .    (3.8)

Observe that for U ∈ V^R and any i, j ∈ {R+1, …, R+t}, it holds that

    Pr_{U*, π_1}[ A(π_1.(U + U*)) = π_1(i) ] = Pr_{U*, π_1}[ A(π_1.(U + U*)) = π_1(j) ] .

Here, we used the fact that all the coordinates of U* are identically distributed (even conditioned on U). It follows that for every i ∈ {R+1, …, R+t} and for every choice of U,

    Pr_{U*, π_1}[ A(π_1.(U + U*)) = π_1(i) ] ≤ 1/t .    (3.9)

A similar inequality holds for B, with the probability over the choice of W* and π_2.

Next consider i ∈ [R]. For U = (u_1, …, u_R) ∈ V^R, let U^{−i} ∈ V^{R−1} denote the (R−1)-tuple obtained from U by removing the i-th coordinate u_i. Let M^{−i} ∈ E^{R−1} denote the edge tuple obtained by deleting e_i from M. Then,

    Pr_{U*, π_1}[ A(π_1.(U^{−i} + U* +_i u_i)) = π_1(i) ] = E_{U*} Pr_{π_1}[ A(π_1.(U^{−i} + U* +_i u_i)) = π_1(i) ] = a_{M^{−i}}(u_i) .    (3.10)

Rewriting the expression for the probability of success in (3.8) using equations (3.9) and (3.10), we get

    Pr[E_success] ≤ E_{M ∼ E^R}[ Σ_{i ∈ [R]} a_{M^{−i}}(u_i) b_{M^{−i}}(w_i) ] + t · (1/t) · (1/t)
        ≤ Σ_{i ∈ [R]} E_{M′ ∼ E^{R−1}, uw ∼ E}[ a_{M′}(u) b_{M′}(w) ] + 1/t .

If G denotes the stochastic adjacency matrix of the graph G, then the above expression can be rewritten as:

    Pr[E_success] ≤ R · E_{M′ ∈ E^{R−1}}[ ⟨a_{M′}, G b_{M′}⟩ ] + 1/εR .

Recall that, since the value of the partial strategy A, B is Υ(A, B),

    Pr[E_success] ≥ Υ(A, B) · Pr_{U ∼ V^r}[ A(U) ≠ ⊥ ] = Υ(A, B) · r · E_{M′ ∈ E^{R−1}}[ ‖a_{M′}‖_1 ] .

Comparing the two inequalities above, we get

    E_{M′ ∈ E^{R−1}}[ ⟨a_{M′}, G b_{M′}⟩ ] ≥ (1 + ε)Υ(A, B) · E_{M′ ∈ E^{R−1}}[ ‖a_{M′}‖_1 ] − 1/εRr .

A syntactically identical proof yields the analogous inequality for the functions b_{M′}.
The properties shown in Lemma 3.2 hold for the average of the functions a_M, b_M over all choices of M. Now, we extract a single pair of functions a_M, b_M that still satisfies some of these properties.

Lemma 3.3. Let A, B be an α-partial strategy with value Υ(A, B) for the game Υ. Then for every β > 0, there exists a choice of M ∈ E^{R−1} such that the functions a_M, b_M satisfy

    ⟨a_M, G b_M⟩ ≥ ( (1 + ε)Υ(A, B) − 1/αεR − β ) · ‖a_M + b_M‖_1/2 ,
    αβ/r ≤ ‖a_M + b_M‖_1/2 ≤ 1/εR .

Proof. For M ∈ E^{R−1}, let us denote θ_M = ‖a_M + b_M‖_1/2. As G denotes the normalized adjacency matrix and a_M, b_M are functions bounded in [0, 1], for every M we have ⟨a_M, G b_M⟩ ≤ ‖a_M‖_1 ≤ θ_M. We will lower bound the expected value of ⟨a_M, G b_M⟩ restricted to the event θ_M ≥ αβ/r:

    E_{M ∈ E^{R−1}}[ ⟨a_M, G b_M⟩ · 1_{θ_M ≥ αβ/r} ] = E_{M}[ ⟨a_M, G b_M⟩ ] − E_{M}[ ⟨a_M, G b_M⟩ · 1_{θ_M < αβ/r} ]
        ≥ E_{M}[ ⟨a_M, G b_M⟩ ] − αβ/r
        ≥ ( (1 + ε)Υ(A, B) − 1/αεR − β ) · E_{M}[θ_M] .

In the last step, we used the lower bound on E_M[⟨a_M, G b_M⟩] obtained in Item 3 of Lemma 3.2 and the fact that E_M[θ_M] ≥ α/r (Item 1 of Lemma 3.2). Hence, there exists a choice of M ∈ E^{R−1} such that

    ⟨a_M, G b_M⟩ · 1_{θ_M ≥ αβ/r} ≥ ( (1 + ε)Υ(A, B) − 1/αεR − β ) · θ_M .

This pair of functions a_M, b_M satisfies both ‖a_M + b_M‖_1/2 ≥ αβ/r and ⟨a_M, G b_M⟩ ≥ ((1 + ε)Υ(A, B) − 1/αεR − β) · ‖a_M + b_M‖_1/2. Furthermore, by Item 2 of Lemma 3.2, we have ‖a_M + b_M‖_1/2 ≤ 1/εR.

Lemma 3.4. Given two functions a, b : V → [0, 1] on a regular graph G = (V, E), there exists a set S ⊆ V such that

    max(‖a‖_1, ‖b‖_1) − 4/√n ≤ µ(S) ≤ ‖a‖_1 + ‖b‖_1 + 4/√n ,
    Φ(S) ≤ min( 1 − ⟨a, Gb⟩/(‖a‖_1 + ‖b‖_1) , 2 − 4⟨a, Gb⟩/(‖a‖_1 + ‖b‖_1) ) + O(1/√n) .

Proof. Sample a set of vertices A ⊆ V by including each vertex x in A independently with probability a(x). The expected volume of A is exactly E[µ(A)] = ‖a‖_1, and the variance of µ(A) satisfies Var(µ(A)) ≤ 1/n. Similarly, sample a set of vertices B ⊆ V such that E[µ(B)] = ‖b‖_1, and the variance of µ(B) is at most 1/n. Observe that the expected fraction of edges from A to B is given by E[|E(A, B)|/|E|] = ⟨a, Gb⟩. Furthermore, since G is a regular graph, it is easy to see that the variance of |E(A, B)|/|E| is at most 1/n. For succinctness, let us denote µ(A, B) = |E(A, B)|/|E|.

With probability at least 3/4, we have |µ(A) − ‖a‖_1| ≤ 2/√n. Similarly, with probability at least 3/4, |µ(B) − ‖b‖_1| ≤ 2/√n, and with probability at least 3/4, |µ(A, B) − ⟨a, Gb⟩| ≤ 2/√n. By a union bound, there exist sets A, B for which all three conditions hold.
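The sampling step just described can be run directly. The following is a minimal sketch under illustrative assumptions: the graph is given as an adjacency-list dictionary `adj` of a regular graph, and the fuzzy sets are dictionaries of inclusion probabilities.

```python
import random

def round_fuzzy_sets(a, b, adj, rng):
    """Lemma 3.4-style rounding sketch: include vertex x in A with
    probability a[x] and in B with probability b[x]; return S = A ∪ B
    together with its conductance Φ(S) on the regular graph `adj`."""
    A = {x for x in adj if rng.random() < a[x]}
    B = {x for x in adj if rng.random() < b[x]}
    S = A | B
    if not S:
        return S, None
    leaving = sum(1 for u in S for v in adj[u] if v not in S)  # edges escaping S
    volume = sum(len(adj[u]) for u in S)                       # degree mass of S
    return S, leaving / volume                                 # Φ(S) for a regular graph
```

When a and b are genuine 0/1 indicators, the rounding is deterministic and returns exactly the indicated set, matching the intuition that a_M, b_M behave like indicator functions.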
Consider the set S = A ∪ B. Starting at a vertex u ∈ A and picking a random neighbour v, we have

    Pr_{uv ∼ E}[ v ∉ A ∪ B, u ∈ A ] ≤ µ(A) − µ(A, B) .

A similar bound holds for the set B. Hence, the expansion of the set A ∪ B can be upper bounded by

    Φ(A ∪ B) = Pr_{uv ∼ E}[ v ∉ A ∪ B | u ∈ A ∪ B ]
        ≤ ( Pr_{uv ∼ E}[v ∉ A ∪ B, u ∈ A] + Pr_{uv ∼ E}[v ∉ A ∪ B, u ∈ B] ) / Pr_{uv ∼ E}[u ∈ A ∪ B]
        ≤ ( µ(A) − µ(A, B) + µ(B) − µ(A, B) ) / ( (µ(A) + µ(B))/2 )
        ≤ 2 − 4µ(A, B)/(µ(A) + µ(B))
        ≤ 2 − 4⟨a, Gb⟩/(‖a‖_1 + ‖b‖_1) + O(1/√n) .

The second step in the above calculation is a simple union bound, while in the third step we use the fact that µ(A ∪ B) ≥ (µ(A) + µ(B))/2. To show the other bound on the expansion, observe that

    Pr_{uv ∼ E}[ v ∉ A ∪ B, u ∈ A ∪ B ] ≤ Pr_{uv ∼ E}[ u ∈ A ∪ B ] − µ(A, B) = µ(A ∪ B) − µ(A, B) .

Using this, we can bound the expansion of A ∪ B as

    Φ(A ∪ B) = Pr_{uv ∼ E}[ v ∉ A ∪ B | u ∈ A ∪ B ] ≤ 1 − µ(A, B)/µ(A ∪ B) ≤ 1 − ⟨a, Gb⟩/(‖a‖_1 + ‖b‖_1) + O(1/√n) .

In the final step of the calculation, we use the fact that µ(A ∪ B) ≤ µ(A) + µ(B). Finally, the bound on the size of the set S follows by observing that max(µ(A), µ(B)) ≤ µ(A ∪ B) ≤ µ(A) + µ(B).

Lemma 3.5 (Soundness close to 0). Suppose there exists an α-partial strategy for Υ with value η. Then there exists a set S ⊆ V with

    Φ(S) ≤ 1 − η/8 , and αη/4R ≤ µ(S) ≤ 8/εR ,    (3.11)

whenever R ≥ 4/αεη and n is sufficiently large.
Proof. Let A, B be an α-partial strategy for Υ. Applying Lemma 3.3 with β = η/4, we obtain functions a, b : V → [0, 1] with the properties stated in Lemma 3.3. Then, the functions a, b are used to construct a non-expanding set S of volume at most ‖a‖_1 + ‖b‖_1 + O(1/√n), as shown in Lemma 3.4.

By Lemma 3.4, the expansion of the set S is bounded by

    Φ(S) ≤ 1 − ( (1 + ε)Υ(A, B) − 1/αεR − β )/2 + O(1/√n) ≤ 1 − η/4 + O(1/√n) ≤ 1 − η/8 ,

for R ≥ 4/αεη and n ≥ 64/η². The volume of the set is bounded by

    αη/4R − O(1/√n) ≤ µ(S) ≤ 2/εR + O(1/√n) .

Lemma 3.6 (Soundness close to 1). Suppose there exists an α-partial strategy for Υ with value 1 − η. Then there exists a set S ⊆ V with

    Φ(S) ≤ 5η , and αε/4R ≤ µ(S) ≤ 8/εR ,    (3.12)

whenever R ≥ 4/αεη and n is sufficiently large.

Proof. Let A, B be an α-partial strategy for Υ. Applying Lemma 3.3 with β = ε, we obtain functions a, b : V → [0, 1] with the properties stated in Lemma 3.3. Then, the functions a, b are used to construct a non-expanding set S of volume at most ‖a‖_1 + ‖b‖_1 + O(1/√n), as shown in Lemma 3.4. By Lemma 3.4, the expansion of the set S is bounded by

    Φ(S) ≤ 2 − 2( (1 + ε)Υ(A, B) − 1/αεR − β ) + O(1/√n) ≤ 2( η − ε + ηε + 1/αεR + β ) + O(1/√n) ,

which is at most 5η for R ≥ 1/αεη and n ≥ 1/η². The volume of the set is bounded by

    αε/4R − O(1/√n) ≤ µ(S) ≤ 2/εR + O(1/√n) ,

which gives the required bound for sufficiently large n.
4
Partial 2-Prover Games
In this section, we present a value-preserving reduction from partial 2-prover games to ordinary (total) 2-prover games. We restate Theorem 2.10 for the reader's convenience:

Theorem (Restatement of Theorem 2.10). For every positive integer c, given a 2-prover game Γ with n vertices, there is a reduction, running in time n^{O(c)}, to another 2-prover game Γ′ such that:
– (Completeness) If the α-partial value of Γ is at least 1 − ε, then the value of Γ′ is at least 1 − ε − e^{−α·c}.
– (Soundness close to 1) If the value of Γ′ is at least 1 − η, then the 1/2c-partial value of Γ is at least 1 − 4η.
– (Soundness close to 0) If the value of Γ′ is at least η, then the 1/2c-partial value of Γ is at least η/4.
Furthermore, the reduction preserves the uniqueness property of the games.

Let Γ be a 2-prover game. For a parameter c ∈ ℕ, let Γ′ be the following 2-prover game:

Reduction from a partial 2-prover game Γ to a total 2-prover game Γ′
1. Sample c vertex pairs (u_1, v_1), …, (u_c, v_c) ∼ E.
2. Send the vertex sequence u_1, …, u_c to the first prover and the sequence v_1, …, v_c to the second prover.
3. Let (i, a) and (j, b) be their answers.
4. The provers win if i = j and (a, b) ∈ Π_{u_i, v_i}.

We observe that the reduction preserves the uniqueness property.

Observation 4.1. If Γ is a unique game, then Γ′ is a unique game as well.

The completeness and soundness properties of the reduction are proven in Lemma 4.2 and Lemma 4.4, respectively.

Lemma 4.2 (Completeness). If the α-partial value of Γ is at least 1 − ε, then the value of Γ′ is at least 1 − ε − e^{−α·c}.

Proof. Let A : V_A → Σ ∪ {⊥}, B : V_B → Σ ∪ {⊥} be an α-partial strategy of value 1 − ε. We consider the following strategy A′, B′ for Γ′:

    A′(u_1, …, u_c) := (i, a) if A(u_i) = a ∈ Σ and A(u_1) = … = A(u_{i−1}) = ⊥ ,
    A′(u_1, …, u_c) := (1, 1) if A(u_1) = … = A(u_c) = ⊥ .

In words, the prover returns the first answer in the list A(u_1), …, A(u_c), ignoring ⊥. If the partial strategy refuses to answer on all inputs u_1, …, u_c, then the prover returns an arbitrary answer. Similarly, define

    B′(v_1, …, v_c) := (j, b) if B(v_j) = b ∈ Σ and B(v_1) = … = B(v_{j−1}) = ⊥ ,
    B′(v_1, …, v_c) := (1, 1) if B(v_1) = … = B(v_c) = ⊥ .

Let (u_1, v_1), …, (u_c, v_c) be a sequence of c vertex pairs, independently drawn from E. The probability of the event A(u_1) = … = A(u_c) = ⊥ is at most (1 − α)^c ≤ e^{−αc}. Let us condition on the complementary event, i.e., the event that A(u_i) ≠ ⊥ for at least one coordinate i ∈ [c]. Let i_0 be the first coordinate such that A(u_{i_0}) ≠ ⊥ or B(v_{i_0}) ≠ ⊥.
The winning probability of the provers, conditioned on this event, is equal to

    Pr[ (A(u_{i_0}), B(v_{i_0})) ∈ Π_{u_{i_0}, v_{i_0}} ] = Pr_{uv ∼ E}[ (A(u), B(v)) ∈ Π_{u,v} | A(u) ∈ Σ ∨ B(v) ∈ Σ ] ≥ 1 − ε ,

since the value of the partial strategy A, B on Γ is 1 − ε. It follows that the value of the strategy A′, B′ for the game Γ′ is at least 1 − ε − e^{−αc}.
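The first-non-⊥ prover strategy from the proof can be written down directly. In this sketch, ⊥ is modeled as `None` and the fallback answer `(0, 0)` is an arbitrary placeholder, as in the proof.

```python
def total_from_partial(partial):
    """Completeness-direction sketch of Lemma 4.2: answer the first
    question in the bundle on which the partial strategy does not
    refuse; return an arbitrary fixed answer on total refusal."""
    def total(bundle):
        for i, u in enumerate(bundle):
            answer = partial(u)
            if answer is not None:
                return (i, answer)
        return (0, 0)                    # arbitrary answer when all c questions are refused
    return total
```

If the partial strategy answers each question independently with probability α, the fallback branch fires with probability at most (1 − α)^c ≤ e^{−αc}, which is exactly the loss in the completeness bound.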
The following lemma will be useful in the soundness analysis.

Lemma 4.3 (Glorified Markov Inequality). Let Ω be a probability space and let X, Y : Ω → ℝ₊ be two jointly distributed non-negative random variables over Ω.
1. Suppose E[X] ≤ γ E[Y]. Then there exists ω ∈ Ω such that X(ω) ≤ 2γY(ω) and Y(ω) ≥ E[Y]/2.
2. Suppose E[X] ≥ (1 − γ)E[Y] and X(ω) ≤ Y(ω) for all ω ∈ Ω. Then there exists ω ∈ Ω such that Y(ω) ≥ E[Y]/2 and X(ω) ≥ (1 − 2γ)Y(ω).

Proof. Proof of 1: We have

    γ E[Y] ≥ E[X] ≥ Pr[X > 2γY] · E[X | X > 2γY] ≥ 2γ Pr[X > 2γY] · E[Y | X > 2γY] .

It follows that E[Y · 1_{X ≤ 2γY}] ≥ E[Y]/2. Therefore, we can find ω ∈ Ω such that X(ω) ≤ 2γY(ω) and Y(ω) ≥ E[Y]/2, as desired.

Proof of 2: Set Z = Y − X. Apply the first part of the lemma to the non-negative random variables Z and Y.

Lemma 4.4 (Soundness).
– (Soundness close to 1) If the value of Γ′ is at least 1 − η, then the 1/2c-partial value of Γ is at least 1 − 4η.
– (Soundness close to 0) If the value of Γ′ is at least η, then the 1/2c-partial value of Γ is at least η/4.

Proof. Let A′ : V_A^c → [c] × Σ and B′ : V_B^c → [c] × Σ be a strategy for Γ′ of value β. We first construct a partial strategy for Γ that uses shared randomness. The idea is as follows: given a question uv ∈ E, the provers generate c − 1 additional questions using shared randomness, embed the given question at a random index from {1, …, c}, and execute the strategy A′, B′ for the game Γ′. Recall that in the game Γ′, the provers are supposed to choose one of the c questions they receive. Therefore, if the strategy A′ outputs an answer a to the question u, then A outputs a; else it outputs ⊥.

Formally, for i ∈ [c] and u ∈ V_A^{c−1}, we define a partial strategy A_{i,u} : V_A → Σ ∪ {⊥} as

    A_{i,u}(u) := a if A′(u +_i u) = (i, a) , and A_{i,u}(u) := ⊥ otherwise.

Here, u +_i u denotes the vertex sequence u′ ∈ V_A^c obtained from u by inserting u as the i-th coordinate (the original coordinates i, …, c − 1 of u are moved by one to the right). Similarly, define a partial strategy B_{i,v} : V_B → Σ ∪ {⊥} for every i ∈ [c] and v ∈ V_B^{c−1}. Note that u, v and i serve as the shared randomness of the two provers. It is clear that

    Pr_{i ∈ [c], u ∈ V_A^{c−1}, u ∼ V_A}[ A_{i,u}(u) ∈ Σ ] = 1/c ,  Pr_{i ∈ [c], v ∈ V_B^{c−1}, v ∼ V_B}[ B_{i,v}(v) ∈ Σ ] = 1/c .    (4.1)
Hence, in expectation over the shared randomness, each prover answers a 1/c fraction of the questions. Since the value of the strategy A′, B′ is β, we have

    β/c = Pr_{i ∈ [c], uv ∼ E^{c−1}, uv ∼ E}[ (A_{i,u}(u), B_{i,v}(v)) ∈ Π_{u,v} ] .

Again, the partial value of the strategy is β only in expectation over the shared randomness. Now we will eliminate the shared randomness from the partial strategies, to obtain one strategy for which both properties hold. To this end, we define two random variables:

    Vol(i, u, v) := Pr_{uv ∼ E}[ A_{i,u}(u) ∈ Σ ∨ B_{i,v}(v) ∈ Σ ] ,
    Val(i, u, v) := Pr_{uv ∼ E}[ (A_{i,u}(u), B_{i,v}(v)) ∈ Π_{u,v} ] .

The measure on (i, u, v) is as follows: we choose i ∈ [c] uniformly at random, and sample uv from E^{c−1}. It is clear that Val ≤ Vol (pointwise), and

    E[Vol] = E_{i ∈ [c], uv ∼ E^{c−1}}[ Pr_{uv ∼ E}[ A_{i,u}(u) ∈ Σ ∨ B_{i,v}(v) ∈ Σ ] ]
        = E_{i, uv}[ Pr_{u ∼ V_A}[A_{i,u}(u) ∈ Σ] + Pr_{v ∼ V_B}[B_{i,v}(v) ∈ Σ] − Pr_{uv ∼ E}[A_{i,u}(u) ∈ Σ ∧ B_{i,v}(v) ∈ Σ] ]
        = 2/c − E_{i, uv}[ Pr_{uv ∼ E}[A_{i,u}(u) ∈ Σ ∧ B_{i,v}(v) ∈ Σ] ]    (using (4.1))
        ≤ (2 − β)/c .    (4.2)

On the other hand, since the value of A′, B′ is β,

    E[Val] = Pr_{i ∈ [c], uv ∼ E^{c−1}, uv ∼ E}[ (A_{i,u}(u), B_{i,v}(v)) ∈ Π_{u,v} ] = β/c .    (4.3)

At this point, we use Lemma 4.3 (Glorified Markov Inequality) to finish the proof. We have two random variables Val and Vol with E[Val] ≥ β/(2 − β) · E[Vol]. Applying Lemma 4.3 to these random variables with β = 1 − η and with β = η yields the two parts of the lemma.

With β = 1 − η, we have E[Val] ≥ β/(2 − β) · E[Vol] ≥ (1 − 2η) E[Vol]. Since Val ≤ Vol pointwise, the second part of Lemma 4.3 yields a choice of i, u, v such that Vol(i, u, v) ≥ E[Vol]/2 ≥ 1/2c and Val(i, u, v) ≥ (1 − 4η)Vol(i, u, v). Fixing this i, u, v as the shared randomness yields a 1/2c-partial strategy with value at least 1 − 4η. The other case is shown similarly, using the first part of the Glorified Markov Inequality.
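Part 1 of Lemma 4.3 can be checked numerically on a finite sample space. The sketch below assumes the uniform measure on a list of (X, Y) outcomes; the data in the test is constructed so the lemma's hypothesis holds.

```python
import random

def good_outcome(samples, gamma):
    """Search for an outcome promised by Lemma 4.3(1) under the uniform
    measure on `samples`: X(w) <= 2*gamma*Y(w) and Y(w) >= E[Y]/2."""
    EX = sum(x for x, _ in samples) / len(samples)
    EY = sum(y for _, y in samples) / len(samples)
    assert EX <= gamma * EY, "hypothesis E[X] <= gamma*E[Y] violated"
    for x, y in samples:
        if x <= 2 * gamma * y and y >= EY / 2:
            return (x, y)
    return None
```

The search always succeeds on inputs satisfying the hypothesis, which is what the lemma guarantees; the test builds X pointwise below γ·Y so that the hypothesis is forced.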
5
Unique Games with Small Set Expansion
In this section, we present the proof of Theorem 1.6, showing that a certain stronger version of the Unique Games Conjecture yields a hardness result for the Small-Set Expansion problem. We begin by briefly reviewing some analytic notions useful for the proof.
5.1
Influences and Noise Stability
For the sake of convenience, we recall a few standard definitions that facilitate our analysis. Let Ω denote a finite probability space with q atoms. Let {χ_0 = 1, χ_1, χ_2, …, χ_{q−1}} be an orthonormal basis for the space L²(Ω). For σ ∈ [q]^R, define χ_σ(x) = Π_{i ∈ [R]} χ_{σ_i}(x_i). Every function F : Ω^R → ℝ can be expressed as a multilinear polynomial as follows:

    F(x) = Σ_σ F̂(σ) χ_σ(x) .
Definition 5.1. For 0 ≤ ρ ≤ 1, define the operator T_ρ on L²(Ω^R) as T_ρF(x) = E[F(y) | x], where y_i = x_i with probability ρ and y_i is a uniformly random element of Ω with probability 1 − ρ. Formally,

    T_ρF(x) = Σ_{σ ∈ [q]^R} ρ^{|σ|} F̂(σ) χ_σ(x) ,

where |σ| is the number of non-zero coordinates of σ. For a function F : Ω^R → ℝ, define the influence of the i-th coordinate as follows:

    Inf_i(F) = E_x[ Var_{x_i}[F] ] = Σ_{σ : σ_i ≠ 0} F̂²(σ) .
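For tiny q and R, Definition 5.1 can be evaluated by brute force straight from the conditional-expectation form (exponential time; purely an illustration, not part of the reduction):

```python
from itertools import product

def noise_operator(F, q, R, rho):
    """Return T_rho F as a function, computed directly from the definition
    T_rho F(x) = E[F(y) | x], where each y_i equals x_i with probability
    rho and is uniform in [q] otherwise."""
    def TF(x):
        total = 0.0
        for y in product(range(q), repeat=R):
            p = 1.0
            for xi, yi in zip(x, y):
                # P[y_i = yi | x_i] = rho*[yi == xi] + (1 - rho)/q
                p *= rho * (xi == yi) + (1 - rho) / q
            total += p * F(y)
        return total
    return TF
```

T_1 is the identity and T_0 replaces F by its mean, consistent with the Fourier formula, where ρ^{|σ|} leaves the constant term untouched and kills all others as ρ → 0.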
Lemma 5.2 (Sum of Influences Lemma). Given a function F : Ω^R → ℝ, if H = T_{1−γ}F then

    Σ_{ℓ ∈ [R]} Inf_ℓ(H) ≤ Var[F]/(2e ln 1/(1−γ)) ≤ Var[F]/γ .

Proposition 5.3 (Convexity of Influences). Let F be a random function from Ω^R to ℝ and let H = E[F] denote the average function. Then,

    E[Inf_ℓ(F)] ≥ Inf_ℓ(E[F]) = Inf_ℓ(H) .

The Gaussian noise stability Γ_ρ is defined as follows:

Definition 5.4. Given µ ∈ [0, 1], let t = Φ^{−1}(µ), where Φ denotes the distribution function of the standard Gaussian. Then Γ_ρ(µ) = Pr[X ≤ t, Y ≤ t], where (X, Y) is a two-dimensional Gaussian vector with covariance matrix

    ( 1  ρ
      ρ  1 ) .
The following theorem on the noise stability of functions over a product probability space is essentially a restatement of Theorem 4.4 in Mossel et al. [MOO05].

Theorem 5.5. Let Ω be a finite probability space in which the least non-zero probability of an atom is at least α. For all η > 0, there exist γ, τ such that the following holds: for every two functions F, F′ : Ω^R → [0, 1] satisfying

    max_{ℓ ∈ [R]} min( Inf_ℓ(T_{1−γ}F), Inf_ℓ(T_{1−γ}F′) ) ≤ τ ,

we have

    ⟨F, T_{1−ε}F′⟩ = E_{z ∈ Ω^R}[ F(z) · T_{1−ε}F′(z) ] ≤ Γ_{1−ε}( E[F], E[F′] ) + η .
In many applications, the following asymptotic bound on the function Γ_ρ is sufficient.

Theorem 5.6 (Theorem B.2 [MOO05]). As µ → 0,

    Γ_ρ(µ) ∼ µ^{2/(1+ρ)} (4π ln(1/µ))^{−ρ/(1+ρ)} · (1 + ρ)^{3/2}/(1 − ρ)^{1/2} ≤ µ^{(3−ρ)/2} .

Fact 5.7. For all ρ ∈ [0, 1], the function Γ_ρ(µ_1, µ_2) is strictly increasing in µ_1 and µ_2.
Lemma 5.8. For every ε > 0, there exists a sufficiently small µ_0 > 0 such that for all µ_1, µ_2 ≤ µ_0,

    Γ_{1−ε}(µ_1, µ_2) ≤ ε · (µ_1 + µ_2) .

Proof. Set µ_0 to be the minimum of ε^{2/ε} and the µ for which Theorem 5.6 holds with ρ = 1 − ε. Let us denote µ = max(µ_1, µ_2). As Γ_{1−ε}(µ_1, µ_2) is a strictly increasing function of µ_1 and µ_2, we can write:

    Γ_{1−ε}(µ_1, µ_2) ≤ Γ_{1−ε}(µ, µ) ≤ µ^{1+ε/2} ≤ εµ ≤ ε(µ_1 + µ_2) .

Here the second inequality is a consequence of Theorem 5.6, the third inequality follows since µ^{ε/2} ≤ µ_0^{ε/2} = ε, and the final inequality follows trivially from max(µ_1, µ_2) ≤ µ_1 + µ_2.
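Γ_ρ can be estimated by Monte-Carlo simulation. The sketch below is illustrative: the bisection inverse of Φ, the sample size, and the sampling of a correlated Gaussian pair via X = g₁, Y = ρg₁ + √(1−ρ²)g₂ are all implementation choices, not part of the paper.

```python
import math
import random

def phi_inv(mu):
    """Invert the standard Gaussian CDF Phi(t) = (1 + erf(t/sqrt(2)))/2 by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < mu:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def gamma_rho(rho, mu1, mu2, n, rng):
    """Monte-Carlo estimate of Gamma_rho(mu1, mu2) = Pr[X <= t1, Y <= t2]
    for standard Gaussians X, Y with correlation rho."""
    t1, t2 = phi_inv(mu1), phi_inv(mu2)
    c = math.sqrt(max(0.0, 1 - rho * rho))
    hits = 0
    for _ in range(n):
        g1, g2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x, y = g1, rho * g1 + c * g2
        hits += (x <= t1) and (y <= t2)
    return hits / n
```

For ρ = 0 the two events are independent, so Γ_0(µ, µ) = µ²; for ρ = 1 they coincide, so Γ_1(µ, µ) = µ. Lemma 5.8's bound Γ_{1−ε}(µ_1, µ_2) ≤ ε(µ_1 + µ_2) can be spot-checked the same way for small µ.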
[w < S |v ∈ S ] ,
v∼V,w∼N(v)
where v ∼ V is generated with probability proportional to deg(v) and w is a random neighbour of v. Proof. First, observe that [v ∈ S ] = µ(S ). As the edge (v, w) is uniformly distributed over ,V\S )| E, [w < S ∧ v ∈ S ] = |E(S|E| = ∂(S ). Hence we get,
[w < S |v ∈ S ] =
v∼V,w∼N(v)
v∼V,w∼N(v) [w < S ∧ v ∈ S ] ∂(S ) = . v∼V,w∼N(v) [v ∈ S ] µ(S )
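Lemma 5.9 can be verified exactly on a small regular graph. In this sketch, both the boundary and the volume are normalized by the directed edge count, which leaves the ratio unchanged; the adjacency-list encoding is an assumption of the example.

```python
def boundary_volume_ratio(adj, S):
    """Compute the escape probability Pr[w not in S | v in S] for a
    uniformly random directed edge (v, w), which Lemma 5.9 identifies
    with the ratio ∂(S)/µ(S)."""
    total = sum(len(adj[u]) for u in adj)                       # 2|E| directed edges
    crossing = sum(1 for u in S for w in adj[u] if w not in S)  # directed edges S -> V\S
    volume = sum(len(adj[u]) for u in S)                        # degree mass of S
    return (crossing / total) / (volume / total)
```

On a 6-cycle with S an arc of three vertices, two of the six directed edges out of S escape, giving a ratio of 1/3.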
5.2
Hardness Reduction
In this section, we present the details and proof of the reduction from Unique Games with small-set expansion to Gap-Small-Set Expansion. Theorem 1.6 is a direct consequence of the following theorem.

Theorem 5.10. For every pair of constants β, ε > 0, there exists δ > 0 such that, with an appropriate choice of parameters for the UG-to-SSE reduction, given an instance Υ = (V, E, Π = {π_{v←w} : [R] → [R]}, [R]), the reduction runs in time poly(|Υ|) · F(ε, δ) and produces a graph G such that the following holds:
– Completeness: If there exists a labelling A : V → [R] that satisfies more than a 1 − ε fraction of the edges of Υ, then Φ_G(δ) ≤ 2ε.
– Soundness: If no labelling of Υ satisfies more than an η fraction of the edges, and Φ(S) ≥ 1 − ε for all S with µ(S) ≤ β, then Φ_G(δ) ≥ 1 − 5ε − β.
UG-to-SSE Reduction

Input: A Unique Games instance Υ = (V, E, Π = {π_{v←w} : [R] → [R]}, [R]) and two constants ε, δ > 0.

Parameters: Pick δ small enough that 2δ/βε ≤ µ_0, as obtained in Lemma 5.8 for ε. Set q = ⌈1/δ⌉. Fix the values of γ, τ such that the error term in Theorem 5.5 is at most εδ/10. Finally, choose η < γ²τ²εδ/10.

Construct a graph G with vertex set V × [q]^R. The edge weights in the graph G are given by:

    w_e = (1/2) × Probability that the following procedure generates e .

– Sample an edge (v, w) ∼ E from the Unique Games instance Υ.
– Sample x ∈ [q]^R uniformly at random. Generate y by perturbing each coordinate of x independently with probability ε. Formally, for each ℓ ∈ [R],

    y^{(ℓ)} = x^{(ℓ)} with probability 1 − ε , and a uniformly random value from [q] with probability ε .

– Introduce an edge from (v, x) to (w, π_{w←v} ∘ y). Here, for a permutation π : [R] → [R] and a vector y ∈ [q]^R, the vector π ∘ y ∈ [q]^R is given by (π ∘ y)^{(ℓ)} = y^{(π^{−1}(ℓ))} for all ℓ ∈ [R].
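One draw of the edge-sampling procedure above can be coded directly. In this sketch, the permutation store `perms` (mapping an ordered pair (w, v) to π_{w←v} as a list) and the graph encoding are illustrative assumptions.

```python
import random

def sample_reduction_edge(ug_edges, perms, q, R, eps, rng):
    """Sample one edge of the graph produced by the UG-to-SSE reduction:
    pick a UG edge (v, w), a uniform x in [q]^R, an eps-noisy copy y,
    and connect (v, x) to (w, pi_{w<-v} o y)."""
    v, w = rng.choice(ug_edges)
    x = [rng.randrange(q) for _ in range(R)]
    y = [xi if rng.random() >= eps else rng.randrange(q) for xi in x]
    pi = perms[(w, v)]                       # pi_{w<-v} as a list
    piy = [None] * R
    for l in range(R):
        piy[pi[l]] = y[l]                    # (pi o y)(pi(l)) = y(l)
    return (v, tuple(x)), (w, tuple(piy))
```

With ε = 0 the noise step is a no-op and the second endpoint carries exactly the permuted copy of x, which is the structure the completeness argument exploits.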
By design, the sum of the weights of the edges of G is exactly 1/2. Hence, µ(S) for a set S of vertices of G ranges from 0 to 1. For a vertex v ∈ V, define

    S_v = S ∩ ({v} × [q]^R) and µ_v = |S_v|/q^R .

Further, let F_v : [q]^R → {0, 1} be the indicator function of the set S_v, i.e., F_v(x) = 1[(v, x) ∈ S_v]. The quantity ∂(S) for a subset S can be estimated as

    ∂(S) = E_{(v,w) ∼ E} E_{x,y}[ 1[(w, π_{w←v} ∘ y) ∉ S] · 1[(v, x) ∈ S] ]
        = E_{(v,w) ∼ E} E_{x,y}[ (1 − F_w(π_{w←v} ∘ y)) F_v(x) ]
        = µ(S) − E_{(v,w) ∼ E} E_{x,y}[ F_w(π_{w←v} ∘ y) F_v(x) ] .    (5.1)

Let us denote Int(S) = µ(S) − ∂(S), which is given by

    Int(S) = E_{(v,w) ∼ E} E_{x,y}[ F_w(π_{w←v} ∘ y) F_v(x) ] .
Completeness. Let A : V → [R] be a labelling that satisfies a 1 − ε fraction of the edges of Υ. Consider the subset of vertices S of the graph G defined as follows:

    S = { (v, z) | z^{(A(v))} = 1 } .

For each vertex v ∈ V, among the set of vertices {v} × [q]^R, exactly a 1/q fraction of the vertices satisfy z^{(A(v))} = 1. Furthermore, the degree of all vertices within a block {v} × [q]^R is the same. Hence, µ_v = 1/q for all vertices v ∈ V.

By (5.1), we can write

    ∂(S) = µ(S) − E_{(v,w) ∼ E} E_{x,y}[ F_w(π_{w←v} ∘ y) F_v(x) ]
        ≤ µ(S) − (1 − ε) · E_{(v,w) ∼ E} E_x[ 1[(π_{w←v} ∘ x)^{(A(w))} = 1] · 1[x^{(A(v))} = 1] ] .

Consider an edge (v, w) satisfied by the labelling A, i.e., A(w) = π_{w←v}(A(v)). For such an edge,

    (π_{w←v} ∘ x)^{(A(w))} = x^{(π_{w←v}^{−1}(A(w)))} = x^{(A(v))} .

Therefore, for a satisfied edge, the inner expression simplifies to 1[x^{(A(v))} = 1], whose expectation is 1/q = µ(S). Over the choice of (v, w) ∼ E, the edge (v, w) is satisfied by A with probability at least 1 − ε. Substituting back, we get that ∂(S) ≤ µ(S) − (1 − ε) · (1 − ε) · µ(S) ≤ 2εµ(S).

Soundness. Suppose S is a subset of the vertices of the graph G such that µ(S) ∈ [δ/10, δ]
and Φ(S) ≤ 1 − 5ε − β, i.e., Int(S) ≥ (5ε + β)µ(S). Set µ = µ(S). For a vertex v, define H_v = T_{1−γ}F_v. For each vertex v, define a set of candidate labels as follows:

    L_τ(v) = { ℓ ∈ [R] | Inf_ℓ(H_v) ≥ τ } .

By Lemma 5.2, the sum of the influences of H_v is at most 1/γ. Therefore, the size of each set L_τ(v) is at most 1/τγ. Consider the following labelling A of the Unique Games instance Υ: for each vertex v ∈ V, assign a label uniformly at random from L_τ(v) if it is non-empty, else assign an arbitrary label.

Call an edge (v, w) good if π_{w←v}(L_τ(v)) ∩ L_τ(w) ≠ ∅. Any good edge (v, w) is satisfied by the labelling A with probability at least 1/(|L_τ(v)| · |L_τ(w)|) ≥ γ²τ². Since no assignment to Υ satisfies more than an η fraction of the edges, at most an η/γ²τ² fraction of the edges are good.

Claim 5.11. Consider a bad edge (v, w) in Υ, i.e., π_{w←v}(L_τ(v)) ∩ L_τ(w) = ∅. Then

    E_{x,y}[ F_v(x) F_w(π_{w←v} ∘ y) ] ≤ Γ_{1−ε}(µ_v, µ_w) + εδ/10 .

Proof. Define F′_w : Ω^R → [0, 1] by F′_w(y) = F_w(π_{w←v} ∘ y). Then the condition π_{w←v}(L_τ(v)) ∩ L_τ(w) = ∅ implies that

    max_{ℓ ∈ [R]} min( Inf_ℓ(T_{1−γ}F_v), Inf_ℓ(T_{1−γ}F′_w) ) ≤ τ .

The claim follows by applying Theorem 5.5 to the functions F_v and F′_w.

By definition, we have

    Int(S) = E_{(v,w) ∼ E} E_{x,y}[ F_w(π_{w←v} ∘ y) F_v(x) ]
        ≤ Pr[(v, w) is good] · 1 + Pr[(v, w) is not good] · E_{(v,w) ∼ E} E_{x,y}[ F_w(π_{w←v} ∘ y) F_v(x) | (v, w) is not good ] .

Recall that at most an η/γ²τ² < 1/2 fraction of the edges (v, w) are good. Further, for every (v, w) that is not good, Claim 5.11 yields an upper bound on the expectation. Hence, we can write

    Int(S) ≤ η/γ²τ² + E_{(v,w) ∼ E}[ Γ_{1−ε}(µ_v, µ_w) ] + εδ/10 ,

which implies that

    E_{(v,w) ∼ E}[ Γ_{1−ε}(µ_v, µ_w) ] ≥ (3ε + β)µ(S) .    (5.2)

Define sets A, B ⊆ V as follows:

    A = { w | µ_w ≥ 2δ/βε } ,  B = { v | Pr_{w ∈ N(v)}[w ∈ A] ≥ ε } .

Since E_{w ∼ V}[µ_w] = µ(S) ∈ [δ/10, δ], we have µ(A) = Pr_{w ∼ V}[w ∈ A] ≤ βε/2. Note that w ∼ V can also be generated by sampling a vertex v ∼ V and a uniformly random neighbour w ∼ N(v). Thus, we can write

    βε/2 ≥ Pr_{w ∼ V}[w ∈ A] = Pr_{v ∼ V, w ∈ N(v)}[w ∈ A] ≥ Pr_{v ∼ V}[v ∈ B] · Pr_{w ∈ N(v)}[w ∈ A | v ∈ B] .

By the definition of the set B, Pr_{w ∈ N(v)}[w ∈ A | v ∈ B] ≥ ε. Hence, we have µ(B) = Pr_{v ∼ V}[v ∈ B] ≤ β/2. Define C = A ∪ B. For v ∉ C,

    E_{w ∈ N(v)}[ Γ_{1−ε}(µ_v, µ_w) ]
        ≤ Pr_{w ∈ N(v)}[w ∈ A] · E[ Γ_{1−ε}(µ_v, µ_w) | w ∈ A ] + Pr_{w ∈ N(v)}[w ∉ A] · E[ Γ_{1−ε}(µ_v, µ_w) | w ∉ A ]
        ≤ (βε/2)µ_v + Pr_{w ∈ N(v)}[w ∉ A] · E[ Γ_{1−ε}(µ_v, µ_w) | w ∉ A ]    (because Γ_{1−ε}(µ_v, µ_w) ≤ min(µ_v, µ_w))
        < βµ_v + Pr_{w ∈ N(v)}[w ∉ A] · E[ ε · (µ_v + µ_w) | w ∉ A ]    (by Lemma 5.8)
        ≤ µ_v(β + ε) + ε · E_{w ∈ N(v)}[µ_w] .

Averaging the preceding inequality over all v ∉ C,

    Pr_{v ∼ V}[v ∉ C] · E_{v ∼ V}[ E_{w ∈ N(v)}[Γ_{1−ε}(µ_v, µ_w)] | v ∉ C ]
        ≤ Pr_{v ∼ V}[v ∉ C] · E_{v ∼ V}[ (β + ε)µ_v + ε · E_{w ∈ N(v)}[µ_w] | v ∉ C ]
        ≤ (β + ε) E_{v ∼ V}[µ_v] + ε · E_{v ∼ V} E_{w ∈ N(v)}[µ_w]
        ≤ (β + 2ε) µ(S) .    (5.3)

In the above calculation, we are using the fact that

    Pr_{v ∼ V}[v ∉ C] · E_{v ∼ V}[ E_{w ∈ N(v)}[µ_w] | v ∉ C ] ≤ E_{v ∼ V} E_{w ∈ N(v)}[µ_w] = E_{w ∼ V}[µ_w] = µ(S) .

Notice that generating v ∼ V, w ∼ N(v) is identical in distribution to sampling a uniformly random edge (v, w). Hence, we can rewrite (5.3) as

    E_{(v,w) ∼ E}[ Γ_{1−ε}(µ_v, µ_w) · 1[{v, w} ⊄ C] ] ≤ (β + 2ε) µ(S) .    (5.4)

Finally, we can rewrite (5.2) as

    (3ε + β)µ(S) ≤ E_{(v,w) ∼ E}[ Γ_{1−ε}(µ_v, µ_w) ] ≤ Pr_{(v,w) ∼ E}[ {v, w} ⊆ C ] + (β + 2ε) µ(S) .

Hence, we have Pr_{(v,w) ∼ E}[{v, w} ⊆ C] ≥ εµ(S) ≥ εδ/10. Clearly, this implies that µ(C) ≥ εδ/10. However, with µ(A) ≤ βε/2 and µ(B) ≤ β/2, it must hold that µ(C) < β. Consequently, C ⊆ V is a set with µ(C) < β such that

    Φ(C) ≤ 1 − ε ,

contradicting the assumption on the expansion of small sets in the Unique Games instance Υ.
6
Putting it Together
In this section, we wrap up the proofs of some of the main theorems presented in this work. To this end, we first present the detailed formal statement of the main reduction, and prove it using Theorem 2.9 and Theorem 2.10.

Theorem 6.1 (Reduction from Graph Expansion to Unique Games). Given a graph G of size n and parameters ε, δ > 0, we can compute in polynomial time a Unique Games instance Υ with (n/ε)^{O(1/δ)} variables and with poly(1/εδ) labels such that the following holds:

– Completeness: For all η > 0, if there exists a set S with µ(S) = δ and Φ(S) ≤ η, then opt(Υ) ≥ 1 − 2(η + ε).
– Soundness close to 1: For all η > 0, if opt(Υ) ≥ 1 − η, then there exists a set S ⊆ V such that Φ(S) ≤ 20η and µ(S) ∈ [ δ/(32 log(1/ε)), 8δ/ε ].
– Soundness close to 0: For all η > 0, if opt(Υ) ≥ η, then there exists a set S ⊆ V such that Φ(S) ≤ 1 − η/32 and µ(S) ∈ [ δ/(32 log(1/ε)), 8δ/ε ].

Proof of Theorem 6.1. Execute the SSE-to-UG reduction with parameters R = ⌊1/δ⌋ and ε on the input graph G = (V, E). This yields a Unique Games instance Υ* with alphabet size R. Apply the reduction from partial Unique Games to Unique Games presented in Section 4 with parameter c = 4 ln(1/ε) to get the Unique Games instance Υ.

Completeness. If there exists a subset S with µ(S) = δ and Φ(S) ≤ η, then by Theorem 2.9 the instance Υ* has a 1/4-partial strategy with value 1 − ε − η. By the completeness part of Theorem 2.10, this implies that there exists an assignment to the Unique Games instance Υ with value 1 − ε − η − e^{−c/4} ≥ 1 − 2(ε + η).

Soundness. By Theorem 2.10, if opt(Υ) ≥ 1 − η, then there exists a 1/(2c)-partial strategy for Υ* with value at least 1 − 4η. Applying Theorem 2.9, we get that there exists a set S with Φ(S) ≤ 20η and µ(S) ∈ [ εδ/(32 ln(1/ε)), 8δ/ε ].
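The completeness arithmetic above is elementary but worth spelling out: with the reduction's choice c = 4 ln(1/ε) we get e^{−c/4} = ε, so the value 1 − ε − η − e^{−c/4} equals 1 − 2ε − η ≥ 1 − 2(ε + η). A quick numerical sanity check (our own, not from the paper):

```python
import math

def composed_value(eps, eta):
    """Value of the assignment to Upsilon produced in the completeness case:
    1 - eps - eta - e^{-c/4}, with the reduction's choice c = 4*ln(1/eps)."""
    c = 4 * math.log(1 / eps)
    return 1 - eps - eta - math.exp(-c / 4)

# Since e^{-c/4} = eps for this choice of c, the value is exactly
# 1 - 2*eps - eta, which dominates the claimed bound 1 - 2*(eps + eta).
```

The slack between 1 − 2ε − η and 1 − 2(ε + η) is exactly η, so the stated completeness bound is not tight; it is simply convenient.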
Finally, we can show that the Gap-Small-Set Expansion hypothesis implies the Unique Games Conjecture.

Proof of Theorem 1.4. The reduction outlined in Theorem 6.1 shows that the Gap-Small-Set Expansion hypothesis implies the UGC, up to one caveat: the size of the set returned in the soundness case is much larger than δ. More precisely, if opt(Υ) ≥ η, the theorem asserts that there exists a set of vertices S with Φ(S) ≤ 1 − η/32 and µ(S) ∈ [ δ/(32 log(1/ε)), 8δ/ε ].
To obtain a non-expanding subset of measure δ, we either extend or subsample the set S. Formally, if µ(S) < δ, then arbitrarily extend S to obtain a set S′ with µ(S′) = δ. The fraction of edges within S′ is lower bounded by

Int(S′) ≥ Int(S) ≥ (η/32)·µ(S) = (η/32)·(µ(S)/δ)·µ(S′) .

This implies that

Φ(S′) ≤ 1 − (η/32)·(µ(S)/δ) ≤ 1 − Ω( η / ln(1/ε) ) ,

using µ(S) ≥ δ/(32 log(1/ε)). If µ(S) > δ, then select δn vertices at random from S to get the set S′. The fraction of edges within S′ is in expectation equal to

Int(S′) = Int(S) · ( δ/µ(S) )² ,

which is at least Ω(ηε²)·µ(S′), since µ(S) ≤ 8δ/ε. Therefore, in either case there exists a set S′ with µ(S′) = δ and Φ(S′) ≤ 1 − Ω(ηε²).

Now we state the main result of the paper from this algorithmic standpoint. To this end, we formally state the hypothesis that would emerge from a refutation of the Unique Games Conjecture.

Hypothesis 6.2 (Unique Games is easy). There exists a constant ε > 0 and a function f : ℕ → ℕ such that, given a Unique Games instance Υ with n vertices and k labels, it is possible to distinguish between the cases opt(Υ) ≥ 1 − ε and opt(Υ) ≤ ε in time n^{f(k)}.

Theorem 6.3. Suppose the above hypothesis (Unique Games is easy) holds for a constant ε₀ and a function f. Then there exists a function g : [0,1] → ℕ such that, given any graph G with n vertices, it is possible to distinguish whether Φ_G(δ) ≤ ε₁ or Φ_G(δ) ≥ 1 − ε₁, for some absolute constant ε₁, in time n^{g(δ)}.

A somewhat stronger consequence of the Unique Games is easy hypothesis follows using the parallel repetition theorem. For the sake of completeness, we recall the following consequence of the parallel repetition theorem for Unique Games due to [Rao08].

Theorem 6.4 ([Rao08]). Suppose the above hypothesis (Unique Games is easy) holds for a constant ε₀ and a function f. Then, given a Unique Games instance Υ with n variables and k labels, we can distinguish between the cases opt(Υ) ≥ 1 − ε and opt(Υ) ≤ 1 − O(√ε) in time n^{O( f(k^{O(1/ε)}) / ε )}. (Here, the constant factors hidden by the O-notation depend only on the constant ε₀.)

The following corollary is an immediate consequence of the parallel repetition theorem and Theorem 6.1.

Corollary 6.5. Suppose the above hypothesis (Unique Games is easy) holds for a constant ε₀ and a function f. Then, given a graph G with n vertices and parameters ε, δ such that ε < ε₁ for some absolute constant ε₁, we can distinguish the following cases in time n^{g(ε,δ)} for some g : [0,1]² → ℕ:

1. There exists S ⊆ V with µ(S) = δ and Φ(S) ≤ ε.
2. Every set S ⊆ V with µ(S) ≤ 1500δ/ε satisfies Φ(S) ≥ 1500√ε.
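The subsampling step in the proof of Theorem 1.4 above rests on a simple fact: if S′ consists of δn vertices chosen uniformly at random from the µn vertices of S (with µ = µ(S) > δ), then each edge inside S survives inside S′ with probability (δn/µn)·((δn − 1)/(µn − 1)) ≈ (δ/µ)². The exact probability can be computed directly (a sketch with our own names, sampling without replacement):

```python
from fractions import Fraction

def survival_prob(n, mu, delta):
    """Exact probability that both endpoints of a fixed edge inside S remain
    after picking delta*n of the mu*n vertices of S uniformly at random.
    Sampling is without replacement, so the two endpoint events are
    slightly (negatively) dependent."""
    m, d = round(mu * n), round(delta * n)
    return Fraction(d, m) * Fraction(d - 1, m - 1)

# As n grows, the probability tends to (delta/mu)^2, which is the factor
# Int(S') = Int(S) * (delta/mu(S))^2 used in the expectation above.
```

The negative dependence only helps here: the exact probability is always at most (δ/µ)², so the asymptotic factor used in the proof is an upper bound on the loss.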
7
Unique Games with Sufficient Local Expansion are Easy
In this section, we present a polynomial-time algorithm for Unique Games which gives good approximations whenever small sets in the underlying constraint graph expand. (More concretely, the approximation guarantee will depend on the volume of the smallest non-expanding set.) Our algorithm is based on a basic semidefinite relaxation (like many previous approximation algorithms for Unique Games [Kho02, CMM06, AKK+08]). For a unique game U with vertex set V and alphabet [k], we associate a semidefinite program SDP(U): the goal is to find a collection of vectors {u_i}_{u∈V, i∈[k]} ⊆ ℝ^d for some d ∈ ℕ so as to maximize the value

E_{(u,v,π)∼U} Σ_{i∈[k]} ⟨u_i, v_{π(i)}⟩

subject to the constraints Σ_{i=1}^{k} ||u_i||² = 1 for all u ∈ V, and ⟨u_i, u_j⟩ = 0 for all u ∈ V and i ≠ j ∈ [k]. We let sdp(U) be the optimal value of the semidefinite program SDP(U), sometimes referred to as the semidefinite value of U.

Theorem (Restatement of Theorem 1.7). Let U be a regular unique game with semidefinite value sdp(U) ≥ 1 − ε and constraint graph G. Then, for every δ > 0 satisfying

Φ_G(δ) ≥ O( √ε · log(1/δ) ) ,  (7.1)

we can compute an assignment for U with value at least Ω(δ²) − O(√ε).

The proof of Theorem 1.7 relies on a few lemmas. We first state these lemmas and establish the theorem assuming them; then we prove the lemmas in the following subsections. Let U be a unique game with vertex set V, alphabet [k], and semidefinite value sdp(U) = 1 − ε. We may assume that ε is sufficiently small (say, ε < 1/100), for otherwise Theorem 1.7 is trivially true. Let {u_i}_{u∈V, i∈[k]} be an optimal solution to the semidefinite program SDP(U). For t ∈ ℕ, we define the correlation of two vertices u, v ∈ V with respect to this solution for SDP(U),

ρ_t(u, v) := max_σ Σ_{i∈[k]} ||u_i|| ||v_{σ(i)}|| ⟨ū_i, v̄_{σ(i)}⟩^{2t} ,  (7.2)

where the maximum is over all permutations σ of [k]. Here, the notation x̄ denotes the unit vector in the direction of x. The following lemma shows how to compute a good assignment for U if the quantity E_{u,v∈V} ρ_t(u, v) does not decrease too quickly as t → ∞.

Lemma 7.1. Suppose that E_{u,v∈V} ρ_t(u, v) ≥ δ. Then, there exists an assignment for U with value at least δ²/4 − O(√ε + e^{−t/100}).
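The maximization over permutations σ in (7.2) is a maximum-weight bipartite matching between the labels of u and those of v, so ρ_t is efficiently computable (e.g., by the Hungarian algorithm). For small k it can even be evaluated by brute force; the list-of-lists vector representation below is our own illustration, not the paper's:

```python
import math
from itertools import permutations

def rho_t(U, V, t):
    """rho_t(u, v) = max over permutations sigma of
    sum_i |u_i| |v_{sigma(i)}| <bar u_i, bar v_{sigma(i)}>^{2t},
    where U[i], V[j] are the SDP vectors u_i, v_j as lists of floats."""
    def norm(x):
        return math.sqrt(sum(c * c for c in x))
    def cos(x, y):
        nx, ny = norm(x), norm(y)
        if nx == 0 or ny == 0:
            return 0.0
        return sum(a * b for a, b in zip(x, y)) / (nx * ny)
    k = len(U)
    best = 0.0
    for sigma in permutations(range(k)):  # brute force; a matching solver scales
        val = sum(norm(U[i]) * norm(V[sigma[i]]) * cos(U[i], V[sigma[i]]) ** (2 * t)
                  for i in range(k))
        best = max(best, val)
    return best

# For vectors coming from an integral assignment (u_i equal to a fixed unit
# vector e for the assigned label and 0 otherwise), rho_t(u, v) = 1 for
# every t: the optimal permutation pairs the two assigned labels.
```

Note that ρ_t is not symmetric in an interesting way here; the exponent 2t only sharpens the contribution of well-aligned label pairs, which is the point of the definition.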
u,v∈V ρt (u, v) can only decrease quickly with t → ∞ if the constraint graph G does not satisfy the local expansion condition (7.1). In order to relate the quantity
u,v∈V ρt (u, v) to local expansion, ˜ G (δ) for the expansion profile Φ(δ): The we consider the following semidefinite relaxation Λ d goal is to find vectors {Xu } ⊆ for some d ∈ so as to minimize
uv∼G kXu − Xv k2 subject to the constraints
u,v∈V |hXu , Xv i| 6 δ and
kXu k2 = 1. (This relaxation was first studied in [RST10] as a natural generalization of the eigenvalue relaxation for edge expansion). The ˜ G (δ) has the following approximation guarantee for the (efficiently computable) parameter Λ epxansion profile ΦG (δ).
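To see why Λ̃_G(δ) relaxes the expansion profile, note that a set S of volume δ yields the one-dimensional feasible point X_u = 1[u ∈ S]/√δ, whose objective is the conductance Φ(S) up to the normalization of the quadratic form (with the expectation taken over uniform undirected edges, as below, it comes out as 2Φ(S); a factor 1/2 in the Laplacian convention absorbs this, and we do not fix conventions here). A small self-contained check on the 6-cycle, our own example:

```python
def relaxation_value_of_set(n, edges, S, delta):
    """Objective and constraint values of the relaxation for the integral
    point X_u = 1[u in S]/sqrt(delta) on a regular graph.
    `edges` lists each undirected edge once; the quadratic form is the
    same for either orientation."""
    X = [(1.0 / delta) ** 0.5 if u in S else 0.0 for u in range(n)]
    # E_{uv ~ G} |X_u - X_v|^2, uniform over the given edges
    obj = sum((X[u] - X[v]) ** 2 for (u, v) in edges) / len(edges)
    # E_{u,v in V} |<X_u, X_v>|  and  E_u |X_u|^2
    corr = sum(abs(X[u] * X[v]) for u in range(n) for v in range(n)) / n ** 2
    norm = sum(x * x for x in X) / n
    return obj, corr, norm

# 6-cycle with S = {0,1,2} (delta = 1/2): two of the six edges are cut and
# each cut edge contributes 1/delta = 2, so the objective is 4/6 = 2*Phi(S)
# with Phi(S) = 1/3; the correlation constraint holds with equality at delta.
```

This is the easy direction of the guarantee for Λ̃ stated next: the relaxation value never exceeds (a constant times) the true expansion at the corresponding volume.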
Theorem 7.2 ([RST10]). For every graph G and every δ > 0,

Λ̃(δ) ≤ Φ(δ) ≤ O( ( Λ̃(δ/2) · log(1/δ) )^{1/2} ) .

The following lemma relates the quantity E_{u,v∈V} ρ_t(u, v) to the parameter Λ̃_G(δ).

Lemma 7.3. If E_{u,v∈V} ρ_t(u, v) ≤ δ for t ∈ ℕ, then Λ̃_G(δ + 2^{−Ω(t)}) ≤ O(tε).

Combining the previous lemmas gives Theorem 1.7.

Proof of Theorem 1.7. We choose t = O(log(1/δ)) and distinguish two cases. If E_{u,v∈V} ρ_t(u, v) ≥ δ, then Lemma 7.1 allows us to find an assignment for U with the required value. On the other hand, if E_{u,v∈V} ρ_t(u, v) ≤ δ, then Lemma 7.3 implies that Λ̃_G(δ + o(δ)) ≤ O(ε log(1/δ)). By Theorem 7.2, we can conclude that the expansion profile of G satisfies Φ_G(3δ) ≤ O(√ε · log(1/δ)), which contradicts the assumption of Theorem 1.7. (We omitted the factor 3 from the theorem statement, because it can be absorbed in the O(·) notation.)
7.1
Rounding via Correlations
In this section, we prove the following lemma.

Lemma (Restatement of Lemma 7.1). Suppose that E_{u,v∈V} ρ_t(u, v) ≥ δ. Then, there exists an assignment for U with value at least δ²/4 − O(√ε + e^{−t/100}).

Proof. Since E_{u,v∈V} ρ_t(u, v) ≥ δ, there exists a vertex w ∈ V such that E_{v∈V} ρ_t(w, v) ≥ δ. Using this vertex w, we compute an assignment for U as follows:

1. For every vertex v ∈ V, let σ_v be a permutation of [k] such that i = σ_v(j) whenever ⟨v̄_i, w̄_j⟩² ≥ 1/2. (Such permutations exist since Σ_j ⟨v̄_i, w̄_j⟩² ≤ 1 by orthogonality, which means that ⟨v̄_i, w̄_j⟩² ≥ 1/2 can hold for at most one index j ∈ [k].)
2. Sample a label s ∈ [k] for the vertex w according to the distribution given by the weights ||w_1||², …, ||w_k||².
3. To every vertex v ∈ V, assign the label σ_v(s).

In the following claim, we lower bound the probability that our assignment satisfies a constraint (u, v, π) of the unique game U.

Claim. Let (u, v, π) be a constraint of U. Suppose Σ_i ||w_i|| ||u_{σ_u(i)}|| ⟨w̄_i, ū_{σ_u(i)}⟩^{2t} ≥ γ and Σ_i ||u_i|| ||v_{π(i)}|| ⟨ū_i, v̄_{π(i)}⟩² ≥ 1 − η. Then, an assignment sampled as above satisfies the constraint (u, v, π) with probability at least γ² − O(√η + e^{−t/100}).

Proof. We may assume that σ_u and σ_v are the identity permutations (simply by renaming the labels for u and v). Then, the probability that our assignment satisfies the constraint (u, v, π) is exactly Σ_s ||w_s||² · 1[s = π(s)].

Let I be the set of labels i such that ⟨ū_i, v̄_{π(i)}⟩² ≥ 1 − 1/100. Then, Σ_{i∈I} ||u_i|| ||v_{π(i)}|| ≥ 1 − 100η (using the assumption Σ_i ||u_i|| ||v_{π(i)}|| ⟨ū_i, v̄_{π(i)}⟩² ≥ 1 − η). Since ||u_i|| ||v_{π(i)}|| ≤ ||u_i||²/2 + ||v_{π(i)}||²/2, we can lower bound the total weight of the labels in I for the vertex u by Σ_{i∈I} ||u_i||² ≥ 1 − 200η, which implies that Σ_{i∈I} ||w_i|| ||u_i|| ⟨w̄_i, ū_i⟩^{2t} ≥ γ − √(200η) (using the first assumption of the claim and Cauchy–Schwarz). Let J be the set of labels i such that ⟨w̄_i, ū_i⟩² ≥ 1 − 1/100. Since the labels in I \ J contribute at most (1 − 1/100)^t ≤ e^{−t/100}, we arrive at the lower bound for the contribution of the labels in I ∩ J,

Σ_{i∈I∩J} ||w_i|| ||u_i|| ⟨w̄_i, ū_i⟩^{2t} ≥ γ − √(200η) − e^{−t/100} .

Using Cauchy–Schwarz, we can lower bound the total weight of the labels in I ∩ J for the vertex w by

Σ_{i∈I∩J} ||w_i||² ≥ ( γ − √(200η) − e^{−t/100} )² ≥ γ² − 2√(200η) − 2e^{−t/100} .

To prove the claim, it suffices to show that i = π(i) for every label i ∈ I ∩ J. For i ∈ I ∩ J, the triangle inequality gives ⟨w̄_i, v̄_{π(i)}⟩² ≥ 1 − 1/10 (the constant 1/10 is not optimal here). By construction of the permutation σ_v, it follows that π(i) = σ_v(i). Since we assumed that σ_v is the identity permutation, π(i) = i, as desired.

To finish the proof of the lemma, let us relate the quantity Σ_i ||w_i|| ||u_{σ_u(i)}|| ⟨w̄_i, ū_{σ_u(i)}⟩^{2t} to the correlation ρ_t(w, u). Let σ′_u be a permutation that maximizes the right-hand side in the definition (7.2) of ρ_t(w, u). We want to upper bound the contribution of the labels i where the permutations σ_u and σ′_u disagree, σ_u(i) ≠ σ′_u(i). For such labels, by construction of σ_u, we have ⟨w̄_i, ū_{σ′_u(i)}⟩² < 1/2. We can therefore bound the total contribution of these labels by

Σ_{i : σ_u(i) ≠ σ′_u(i)} ||w_i|| ||u_{σ′_u(i)}|| ⟨w̄_i, ū_{σ′_u(i)}⟩^{2t} < 2^{−t} .

It follows that Σ_i ||w_i|| ||u_{σ_u(i)}|| ⟨w̄_i, ū_{σ_u(i)}⟩^{2t} ≥ ρ_t(w, u) − 2^{−t}. Therefore, using the previous claim and convexity, we can lower bound the expected value of our computed assignment (which also lower bounds the optimal value of U),

opt(U) ≥ ( E_{u∈V} ρ_t(w, u) − 2^{−t} )² − O(√ε) − e^{−t/100} ≥ δ²/4 − O(√ε + e^{−t/100}) .

(In the first step, we also used the fact that E_{(u,v,π)∼U} Σ_i ||u_i|| ||v_{π(i)}|| ⟨ū_i, v̄_{π(i)}⟩² ≥ 1 − 2ε for an SDP(U)-solution of value 1 − ε.)
7.2
Correlations and Small-Set Expansion
In this section, we prove Lemma 7.3.

Lemma (Restatement of Lemma 7.3). If E_{u,v∈V} ρ_t(u, v) ≤ δ for t ∈ ℕ, then Λ̃_G(δ + 2^{−Ω(t)}) ≤ O(tε).

The proof of this lemma relies on the following lemma from [AKK+08].

Lemma 7.4 ([AKK+08]). For every t ∈ ℕ, there exists a collection of unit vectors {X_u}_{u∈V} such that

⟨X_u, X_v⟩ − 2^{−Ω(t)} ≤ ρ_t(u, v) ≤ ⟨X_u, X_v⟩ .

Using this lemma, the proof of Lemma 7.3 is straightforward.

Proof. Let {X_u}_{u∈V} be vectors as in Lemma 7.4. Using the assumption of Lemma 7.3, these vectors satisfy the constraints of the relaxation Λ̃_G(δ + 2^{−Ω(t)}), namely

E_{u,v∈V} |⟨X_u, X_v⟩| ≤ δ + 2^{−Ω(t)}  and  E_{u∈V} ||X_u||² = 1 .

On the other hand, we can upper bound the objective value as follows:

E_{uv∼G} ||X_u − X_v||² ≤ 2 − 2 · E_{uv∼G} ρ_t(u, v)
 ≤ 2 − 2 · E_{(u,v,π)∼U} Σ_i ||u_i|| ||v_{π(i)}|| ⟨ū_i, v̄_{π(i)}⟩^{2t}
 ≤ 2 − 2 · E_{(u,v,π)∼U} ( Σ_i ⟨u_i, v_{π(i)}⟩ )^{2t}   (using Jensen's inequality)
 ≤ 4tε .

We conclude that the vectors {X_u} form a solution for the relaxation Λ̃_G(δ + 2^{−Ω(t)}) of value O(tε), which proves the current lemma.
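The Jensen step in the chain above uses convexity of z ↦ z^{2t}: for weights p_i = ||u_i|| ||v_{π(i)}|| with Σ p_i ≤ 1, we have Σ_i p_i x_i^{2t} ≥ (Σ_i p_i x_i)^{2t}, applied with x_i = ⟨ū_i, v̄_{π(i)}⟩, so that p_i x_i = ⟨u_i, v_{π(i)}⟩ (the deficit Σ p_i < 1 can be padded with an atom at 0, where z^{2t} vanishes). A quick numeric check of this inequality, our own:

```python
def power_mean_gap(p, x, t):
    """Return (sum p_i x_i^{2t}, (sum p_i x_i)^{2t}).  The first value
    dominates whenever sum(p) <= 1, by convexity of z -> z^{2t} (Jensen),
    with any missing probability mass placed on the value 0."""
    lhs = sum(pi * xi ** (2 * t) for pi, xi in zip(p, x))
    rhs = sum(pi * xi for pi, xi in zip(p, x)) ** (2 * t)
    return lhs, rhs
```

Together with (1 − ε)^{2t} ≥ 1 − 2tε, this is exactly what turns the SDP value 1 − ε into the bound of 4tε on the objective of the relaxation.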
Acknowledgements We thank Boaz Barak for suggesting the notion of partial games, which helped to improve the presentation of the results. (However, the authors are to blame for the name "partial game".) We also thank Sanjeev Arora, Venkatesan Guruswami and Prasad Tetali for insightful discussions.
References

[AIMS10] Sanjeev Arora, Russell Impagliazzo, William Matthews, and David Steurer, Improved algorithms for unique games via divide and conquer, Electronic Colloquium on Computational Complexity (ECCC) 17 (2010), 41.

[AKK+08] Sanjeev Arora, Subhash Khot, Alexandra Kolla, David Steurer, Madhur Tulsiani, and Nisheeth K. Vishnoi, Unique games on expanding constraint graphs are easy, STOC, 2008, pp. 21–28.

[Alo86] Noga Alon, Eigenvalues and expanders, Combinatorica 6 (1986), no. 2, 83–96.

[AM85] Noga Alon and V. D. Milman, λ₁, isoperimetric inequalities for graphs, and superconcentrators, J. Comb. Theory, Ser. B 38 (1985), no. 1, 73–88.

[AR98] Yonatan Aumann and Yuval Rabani, An O(log k) approximate min-cut max-flow theorem and approximation algorithm, SIAM J. Comput. 27 (1998), no. 1, 291–301.

[ARV04] Sanjeev Arora, Satish Rao, and Umesh V. Vazirani, Expander flows, geometric embeddings and graph partitioning, STOC, 2004, pp. 222–231.

[BHH+08] Boaz Barak, Moritz Hardt, Ishay Haviv, Anup Rao, Oded Regev, and David Steurer, Rounding parallel repetitions of unique games, FOCS, 2008, pp. 374–383.

[Che70] Jeff Cheeger, A lower bound for the smallest eigenvalue of the Laplacian, Problems in analysis (Papers dedicated to Salomon Bochner, 1969), Princeton Univ. Press, Princeton, N.J., 1970, pp. 195–199.

[CMM06] Moses Charikar, Konstantin Makarychev, and Yury Makarychev, Near-optimal algorithms for unique games, STOC, 2006, pp. 205–214.

[CMM09] Moses Charikar, Konstantin Makarychev, and Yury Makarychev, Integrality gaps for Sherali–Adams relaxations, STOC, 2009, pp. 283–292.

[FKO07] Uriel Feige, Guy Kindler, and Ryan O'Donnell, Understanding parallel repetition requires understanding foams, IEEE Conference on Computational Complexity, 2007, pp. 179–192.

[FS02] Uriel Feige and Christian Scheideler, Improved bounds for acyclic job shop scheduling, Combinatorica 22 (2002), no. 3, 361–399.

[GMPT07] Konstantinos Georgiou, Avner Magen, Toniann Pitassi, and Iannis Tourlakis, Integrality gaps of 2 − o(1) for vertex cover SDPs in the Lovász–Schrijver hierarchy, FOCS, 2007, pp. 702–712.

[Kho02] Subhash Khot, On the power of unique 2-prover 1-round games, STOC, 2002, pp. 767–775.

[KKMO07] Subhash Khot, Guy Kindler, Elchanan Mossel, and Ryan O'Donnell, Optimal inapproximability results for MAX-CUT and other 2-variable CSPs?, SIAM J. Comput. 37 (2007), no. 1, 319–357.

[KR08] Subhash Khot and Oded Regev, Vertex cover might be hard to approximate to within 2 − ε, J. Comput. Syst. Sci. 74 (2008), no. 3, 335–349.

[KV05] Subhash Khot and Nisheeth K. Vishnoi, The unique games conjecture, integrality gap for cut problems and embeddability of negative type metrics into ℓ₁, FOCS, 2005, pp. 53–62.

[LLR95] Nathan Linial, Eran London, and Yuri Rabinovich, The geometry of graphs and some of its algorithmic applications, Combinatorica 15 (1995), no. 2, 215–245.

[LR99] Frank Thomson Leighton and Satish Rao, Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms, J. ACM 46 (1999), no. 6, 787–832.

[MOO05] Elchanan Mossel, Ryan O'Donnell, and Krzysztof Oleszkiewicz, Noise stability of functions with low influences: invariance and optimality, FOCS, 2005, pp. 21–30.

[Rag08] Prasad Raghavendra, Optimal algorithms and inapproximability results for every CSP?, STOC, 2008, pp. 245–254.

[Rao08] Anup Rao, Parallel repetition in projection games and a concentration bound, STOC, 2008, pp. 1–10.

[Raz08] Ran Raz, A counterexample to strong parallel repetition, FOCS, 2008, pp. 369–373.

[RS09a] Prasad Raghavendra and David Steurer, How to round any CSP, FOCS, 2009, pp. 586–594.

[RS09b] Prasad Raghavendra and David Steurer, Integrality gaps for strong SDP relaxations of unique games, FOCS, 2009, pp. 575–585.

[RST10] Prasad Raghavendra, David Steurer, and Prasad Tetali, Approximations for the isoperimetric and spectral profile of graphs and related parameters, STOC, 2010, pp. 631–640.

[SAS10] Sanjeev Arora, Boaz Barak, and David Steurer, Subexponential time algorithms for unique games and related parameters, FOCS, 2010.

[STT07] Grant Schoenebeck, Luca Trevisan, and Madhur Tulsiani, Tight integrality gaps for Lovász–Schrijver LP relaxations of vertex cover and max cut, STOC, 2007, pp. 302–310.