Lollipop graphs are extremal for commute times

Johan Jonasson
Chalmers University of Technology
August 13, 1998

Abstract

Consider a simple random walk on a connected graph $G = (V, E)$. Let $C(u,v)$ be the expected time taken for the walk starting at vertex $u$ to reach vertex $v$ and then go back to $u$ again, i.e. the commute time for $u$ and $v$, and let $C(G) = \max_{u,v \in V} C(u,v)$. Let further $\mathcal{G}(n,m)$ be the family of connected graphs on $n$ vertices with $m$ edges, $m \in \{n-1, \ldots, \binom{n}{2}\}$, and let $\mathcal{G}(n) = \bigcup_m \mathcal{G}(n,m)$ be the family of all connected $n$-vertex graphs. It is proved that if $G \in \mathcal{G}(n,m)$ is such that $C(G) = \max_{H \in \mathcal{G}(n,m)} C(H)$ then $G$ is either a lollipop graph or a so called double-handled lollipop graph. It is further shown, using this result, that if $C(G) = \max_{H \in \mathcal{G}(n)} C(H)$ then $G$ is the full lollipop graph or a full double-handled lollipop graph with $[(2n-1)/3]$ vertices in the clique, unless $n \le 9$ in which case $G$ is the $n$-path.
1 Introduction

Let $G = (V, E)$ be a simple connected graph on $n$ vertices and let $\{X_0, X_1, X_2, \ldots\}$ be a simple random walk on $G$. Let $P_v$, $v \in V$, denote the probability measure induced by $\{X_k\}$ with $X_0 = v$ and let $E_v$ denote expectation with respect to $P_v$. Define the random variables $T_v$, $v \in V$, as $\min\{k : X_k = v\}$. Quantities of interest are the hitting times $h(u,v) = E_u T_v$, $u,v \in V$, the commute times $C(u,v) = E_u T_v + E_v T_u$ and the cover times $C_u = E_u \max_v T_v$. Numerous upper and lower bounds for these and other related quantities, as well as explicit calculations for some special cases of graphs, have been produced over the last couple of years. We refer to [1], in particular Chapters 3-6. In this paper we concentrate on general upper bounds for commute times in a strong sense.

Let $\mathcal{G}(n)$ denote the family of all connected $n$-vertex graphs. For a graph $G \in \mathcal{G}(n)$, let $h(G) = \max_{u,v \in V} h(u,v)$, $C(G) = \max_{u,v \in V} C(u,v)$ and $C_{\mathrm{cov}}(G) = \max_{u \in V} C_u$. Furthermore, let $C^+(G) = \max_{u \in V} C_u^+$ where $C_u^+$ are the cover and return times; $C_u^+ = E_u \min\{k : X_k = u \text{ and } k \ge T_v \text{ for all } v\}$. Feige [5] proves that $\max_{H \in \mathcal{G}(n)} C^+(H) \le (4/27 + o(1))n^3$. (Indeed he proves something stronger, namely that the above bound holds even for the so called cyclic cover time.) This immediately gives the same upper bound for hitting, commute and cover times, as $h(G) \le C(G) \le C^+(G)$ and $h(G) \le C_{\mathrm{cov}}(G) \le C^+(G)$. In fact Feige's bound is tight up to small order terms for all the quantities, as $h(G) = 4n^3/27 + o(n^3)$ when $G$ is a lollipop graph with $[2n/3]$ vertices in the clique.
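The hitting and commute times defined above can be computed exactly on small graphs from the linear equations $h(u) = 1 + \frac{1}{d_u}\sum_{w \sim u} h(w)$ (with $h = 0$ at the target). The sketch below is our own illustration, not part of the paper; the example graph, iteration count and tolerance are arbitrary choices, and the system is solved by simple Gauss-Seidel iteration.

```python
# Illustrative sketch (not from the paper): hitting and commute times of a
# simple random walk, from the linear equations
#   h(target) = 0,   h(u) = 1 + (1/d_u) * sum of h(w) over neighbors w of u,
# solved here by Gauss-Seidel fixed-point iteration.

def hitting_times(adj, target, iters=200000, tol=1e-12):
    """adj: dict vertex -> list of neighbors (simple connected graph)."""
    h = {v: 0.0 for v in adj}
    for _ in range(iters):
        delta = 0.0
        for u in adj:
            if u == target:
                continue
            new = 1.0 + sum(h[w] for w in adj[u]) / len(adj[u])
            delta = max(delta, abs(new - h[u]))
            h[u] = new
        if delta < tol:
            break
    return h

def commute_time(adj, u, v):
    return hitting_times(adj, v)[u] + hitting_times(adj, u)[v]

# Path on 3 vertices 0 - 1 - 2: h(0,2) = 4 and C(0,2) = 8, in agreement with
# C(u,v) = 2 m R(u,v) since m = 2 edges and R(0,2) = 2 ohms.
path3 = {0: [1], 1: [0, 2], 2: [1]}
print(round(hitting_times(path3, 2)[0]))   # 4
print(round(commute_time(path3, 0, 2)))    # 8
```

The same routine run on a lollipop graph reproduces the $4n^3/27$-order behaviour mentioned above, though only for modest $n$ since the iteration is slow.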
(See Definition 1.1 below for formal definitions of the terms "lollipop graph" and "clique" and see [1, Chapter 5] for the calculation.)

Before giving Definition 1.1 let us state some standard graph theory definitions and notation. For a vertex $u$ we denote by $d_u$ the degree of $u$, i.e. the number of edges with $u$ as an end vertex. When we say that two vertices, $u$ and $v$, are adjacent or incident or neighbors, this means that $\{u,v\}$ is an edge. That two edges are adjacent means that they have a common end vertex, and that a vertex $u$ and an edge $e$ are adjacent means that $u$ is an end vertex of $e$. When dealing with more than one graph at the same time we sometimes need to specify to which graph a certain vertex or edge set belongs. Therefore $V(G)$ always automatically means the set of vertices of the graph $G$ and similarly $E(G)$ is the set of edges of $G$. If $V' \subseteq V(G)$ is a subset of vertices then the induced subgraph of $G$ on $V'$ is the graph obtained by removing from $G$ all vertices of $V \setminus V'$ and all edges with at least one end vertex in $V \setminus V'$. The diameter of a graph $G$, $\mathrm{diam}(G)$, is defined as $\max_{u,v \in V} \mathrm{dist}(u,v)$ where $\mathrm{dist}(u,v)$ is the length (i.e. the number of edges) of the shortest path between $u$ and $v$.
Definition 1.1 Let $G = (V,E)$ be a connected graph on $n$ vertices with $|E| = m$. Write $m = n - 1 + \sum_{i=1}^{k} i + l$ where $k$ and $l$ are the unique integers such that $0 \le l \le k$. We say that $G$ is an $n$-path if $k = l = 0$ and $d_v \le 2$ for all $v \in V$. We call $G$ a clique if either $n \in \{1,2\}$ or $m \ge \binom{n-1}{2} + 2$. The graph $G$ is said to be a lollipop graph if there exists one edge $e \in E$ such that the removal of $e$ disconnects $G$ into two graphs $G_c$ and $G_h$ such that $G_c$ is a clique and $G_h$ is a $|V(G_h)|$-path. We call $G$ a double-handled lollipop graph if there are two nonadjacent edges $e, e' \in E$ such that the removal of $e$ and $e'$ disconnects $G$ into three graphs, $G_c$, $G_l$ and $G_r$, such that $G_c$ is a clique and $G_l$ and $G_r$ are $|V(G_l)|$- and $|V(G_r)|$-paths respectively. Finally, if $l = 0$ then we say that a (double-handled) lollipop graph is a full (double-handled) lollipop graph.
Remark. Note that $G$ is both a lollipop graph and a double-handled lollipop graph if and only if $n \ge 4$ and $G$ is an $n$-path. Note also that a graph consisting of a clique and two paths extending from the same vertex is not a double-handled lollipop graph.

Let $L_n^r$ from now on denote the (unique) full lollipop graph on $n$ vertices with $r$ vertices in the clique. As stated above, $h(L_n^{[(2n+1)/3]}) = 4n^3/27 + o(n^3)$ (see e.g. [1]). Thus, if we disregard small order terms, Feige's upper bound on the cover and return time tells us that with respect to hitting times this lollipop graph is optimal. However, Brightwell and Winkler [2] prove that in terms of hitting times $L_n^{[(2n+1)/3]}$ is not only optimal up to small order terms, but it is the optimal graph. I.e. no other $n$-vertex graph can have a higher hitting time. In this paper we prove the corresponding result for commute times. Our main results are the following two theorems. The family $\mathcal{G}(n,m)$ is defined to be the family of all connected $n$-vertex graphs with $m$ edges.
Theorem 1.2 Let $G \in \mathcal{G}(n,m)$ be such that $C(G) = \max_{H \in \mathcal{G}(n,m)} C(H)$. Then $G$ is either a lollipop graph or a double-handled lollipop graph.
Theorem 1.3 Let $G \in \mathcal{G}(n)$ be such that $C(G) = \max_{H \in \mathcal{G}(n)} C(H)$. If $n \ge 10$, then $G$ is either $L_n^{[(2n-1)/3]}$ or any full double-handled lollipop graph with $[(2n-1)/3]$ vertices in the clique. If $n \le 9$, then $G$ is the $n$-path.

As a consequence we have for $n \le 9$ that
$$\max_{H \in \mathcal{G}(n)} C(H) = 2(n-1)^2$$
and for $n \ge 10$ we have that
$$\max_{H \in \mathcal{G}(n)} C(H) = \frac{4n^3}{27} + c_n n + r(n)$$
where $c_n$ and $r(n)$ are both $O(1)$ and are specified in Proposition 2.7 below.

Remark. Our results for commute times differ slightly from those for hitting times in [2]. The results there state that for hitting times $L_n^{[(2n+1)/3]}$ is the optimal graph, whereas for commute times the number of vertices in the clique should be $[(2n-1)/3]$. Also, for hitting times there are no exceptions for small $n$ as for commute times. It should also be noted that even though the result of Theorem 1.3 is very similar in spirit to that of [2], the techniques used to prove these results are very different.
2 Proofs

We will use the correspondence between random walks on graphs and electrical networks: Let $G = (V,E)$ be an $n$-vertex $m$-edge connected graph and regard all the edges of $E$ as 1 ohm resistors. Define the effective resistance $R(u,v)$ between two vertices $u$ and $v$ as $i^{-1}$ where $i$ is the flow of current that results from applying a 1 volt battery to $u$ and $v$. Chandra et al. [3] prove:

Lemma 2.1 For any two vertices $u$ and $v$,
$$C(u,v) = 2mR(u,v).$$

Define in analogy with the above definitions $R(G) = \max_{u,v \in V} R(u,v)$. It follows immediately that $C(G) = 2mR(G)$, so that the statement of Theorem 1.2 is equivalent to the same statement for effective resistances. Effective resistances obey the Monotonicity Law, see e.g. [4]. In particular we have the Shorting Law, which says that short-cutting two or more vertices cannot increase the effective resistances in the network, and the Cutting Law, which says that removing one or more edges cannot decrease the effective resistances in the network. The following lower bound on $R(u,v)$ is obtained from the Shorting Law by shorting all vertices but $u$ and $v$ together.

Lemma 2.2 Let $u, v \in V$. If $\{u,v\} \notin E$, then
$$R(u,v) \ge \frac{1}{d_u} + \frac{1}{d_v}$$
and if $\{u,v\} \in E$, then
$$R(u,v) \ge \frac{1}{d_u + 1} + \frac{1}{d_v + 1}.$$
We shall also use the basic Series and Parallel Rules for calculating resistances. The Series Rule states that if every path from vertex $u$ to vertex $v$ goes via vertex $w$, then $R(u,v) = R(u,w) + R(w,v)$. The Parallel Rule says that if one can write $E$ as $E_1 \cup E_2$ where $E_1$ and $E_2$ are disjoint and every path between $u$ and $v$ only contains edges from one of these sets, then $R(u,v)^{-1} = R_{E_1}(u,v)^{-1} + R_{E_2}(u,v)^{-1}$, where $R_{E_i}(u,v)$, $i = 1,2$, denotes the effective resistance obtained by removing all edges in $E \setminus E_i$. Both these Rules are well known and are taught in high-school courses in physics. For rigorous proofs one can e.g. use the random walk connection; the Series Rule follows immediately from Lemma 2.1 and the Parallel Rule follows from applying the proper lemma of Chapter 3 in [1].

Lemma 2.3 $R(L_n^r) = n - r + 2/r$.

Proof. Let $u$ be any of the vertices in the clique from which the $(n-r)$-path does not extend and let $v$ be the end vertex of that path. If we let $w$ be the vertex in the clique from which the $(n-r)$-path extends, then by the definition of a lollipop graph we can apply the Series Rule to obtain that $R(u,v) = n - r + R(u,w)$. By Lemma 2.2, $R(u,w) \ge 2/r$, and by symmetry the short-cutting argument that led to Lemma 2.2 does not decrease $R(u,w)$, so we have in fact an equality. $\Box$
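Lemma 2.3 is easy to check numerically. In the sketch below (our own verification aid; the lollipop construction and the small Gaussian-elimination solver are choices of the example, not the paper's), effective resistance with unit resistors is obtained by injecting one ampere at $u$, grounding $v$, and solving the Kirchhoff node equations.

```python
# Numerical check of Lemma 2.3 (an illustrative sketch, not part of the paper).

def effective_resistance(n, edges, u, v):
    # Graph Laplacian with the row/column of the grounded vertex v deleted.
    L = [[0.0] * n for _ in range(n)]
    for a, b in edges:
        L[a][a] += 1; L[b][b] += 1
        L[a][b] -= 1; L[b][a] -= 1
    idx = [w for w in range(n) if w != v]
    A = [[L[r][c] for c in idx] for r in idx]
    rhs = [1.0 if r == u else 0.0 for r in idx]   # 1 ampere injected at u
    m = len(A)
    for col in range(m):                          # Gaussian elimination
        p = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]; rhs[col], rhs[p] = rhs[p], rhs[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (rhs[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return x[idx.index(u)]   # potential at u = R(u, v), since v sits at 0 volts

def lollipop(n, r):
    """Full lollipop L_n^r: clique on {0,...,r-1}, handle r-1, r, ..., n-1."""
    edges = [(i, j) for i in range(r) for j in range(i + 1, r)]
    edges += [(i, i + 1) for i in range(r - 1, n - 1)]
    return edges

# R(L_7^4) between a clique vertex off the handle (vertex 0) and the path end
# (vertex 6): Lemma 2.3 gives n - r + 2/r = 3 + 0.5 = 3.5.
print(effective_resistance(7, lollipop(7, 4), 0, 6))   # ~ 3.5
```

Combined with Lemma 2.1, the same solver also reproduces commute times as $2mR(u,v)$.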
Note that an analogous argument shows that if $G$ is a full double-handled lollipop then also $R(G) = n - r + 2/r$.

Recall the way in which we wrote $m$ in Definition 1.1: $m = n - 1 + k(k+1)/2 + l$ where $k$ and $l$ are the two unique integers such that $l \in \{0, \ldots, k\}$. This way of expressing $m$ will be used throughout this section. Note that when $l = 0$ in the expression $m = n - 1 + k(k+1)/2 + l$ we have $L_n^{k+2} \in \mathcal{G}(n,m)$, so one should think of the $r$ in Lemma 2.3 as $k + 2$. For the cases $l = 1, \ldots, k$ we have the following.

Lemma 2.4 Assume that $l \ge 1$. Then there exists one lollipop graph $G' \in \mathcal{G}(n,m)$ such that $R(G') > n - 3 - k + 1/(l+1)$. On the other hand we have for any lollipop graph $G \in \mathcal{G}(n,m)$ that $R(G) \le n - 3 - k + 2/(l+1)$.

Proof. Let first $G$ be any lollipop graph and let $w$ denote the vertex of the clique of $G$ from where the $(n-3-k)$-path extends. Since $l$ is defined so that the number of edges in the clique is $\binom{k+2}{2} + l + 1$, we can if necessary rearrange the edges within the clique so that $d_w = l + 2$ and thus so that the induced subgraph on the other vertices in the clique is complete. Let $v$ be the end vertex of the $(n-3-k)$-path. Then $R(u,v) = R(u,w) + n - 3 - k$ for any vertex $u$ in the clique. Since $l \ne 0$ the (induced subgraph on the vertices of the) clique cannot be complete, so we can take $u$ to be a vertex such that $\{u,w\} \notin E$. Lemma 2.2 thus implies that $R(u,w) \ge 1/(k+1) + 1/(l+1) > 1/(l+1)$, proving the first part of the lemma. For the second part observe that for any $u$ in the clique we have either $\{u,w\} \in E$, in which case $u$ and $w$ must have at least $l$ common neighbors, or $\{u,w\} \notin E$, in which case $u$ and $w$ must have at least $l + 1$ common neighbors. In either case the Cutting Law and the Parallel Rule yield that $R(u,w) \le 2/(l+1)$ and the result follows. $\Box$
The next lemma is a simple graph theoretical result which gives the maximum diameter of an n-vertex m-edge connected graph as a function of m.
Lemma 2.5 For any graph $G \in \mathcal{G}(n,m)$ it is the case that
$$\mathrm{diam}(G) \le n - 1 - k.$$
Proof. Assume for contradiction that $\mathrm{diam}(G) = n - 1 - k + r$ for some $r \ge 1$ and take $u, v \in V$ with $\mathrm{dist}(u,v) = n - 1 - k + r$. Let $P = (u, x_1, \ldots, x_{n-2-k+r}, v)$ be a shortest path between $u$ and $v$. It is clear that none of the $k - r$ vertices of $V \setminus P$ has more than three neighbors in $P$, as otherwise $P$ would not be a shortest path. Therefore we have
$$|E| \le n - 1 - k + r + 3(k-r) + \binom{k-r}{2} = n - 1 + 2(k-r) + \binom{k-r}{2} = n - 1 + \frac{1}{2}(k-r+1)(k-r+2) - 1 \le n - 1 + \frac{1}{2}k(k+1) - 1,$$
contradicting the value of $k$. $\Box$
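Lemma 2.5 can also be sanity-checked exhaustively on small graphs. The brute-force script below is our own illustration (not part of the paper's argument): it enumerates all connected graphs on 6 vertices, recovers $k$ from $m = n - 1 + k(k+1)/2 + l$, and verifies the diameter bound.

```python
# Brute-force check of Lemma 2.5 over all connected graphs on n = 6 vertices
# (an illustrative verification; the choice n = 6 is arbitrary but small
# enough that all 2^15 edge subsets can be enumerated).
from itertools import combinations

def diameter(n, edges):
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    best = 0
    for s in range(n):                 # BFS from every source
        dist, frontier = {s: 0}, [s]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        nxt.append(w)
            frontier = nxt
        if len(dist) < n:
            return None                # disconnected
        best = max(best, max(dist.values()))
    return best

def k_of(n, m):
    """The unique k with m = n - 1 + k(k+1)/2 + l, 0 <= l <= k."""
    k = 0
    while n - 1 + (k + 1) * (k + 2) // 2 <= m:
        k += 1
    return k

n = 6
all_edges = list(combinations(range(n), 2))
for bits in range(1 << len(all_edges)):
    edges = [e for i, e in enumerate(all_edges) if bits >> i & 1]
    if len(edges) < n - 1:
        continue
    d = diameter(n, edges)
    if d is not None:
        assert d <= n - 1 - k_of(n, len(edges))
print("Lemma 2.5 holds for all connected graphs on 6 vertices")
```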
Lemma 2.5 comes in handy in the proof of the next lemma, which is the key to the proof of Theorem 1.2. Note that the result is trivial for $n \le 3$.

Lemma 2.6 Assume that $m \le \binom{n-1}{2} + 1$ and that $G \in \mathcal{G}(n,m)$ is such that $R(G) = \max_{H \in \mathcal{G}(n,m)} R(H)$. Then $\min_{v \in V} d_v = 1$.
Proof. We will prove that if $\min_v d_v \ge 2$ then $R(G)$ is beaten by $R(G')$, where $G'$ is the lollipop graph of Lemma 2.4, unless $l = 0$, in which case $R(G)$ is beaten by $R(L_n^{k+2})$.

Assume first that $l = 0$. By Lemma 2.3, $R(L_n^{k+2}) = n - 2 - k + 2/(k+2) > n - 2 - k$. Now pick any two vertices $u$ and $v$. If we can prove that the assumption $d_u, d_v \ge 2$ implies that $R(u,v) \le n - 2 - k$, we are done in the case $l = 0$. By Lemma 2.5, $\mathrm{dist}(u,v) \le n - 1 - k$ and by the Cutting Law we may assume that $\mathrm{dist}(u,v)$ in fact equals $n - 1 - k$. Let $P = (u, x_1, \ldots, x_{n-2-k}, v)$ be a shortest path between $u$ and $v$. If any of the vertices in $V \setminus P$ has more than two neighbors in $P$, then the Cutting Law and the Parallel Rule immediately yield that $R(u,v) \le n - 2 - k$ and we are done, so assume that this is not the case. Then since $m = n - 1 + k(k+1)/2$ this implies that

(i) every vertex in $V \setminus P$ has exactly two neighbors in $P$,

(ii) the induced subgraph on $V \setminus P$ is complete.

Now since $d_u, d_v \ge 2$ there exist two vertices $u'$ and $v'$ in $V \setminus P$ such that $\{u, u'\} \in E$ and $\{v, v'\} \in E$. By (ii) we also have that $\{u', v'\} \in E$, implying that $\mathrm{dist}(u,v) \le 3$, a contradiction when $k \le n - 5$, and also when $k = n - 4$ as the Parallel Rule and the Cutting Law tell us that $R(u,v) \le 3/2 < 2 = n - 2 - k$. In the case $k = n - 3$, we have that $R(u,v) \le 6/5$, which allows us to assume that $k \ge 8$ so that $n \ge 11$. Together with (i), (ii), the Parallel Rule and the Cutting Law this clearly entails that $R(u,v) \le 1 = n - 2 - k$.

Assume now that $l = 1$. Then Lemma 2.4 tells us that there exists a lollipop graph $G' \in \mathcal{G}(n,m)$ such that $R(G') > n - 5/2 - k$. Pick any $u$ and $v$ in $V$ and assume first that $\mathrm{dist}(u,v) = n - 1 - k$. Let as above $P$ be a shortest path between $u$ and $v$ and let $u'$ and $v'$ be vertices in $V \setminus P$ incident to $u$ and $v$ respectively. If $\{u',v'\} \in E$ then we have as above a contradiction in terms of the value of $\mathrm{dist}(u,v)$ when $k \le n - 5$, and that $R(u,v) \le 3/2$ when $k = n - 4$, so we may assume that $\{u',v'\} \notin E$. (Note that the value $k = n - 3$ is impossible when $l \ge 1$.) Then we find by edge counting that there must be at least two vertices in $V \setminus P$ that have three neighbors in $P$. If $u'$ and $v'$ are such, then the Parallel Rule yields that $R(u,v) \le n - 3 - k$, so we may assume that this is not the case and so that there exists one other vertex $w' \in V \setminus P$ with three neighbors in $P$. Now if $\{u',w'\} \in E$ and $\{v',w'\} \in E$ then the Parallel Rule yields $R(u,v) \le 4(n-1-k)/(n+3-k) < n - 5/2 - k$. Thus we may in fact assume that there are at least three vertices in $V \setminus P$ with three neighbors in $P$. However, together with the Parallel Rule this implies that $R(u,v) \le n - 5/2 - k$ as desired.

Assume now that $\mathrm{dist}(u,v) = n - 2 - k$ and let $P = (u, x_1, \ldots, x_{n-3-k}, v)$ be a shortest path between $u$ and $v$. Since $\mathrm{dist}(u',v') \le n - 1 - k$ by Lemma 2.5, we have that $R(u,v)$ is at most $n - 3 - k + 5/12 < n - 5/2 - k$ by the Parallel Rule and the Cutting Law. (This bound is obtained by letting $u'$ be incident to $x_1$ and letting $v'$ be incident to another vertex $w'$, which in turn is incident to $x_{n-3-k}$. See Figure 1.)

[Figure 1: The "worst case" — the path $(u, x_1, \ldots, x_{n-3-k}, v)$ together with the outside vertices $u'$ (incident to $u$), $w'$ and $v'$ (incident to $v$).]
Now let us finally deal with the case $l \ge 2$. Pick as before $u$ and $v$ arbitrarily and assume first that $\mathrm{dist}(u,v) = n - 2 - k$ and let $P = (u, x_1, \ldots, x_{n-3-k}, v)$ be a shortest path between $u$ and $v$. Let again $u'$ and $v'$ be two vertices in $V \setminus P$ incident to $u$ and $v$ respectively. Again by Lemma 2.5, $\mathrm{dist}(u',v') \le n - 1 - k$. Also, since we can clearly assume that no vertex in $V \setminus P$ has more than two neighbors in $P$, it follows from edge counting that at least two vertices in $V \setminus P$ must have precisely two neighbors in $P$. Combining these two facts and the now familiar calculation Rules yields $R(u,v) \le n - 7/3 - k$ (which is what we get if we let $u'$ be incident to $x_1$ and $v'$ be incident to $x_{n-3-k}$). By Lemma 2.4 we may thus assume that $l \ge 3$, so that at least one more vertex in $V \setminus P$ has two neighbors in $P$. Let $w$ be such a vertex. In the "worst case" we can let $w$ be incident to e.g. $u$ and $x_2$, which gives $R(u,v) \le n - 17/6 - k$, and so we can assume that $l \ge 6$. This gives us another three vertices of $V \setminus P$ with two neighbors in $P$. Adding these implies that $R(u,v) \le n - 3 - k$ and by Lemma 2.4 this proves the desired result when $\mathrm{dist}(u,v) = n - 2 - k$, so assume now that $\mathrm{dist}(u,v) = n - 1 - k$.

Let $P = (u, x_1, \ldots, x_{n-2-k}, v)$ be a shortest path between $u$ and $v$. Since no vertex in $V \setminus P$ has more than three neighbors in $P$, we get that the induced subgraph on $V \setminus P$ has at least $n - 1 + k(k+1)/2 + 2 - 3k - (n - 1 - k) = \binom{k-1}{2} + 1$ edges. This means that the induced subgraph on $V \setminus P$ has diameter at most 2, implying that $\mathrm{dist}(u,v) \le 4$. If $k \le n - 6$ this is immediately a contradiction, and if $k = n - 5$ we have by the Parallel Rule and the Cutting Law that $R(u,v) \le 2 = n - 3 - k$, which by Lemma 2.4 proves the result. If $k = n - 4$ we have to do some more work. Since $l \ge 2$ at least two of the vertices in $V \setminus P$ have three neighbors in $P$. If $u'$ and $v'$ are incident, then this readily implies that $R(u,v) \le 5/4$ (the "worst case" being when only $u'$ and $v'$ have three neighbors in $P$), so by Lemma 2.4 we may assume that $l \ge 4$, which means that there must be at least two vertices besides $u'$ and $v'$ in $V \setminus P$ which have three neighbors in $P$. By the Parallel Rule and the Cutting Law this entails that $R(u,v) \le 15/14$ and thus allows us to assume that $l \ge 14$. This in turn implies that $R(u,v) \le 1$ and proves the result when $u'$ and $v'$ are incident. Now if $\{u',v'\} \notin E$ then at least one vertex in $V \setminus P$ besides $u'$ and $v'$ has three neighbors in $P$. Combining this with the fact that there is a path from $u$ to $v$ through $V \setminus P$ of length 4 yields $R(u,v) \le (4 \cdot 2)/(4 + 2) = 4/3$ and we may assume that $l \ge 3$. Then we get $R(u,v) \le (4 \cdot 5/3)/(4 + 5/3) = 20/17 < 5/4$ and we can assume that $l \ge 4$. But then $R(u,v) \le (4 \cdot 3/2)/(4 + 3/2) = 12/11$, allowing us to assume that $l \ge 11$. This finally implies that $R(u,v) \le 1$. The proof is complete. $\Box$
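Lemma 2.6 lends itself to a small brute-force illustration (our own check, not the paper's argument): for $n = 5$ and every $m \le \binom{4}{2} + 1 = 7$, all connected graphs of minimum degree at least 2 have strictly smaller maximal effective resistance than the best graph with $m$ edges.

```python
# Brute-force illustration of Lemma 2.6 for n = 5 (an assumption-laden sketch;
# the solver and enumeration are ours).
from itertools import combinations

def eff_res(n, edges, u, v):
    L = [[0.0] * n for _ in range(n)]
    for a, b in edges:
        L[a][a] += 1; L[b][b] += 1
        L[a][b] -= 1; L[b][a] -= 1
    idx = [w for w in range(n) if w != v]          # ground v
    A = [[L[r][c] for c in idx] for r in idx]
    rhs = [1.0 if r == u else 0.0 for r in idx]
    m = len(A)
    for col in range(m):                           # Gaussian elimination
        p = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]; rhs[col], rhs[p] = rhs[p], rhs[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (rhs[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return x[idx.index(u)]

def connected(n, edges):
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b); adj[b].append(a)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return len(seen) == n

n = 5
pairs = list(combinations(range(n), 2))
for m in range(n - 1, 8):                          # m <= C(4,2) + 1 = 7
    best, best_min_deg2 = 0.0, 0.0
    for chosen in combinations(pairs, m):
        if not connected(n, chosen):
            continue
        deg = [0] * n
        for a, b in chosen:
            deg[a] += 1; deg[b] += 1
        R = max(eff_res(n, chosen, u, v) for u, v in pairs)
        best = max(best, R)
        if min(deg) >= 2:
            best_min_deg2 = max(best_min_deg2, R)
    assert best_min_deg2 < best - 1e-9
print("extremal graphs have a degree-1 vertex for n = 5, m = 4..7")
```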
Proof of Theorem 1.2. Note first that if $m \ge \binom{n-1}{2} + 2$ or $n = m + 1 = 2$ then all graphs in $\mathcal{G}(n,m)$ are lollipop graphs, so the result is trivially true. Now use induction; assume that $m \ge \binom{n-1}{2} + 2$ or $n = m + 1 = 2$, assume that the result holds for $\mathcal{G}(n+i, m+i)$, $i = 0, \ldots, b-1$, and let us under this assumption prove the result for $\mathcal{G}(n+b, m+b)$. First let $G_0 \in \mathcal{G}(n+b-1, m+b-1)$ be such that $R(G_0) = \max_{H \in \mathcal{G}(n+b-1, m+b-1)} R(H)$. Then by the induction hypothesis $G_0$ is a lollipop graph or a double-handled lollipop graph, so $R(G_0) = R(u,v)$ where $u$ and $v$ are either the end vertices of the two handles (in the double-handled case) or the end vertex of the handle and a vertex of the clique (in the single-handled case), unless of course $b = 1$. In either case construct a new graph $G_1 \in \mathcal{G}(n+b, m+b)$ by adjoining to either $u$ or $v$ a vertex $v_1$, e.g. $V(G_1) = V(G_0) \cup \{v_1\}$ and $E(G_1) = E(G_0) \cup \{\{v, v_1\}\}$. Then $G_1$ is a (double-handled) lollipop graph and $R(G_1) = R(G_0) + 1$ by the Series Rule. Now if $G_1' \in \mathcal{G}(n+b, m+b)$ is not a lollipop or a double-handled lollipop graph but still such that $R(G_1') \ge R(G_1)$, then by Lemma 2.6 we can construct a graph $G_0' \in \mathcal{G}(n+b-1, m+b-1)$ by removing from $G_1'$ a vertex of degree 1 and its adjoining edge. By the Series Rule
$$R(G_0') \ge R(G_1') - 1 \ge R(G_1) - 1 = R(G_0),$$
a contradiction to the induction hypothesis. Since for any $n$ and $m \in \{n-1, \ldots, \binom{n}{2}\}$ we can write $n = n_0 + b$ and $m = m_0 + b$ for some $n_0$ and $m_0$ such that $n_0 = m_0 + 1 = 2$ or $m_0 \ge \binom{n_0-1}{2} + 2$, the result follows from Lemma 2.1. $\Box$

In order to prove Theorem 1.3 we now have to compare different lollipop graphs. (Let us for now forget about the double-handled lollipop graphs and use the single-handled lollipops as representatives for the family of graphs in $\mathcal{G}(n,m)$ with maximal resistance, $m = n-1, \ldots, \binom{n}{2}$.) By Lemma 2.4 we have for $1 \le l \le k$,
$$R(G) \le n - 3 - k + \frac{2}{l+1}.$$
Thus by Lemma 2.1,
$$C(G) \le 2\Big(n - 1 + \frac{1}{2}k(k+1) + l\Big)\Big(n - 3 - k + \frac{2}{l+1}\Big)$$
$$= 2\Big(a(n,k) + \frac{2\big(n - 1 + \frac{1}{2}k(k+1)\big)}{l+1} + l(n - 3 - k) + \frac{2l}{l+1}\Big) =: 2\big(a(n,k) + f_{n,k}(l)\big),$$
where $a(n,k) = \big(n - 1 + \frac{1}{2}k(k+1)\big)(n - 3 - k)$.

Let us study the function $f_{n,k}(l)$. We have
$$f_{n,k}(l+1) - f_{n,k}(l) = n - 3 - k - \frac{2\big(n - 1 + \frac{1}{2}k(k+1)\big) - 2}{(l+1)(l+2)},$$
$l = 1, \ldots, k$. Since this difference is increasing in $l$, the function $f_{n,k}(l)$ is maximized either at $l = 1$ or at $l = k$, and so $C(G) \le G_n(k) \vee H_n(k)$ where
$$G_n(k) = (2n + k(k+1))(n - 2 - k) \qquad (1)$$
and
$$H_n(k) = (2n - 2 + k^2 + 3k)\Big(n - 3 - k + \frac{2}{k+1}\Big). \qquad (2)$$

Set further $F_n(k)$ to be the commute time for a full lollipop with $m = n - 1 + k(k+1)/2$ edges, i.e. for $L_n^{k+2}$. By Lemma 2.3,
$$F_n(k) = (2n - 2 + k(k+1))\Big(n - 2 - k + \frac{2}{k+2}\Big). \qquad (3)$$

Theorem 1.3 follows immediately from the following two propositions.
Proposition 2.7 The function $F_n(k)$ defined by (3) is for $n \le 9$ maximized at $k = 0$ and
$$F_n(0) = 2(n-1)^2.$$
For $n \ge 10$, $F_n(k)$ is maximized at $k = [(2n-1)/3] - 2$ and
$$F_n([(2n-1)/3] - 2) = \frac{4n^3}{27} + c_n n + r(n)$$
where
$$c_n = \begin{cases} 14/9 & \text{for } n = 10, 11, 13, 14, 16, 17, \ldots \\ 4/3 & \text{for } n = 12, 15, 18, \ldots \end{cases}$$
and
$$r(n) = \begin{cases} 8/27 + 6/(n-1) & \text{for } n = 10, 13, 16, \ldots \\ -8/27 + 6/(2n-1) & \text{for } n = 11, 14, 17, \ldots \\ 2 + 18/(2n-3) & \text{for } n = 12, 15, 18, \ldots \end{cases}$$
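Proposition 2.7 is easy to confirm numerically; the script below (our own verification aid, not part of the paper) maximizes $F_n(k)$ by direct enumeration and compares with the stated maximizer and closed form.

```python
# Numerical check of Proposition 2.7 (an illustrative verification script).

def F(n, k):
    return (2*n - 2 + k*(k + 1)) * (n - 2 - k + 2 / (k + 2))

def c_r(n):
    """The constants c_n and r(n) from Proposition 2.7."""
    if n % 3 == 0:
        return 4/3, 2 + 18 / (2*n - 3)
    if n % 3 == 1:
        return 14/9, 8/27 + 6 / (n - 1)
    return 14/9, -8/27 + 6 / (2*n - 1)

for n in range(4, 40):
    k_star = max(range(0, n - 1), key=lambda k: F(n, k))
    if n <= 9:
        assert k_star == 0 and F(n, 0) == 2 * (n - 1) ** 2
    else:
        k_pred = (2 * n - 1) // 3 - 2
        c, r = c_r(n)
        assert k_star == k_pred
        assert abs(F(n, k_pred) - (4 * n**3 / 27 + c * n + r)) < 1e-8
print("Proposition 2.7 confirmed for n = 4, ..., 39")
```

For instance $F_{10}(4) = 38 \cdot 13/3 = 494/3$, which agrees with $4 \cdot 10^3/27 + (14/9) \cdot 10 + 8/27 + 6/9$.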
Proposition 2.8 For $G_n(k)$ and $H_n(k)$ given by (1) and (2) respectively we have for $n \le 9$ that
$$G_n(k) \vee H_n(k) \le 2(n-1)^2$$
and for $n \ge 10$ that
$$G_n(k) \vee H_n(k) \le \frac{4n^3}{27} + c_n n + r(n),$$
where $c_n$ and $r(n)$ are as in Proposition 2.7.

The proofs of these two propositions involve some heavy calculations which have been carried out with the help of Mathematica. I am greatly indebted to the constructors of this program, which saved me a large amount of work and also a large risk of making some computational error.

Proof of Proposition 2.7. Write $F_n(k) = F_n'(k) + 4n/(k+2)$, so that $F_n'(k) = -k^3 + (n-3)k^2 + (2-n)k + 2n^2 - 6n + 2$, a polynomial in $k$ of degree 3 with a negative $k^3$-coefficient. Thus $F_n'$ has at most two local maxima; one at $k = 0$ and perhaps one more. By inspection we see that for $n \le 8$, $F_n'$ is decreasing, and by using e.g. Mathematica we see that $F_n'$ for $n \ge 9$ has a local maximum at $k = [(2n-1)/3] - 2$. Now for $F_n$ we have that since $F_n$ is the sum of $F_n'$ and a decreasing function, $F_n$ is maximized either at $k = 0$ or at the rightmost local maximum, which is at or to the left of $[(2n-1)/3] - 2$. Using Mathematica we get for $n = 9, 12, 15, \ldots$
$$F_n(2n/3 - 2) - F_n(2n/3 - 3) = -\frac{4(n+3)}{2n-3} \le 0,$$
$$F_n(2n/3 - 3) - F_n(2n/3 - 4) = \frac{4n^3 - 46n^2 + 126n - 126}{2n^2 - 9n + 9} \ge 0.$$
For $n = 10, 13, 16, \ldots$ we get
$$F_n(2n/3 - 5/3) - F_n(2n/3 - 8/3) = \frac{4n^3 - 6n^2 + 54n + 2}{3 + 3n - 6n^2} \le 0,$$
$$F_n(2n/3 - 8/3) - F_n(2n/3 - 11/3) = \frac{8n^3 - 84n^2 + 162n - 140}{6n^2 - 21n + 15} \ge 0.$$
For $n = 11, 14, 17, \ldots$ we get
$$F_n(2n/3 - 4/3) - F_n(2n/3 - 7/3) = \frac{-8n^3 + 12n^2 - 42n - 8}{6n^2 + 3n - 3} \le 0,$$
$$F_n(2n/3 - 7/3) - F_n(2n/3 - 10/3) = \frac{4n^3 - 42n^2 + 30n - 32}{6n^2 - 15n + 6} \ge 0.$$
Thus, for $n \le 8$, $F_n$ is maximized at $k = 0$ and $F_n(0) = 2(n-1)^2$ as desired. For $n \ge 9$, $F_n$ is maximized either at $k = 0$ or at $k = [(2n-1)/3] - 2$. Inserting the latter into (3) with the help of Mathematica confirms the expression for $F_n([(2n-1)/3] - 2)$, and inspection of this expression reveals that for $n \ge 10$ this is indeed the maximum for $F_n$, whereas for $n = 9$, $F_n(0)$ is maximal as claimed. $\Box$

Proof of Proposition 2.8. This proof is very similar to the proof of Proposition 2.7. Let us first take care of $G_n$. Since $G_n$ is a polynomial of degree 3 in $k$ with a negative $k^3$-coefficient, $G_n$ is maximized either at $k = 0$ or at the other local maximum if such exists. First, inspection of $G_n$ reveals that for $n \le 8$, $G_n$ is decreasing, and since $G_n(0) = 2(n-1)^2 - 2$ this fits with the statement of the proposition. For $n = 9, 12, 15, \ldots$ we get
$$G_n(2n/3 - 3) - G_n(2n/3 - 4) = 2n - 18 \ge 0,$$
$$G_n(2n/3 - 2) - G_n(2n/3 - 3) = -6 \le 0,$$
so that $G_n(2n/3 - 3) = 4n^3/27 + 2n/3 + 6 < 4n^3/27 + c_n n + r(n)$ is maximal, as desired. For $n = 11, 14, 17, \ldots$ we get
$$G_n(2n/3 - 7/3) - G_n(2n/3 - 10/3) = \frac{2(n - 14)}{3} \ge 0 \text{ for } n \ge 14,$$
$$G_n(2n/3 - 4/3) - G_n(2n/3 - 7/3) = -\frac{4(n+1)}{3} \le 0,$$
$$G_n(2n/3 - 10/3) - G_n(2n/3 - 13/3) = \frac{2(4n - 35)}{3} \ge 0,$$
and for $n \ge 14$ we have that $G_n(2n/3 - 7/3) = 4n^3/27 + 8n/9 + 28/27 < 4n^3/27 + c_n n + r(n)$ as desired, and for $n = 11$ we have $G_{11}(4) = 210 < 214 < 4n^3/27 + c_n n + r(n)$. For $n = 10, 13, 16, \ldots$ we get
$$G_n(2n/3 - 8/3) - G_n(2n/3 - 11/3) = \frac{4(n - 10)}{3} \ge 0,$$
$$G_n(2n/3 - 5/3) - G_n(2n/3 - 8/3) = -\frac{2(n + 5)}{3} \le 0,$$
and $G_n(2n/3 - 8/3) = 4n^3/27 + 8n/9 + 80/27 < 4n^3/27 + c_n n + r(n)$. This completes the proof for $G_n$, so let us now take care of $H_n$.

Write $H_n(k) = H_n'(k) + 4(n-2)/(k+1)$ where $H_n'(k) = -k^3 + (n-6)k^2 + (n-5)k + 2n^2 - 8n + 10$. Then, by exactly the same arguments as for $F_n$ in the proof of Proposition 2.7, $H_n$ is maximized at either $k = 0$ or the possible local maximum which coincides with, or is as close as possible on the left of, the possible second local maximum of $H_n'$. It is readily verified that for $n \le 8$, $H_n'$ is decreasing and that for $n \ge 9$, $H_n'$ has a local maximum at $k = [(2n-1)/3] - 3$. Note also that $H_n(0) = 2(n-1)^2$. For $n = 9, 12, 15, \ldots$ we get
$$H_n(2n/3 - 4) - H_n(2n/3 - 5) = \frac{4n^3 - 66n^2 + 342n - 612}{2n^2 - 21n + 54} \ge 0,$$
$$H_n(2n/3 - 3) - H_n(2n/3 - 4) = \frac{36 - 18n}{2n^2 - 15n + 27} \le 0.$$
Thus $H_n$ is locally maximized at $k = 2n/3 - 4$ and $H_n(2n/3 - 4) = 4n^3/27 + 2n/3 + 4 + 30/(2n - 9) < 4n^3/27 + c_n n + r(n)$. For $n = 11, 14, \ldots$ we get
$$H_n(2n/3 - 10/3) - H_n(2n/3 - 13/3) = \frac{4n^3 - 54n^2 + 186n - 242}{6n^2 - 51n + 105} \ge 0,$$
$$H_n(2n/3 - 7/3) - H_n(2n/3 - 10/3) = -\frac{8(n^2 - 7n + 19)}{6n - 21} \le 0,$$
and $H_n(2n/3 - 10/3) = 4n^3/27 + 8n/9 + (164n - 88)/(54n - 189) < 4n^3/27 + c_n n + r(n)$. For $n = 10, 13, 16, \ldots$ we get
$$H_n(2n/3 - 11/3) - H_n(2n/3 - 14/3) = \frac{8n^3 - 120n^2 + 540n - 860}{6n^2 - 57n + 132} \ge 0,$$
$$H_n(2n/3 - 8/3) - H_n(2n/3 - 11/3) = \frac{268 - 198n + 42n^2 - 4n^3}{6n^2 - 39n + 60} \le 0,$$
and $H_n(2n/3 - 11/3) = 4n^3/27 + 8n/9 + (80n + 4)/(27n - 108) < 4n^3/27 + c_n n + r(n)$. This completes the proof. $\Box$
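As with Proposition 2.7, the bound of Proposition 2.8 can be double-checked numerically. The script below (our own verification aid, not part of the paper) evaluates $G_n(k) \vee H_n(k)$ over all relevant $k$ and compares with the stated bound.

```python
# Numerical check of Proposition 2.8 (an illustrative verification script).

def G(n, k):
    return (2*n + k*(k + 1)) * (n - 2 - k)

def H(n, k):
    return (2*n - 2 + k*k + 3*k) * (n - 3 - k + 2 / (k + 1))

def bound(n):
    """The right-hand side: 2(n-1)^2 for n <= 9, else 4n^3/27 + c_n n + r(n)."""
    if n <= 9:
        return 2 * (n - 1) ** 2
    if n % 3 == 0:
        return 4 * n**3 / 27 + (4/3) * n + 2 + 18 / (2*n - 3)
    if n % 3 == 1:
        return 4 * n**3 / 27 + (14/9) * n + 8/27 + 6 / (n - 1)
    return 4 * n**3 / 27 + (14/9) * n - 8/27 + 6 / (2*n - 1)

for n in range(4, 60):
    worst = max(max(G(n, k), H(n, k)) for k in range(0, n - 2))
    assert worst <= bound(n) + 1e-9
print("Proposition 2.8 confirmed for n = 4, ..., 59")
```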
References

1. D. Aldous and J. A. Fill, "Reversible Markov Chains and Random Walks on Graphs," draft book, March 1998.
2. G. Brightwell and P. Winkler, Maximum hitting times for random walks on graphs, Random Struct. Alg. 1 (1990), 263-276.
3. A. Chandra, P. Raghavan, R. Smolensky and P. Tiwari, The electrical resistance of a graph captures its commute and cover times, Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing, Seattle, WA, May 1989, pp. 574-586.
4. P. Doyle and J. Snell, "Random Walks and Electrical Networks," The Mathematical Association of America, Washington, DC, 1984.
5. U. Feige, A tight upper bound on the cover time for random walks on graphs, Random Struct. Alg. 6 (1995), 51-54.