Faster Algorithms for the Quickest Transshipment Problem

L. Fleischer

August 1997

Abstract

A transshipment problem with demands that exceed network capacity can be solved by sending flow in several waves. How can this be done in the minimum number of waves? This is the question tackled in the quickest transshipment problem. Hoppe and Tardos [10] describe the only known polynomial time algorithm to solve this problem. Their algorithm repeatedly minimizes submodular functions using the ellipsoid method, and is therefore not at all practical. We present an algorithm that finds a quickest transshipment with a polynomial number of maximum flow computations, and a faster algorithm that also uses minimum cost flow computations. When there is only one sink, we show how the algorithm can be sped up to return a solution using O(k) maximum flow computations, where k is the number of sources. Hajek and Ogier [9] describe an algorithm that finds a fractional solution to the single-sink quickest transshipment problem on a network with n nodes using O(n) maximum flow computations. They actually solve the universally quickest transshipment: a dynamic flow that minimizes the amount of supply left in the network at every moment of time. In this paper, we show how to solve this problem in O(mn log(n²/m)) time, the same asymptotic time required by the fastest known algorithm to compute a maximum flow.

email: [email protected]. Department of Industrial Engineering and Operations Research, Columbia University. Supported in part by ONR through an NDSEG fellowship, by AASERT through grant N00014-95-1-0985, by an American Association of University Women Educational Foundation Selected Professions Fellowship, by the NSF PYI award of Éva Tardos, and by NSF through grant DMS 9505155.

1 Introduction

The field of network flows blossomed in the 1940s and 50s with interest in transportation planning, and has developed rapidly since then. There is a significant body of literature devoted to this subject. However, it has largely ignored a crucial aspect of transportation: transportation occurs over time. In the 1960s, Ford and Fulkerson introduced dynamic network flows to include time in the network model. Since then, dynamic network flows have been used widely to model network-structured, decision-making problems over time: problems in electronic communication, production and distribution, economic planning, cash flow, job scheduling, and transportation. For examples, see the surveys of Aronson [4] and Powell, et al. [17].

A dynamic network consists of a network N on vertex set V with a capacity vector u and a transit-time vector, both associated with the edge set E. Flow moves through a dynamic network over time. Edge capacities restrict the rate of flow, and edge transit times determine how long each unit of flow spends traversing the network. Much of the work on dynamic flow problems uses an exponentially sized, time-expanded graph [4, 17]. In this paper, we discuss a special case of dynamic network flow problems: we assume all transit times are zero. This special case has been considered in [2, 9, 15, 19, 14], among others. Dynamic network flow models with zero transit times capture some time-related issues: they can be used to model instances when network capacities restrict the quantity of flow that can be sent at any one time. Solving these problems efficiently may help in finding a more efficient exact or approximate algorithm to solve harder dynamic network problems with transit times or multicommodity demands.

1.1 The Discrete Model

A dynamic transshipment is a time-dependent flow f(t) through a dynamic network with a non-zero supply and demand vector σ associated with V, and a time bound T. We assume for simplicity of notation that f_ij(t) = −f_ji(t) for all i, j ∈ V. A dynamic transshipment obeys edge capacity constraints f(t) ≤ u for all t ∈ {1, 2, ..., T}, flow conservation constraints

  ∑_{t=1}^{r} ∑_{j} f_ij(t) ≤ σ_i   for all r ∈ {1, 2, ..., T} and all i ∈ V,

and zeroes all supplies and demands by time T:

  ∑_{t=1}^{T} ∑_{j∈V} f_ij(t) = σ_i   for all i ∈ V.

The quickest transshipment problem asks for a dynamic transshipment that zeroes all supplies and demands in the minimum possible time. Solving a quickest transshipment problem with fixed supplies is useful for clearing a network after a communication breakdown. Many of the applications of dynamic flows need integral solutions: when flows are of big objects, like airplanes or train engines, the amounts are often small, so fractional approximations are not very useful. Recently, Hoppe and Tardos [10] describe the only known polynomial time algorithm to solve the quickest transshipment problem. They actually solve the harder problem in which arcs have non-zero transit times. However, their algorithm repeatedly calls the ellipsoid method as a subroutine, so it is theoretically slow, and also not practical.

1.1.1 Results

We present a new framework for solving quickest transshipment problems. A fractional solution for a dynamic transshipment problem with a given time bound T is easy to find, and we will show that this can be done with a single maximum flow computation. The new framework allows us to find integral solutions quickly as well.

  algorithm    | # sinks | integral flow | discrete time | transit times | universal | run time
  Hajek-Ogier  |    1    |               |               |               |     ✓     | O(n) maximum flows
  Hoppe-Tardos |    k    |       ✓       |       ✓       |       ✓       |           | O(k³ log(nUστ) log(nUτ)) minimum cost flows
  Section 4.2  |    k    |       ✓       |       ✓       |               |           | O(k² log Un + k log(T*/k)) maximum flows
  Section 4.3  |    k    |       ✓       |       ✓       |               |           | O(k log(T*/k)) maximum flows + k minimum cost flows
  Section 5    |    1    |               |               |               |     ✓     | O(1) maximum flows
  Section 4.4  |    1    |       ✓       |       ✓       |               |           | O(k) maximum flows

Figure 1: New and existing polynomial time algorithms for the quickest transshipment problem. (σ is the sum of all supplies, τ is the maximum transit time.)
In the special case when there is only one sink, we solve the integral quickest transshipment problem with O(k) maximum flows, where k is the number of vertices with σ_i ≠ 0 at the start (these vertices are called terminals). In the general, multi-source, multi-sink case, we show that the integral quickest transshipment problem can be solved with O(k² log Un + k log(T*/k)) maximum flow computations, or with O(k log(T*/k)) maximum flow computations and k minimum cost flow computations, where U is the maximum capacity in the graph, T* is the minimum time needed for the problem to be feasible, and n is the number of vertices. We also describe a strongly polynomial time algorithm.

1.2 The Continuous-Time Model

Dynamic network flows have also been considered in the continuous-time setting [2, 3, 16, 18]. Most of the work in this area has examined networks with time-varying edge capacities, storage capacities, or costs. The focus of this research is on proving the existence of optimal solutions for classes of time-varying functions, and proving the convergence of algorithms that eventually find solutions. These algorithms fall short of being efficient, either theoretically or practically, and implementations do not seem able to handle problems with more than a few nodes. For the case in which capacity functions are constant, Fleischer and Tardos [5] extend the polynomial, discrete-time dynamic transshipment algorithm in [10] to work in the continuous-time setting.

A continuous dynamic transshipment is a flow f that varies over time. Let x be the rate of flow of f: x(t) = df(t)/dt. We assume that each x_ij is a Lebesgue-measurable function on (0, T]. Here, the capacities are upper bounds on the rate of flow through the arcs. The formulation is similar to the formulation of the discrete-time problem:

  x_e(t) ≤ u_e   for all 0 ≤ t ≤ T and all e ∈ E,
  ∫_0^r ∑_{j∈V} x_ij(t) dt ≤ σ_i   for all r ∈ (0, T] and all i ∈ V,
  ∫_0^T ∑_{j∈V} x_ij(t) dt = σ_i   for all i ∈ V.

If this problem is feasible and T is integral, there is a solution f that changes only at times in {1, 2, ..., T}: a discrete-time solution can be transformed into a continuous-time solution by sending flow at rate f(t) in the interval (t−1, t]. Fleischer and Tardos [5] prove that this transformation of the optimal discrete-time solution is optimal for the continuous-time problem. Thus, the continuous-time problem is no harder than the discrete-time problem; the integral quickest transshipment algorithms mentioned in the preceding section and presented in this paper also solve the continuous-time problem.

1.2.1 Results

A universally quickest transshipment is a quickest transshipment that simultaneously minimizes the amount of excess left in the network at every moment of time. An optimal solution may require fractional flow sent over fractional intervals of time. There is a two-source, two-sink example for which a universally quickest transshipment does not exist. Hajek and Ogier [9] describe an algorithm that solves the universally quickest transshipment problem in networks with multiple sources and a single sink. Their algorithm uses O(n) maximum flow computations. We describe how this problem can be solved in O(mn log(n²/m)) time, the same asymptotic time as the fastest known algorithm to compute a maximum flow. Figure 1 summarizes the work on polynomial time algorithms to solve various quickest transshipment problems.

Everyday usage often involves continuous streams of traffic. All of the algorithms presented here, like the algorithm of Hajek and Ogier [9], allow for constant streams of flow into or out of any node in the network. The details are discussed in Section 6.

The model as presented in this paper can also handle finite buffer capacity at the nodes of the network. Suppose the storage capacity at node i is a_i. Then we have the additional constraint (in the discrete-time model) that ∑_{t=1}^{r} ∑_j f_ji(t) ≤ a_i − σ_i, for all r ∈ {1, ..., T} and all i ∈ V. But the algorithms we use for both the discrete-time problems and the continuous-time problems find a dynamic flow that depletes all supplies in the minimum time and never increases the absolute value of the supply at any node. That is, the optimal solution for the case when a_i = 0 for all i ∈ V is also optimal when all a_i = ∞.

2 Preliminaries

A network N = (V, E, u, S) has vertex set V of cardinality n, and arc set E of cardinality m. Arc e has capacity u_e, and U = max_e u_e. Node i has supply σ_i. S is the subset of nodes containing the terminals: the nodes with non-zero supplies. The sum of supplies over all terminals equals zero. Let S⁺ be the set of terminals with positive supply, also called sources, and S⁻ be the set of terminals with negative supply, also called sinks. Define σ to be the sum of all positive supply. The sum of all supply in a set of nodes A ⊆ V is denoted σ(A), so that σ = σ(S⁺).

A (static) transshipment problem is defined on an arbitrary network with edge capacity vector u and node supply vector σ. The objective is to find a flow such that ∑_j f_ij = σ_i for all vertices i, and f_e ≤ u_e for all edges e, where we assume f_ij = −f_ji for simplicity of notation. Network N' = (V', E') is a new network with vertex set V' := V ∪ {s, t} and edge set E' := E ∪ {(s, i) : i ∈ S⁺} ∪ {(j, t) : j ∈ S⁻}. Here s and t are referred to as the super-source and the super-sink, respectively.
Traditional approaches to solving the dynamic transshipment problem consider the discrete-time model and make use of the time-expanded network of the original network [4, 17]. A time-expanded network is a directed graph that contains a copy of the network for every time step, and holdover arcs from the copy of each node at one time step to the copy of the same node at the next time step. It is well known and easy to see that the discrete-time dynamic transshipment problem is equivalent to a traditional static transshipment problem in the time-expanded network, with the set of sources composed of the copies of the sources in the first copy of the network, and the set of sinks consisting of the copies of the sinks in the final copy of the network. Unfortunately, this graph may be very large, and is thus not practical to work with for large discretizations. Hoppe and Tardos [10] describe the only polynomial time algorithm to solve the discrete problem.
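For concreteness, the reduction to a static problem on the time-expanded network can be sketched in a few lines of Python with the networkx library. The sketch and all of its names are ours, not the paper's; it simply builds the graph just described and runs one maximum flow, and its size grows linearly with T, which is exactly the drawback noted above.

```python
import networkx as nx

def time_expanded_transshipment(edges, supplies, T):
    """edges: list of (i, j, capacity); supplies: dict node -> sigma_i (ints)."""
    G = nx.DiGraph()
    nodes = set(supplies) | {v for i, j, _ in edges for v in (i, j)}
    for t in range(1, T + 1):
        for i, j, u in edges:
            G.add_edge((i, t), (j, t), capacity=u)       # copy of the network at time t
        if t < T:
            for v in nodes:
                G.add_edge((v, t), (v, t + 1))           # uncapacitated holdover arc
    for v, sigma in supplies.items():
        if sigma > 0:
            G.add_edge('s*', (v, 1), capacity=sigma)     # sources in the first copy
        elif sigma < 0:
            G.add_edge((v, T), 't*', capacity=-sigma)    # sinks in the final copy
    value, flow = nx.maximum_flow(G, 's*', 't*')
    feasible = value >= sum(s for s in supplies.values() if s > 0)
    return feasible, flow
```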

3 Dynamic transshipment feasibility

All algorithms described in this paper rely on testing dynamic transshipment feasibility. In the problem with transit times, Hoppe and Tardos [10] use submodular function minimization to resolve dynamic transshipment feasibility. Zero transit times make the problem much easier.

Theorem 3.1 Given time bound T, a feasible, fractional dynamic transshipment can be found with one maximum flow computation.

Proof: The dynamic transshipment problem is feasible in time T if and only if the static transshipment problem is feasible in the same network with edge capacities multiplied by T: flow on any edge summed over the course of a feasible dynamic transshipment cannot exceed T times the capacity of the edge. A feasible static transshipment f can be transformed into a feasible dynamic transshipment by sending flow at rate f_ij/T through each arc (i, j) from time 0 until time T.

Binary search can be used to find the minimum time T* needed for dynamic transshipment feasibility. Given A ⊆ V, let o(A) denote the amount of flow that can be sent from sources inside A to sinks outside A in one unit of time. Since all transit times are zero, the amount of flow that can be sent from A to outside A in time T is o(A)T. Note that T* is defined by some cut A in the network with the property that o(A)T* = σ(A). Any set A with this property is called tight. To bound the time spent on binary search, observe the following. T* may be fractional, since σ(A) may not be an integral multiple of o(A); but, since all supplies and capacities are integers, its denominator, at most o(A), is bounded by the size d of the minimum cut in N' when T = 1. If the dynamic transshipment problem is feasible for some time T, then it is feasible for T = σ. We can then find the optimal time T* with O(log dT*) = O(log nUσ) maximum flow computations.

Theorem 3.1 implies that the maximum flow f for the optimal time T* yields an optimal fractional transshipment. A further challenge is to find a solution in which the flow rates (or flow amounts, in the discrete-time case) are integers. If we are looking for a completely integral solution, where the flow rates and the time intervals are integers, we can replace T* by ⌈T*⌉. If T* is integral, the dynamic transshipment problem is equivalent to a static transshipment in the time-expanded graph, so standard network flow theory proves the existence of an integral solution.
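The proof suggests an immediate implementation of the feasibility test, and of a binary search for the smallest integral feasible time bound. The sketch below uses networkx; all names are ours, the tolerance guards against floating point capacities when T is fractional, and the upper bound of the search uses the observation above that a feasible instance is feasible for T = σ.

```python
import networkx as nx

def dynamic_transshipment_feasible(edges, supplies, T):
    """Feasibility test of Theorem 3.1: scale every capacity by T and ask
    whether the static problem in N' routes all supply (one max flow)."""
    G = nx.DiGraph()
    for i, j, u in edges:
        G.add_edge(i, j, capacity=u * T)
    for v, sigma in supplies.items():
        if sigma > 0:
            G.add_edge('s*', v, capacity=sigma)
        elif sigma < 0:
            G.add_edge(v, 't*', capacity=-sigma)
    total = sum(s for s in supplies.values() if s > 0)
    value, _ = nx.maximum_flow(G, 's*', 't*')
    return value >= total - 1e-9

def smallest_integral_time(edges, supplies):
    """Binary search for the smallest integral feasible T (assumes the
    instance is feasible for some T, hence for T = sigma)."""
    lo, hi = 1, max(1, sum(s for s in supplies.values() if s > 0))
    while lo < hi:
        mid = (lo + hi) // 2
        if dynamic_transshipment_feasible(edges, supplies, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```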

Figure 2: A network, and the corresponding two-level network.

4 Quickest integral transshipment

We describe two variants of an algorithm to solve the integral quickest transshipment problem. The first requires O(k² log Un + k log(T*/k)) maximum flow computations, while the second requires O(k log(T*/k)) maximum flow computations and 2k minimum cost flow computations. When there is only one sink, a modified version of the first algorithm solves the integral quickest transshipment problem with only O(k) maximum flow computations.

4.1 The basic algorithm

All of the algorithms start off using a network inspired by the time-expanded network. Unlike the time-expanded network, the two-level network consists of just two copies of the original dynamic network (Figure 2). The upper copy, N^U, represents the first unit of time and has the original arc capacities. The lower copy, N^L, represents the remaining time, and has capacities multiplied by T* − 1. In addition, the two-level network contains k super-terminals s_i^S, i = 1 to k. Let s_i^U be the copy of terminal s_i in N^U, and s_i^L be the corresponding terminal in N^L. Each super-terminal s_i^S has the supply associated with the corresponding terminal s_i and is connected to s_i^U and s_i^L by infinite capacity arcs e_i^U and e_i^L.

A static transshipment computation in the two-level network gives an integral flow, since all inputs are integral. The flow in N^U corresponds to the flow sent in the first unit of time. The flow in N^L indicates that the remaining supplies can be satisfied in time T* − 1. That is, the reduced dynamic transshipment problem with reduced supplies f(e_i^L) = |σ_i| − f(e_i^U) at s_i is feasible in time T* − 1. If we construct a new two-level network for this reduced problem and repeat the process, we find an integral flow for the second unit of time, and a further reduced problem feasible in time T* − 2. This leads to the following observation.
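A sketch of the two-level construction follows; the names are ours, not the paper's, and the infinite-capacity arcs e_i^U and e_i^L are modelled with a large finite bound. Solving the static transshipment on this network amounts to one maximum flow from a super-source to a super-sink.

```python
import networkx as nx

def solve_two_level(edges, supplies, T):
    """Build the two-level network for time bound T and solve the static
    transshipment on it.  Returns (value, flow_dict); the flow on the arcs
    ('U', i) -> ('U', j) is what is sent in the first unit of time."""
    BIG = sum(abs(s) for s in supplies.values()) + 1          # stands in for infinity
    G = nx.DiGraph()
    for i, j, u in edges:
        G.add_edge(('U', i), ('U', j), capacity=u)            # first unit of time
        G.add_edge(('L', i), ('L', j), capacity=u * (T - 1))  # remaining T - 1 units
    for v, sigma in supplies.items():
        if sigma == 0:
            continue
        sv = ('S', v)                                         # super-terminal s_i^S
        if sigma > 0:
            G.add_edge('s*', sv, capacity=sigma)
            G.add_edge(sv, ('U', v), capacity=BIG)            # arc e_i^U
            G.add_edge(sv, ('L', v), capacity=BIG)            # arc e_i^L
        else:
            G.add_edge(sv, 't*', capacity=-sigma)
            G.add_edge(('U', v), sv, capacity=BIG)
            G.add_edge(('L', v), sv, capacity=BIG)
    return nx.maximum_flow(G, 's*', 't*')
```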

Observation 4.1 Given a feasible time bound T, an integral dynamic transshipment can be found with T − 1 maximum flow computations.

To develop a more efficient algorithm, it is necessary to reduce the number of static transshipment problems considered. Let f^U be the flow in N^U, and let f̃(A) be the amount of flow entering terminals in A; e.g., |f̃_i| = f(e_i^U) for a single terminal i. We would like to continue to use f^U for as long as the remaining supplies {σ_i − δf̃_i}_{i∈S} can be satisfied in the remaining time, T* − δ.

    Basic(N, u, σ, T*)
        T = T*, i = 0.
        While σ ≠ 0:
            i = i + 1.
            Construct the two-level network with u, σ, T.
            Solve the transshipment problem on this two-level network.
            f^i = flow through N^U, the small-capacity level.
            δ_i = maximum δ such that the transshipment problem (N, u, σ − δ f̃^i, T − δ) is feasible.
            T = T − δ_i.
            σ = σ − δ_i f̃^i.
        Return {(f^1, δ_1), (f^2, δ_2), ..., (f^i, δ_i)}.

Figure 3: The basic quickest transshipment algorithm.

Finding the maximum δ so that the remaining problem is feasible is equivalent to solving a parametric flow problem in a static network. Let N^δ = (V', E', u^δ), where V' and E' are defined as in Section 2, and u^δ equals σ_i − δf̃_i on edges leaving the super-source, −σ_j − δf̃_j on edges entering the super-sink, and u_ij(T* − δ) on all other edges (i, j). Here, δ is the amount of time we can use the first integral flow. Starting with δ = 1, we can increase δ until either the supply is depleted at some terminal (e.g., σ_i − δf̃_i = 0 for some i), or until the reduced problem becomes infeasible. If the reduced problem is infeasible, there is some subset of nodes A such that σ(A) − δf̃(A) > o(A)(T* − δ). Since both sides of this inequality vary linearly with δ, there is some δ_0 such that the reduced problem is infeasible for all δ > δ_0. Binary search will find the maximum feasible δ, with 1 ≤ δ ≤ min_i |σ_i|/|f̃_i|.

Figure 3 summarizes the algorithm. The quickest integral transshipment algorithm takes as input a network with arc capacity vector u, vertex supply vector σ, and time bound T, and returns a set of flow-duration pairs {(f^i, δ_i)}_{i=1}^{r} that specify the flows f^i that compose the optimal transshipment and the duration of time δ_i for which each continues. The algorithm repeatedly finds integral flows and their corresponding time periods until all supplies are exhausted.

Each time we fix a δ, one of two things happens. Either the supply at some terminal is exhausted, or there is some set of nodes A whose total supply is equal to the amount of flow that can leave this set in the current time bound. That is, A is tight, and it was not tight previously. We will refer to such a tight set as a new tight set. The flow leaving A must be at a maximum for the remainder of the algorithm: once a set is tight, it remains tight. Define a chain of sets to be a sequence of nested sets: each set is strictly contained in its successor. We need the following lemma.

Lemma 4.2 The intersection and union of tight sets are tight.

Proof: Recall that o(A) is the amount of flow that can leave set A in one unit of time. Megiddo [11] shows that o(A) + o(B) ≥ o(A∪B) + o(A∩B), i.e., o is a submodular function. Since σ(A) = ∑_{i∈A} σ_i satisfies this inequality at equality, it is easy to see that o(A)T − σ(A) is also submodular. For a tight set A, o(A)T − σ(A) = 0 by definition. Since T is feasible if and only if o(A)T − σ(A) ≥ 0 for all A, submodularity implies that the intersection of tight sets is tight and the union of tight sets is tight.

Lemma 4.3 Each time the algorithm forces a new set to become tight, the number of nested tight sets increases by one.

Proof: When we find a new tight set of terminals, we know the old tight sets are still tight. Let N be the new tight set, and let B_0 ⊂ B_1 ⊂ ⋯ ⊂ B_r be the existing chain of nested tight sets, with B_0 = ∅. If j = max{i : N ⊈ B_i}, then the new chain of nested tight sets is B_0 ⊂ ⋯ ⊂ B_j ⊂ B_j ∪ N ⊂ B_{j+1} ⊂ ⋯ ⊂ B_r.

A terminal is active if it has non-zero supply remaining. Let A represent the set of active terminals. Let C represent the chain of tight sets encountered by the algorithm. We prove below that each iteration of the algorithm decreases |A| − |C| by at least one. Thus after at most k iterations, we have a complete chain of nested tight sets, and the flow is constant for the remaining time.

Lemma 4.4 The basic algorithm searches for a new feasible integral flow f^i at most k times.

Proof: Each time we find a new δ by finding a new tight set, |A| − |C| decreases by one: |C| increases and |A| does not change. Each time we deplete a supply, |A| decreases by one, and |C| does not decrease: if the difference between two tight sets of terminals is just one terminal, then the flow rate out of that terminal must be constant for the remaining time, so if such a terminal is active, it must be active until the end. Since there are at most k terminals, the total number of times the algorithm stops with a non-empty reduced problem is at most k.

There is the problem that some of the δ_i we find may be fractional. The next two sections present two different approaches to handling this problem.

4.2 A continuous-time approach to the quickest transshipment problem

In this section, we solve the quickest transshipment problem by first producing a solution with constant, integral rates of flow over time periods of arbitrary lengths. This algorithm requires O(k² log Un + k log T*) maximum flow computations, or a strongly polynomial number of maximum flow computations. We then transform this continuous-time solution into a discrete-time integral transshipment with at most k more maximum flow computations. The solution is then fully integral, and of interest for applications which can only handle integer quantities.

To solve the continuous-time problem, we use a complete binary search to find the maximum time δ for which we can repeat flow f^U. To bound the binary search, we observe that although the δ we find may be fractional, its denominator d is bounded by the size of a cut in the original network, which is bounded by Um. Once we find δ, we can remove the supply sent during time δ, and focus our attention on the reduced problem. This problem now has a time bound that may be fractional, as well as supplies that may be fractional. By multiplying the new supplies σ − δf̃ and the new time bound T* − δ by d, we obtain an integral problem that is equivalent to the fractional problem: any flow g for the original problem can be transformed into a flow satisfying the multiplied problem by sending each amount of flow for a period that is d times longer than the time it is sent in the original problem. So the problem with supplies and time multiplied by d is feasible if the original problem is. Since all the input data are integral, it has an integral solution. We can then take that integral solution and use it to find a feasible, integral solution to the original problem by reducing the time any particular flow is sent by a factor of d. This means we can find a solution where the flow rates are integral, but the intervals during which they are sent may be fractional.

If all δ's are integral, then the continuous-time solution is easily transformed into a discrete-time solution by sending f_ij units of flow on arc (i, j) in each of the δ time units. If there are fractional δ's, then we subtract the fractional part from each δ to get a partial solution that is integral and hence satisfies an integral amount of the supplies. Since our initial supplies are integers, the sums of supplies sent in the fractional times are also integers, and they can be satisfied in time Δ = ∑_{i=1}^{k} (δ_i − ⌊δ_i⌋) ≤ k. We can now use Observation 4.1 to solve this smaller dynamic transshipment problem with Δ ≤ k maximum flow computations. These two partial solutions scheduled one after the other form a feasible, integral, discrete dynamic transshipment completing by the optimal time.
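The bookkeeping of this last step can be sketched as follows (names ours): the integral parts of the δ_i give a discrete schedule directly, and what remains is a small leftover problem whose time budget Δ is at most k, to be finished with Observation 4.1 (not shown here).

```python
import math

def split_schedule(schedule, supplies):
    """schedule: list of (f_tilde, delta) pairs, f_tilde giving the per-unit-time
    rate at each terminal.  Returns (integral_part, leftover_supplies, Delta)."""
    integral_part = []
    used = {v: 0.0 for v in supplies}
    frac_total = 0.0
    for f_tilde, delta in schedule:
        whole = math.floor(delta)
        frac_total += delta - whole
        if whole > 0:
            integral_part.append((f_tilde, whole))      # repeat this rate `whole` times
            for v in used:
                used[v] += whole * f_tilde.get(v, 0.0)
    leftover = {v: supplies[v] - used[v] for v in supplies}
    Delta = math.ceil(frac_total)                       # Delta <= number of pairs <= k
    return integral_part, leftover, Delta
```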

Theorem 4.5 A fully integral solution for the quickest transshipment problem with zero transit times can be found in strongly polynomial time, or with O(k² log Un + k log(T*/k)) maximum flow computations.

Proof: δ_1 falls in the range [1, T*], and may be fractional with denominator d_1 ≤ Um. We multiply the remaining supplies by d_1. Repeating this, we find that δ_j lies within the interval [1, d_1 d_2 ⋯ d_{j−1} T*], and may be fractional with denominator d_j ≤ Um. Together with Lemma 4.4, this implies that the algorithm consists of at most k binary searches, each over a range contained in [1, D T*], where D = d_1 d_2 ⋯ d_k ≤ (Um)^k. Each guess for δ requires one maximum flow calculation. Thus, this algorithm requires O(∑_i [k log d_i + log δ_i]) = O(k² log Un + ∑_i log δ_i) maximum flow computations. Since ∑_i δ_i = T*, we have ∑_{i=1}^{k} log δ_i ≤ k log(T*/k). Using Megiddo's parametric search for the parametric maximum flow problem instead of binary search, the algorithm requires O(kn² log² n) maximum flow computations [12, 13].

4.3 A minimum cost flow approach to the quickest transshipment problem

In this section, we demonstrate how the quickest integral transshipment can be solved without resorting to binary search among small fractions. We start with the basic algorithm as described in Section 4.1, constructing the two-level network with capacities determined by the remaining time T, and computing a static transshipment f in this network. Again, we seek to repeat the flow described by f restricted to the upper network N^U, denoted f^U, as long as the remaining problem is feasible. However, this time we insist that f^U is repeated for an interval of integer duration. This avoids the binary search for δ among small fractions. But with this restriction, there may not be a new tight set at time T − δ. In Section 4.1, our progress is measured by the number of nested tight sets created. We describe a procedure using a minimum cost flow computation that will force a new set to become tight in one additional unit of time, once we have found the largest integer interval δ in which f^U can be repeated. We observe that since repeating f^U for one additional time unit is infeasible, there must be some set of terminals that is close to being tight. We try to force such a set to become tight by sending either as little flow out of the set, or as much flow into the set, as possible in the next time unit, while keeping the remaining problem feasible.


Figure 4: Flow f, represented by thicker lines on the left, and, on the right, the two-level network for the corresponding minimum cost flow problem.

To do this, we use one minimum cost flow computation in a modified two-level network. We construct a minimum cost flow problem that encourages sending as much of f^U as possible and as little additional flow as possible. We place the remaining supplies and demands at the terminals of a two-level network with capacities determined by the remaining time T − δ, as described in Section 4.1. All arcs in the network are assigned cost 0. For each source in N^U, we reduce the capacity of the arc from the super-terminal to be the amount of flow f^U leaving that source in N^U, and assign this arc a cost of −1. We also add an infinite capacity, cost +1 arc from the super-terminal to each source in N^U. We make similar additions and adjustments for the arcs from the sinks in N^U to their super-terminals. Figure 4 gives an example of this modification. We prove below that after sending the part of the minimum cost flow restricted to N^U, |A| − |C| is reduced by at least one.
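A sketch of this modified network using networkx's min_cost_flow (node attribute 'demand', edge attributes 'capacity' and 'weight') follows. All names are ours; T_remaining stands for T − δ and fU for the restriction of f to N^U. Because a DiGraph cannot hold two parallel arcs between a super-terminal and its copy in N^U, the cost +1 arc is subdivided through an auxiliary node, which does not change the optimum; the data is assumed integral, which min_cost_flow prefers.

```python
import networkx as nx

def modified_two_level_mcf(edges, supplies, T_remaining, fU):
    """supplies: remaining sigma_i (integers summing to 0 over the terminals);
    fU[v]: flow on e_i^U under f^U.  Returns the minimum cost flow as a dict."""
    BIG = sum(abs(s) for s in supplies.values()) + 1       # stands in for infinity
    G = nx.DiGraph()
    for i, j, u in edges:
        G.add_edge(('U', i), ('U', j), capacity=u, weight=0)
        G.add_edge(('L', i), ('L', j), capacity=u * (T_remaining - 1), weight=0)
    for v, sigma in supplies.items():
        if sigma == 0:
            continue
        sv, aux = ('S', v), ('aux', v)
        G.add_node(sv, demand=-sigma)          # negative demand = supply node
        if sigma > 0:                          # source: arcs s_i^S -> copies of s_i
            G.add_edge(sv, ('U', v), capacity=fU.get(v, 0), weight=-1)   # e_i^{U-}
            G.add_edge(sv, aux, capacity=BIG, weight=1)                  # e_i^{U+},
            G.add_edge(aux, ('U', v), capacity=BIG, weight=0)            #  subdivided
            G.add_edge(sv, ('L', v), capacity=BIG, weight=0)             # e_i^L
        else:                                  # sink: arcs copies of s_i -> s_i^S
            G.add_edge(('U', v), sv, capacity=fU.get(v, 0), weight=-1)
            G.add_edge(('U', v), aux, capacity=BIG, weight=1)
            G.add_edge(aux, sv, capacity=BIG, weight=0)
            G.add_edge(('L', v), sv, capacity=BIG, weight=0)
    return nx.min_cost_flow(G)
```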

Theorem 4.6 A solution for the integral quickest transshipment with zero transit times can be computed with k + k log(T*/k) maximum flows and k minimum cost flows.

To prove Theorem 4.6, we show that after sending the minimum cost flow restricted to N^U, |A| − |C| is reduced by at least one. Since |C| never decreases and |A| never increases, this amounts to showing that either |A| decreases or |C| increases. Lemma 4.4 then implies the theorem. Let x be the minimum cost flow in the modified network. Standard optimality conditions for minimum cost flows [1] imply that there exists a set of labels π on the vertices of the two-level network that obey the following complementary slackness conditions. Define c̄_ij := c_ij − π(i) + π(j) for all arcs (i, j) in the two-level network. Then

  c̄_ij > 0  ⇒  x_ij = 0
  c̄_ij = 0  ⇒  0 ≤ x_ij ≤ u_ij
  c̄_ij < 0  ⇒  x_ij = u_ij
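These conditions are easy to check mechanically, which is handy when experimenting with the construction above; the small sketch below (names ours) takes a flow in the nested-dictionary format returned by networkx and the labels π as a dictionary.

```python
def satisfies_complementary_slackness(G, flow, pi, tol=1e-9):
    """G: graph with 'capacity' and 'weight' on every edge; flow: dict of dicts;
    pi: dict node -> label.  Checks the three conditions stated above."""
    for i, j, data in G.edges(data=True):
        c_bar = data['weight'] - pi[i] + pi[j]       # reduced cost c_ij - pi(i) + pi(j)
        x = flow[i][j]
        if c_bar > tol and x > tol:
            return False                             # c_bar > 0 forces x_ij = 0
        if c_bar < -tol and x < data['capacity'] - tol:
            return False                             # c_bar < 0 forces x_ij = u_ij
    return True
```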

If some source or sink that is active at time T − δ does not send or receive any flow in N^L, then it has been emptied in N^U, and thus |A| decreases by one. We must now show that |A| − |C| decreases even if all active sources and sinks send some flow in N^L. We show this by finding a new tight set of terminals; by Lemma 4.3, this implies that |C| increases by one.

We first suppose that there is non-zero flow on a cost +1 edge entering a source. For terminal s_j, let e_j^U denote the original arc connecting the terminal in N^U to its super-terminal. Let e_j^{U−} and e_j^{U+} denote the two arcs that replace e_j^U in the minimum cost flow problem, of cost −1 and +1, respectively.

Lemma 4.7 If all active terminals send or receive flow in N^L and s ∈ S⁺ is such that x(e_s^{U+}) > 0, then L_0 := {v_i ∈ V : π(v_i^L) ≥ π(s^L)} is a new tight set.

Proof: Since c̄ is unchanged by adding the same constant to every label, we can assume without loss of generality that π(s^L) = 0. Thus L_0 = {v_i ∈ V : π(v_i^L) ≥ 0}. Similarly, define U_0 := {v_i ∈ V : π(v_i^U) ≥ 0}. The proof consists of establishing a sequence of statements that build on each other, ending in the claim of the lemma.

(i) If s_j ∈ L_0 ∩ S⁺ sends out less x-flow in N^U than it sent out f-flow, then s_j ∈ U_0.
(ii) If s_j ∈ L_0 ∩ S⁻ receives more x-flow in N^U than it received f-flow, then s_j ∈ U_0.
(iii) If L_0 is tight in N^U, then L_0 ∩ U_0 is tight in N^U.
(iv) s ∉ U_0.
(v) Either L_0 is not tight in N^U, or L_0 sends out more x-flow in N^U than it sends out f-flow.
(vi) L_0 is a new tight set in N^L.

In the arguments below, we repeatedly use complementary slackness and the assumption that all active terminals send or receive flow in N^L. For instance, the capacity of all arcs connecting super-terminals to terminals in N^L is infinite, so c̄ = 0 for each of these arcs. Since c = 0 by design for each of these arcs, π(s_i^L) = π(s_i^S) for all active terminals s_i.

(i) If x(e_j^{U−}) < f(e_j^U), complementary slackness and s_j ∈ L_0 ∩ S⁺ imply π(s_j^U) ≥ π(s_j^S) + 1 = π(s_j^L) + 1 ≥ 1.

(ii) If x(e_j^{U+}) > 0, complementary slackness and s_j ∈ L_0 ∩ S⁻ imply π(s_j^U) = π(s_j^S) + 1 = π(s_j^L) + 1 ≥ 1.

(iii) Complementary slackness implies that all arcs leaving U_0 in N^U are at full capacity and all arcs entering U_0 in N^U are empty. Thus U_0 is tight in N^U. Since the intersection of tight sets is tight (Lemma 4.2), L_0 tight in N^U implies that L_0 ∩ U_0 is also tight in N^U.

(iv) Using complementary slackness, π(s^U) = π(s^S) − 1 = π(s^L) − 1 = −1.

(v) If L_0 is not tight in N^U, we are done; otherwise, (iii) implies that L_0 ∩ U_0 is tight in N^U. Thus in N^U, (i) and (iii) together imply that all sources in L_0 are either sending out at least as much flow as with f or are contained in a tight set. Similarly, in N^U, (ii) and (iii) imply that all sinks in L_0 are receiving no more flow than with f, or are contained in a tight set. Thus L_0 sends out at least as much flow in N^U as it does with f. By (iv), s ∈ L_0 ∖ U_0. Since s^U sends out more flow with x than s does with f, L_0 actually sends out more flow in N^U than it does with f.

(vi) By complementary slackness, all arcs leaving L_0 in N^L are at full capacity and all arcs entering L_0 in N^L carry no flow. Thus, L_0 is tight in N^L. If L_0 is not tight in N^U, it is clearly a new tight set. Otherwise, L_0 sends out more flow in N^U than it did with f; thus it was not tight for flow f, and hence is newly tight.

If the only cost +1 arcs used are those leaving sinks, consider the reverse network obtained by reversing the direction of the arcs and of the flow x, and multiplying all labels by −1. The new labels and new flow are feasible and satisfy the complementary slackness conditions, hence are optimal. Since there is flow entering a source on a cost +1 arc in this reverse network, Lemma 4.7 implies that there is a new tight set L_0 in this reverse network. Thus L_0 is a new tight set in the original graph. If no cost +1 edge is used, then since repeating f^U is not feasible, there is a cost −1 edge adjacent to a source that is not at full capacity. The following lemma shows that there is also a new tight set of terminals in this case.

Lemma 4.8 If all active terminals send or receive flow in N^L and s ∈ S⁺ is such that x(e_s^{U−}) < f(e_s^U), then L_0 := {v_i ∈ V : π(v_i^L) ≤ π(s^L)} is a new tight set.

Proof: As in Lemma 4.7, we assume π(s^L) = 0, and we show that L_0 = {v_i ∈ V : π(v_i^L) ≤ 0} is a new tight set. The proof proceeds along the lines of the proof of Lemma 4.7, establishing the following sequence of statements using similar ideas. Because of the similarity, we leave the details to the reader. As before, define U_0 := {v_i ∈ V : π(v_i^U) ≤ 0}.

(i) If s_j ∈ L_0 ∩ S⁻ receives less x-flow in N^U than it receives f-flow, then s_j ∈ U_0.
(ii) If s_j ∈ L_0 ∩ S⁺ sends out more x-flow in N^U than it sends out f-flow, then s_j ∈ U_0.
(iii) If L_0 is tight in N^U, then L_0 ∩ U_0 is tight in N^U.
(iv) s ∉ U_0.
(v) Either L_0 is not tight in N^U, or L_0 receives less x-flow in N^U than it receives f-flow.
(vi) L_0 is a new tight set in N^L.

Proof of Theorem 4.6: Each iteration of the altered basic algorithm uses one minimum cost flow computation, reducing |A| − |C| by at least one. If δ_i is the length of the interval in the ith iteration, the total computation uses at most k minimum cost flows plus at most k + ∑_i log δ_i maximum flows. Since ∑_{i=1}^{k} δ_i ≤ T*, we have ∑_{i=1}^{k} log δ_i ≤ k log(T*/k).

4.4 The single sink case

When there is only one sink, the dynamic transshipment problem is also known as the evacuation problem. In this section, we explain how to compute a quickest integral evacuation with O(k) maximum flow computations. The best previous solution for computing a quickest integral evacuation is the algorithm of Hoppe and Tardos [10], which uses the ellipsoid method as a subroutine. The algorithm of Hajek and Ogier computes a fractional solution using O(n) maximum flow computations.

We start with the basic algorithm described in Section 4.1, and observe that, since there is only one sink, the parametric flow problem that we have to solve to find the maximum feasible δ is simpler. In the proof of the following theorem, we show that we can replace the binary search for the maximum δ with the parametric flow algorithm of Gallo, Grigoriadis, and Tarjan [7]. The parametric flow algorithm is based on the push-relabel maximum flow algorithm of Goldberg and Tarjan [8] and runs in the same asymptotic time: O(mn log(n²/m)). The simpler parametric flow problem is defined on the original network with only one new node, a super-source: the single sink obviates the need for a super-sink. Now the arcs from the super-source to source i have capacity σ_i − δf̃_i, and all original network arcs have capacity u_ij(T − δ). Gallo et al. show that if the capacities on the arcs leaving the source are non-increasing functions of a parameter, and all other capacities are constant, then the minimum value of the parameter for which the source side is a minimum cut can be found in the same asymptotic time as computing one maximum flow with the push-relabel algorithm. In our setting, let γ = T − δ. Dividing all capacities by γ gives us an equivalent problem with the original constant capacities on the original arcs, and capacities that are non-increasing functions of 1/γ on the arcs leaving the source. This is the form required by the parametric flow algorithm of Gallo et al. Thus, for the special case of a single sink, the quickest transshipment problem (also known as the quickest evacuation problem) can be solved with at most O(k) maximum flows.

Theorem 4.9 The quickest evacuation problem with zero transit times can be solved in the same asymptotic time as k maximum flow computations.

5 Universally quickest dynamic transshipment

The universally quickest dynamic transshipment is a flow that minimizes the amount of excess left in the network at every moment of time. Hajek and Ogier [9] solve this problem with O(n) maximum flows when there is only one sink. There is a two-source, two-sink example for which a universally quickest transshipment does not exist. In this section, we describe an algorithm that solves the single-sink, universally quickest dynamic transshipment in the same asymptotic time as one maximum flow computation.

If there are multiple sinks, a universally quickest solution may not exist. For example, consider the network in Figure 5. In one unit of time it is possible to satisfy four units of demand by sending one unit of supply from each source to each sink (Fig. 5a). In two units of time it is possible to satisfy six units of demand by sending one unit from the first source to both sinks, and one from the second source to the second sink, in each of the two time units (Fig. 5b). However, if in the first time unit all supply is sent from the second source and all demand has been met at the first sink, then in the second time unit it is only possible to send supply from the first source to the second sink, and the amount is restricted to one unit by the capacity of the arc. Hence if the maximum amount of supply is sent in the first time unit, it is not possible to maximize the supply sent by the second unit of time. The problem with multiple sinks is that, in the rush to send flow, some source may send flow to the wrong sink. This is not a problem when there is only one sink.

Figure 5: An example with no universally quickest transshipment.

The universally quickest transshipment algorithm described here finds a series of subsets of V, A_1 ⊂ A_2 ⊂ ⋯ ⊂ A_r, such that each set contains at least one more source than the previous set, and A_r contains all sources but not the sink. Each set has a corresponding time bound T_i, with T_1 > T_2 > ⋯ > T_r, such that, in the constructed solution, the rate of flow leaving set A_i equals o(A_i) from time 0 until time T_i, after which all sources in A_i ∖ A_{i−1} are empty.

Theorem 5.1 A single-sink dynamic transshipment that sends flow out of node set A_i at rate o(A_i) from time 0 until time T_i, after which all sources in A_i ∖ A_{i−1} are empty of supply, for all A_i and T_i as defined above, is a universally quickest transshipment.

Proof: By the same argument as in the proof of Theorem 3.1, the maximum amount of supply that can be sent to the sink by time T equals the maximum static flow in the network with the capacity of each original arc multiplied by the time bound T and arcs from a super-source to each source with capacity equal to the supply at the adjacent source. Represent this maximum quantity by s(T). This is also the value of a minimum cut in the extended network. A universally quickest transshipment must therefore send supply of s(T) to the sink by time T, for every 0 < T ≤ T*. We show that a dynamic transshipment obeying the conditions of the theorem achieves this.

The proof is by induction on c, the number of sets A_i. If c = 1, then A_1 contains all sources, and the flow leaving A_1 equals o(A_1) from time 0 until time T_1. This is clearly a universally quickest evacuation. Suppose the theorem is true for c < r, and consider the case when c = r. Now consider the time bound T_r corresponding to the set A_r that contains all sources. By definition of T_r and A_r, a minimum cut in the maximum flow network parameterized by T_r contains A_r. Because A_r contains all sources, only original network arcs cross this cut. Hence, this cut is also a minimum cut in the corresponding parameterized maximum flow network for every T ≤ T_r. So the dynamic flow that sends flow out of A_r at rate o(A_r) at every moment of time in [0, T_r] is a universally quickest flow in this time interval. By construction, after time T_r, all sources in A_r ∖ A_{r−1} are empty of supply. In addition, since T_i > T_r for all i < r, up until time T_r all other sets A_i have been sending flow out at rate o(A_i). Now consider the altered problem with no supplies at the sources in A_r ∖ A_{r−1}. By induction, the flow that sends flow out of each A_i at rate o(A_i) up until time T_i, for i = 1 to r − 1, is a universally quickest transshipment. In particular, it is universally quickest at all times after T_r. Before time T_i, the flow on all arcs entering or leaving A_i ∖ A_{i−1} is fixed. Thus, the flow within A_i ∖ A_{i−1} can be determined independently in this time period. So any flow that maintains the appropriate flow conservation constraints at all nodes in A_i ∖ A_{i−1}, for 1 ≤ i < r and for T ≤ T_r, is equivalent to any other. In particular, both flows discussed above are interchangeable on A_r for the interval [0, T_r). Hence, the output of the algorithm on the original problem with r sets is also a universally quickest evacuation.

The algorithm has three stages. (a) It first finds the sets A_i using the parametric flow algorithm of Gallo, Grigoriadis, and Tarjan [7], which also returns the corresponding time bounds T_i; these A_i are the minimum cuts in the corresponding maximum flow network parameterized by T_i. (b) This information is then used to construct a static flow which represents the initial flow rate. (c) Finally, the dynamic flow is constructed from this static flow by reducing the flow rate along source-sink paths as the supplies at the corresponding sources are depleted.

(a) Finding nested sets A_i and time bounds T_i

Consider the following parametric flow problem. Add a super-source to the original network, connected to each original source by an arc of capacity equal to the supply at that source divided by the parameter T, the time bound. The value of a maximum flow in this network is a decreasing function of T. This parametric problem is equivalent to the problem with constant capacities on the added arcs and T times the original capacities on the original arcs, in the sense that any maximum flow in one problem can be transformed into a feasible flow in the other by multiplying (or dividing) all flow quantities by T. This latter flow problem is the one discussed in the proof of Theorem 5.1. Gallo, Grigoriadis, and Tarjan [7] observe that the value of the maximum flow in this setting is a piecewise linear function of T with at most k breakpoints, and that the associated minimum cuts form a nested set of cuts. They describe an algorithm to find these breakpoints and cuts in O(nm log(n²/m)) time. Let the T_i be the values of T at these breakpoints, and define A_i to be the minimum cut for T in the interval [T_{i+1}, T_i). Using the Gallo et al. algorithm, all A_i and T_i are found in time O(nm log(n²/m)), i.e., the same asymptotic time as one maximum flow computation.
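The object the parametric algorithm manipulates is, for each candidate T, the source side of a minimum cut in the network just described (in its scaled form, with original capacities multiplied by T). The sketch below computes it for a single T with one ordinary minimum cut; Gallo et al. [7] obtain all breakpoints T_i and the nested cuts A_i within one max-flow time, which this naive helper does not attempt. Names are ours.

```python
import networkx as nx

def source_side_min_cut(edges, supplies, sink, T):
    """The source side of a minimum cut in the parameterized network for one
    candidate time bound T (supply arcs of capacity sigma_v, original arcs
    scaled by T).  Returns (cut_value, A), where A excludes the super-source."""
    G = nx.DiGraph()
    for i, j, u in edges:
        G.add_edge(i, j, capacity=u * T)
    for v, sigma in supplies.items():
        if sigma > 0:
            G.add_edge('s*', v, capacity=sigma)
    cut_value, (source_side, _) = nx.minimum_cut(G, 's*', sink)
    return cut_value, source_side - {'s*'}
```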

(b) Constructing the static flow

Given the A_i and the T_i, we construct a static flow f on the original network with the following properties: flow conservation holds at all non-terminal nodes, there is net flow out of all sources, and any arc that leaves any A_i is saturated in f. In addition, for any source in A_i ∖ A_{i−1}, the net flow in f out of that source equals the supply at the source divided by T_i. Such a static flow is useful because we can let f be the flow rate in a dynamic transshipment until time T_r, when the supplies in A_r ∖ A_{r−1} are depleted. After time T_i, we can subtract path flows from f to get a new static flow which will be the flow rate until the next time bound.

We construct f in a piecewise fashion by partitioning the network into subnetworks, defining f on each subnetwork, and then defining f on all arcs that cross between subnetworks. Without loss of generality, we assume the network is directed. First consider the subnetwork induced by the vertices in A_1. For each arc (v, w) in the original network that leaves this set, add its capacity to the demand at v. Include the supplies of the sources in this set divided by T_1. The flow that solves the resulting transshipment problem defines f on this subgraph. In general, consider the subgraph induced by the vertices in A_i ∖ A_{i−1}. For each arc (v, w) in A_i that enters this set, increase the supply at w by u_vw. For each arc (v, w) with v ∈ A_i ∖ A_{i−1} that leaves A_i, increase the demand at node v by u_vw. Also include the supplies of the sources in this set divided by T_i. The flow that solves the resulting transshipment problem defines f restricted to this subgraph. On any arc leaving any A_i, f is set equal to the capacity of the arc, and f is set to 0 on any arc entering any A_i. All arcs not considered so far lie completely outside A_r. For these, consider the subgraph on the nodes V ∖ A_r. For all arcs entering this set, increase the supply at the adjacent node by the capacity of the arc. Put demand at the sink equal to the sum of these new supplies. The flow that solves the corresponding transshipment problem defines f on this subgraph.

The bottleneck in computing f is computing the maximum flow on each subgraph. The sum of the sizes of the subgraphs equals the size of the original network, however, so the time to compute f is the same asymptotic time as one maximum flow computation.

(c) Constructing the dynamic flow

Given the static flow f, we compute a universally quickest transshipment by computing, for the time interval [0, T_r) and for each time interval of the form [T_i, T_{i−1}), a static flow that defines the flow rate in that interval. To start, f_r = f is the flow rate from time 0 until time T_r. At time T_i, the supplies of the sources in A_i ∖ A_{i−1} are depleted, so f_i is reduced by the flow leaving these sources to form f_{i−1}. To compute these successive static flows, decompose f into paths and cycles. Let f_i denote the desired flow rate in the interval [T_{i+1}, T_i), with f_r = f. Once f_{i+1} is obtained, f_i can be computed by reducing flow along paths with positive flow leaving sources in A_i ∖ A_{i−1}. Ford and Fulkerson explain how a flow decomposition can be computed efficiently, in O(mn) time [6]: until no arc carries flow, find a simple cycle or source-sink path carrying flow, and subtract the maximum amount of flow possible from this path or cycle. Each subtraction reduces the flow on some arc to 0, so there are at most m subtractions. Finding a simple source-sink path or cycle takes at most O(n) time, as does subtracting the flow.

Note that all three stages of the algorithm run in time asymptotic to one maximum flow computation. Thus the run time of the algorithm is established.
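A sketch of this decomposition routine follows (the names and the flow representation are ours). It peels pieces until no arc carries flow; recomputing node excesses from scratch each round keeps the code short, at the cost of a slower, roughly O(m²), bound than the O(mn) cited above.

```python
def decompose_flow(flow, tol=1e-9):
    """flow: dict (i, j) -> amount, a feasible static flow.  Returns a list of
    (node_sequence, amount) pieces; a sequence that starts and ends at different
    nodes is a path from an excess node (a source) to a deficit node (the sink),
    otherwise it is a cycle.  Each peel zeroes at least one arc."""
    flow = {e: x for e, x in flow.items() if x > tol}
    pieces = []
    while flow:
        excess, out = {}, {}
        for (i, j), x in flow.items():
            excess[i] = excess.get(i, 0.0) + x
            excess[j] = excess.get(j, 0.0) - x
            out.setdefault(i, []).append(j)
        # start at a node with positive excess; if none remain, only cycles are left
        start = next((v for v, ex in excess.items() if ex > tol), next(iter(flow))[0])
        walk, seen, v = [start], {start: 0}, start
        while out.get(v):
            w = out[v][0]
            if w in seen:                         # closed a cycle
                walk = walk[seen[w]:] + [w]
                break
            seen[w] = len(walk)
            walk.append(w)
            v = w
        arcs = [(walk[i], walk[i + 1]) for i in range(len(walk) - 1)]
        amount = min(flow[e] for e in arcs)
        pieces.append((walk, amount))
        for e in arcs:
            flow[e] -= amount
            if flow[e] <= tol:
                del flow[e]
    return pieces
```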

Theorem 5.2 The single-sink, universally quickest transshipment problem can be solved in the same asymptotic time as one maximum flow computation.

6 Incoming and outgoing traffic

While solving a transshipment problem with fixed supplies is useful for clearing a network after a communication breakdown, everyday usage more often involves continuous streams of traffic. The algorithms presented here, like the algorithm of Hajek and Ogier [9], allow for constant streams of flow into or out of any node in the network. If there are constant streams of flow into and out of the nodes, the sum of the rates of these flows must equal zero in order for the problem to remain stable. Before solving the dynamic transshipment part of the problem, we can determine the course of this flow with one maximum flow computation. Let ε_i be the rate of external flow into node i. Introduce a super-source connected to all nodes i with incoming flow by arcs with capacity ε_i. Similarly, introduce a super-sink, and connect all nodes j with outgoing flow to the super-sink with arcs of capacity −ε_j. If the maximum flow has value strictly less than the sum of the rates of incoming flow, then excess will build up in the network, and the problem is infeasible. Otherwise, the maximum flow determines the course these external flows will take through the network. The residual network of this flow, i.e., the network of arcs e with capacities u'_e = u_e − f_e, is passed on to any of the previously described algorithms.
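A sketch of this preprocessing step (names ours, using networkx): route the external streams with one maximum flow, declare the instance infeasible if they cannot all be absorbed, and return the reduced capacities u' = u − f on which the earlier algorithms then operate.

```python
import networkx as nx

def route_external_streams(edges, eps):
    """eps[v] > 0 is the rate of external flow into node v, eps[v] < 0 the rate
    of external flow out.  Returns the list of reduced capacities u' = u - f,
    or None if excess would build up (the problem is infeasible)."""
    G = nx.DiGraph()
    for i, j, u in edges:
        G.add_edge(i, j, capacity=u)
    for v, rate in eps.items():
        if rate > 0:
            G.add_edge('s*', v, capacity=rate)
        elif rate < 0:
            G.add_edge(v, 't*', capacity=-rate)
    total_in = sum(r for r in eps.values() if r > 0)
    value, flow = nx.maximum_flow(G, 's*', 't*')
    if value < total_in - 1e-9:
        return None                       # the streams cannot all be absorbed
    return [(i, j, u - flow[i][j]) for i, j, u in edges]
```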

Theorem 6.1 Any algorithm that solves the quickest transshipment problem or the universally quickest transshipment problem can also solve the corresponding problem with constant streams of flow into and out of any node in the network.

Proof: Define ε(A) = ∑_{i∈A} ε_i to be the rate of external flow into set A, and consider a fixed time T. In order for the original problem to be feasible in time T, the total flow that can leave any set A in time T must be at least as great as the total external flow that enters A in time T, plus the total supply in A. That is, the problem is feasible in time T if and only if o(A)T ≥ σ(A) + ε(A)T for all A ⊆ V. Define A to be snug if A satisfies this inequality at equality. Now consider a minimum time bound T* computed by one of the algorithms in the paper on a residual network. For example, suppose T* is the minimum time in which the residual network can be emptied of excess supply, computed as described in Section 3. T* is constrained by some tight set A with the property that o'(A)T* = σ(A), where o'(A) is the sum of the residual capacities u' of the edges leaving A. Call this set of edges I. In the original network, A is snug:

  o(A)T* = [o'(A) + ∑_{e∈I} (u_e − u'_e)] T* = σ(A) + ∑_{e∈I} f_e T* = σ(A) + ε(A)T*.

Hence T* is also constrained by A, a snug set, in the original problem, and thus remains minimum.

Acknowledgements

I would like to thank Éva Tardos for helpful comments on various drafts of this paper, for pointing out the much simpler upper bound in Section 4.2, and for an observation simplifying the algorithm in Section 4.3.

References

[1] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows. Prentice Hall, 1993.

[2] E. J. Anderson and P. Nash. Linear Programming in Infinite-Dimensional Spaces. John Wiley & Sons, 1987.

[3] E. J. Anderson and A. B. Philpott. A continuous-time network simplex algorithm. Networks, 19:395-425, 1989.

[4] J. E. Aronson. A survey of dynamic network flows. Annals of Operations Research, 20:1-66, 1989.

[5] L. Fleischer and É. Tardos. Efficient continuous-time dynamic network flow algorithms. Technical Report TR1166, Cornell University, Department of Operations Research and Industrial Engineering, 1996.

[6] L. R. Ford and D. R. Fulkerson. Flows in Networks. Princeton University Press, 1962.

[7] G. Gallo, M. D. Grigoriadis, and R. E. Tarjan. A fast parametric maximum flow algorithm and applications. SIAM J. Comput., 18(1):30-55, 1989.

[8] A. V. Goldberg and R. E. Tarjan. A new approach to the maximum flow problem. Journal of the ACM, 35:921-940, 1988.

[9] B. Hajek and R. G. Ogier. Optimal dynamic routing in communication networks with continuous traffic. Networks, 14:457-487, 1984.

[10] B. Hoppe and É. Tardos. The quickest transshipment problem. In Proc. of the 6th Annual ACM-SIAM Symp. on Discrete Algorithms, pages 512-521, 1995.

[11] N. Megiddo. Optimal flows in networks with multiple sources and sinks. Mathematical Programming, 7:97-107, 1974.

[12] N. Megiddo. Combinatorial optimization with rational objective functions. Mathematics of Operations Research, 4:414-424, 1979.

[13] N. Megiddo. Applying parallel computation algorithms in the design of serial algorithms. Journal of the ACM, 30(4):852-865, 1983.

[14] F. H. Moss and A. Segall. An optimal control approach to dynamic routing in networks. IEEE Transactions on Automatic Control, 27(2):329-339, 1982.

[15] R. G. Ogier. Minimum-delay routing in continuous-time dynamic networks with piecewise-constant capacities. Networks, 18:303-318, 1988.

[16] A. B. Philpott. Continuous-time flows in networks. Mathematics of Operations Research, 15(4):640-661, November 1990.

[17] W. B. Powell, P. Jaillet, and A. Odoni. Stochastic and dynamic networks and routing. In M. O. Ball, T. L. Magnanti, C. L. Monma, and G. L. Nemhauser, editors, Handbooks in Operations Research and Management Science: Networks. Elsevier Science Publishers B. V., 1995.

[18] M. C. Pullan. An algorithm for a class of continuous linear programs. SIAM J. Control and Optimization, 31(6):1558-1577, November 1993.

[19] G. I. Stassinopoulos and P. Konstantopoulos. Optimal congestion control in single destination networks. IEEE Transactions on Communications, 33(8):792-800, 1985.
