International Journal of Foundations of Computer Science © World Scientific Publishing Company
FAST CLUSTERING AND MINIMUM WEIGHT MATCHING ALGORITHMS FOR VERY LARGE MOBILE BACKBONE WIRELESS NETWORKS
FERRO A., PIGOLA G. and PULVIRENTI A.
Dipartimento di Matematica e Informatica, Università degli Studi di Catania, Viale A. Doria 6, 95125 Catania, Italy
e-mail: {ferro,apulvirenti}@dmi.unict.it, [email protected]

SHASHA D.
Courant Institute of Mathematical Sciences, New York University, New York, U.S.A.
e-mail: [email protected]
Received (received date)
Revised (revised date)
Communicated by Editor's name

ABSTRACT

Mobile Backbone Wireless Networks (MBWN) [10] are wireless networks in which the base stations are mobile. Our strategy is the following: mobile nodes are dynamically grouped into clusters of bounded radius. In the very large wireless networks we deal with, several hundred clusters may be generated. Clustering makes use of a two-dimensional Euclidean version of the Antipole Tree data structure [5]. This very effective structure was originally designed for finite sets of points in an arbitrary metric space to support efficient range searching. It requires only a linear number of pairwise distance calculations among nodes. Mobile base stations occupy an approximate centroid of each cluster and are moved according to a fast practical bipartite matching algorithm which tries to minimize both total and maximum distance. We show that the best known computational geometry algorithms [1] become infeasible for our application when a large number of mobile base stations is required. On the other hand, our proposed solution, with an 8% average error, requires O(k log k) running time instead of the approximately O(k^2) time of the exact algorithm [1]. Communication among nodes is realized by a Clusterhead Gateway Switching Routing (CGSR) protocol [15] in which the mobile base stations are organized in a suitable network. Other efficient clustering algorithms [11, 17] may be used instead of the Antipole Tree. However, the nice hierarchical structure of the Antipole Tree makes it applicable to other types of mobile wireless (Ad-Hoc) and wired networks; this will be the subject of future work.

Keywords: Wireless Network, mobile base stations, closest match, Antipole Clustering.
1. Introduction

Event-Driven Mobile Backbone Wireless Networks (EDMBWN) are a special case of Mobile Backbone Wireless Networks (MBWN) [10] in which the base stations are mobile. The term event-driven means that agents and mobile stations move when certain events happen, following some scheduling or special triggers (military or emergency operations, special events organization, etc.), even though the destination of single agents is unpredictable. In this paper we deal with very large EDMBWN including several thousand mobile nodes and requiring several hundred mobile base stations. Nodes communicate through a Clusterhead Gateway Switching Routing protocol where the mobile base stations are organized in a suitable network (e.g. a starred network with a satellite at the center of the star). The mobile nodes are dynamically grouped into clusters of bounded communication radius (small enough to guarantee communication from the base station to any other node in the cluster) by some efficient clustering algorithm. Mobile base stations are initially placed in a central point such that the communication radius is guaranteed (e.g. the middle point of a diameter of the cluster) and are moved according to some efficient matching algorithm which tries to minimize both total and maximum distance.

In order to cluster we propose to make use of a very efficient two-dimensional Euclidean version of the Antipole Tree data structure [5] (see Section 2). The Antipole Tree clustering algorithm is based on the fact that distant elements lie in different clusters. The algorithm is able to find such a pair {A, B} (Pseudo-Diameter) in linear time. Precise diameter calculation in the plane requires n log n time [14], whereas our Pseudo-Diameter algorithm requires only linear time with an approximation ratio of 1/√2 (that is, PseudoDiameter/Diameter ≥ 1/√2 [4]). Finally, the elements of the set are partitioned according to their proximity to one of the two Pseudo-Diameter endpoints {A, B}. This top-down recursive splitting procedure produces a binary tree whose leaves are the final clusters. A suitable splitting condition Φ together with a Pseudo-Center computation guarantees that the distance of each node from the Pseudo-Center of its cluster is less than the communication radius. When bandwidth constraints on the mobile base stations are imposed, a cardinality bound on clusters must also be introduced. In this case splitting must continue until the allowed cardinality of each cluster is achieved.

Concerning mobile base station repositioning, several minimization criteria may be adopted. If the total distance must be minimized then one can make use of the computational geometry algorithm described in [1], which runs in more than quadratic time (see also [16] for an approximation algorithm and [13] for a survey). If the parallel motion time has to be minimized then the so-called Bottleneck Matching problem (minimizing the maximum distance) must be solved. This can be done in time O(k^1.5 log k) [9]. Unfortunately those theoretical solutions become infeasible for a very large number of base stations (see the experiments in Section 3.1). In this paper we propose a simple practical solution which makes it possible to compute the next configuration of the mobile bases in time close to O(k log k), where k is the number of mobile bases, using the Voronoi Diagram of the destination points [14] (see Section 3.1). This algorithm is the sequential composition of three
simple procedures. The first procedure performs a rigid motion of the input source points towards the input destination points, followed by a "closest match" assignment. The second procedure tries to eliminate long matching edges. Finally, the third one eliminates intersecting segments. The experiments in Section 3.1 show that, for example, deciding where to move 1000 base stations with the exact algorithm would take almost half an hour, whereas our algorithm takes only 3 seconds to produce a solution with an 8% average error in the total distance, no mobile stations crossing, and a lower maximum distance (which corresponds to a lower parallel moving time).

2. The Antipole Tree Data Structure for Clustering in the Plane.

Assume we are given a cluster radius σ which guarantees good communication between each agent and its base station. The Antipole clustering of bounded radius σ [5] is a top-down procedure starting from a given finite set of points S which checks whether a given splitting condition Φ is satisfied. If this is not the case then splitting is not performed, the given subset is a cluster, and a Pseudo-Center having distance less than σ from every node in the cluster is computed. Otherwise, if Φ is satisfied, then a pair of points {A, B} of S called the Pseudo-Diameter is generated and the set is partitioned by assigning each point of the splitting subset to the closest endpoint of the Pseudo-Diameter {A, B}.

In the plane this procedure can be performed efficiently in the following way. At each step of the Antipole Tree algorithm let T be the splitting subset to be processed. Let (P_Xm, P_XM), (P_Ym, P_YM) be the four points of T having minimum and maximum Cartesian coordinates. Notice that these four points belong to the convex hull of the set T. The diameter of these four points is the Pseudo-Diameter of the set T. Moreover, the splitting condition Φ for T is

  √((P_Xm.x − P_XM.x)^2 + (P_Ym.y − P_YM.y)^2) ≥ 2 × σ

where P.x, P.y are the coordinates of the point P. Indeed the whole of T is included in a rectangle of dimensions (P_XM.x − P_Xm.x) and (P_YM.y − P_Ym.y). Therefore the diameter of T is certainly less than or equal to the diagonal of this rectangle, which is the left-hand term of the above inequality. Notice also that if the splitting condition is satisfied then the diagonal of the rectangle is greater than or equal to 2 × σ. On the other hand the Pseudo-Diameter is at least Diagonal/√2, because either P_XM and P_Xm differ that much in their x coordinates or P_YM and P_Ym differ that much in their y coordinates. This yields PseudoDiameter/(2 × σ) ≥ 1/√2, proving that our approximation ratio is 1/√2 (see Fig. 1 for the pseudo-code). If the splitting condition is not satisfied then the partition is not performed and the subset is one of the clusters. The middle point of the rectangle diagonal is the Pseudo-Center of the cluster.

Finally, a variant of the above is the bounded cluster size version of the Antipole clustering procedure, whose purpose is to satisfy bandwidth constraints (see Fig. 2). This procedure is the same as the one above except that splits occur also in the case that the cardinality of the subset is larger than the size threshold k.
2D AntipoleClustering(AntipoleTree AP, Dataset S, Radius σ, Cluster-Centers CCenters)
 1   Diag := Diagonal(S, Q, C);
 2   {A, B} := Q;
 3   if Diag ≤ 2σ then   // splitting condition Φ fails
 4       CCenters := CCenters ∪ {C};
 5       return;
 6   end if;
 7   AP.Diagonal := Diag;
 8   AP.Center := C;
 9   S_l := {O ∈ S | dist(O, A) < dist(O, B)};
10   S_r := {O ∈ S | dist(O, B) ≤ dist(O, A)};
11   2D AntipoleClustering(AP.left, S_l, σ, CCenters);
12   2D AntipoleClustering(AP.right, S_r, σ, CCenters);
13   return;
14  end 2D AntipoleClustering.

Diagonal(Dataset S, Pseudo-Diameter Q, Pseudo-Center C)
 1   Let P_Xm ∈ S | P_Xm.x ≤ P_i.x ∀P_i ∈ S;
 2   Let P_XM ∈ S | P_XM.x ≥ P_i.x ∀P_i ∈ S;
 3   Let P_Ym ∈ S | P_Ym.y ≤ P_i.y ∀P_i ∈ S;
 4   Let P_YM ∈ S | P_YM.y ≥ P_i.y ∀P_i ∈ S;
 5   Q := Find Diameter({P_XM, P_Xm, P_YM, P_Ym});
 6   C := ((P_Xm.x + P_XM.x)/2, (P_Ym.y + P_YM.y)/2);
 7   return √((P_Xm.x − P_XM.x)^2 + (P_Ym.y − P_YM.y)^2);
 8  end Diagonal.

Find Diameter(T)
 1   return P_1, P_2 ∈ T such that dist(P_1, P_2) ≥ dist(x, y) ∀x, y ∈ T;
 2  end Find Diameter.

Fig. 1. Antipole Clustering Algorithm; algorithm to find the diagonal, the Pseudo-Diameter and the Pseudo-Center; Find Diameter in the set T.
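To make the splitting step concrete, the following is a minimal Python sketch of the 2D Antipole clustering of bounded radius described above. It mirrors the pseudo-code of Fig. 1, but the function and variable names (diagonal, antipole_clusters, sigma) are illustrative rather than the paper's implementation; it assumes Python 3.8+ for math.dist.

import math

def diagonal(points):
    # Return (diagonal length, Pseudo-Diameter pair, Pseudo-Center), as in the Diagonal routine.
    pxm = min(points, key=lambda p: p[0])          # point with minimum x
    pxM = max(points, key=lambda p: p[0])          # point with maximum x
    pym = min(points, key=lambda p: p[1])          # point with minimum y
    pyM = max(points, key=lambda p: p[1])          # point with maximum y
    extremes = [pxm, pxM, pym, pyM]
    # Pseudo-Diameter: the farthest pair among the four extreme points.
    a, b = max(((p, q) for p in extremes for q in extremes),
               key=lambda pq: math.dist(pq[0], pq[1]))
    center = ((pxm[0] + pxM[0]) / 2.0, (pym[1] + pyM[1]) / 2.0)
    diag = math.hypot(pxM[0] - pxm[0], pyM[1] - pym[1])
    return diag, (a, b), center

def antipole_clusters(points, sigma):
    # Split recursively while the bounding-rectangle diagonal exceeds 2*sigma.
    # Returns a list of (pseudo_center, cluster_points) pairs.
    diag, (a, b), center = diagonal(points)
    if diag <= 2 * sigma:                          # splitting condition Φ fails: emit a cluster
        return [(center, points)]
    left  = [p for p in points if math.dist(p, a) <  math.dist(p, b)]
    right = [p for p in points if math.dist(p, b) <= math.dist(p, a)]
    return antipole_clusters(left, sigma) + antipole_clusters(right, sigma)

A call such as antipole_clusters(nodes, sigma) returns the Pseudo-Centers at which the mobile base stations are initially placed, each within distance σ of every node of its cluster.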
3. Moving the Base Stations.

Mobile base stations are initially placed at the Pseudo-Center of each cluster and they periodically move to occupy the next center-of-gravity configuration. In terms of computational geometry, movement is a classical optimization problem called point bipartite matching. In the rest of the paper we will use the term "matching" to mean "bipartite matching". Several minimization criteria may be adopted. If the total distance must be minimized then one can make use of the algorithm described in [1], which runs in time more than quadratic in the number k of mobile bases; see also [16] for an approximation algorithm and [13] for a survey. If the parallel motion time has to be minimized then the so-called Bottleneck Matching problem (minimizing the maximum distance) must be solved. This can be done in time O(k^1.5 log k) [8, 9]. However, in our case we propose a very simple solution that takes advantage of the fact that Antipole clustering keeps the cluster radius bounded: let r be such a uniform bound. This has more the flavor of a solution of the uniform matching problem (also called balanced matching or fair matching) [9], where one tries to minimize the difference between the maximum and the minimum pair.
2D AntipoleClusteringBCSize(AntipoleTree AP, Dataset S, Radius σ, Cluster-Centers CCenters, MaxSize k)
 1   Diag := Diagonal(S, Q, C);
 2   {A, B} := Q;
 3   if Diag ≤ 2σ then   // splitting condition Φ fails
 4       if |S| ≤ k then
 5           CCenters := CCenters ∪ {C};
 6           return;
 7       end if;
 8   end if;
 9   AP.Diagonal := Diag;
10   AP.Center := C;
11   S_l := {O ∈ S | dist(O, A) < dist(O, B)};
12   S_r := {O ∈ S | dist(O, B) ≤ dist(O, A)};
13   2D AntipoleClusteringBCSize(AP.left, S_l, σ, CCenters, k);
14   2D AntipoleClusteringBCSize(AP.right, S_r, σ, CCenters, k);
15   return;
16  end 2D AntipoleClusteringBCSize.

Fig. 2. Antipole Algorithm with bounded size of cluster.
The idea is to consider the set of mobile base stations as a rigid body which must move into the next configuration in a way that minimizes distance. This is done in such a way that the Pseudo-Center of the set of mobile bases moves onto the Pseudo-Center of the next precomputed configuration, followed by the assignment of each source point to the closest destination. Formally, let P1, P2, ..., Pk be the positions of the k base stations before a move is needed and let Q1, Q2, ..., Qk be their destinations, which are the Pseudo-Centers of the k clusters after Antipole clustering is performed. In order to simplify the presentation, assume first that the clusters before and after the motion are contained in non-overlapping circles of radius r and centers Pi and Qj respectively. Let BP and BQ be the Pseudo-Centers of P1, P2, ..., Pk and Q1, Q2, ..., Qk respectively. Let P'1, P'2, ..., P'k be the result of translating the set of points P1, P2, ..., Pk by the vector BP → BQ, which gives the offset in each of the coordinates. Since the distance of any pair of clusterheads Pi, Pj is not smaller than 2r and since the rigid motion preserves distances, P'i and P'j will lie in different clusters with different Pseudo-Centers. Hence if Q*i is the Pseudo-Center of the cluster containing P'i, then the one-to-one map Pi → Q*i defines the motion of our base stations. Therefore, under this strong assumption, in order to find the position Q*i into which the base station must move, one has to compute the Voronoi Diagram of Q1, Q2, ..., Qk and find, for every P'i, the unique region of the Voronoi Diagram containing it (nearest neighbor search). Therefore the new positions of the mobile stations can be computed in time O(k log k), where k is the number of base stations [14].

If we remove the above strong assumption, then the map assigning each translated point P'i to the Voronoi polygon of the set Q1, Q2, ..., Qk containing it may not be one-to-one. In this general case we use the Closest Match strategy (see Fig. 3). That is, we proceed by assigning each Voronoi polygon representative Qi to the closest element P'j in the polygon. After each of these assignments is executed, the matched pairs are eliminated and the algorithm proceeds recursively on the remaining nodes until all matches are performed.
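The rigid motion is simply a translation of every source point by the difference of the two centers of gravity. A small NumPy sketch of this single step (array and function names are illustrative, not the paper's code):

import numpy as np

def rigid_motion(P, Q):
    # P: (k, 2) array of current base station positions
    # Q: (k, 2) array of destination Pseudo-Centers
    # Translate P by the vector joining the two centers of gravity (BP -> BQ).
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    offset = Q.mean(axis=0) - P.mean(axis=0)
    return P + offset

Under the non-overlap assumption above, each translated point falls into the Voronoi cell of its own destination, so one nearest-neighbor query per point completes the matching.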
ClosestMatch(Source P, Destination Q)
 1   B_P := Center of gravity(P);
 2   B_Q := Center of gravity(Q);
 3   P' := Rigid Motion(P, B_P→B_Q);
 4   M := LongEdges := TempLongEdges := ∅;
 5   threshold := log(|Q|);
 6   while Q ≠ ∅ do
 7       Q̄ := P̄ := TempLongEdges := ∅;
 8       V_Q := VoronoiDiagram(Q);
 9       for each q_i ∈ Q do
10           T := Nearest Neighbor(V_Q, q_i, P);
11           if T ≠ ∅ then
12               Q̄ := Q̄ ∪ {q_i};
13               P̄ := P̄ ∪ T;
14               M := M ∪ (q_i, Value(T));
15               TempLongEdges := TempLongEdges ∪ (q_i, Value(T));
16           end if;
17       end for each;
18       if |Q| ≤ threshold then
19           LongEdges := LongEdges ∪ TempLongEdges;
20       end if;
21       Q := Q \ Q̄;
22       P := P \ P̄;
23       TempLongEdges := ∅;
24   end while;
25  end ClosestMatch.

Fig. 3. Closest Match Algorithm.
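Since locating the Voronoi cell of a point is equivalent to a nearest-neighbor query on the destination set, the rounds of ClosestMatch can be sketched with a k-d tree. The following Python sketch (using scipy.spatial.cKDTree) is illustrative of the strategy in Fig. 3, with hypothetical names; it is not the paper's Java implementation.

import math
import numpy as np
from scipy.spatial import cKDTree

def closest_match(P, Q):
    # In every round each still-unmatched destination grabs the closest
    # unmatched (translated) source lying in its Voronoi cell.
    # Returns (list of (source_index, dest_index) pairs, long edges of the late rounds).
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    P_shift = P + (Q.mean(axis=0) - P.mean(axis=0))     # rigid motion BP -> BQ
    free_p = list(range(len(P)))                        # unmatched source indices
    free_q = list(range(len(Q)))                        # unmatched destination indices
    threshold = math.log(len(Q)) if len(Q) > 1 else 1.0
    match, long_edges = [], []
    while free_q:
        tree = cKDTree(Q[free_q])                       # stands in for the Voronoi diagram of Q
        _, owner = tree.query(P_shift[free_p])          # Voronoi cell of each remaining source
        best = {}                                       # destination -> (distance, source)
        for pi, qloc in zip(free_p, owner):
            qi = free_q[qloc]
            d = float(np.linalg.norm(P_shift[pi] - Q[qi]))
            if qi not in best or d < best[qi][0]:
                best[qi] = (d, pi)
        round_pairs = [(pi, qi) for qi, (_, pi) in best.items()]
        if len(free_q) <= threshold:                    # late rounds produce the long edges
            long_edges.extend(round_pairs)
        match.extend(round_pairs)
        matched_p = {pi for pi, _ in round_pairs}
        matched_q = {qi for _, qi in round_pairs}
        free_p = [i for i in free_p if i not in matched_p]
        free_q = [i for i in free_q if i not in matched_q]
    return match, long_edges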
The other routines used by the algorithm are:

• Rigid Motion, which transfers each point of a dataset P from its initial position to a new position according to the vector BP → BQ connecting the two centers of gravity of the sets P and Q.

• VoronoiDiagram, which takes as input a set of points and returns the Voronoi Diagram of that set together with the corresponding tree needed to perform nearest neighbor search [7, 14]. Its construction requires O(k log k) time, where k = |Q|.

• Nearest Neighbor, which takes as input the Voronoi Diagram V_Q of the set Q, a point q_i ∈ Q and the set of points P, and returns the nearest point of P inside the Voronoi polygon having q_i as representative. It requires O(log k) time for each point of P.

However, this algorithm has two disadvantages:

1. The maximum distance of a matching pair is too high. Indeed the maximum distance is directly related to the time necessary for simultaneously moving the base stations to their matching destinations.

2. Some matching pair segments may intersect. These collisions must of course be avoided.

In order to overcome these problems, two corresponding procedures are performed sequentially.

1. Remove Long Edges (see Fig. 5). We eliminate O(log k) long matching pairs by iterating the following straightforward geometric step: for every long
match (A, B), find the set of all matching pairs {(Pi, Qi)} such that Pi (resp. Qi) lies in the circle of center B (resp. A) and radius dist(A, B). Let (Pj, Qj) be the pair in this set which minimizes max{dist(A, Qj), dist(Pj, B)}. Replace the pairs (A, B), (Pj, Qj) by (A, Qj), (Pj, B) respectively (for example, in Fig. 4, j = 1).
Fig. 4. The dashed lines represent the segments after the procedure Remove Long Edges is called.
Remove Long Edges(Match M, Match LongEdges)
 1   for each (A, B) ∈ LongEdges do
 2       Let S := {(P_i, Q_i) ∈ M | P_i (resp. Q_i)
 3                 lies in the circle of center B (resp. A)};
 4       Let (P_j, Q_j) ∈ S which minimizes max{dist(A, Q_j), dist(P_j, B)};
 5       M := M \ {(A, B), (P_j, Q_j)} ∪ {(A, Q_j), (P_j, B)};
 6   end for each;
 7  end Remove Long Edges.

Fig. 5. Remove Long Edges Algorithm.
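A direct Python transcription of this step could look as follows; the match is kept as a list of (source point, destination point) coordinate pairs, and all names are illustrative (a sketch, not the paper's code).

import math

def remove_long_edges(match, long_edges):
    # For each long edge (A, B): among the pairs (Pj, Qj) with Pj in the circle
    # centered at B and Qj in the circle centered at A (radius dist(A, B)),
    # pick the one minimizing max(dist(A, Qj), dist(Pj, B)) and swap endpoints.
    for (A, B) in long_edges:
        if (A, B) not in match:
            continue
        r = math.dist(A, B)
        candidates = [(P, Q) for (P, Q) in match
                      if (P, Q) != (A, B)
                      and math.dist(P, B) <= r and math.dist(Q, A) <= r]
        if not candidates:
            continue
        Pj, Qj = min(candidates,
                     key=lambda pq: max(math.dist(A, pq[1]), math.dist(pq[0], B)))
        match.remove((A, B))
        match.remove((Pj, Qj))
        match.extend([(A, Qj), (Pj, B)])
    return match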
2. Remove Intersection (see Fig. 6). In order to remove segment intersections we make use of the two standard computational geometry tests Line Segment Intersection Test and Line Segment Intersection Test Visit [14]. The first one finds a pair of intersecting segments in a set M, if such a pair exists. The second one takes as input one segment p and a set of segments M and returns one segment q of M intersecting p, if such a segment exists. The intersection is eliminated by the procedure Exchange Endpoint, which replaces the two intersecting segments of the match (diagonals of a quadrilateral) by the other two non-intersecting segments (opposite sides of the quadrilateral) obtained by switching endpoints. The process Remove Intersection Visit goes on recursively until no intersection is detected. If we assume that no three nodes are collinear, then termination is guaranteed by the existence of an intersection-free match (see [3]).
Remove Intersection(Match M)
 1   while p := Line Segment Intersection Test(M) ≠ ∅ do
 2       M := M \ {p};
 3       M := Remove Intersection Visit(p, M, L);
 4   end while;
 5   return M;
 6  end Remove Intersection.

Remove Intersection Visit(Pair p, Match M, ProcessList L)
 1   if q := Line Segment Intersection Test Visit(M, p) ≠ ∅ then
 2       M := M \ {q};
 3       Check(q, L);
 4       (p', q') := Exchange Endpoint(p, q);
 5       Add(q', L);
 6       Add(p', L);
 7       M' := Remove Intersection Visit(ExtractNext(L), M ∪ {q'});
 8       M' := Remove Intersection Visit(ExtractNext(L), M');
 9       return M';
10   else
11       M := M ∪ {p};
12       return M;
13  end Remove Intersection Visit.

Fig. 6. Remove Intersection Algorithm.
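For illustration, the uncrossing idea can also be realized with a simple rescan until no crossing remains, instead of the recursive visit above; the Python sketch below is a simplification in that sense (standard orientation test, points as (x, y) tuples), not the paper's procedure. Each swap replaces the two diagonals of a quadrilateral by its opposite sides, which strictly decreases the total length, so the loop terminates.

def ccw(a, b, c):
    # Sign of the cross product (b - a) x (c - a).
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def properly_intersect(p1, q1, p2, q2):
    # True if segments p1-q1 and p2-q2 cross (assuming no three points are collinear).
    return (ccw(p1, q1, p2) * ccw(p1, q1, q2) < 0 and
            ccw(p2, q2, p1) * ccw(p2, q2, q1) < 0)

def remove_intersections(match):
    # Repeatedly exchange endpoints of crossing matching segments.
    match = list(match)
    changed = True
    while changed:
        changed = False
        for i in range(len(match)):
            for j in range(i + 1, len(match)):
                (p1, q1), (p2, q2) = match[i], match[j]
                if properly_intersect(p1, q1, p2, q2):
                    match[i], match[j] = (p1, q2), (p2, q1)   # exchange endpoints
                    changed = True
    return match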
We summarize the whole protocol in the pseudo-code in Fig. 7.

ReassignBackbone(Locations S, Radius σ)
 1   2D AntipoleClustering(AP, S, σ, P);
 2   repeat periodically
 3       2D AntipoleClustering(AP, S, σ, Q);
 4       ClosestMatch(P, Q, M, LongEdge);
 5       Remove Long Edges(M, LongEdge);
 6       Remove Intersection(M);
 7       P := Q;
 8   end repeat;
 9  end ReassignBackbone.

Fig. 7. Reassign Backbone Algorithm.
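Putting the pieces together, the periodic loop of Fig. 7 could be driven roughly as below. This usage sketch assumes the illustrative helpers defined earlier (antipole_clusters, closest_match, remove_long_edges, remove_intersections) and a hypothetical current_node_positions() callable returning the current (x, y) positions of the mobile nodes; it also assumes that the number of clusters stays equal to the number of base stations (the unequal case is treated in Section 4).

def reassign_backbone(current_node_positions, sigma):
    # Generator: each iteration yields the list of (old position, new position)
    # moves for the base stations.
    clusters = antipole_clusters(current_node_positions(), sigma)
    P = [c for c, _ in clusters]                 # initial placement at the Pseudo-Centers
    while True:
        clusters = antipole_clusters(current_node_positions(), sigma)
        Q = [c for c, _ in clusters]
        pairs, long_pairs = closest_match(P, Q)
        match = [(tuple(P[i]), tuple(Q[j])) for i, j in pairs]
        long_edges = [(tuple(P[i]), tuple(Q[j])) for i, j in long_pairs]
        match = remove_long_edges(match, long_edges)
        match = remove_intersections(match)
        yield match
        P = Q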
3.1. Complexity and Experimental Analysis

The main point we will try to demonstrate in this section is that the existing computational geometry solutions to the matching problem arising from our application do not give rise to feasible solutions when the number of required base stations is large (over 500). Specifically, we consider two quality parameters: the
total distance covered by the moving base stations and the maximum distance covered by a single base station. The two best practical solutions proposed in the computational geometry literature are the one proposed by [1] for the Minimum Weight Matching computation and the one proposed by [8, 9] for the Bottleneck Matching, respectively. Our experiments show that both solutions require excessive time for this application. On the other hand, our algorithm gives a feasible solution which is only 8% (on average) away from the Minimum Weight Matching but has in most cases a lower maximum-distance match. Moreover, the maximum distance is not far from that of the Bottleneck Matching algorithm, while the total weight is lower. The conclusion is that our matching algorithm gives values which are a good compromise between the two exact algorithms, while being feasible in terms of the time needed to compute the matching.

In this section we perform both average and worst case analysis of the ClosestMatch algorithm. The results are contained in the following two theorems.

Theorem 1 (Average Case) Let P and Q be such that |P| = |Q| = k. If at each stage of the algorithm the number of unmatched points decreases by at least a constant factor h > 1 (so at most k/h^i points remain unmatched after the i-th cycle), then the running time of ClosestMatch is O(k log k).

Proof. Let Q_i and P_i be the unmatched points at the i-th cycle. The i-th cycle of the algorithm requires the computation of the Voronoi diagram of Q_i and the computation of all Nearest Neighbors between Q_i and P_i. The process finishes when k/h^i = 1, yielding i = log_h k. Therefore the ClosestMatch cycle is executed exactly log_h k times and the i-th cycle requires O((k/h^i) log(k/h^i)) steps. The total complexity is:

  Σ_{i=0}^{log_h k} O((k/h^i) log(k/h^i)) = O(hk log k) = O(k log k).  □
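For completeness, the geometric-series bound behind the last equalities can be spelled out as follows (a supplementary step, stated for the constant factor h > 1 introduced above):

\[
\sum_{i=0}^{\log_h k} \frac{k}{h^i}\,\log\frac{k}{h^i}
\;\le\; \log k \sum_{i=0}^{\infty} \frac{k}{h^i}
\;=\; \frac{h}{h-1}\, k \log k
\;=\; O(k \log k).
\]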
During the experiments described below we noticed that a constant factor h of less than 2 occurs.

Theorem 2 (Worst Case) Let P and Q be such that |P| = |Q| = k. If at each stage of the algorithm only one match is found, then the running time of ClosestMatch is O(k^2 log k).

Proof. At each stage we have a complexity of O((k − i) log(k − i)) for i = 0, ..., k. The total complexity is:

  Σ_{i=0}^{k} O((k − i) log(k − i)) = O(k^2 log k).  □
Concerning Remove Long Edges, we perform O(log k) long-matching-pair eliminations, where each such elimination takes no more than O(k) time. Therefore the resulting running time complexity is O(k log k). Finally, an experimental running time analysis of Remove Intersection is shown in Fig. 8. This experiment shows that, even for a bad case such as a randomly generated match (which is much worse than the output of our ClosestMatch algorithm), the running time of Remove Intersection lies substantially below the function (1/6000) × (k log k) for realistic data.
[Plot: average running time (in secs.) of Remove Intersection vs. number of stations (100–1000), compared with the curve (x log x)/6000.]
Fig. 8. The figure depicts the running time of Remove Intersection on randomly generated matches.
Next we illustrate experimental results comparing the ClosestMatch plus Remove Long Edges plus Remove Intersection algorithm, the exact Minimum Weight Matching algorithm [12, 1, 6] and the Bottleneck Matching algorithm [8, 9]. We implemented our algorithm in Java (JDK 1.3 compiler) and all the experiments were carried out on a PC Athlon 1400 MHz running the Linux operating system. The algorithm was run on random inputs of different sizes. Two kinds of point generation were used: uniform generation and, following the protocol proposed in this paper, generation of the Pseudo-Centers of the clusters returned by the 2D AntipoleClustering algorithm on random sets of input points. In order to analyze the performance of our algorithm in the various cases, it is convenient to introduce the following notation relative to a generic input (the sets P and Q):

• M is the match produced by an algorithm;

• MWM is the Minimum Weight Matching, BNM is the Bottleneck Matching, and CM, RLE, RI are ClosestMatch, Remove Long Edges and Remove Intersection respectively. Two versions of RLE are considered: RLE, with threshold = log|Q|, and moreRLE, with threshold = 3 × log|Q|;

• W(M) = Σ_{(x,y)∈M} dist(x, y) is the total weight of a matching;

• Max(M) = max_{(x,y)∈M} dist(x, y) is the maximum distance present in the match;

• ε_M^Weight = |W(MWM) − W(M)| / W(MWM) is the relative error of the total weight of the match with respect to the optimal one. Here M can be BNM or CM+RI, CM+RLE+RI, CM+moreRLE+RI;

• ε_M^Max = |Max(BNM) − Max(M)| / Max(BNM) is the relative error of the maximum distance of the match with respect to the optimal one. Here M can be MWM or CM+RI, CM+RLE+RI, CM+moreRLE+RI;
Table 1. The table reports the running time comparison (in seconds) between the Minimum Weight Matching and the proposed algorithm using random points as input; the last column shows the running time of the clustering algorithm used to produce that input.
Size   | MWM      | CM+RI | CM+RLE+RI | CM+moreRLE+RI | 2D AntipoleClustering
100    | 0.560    | 0.062 | 0.049     | 0.050         | 1.6
200    | 5.440    | 0.132 | 0.139     | 0.148         | 1.8
300    | 20.049   | 0.302 | 0.304     | 0.285         | 1.9
400    | 79.252   | 0.369 | 0.511     | 0.511         | 2.1
500    | 125.875  | 0.423 | 0.768     | 0.730         | 2.1
1000   | 1958.368 | 1.015 | 2.929     | 3.032         | 2.4
• m_ε and M_ε denote the minimum and the maximum of ε_M^Weight and ε_M^Max respectively;

• E(·) denotes the mean of a set of values.

In Table 1 the running times of the proposed algorithm and of the Minimum Weight Matching are reported. The results show how fast our method is when compared to the exact algorithm, which gives rise to infeasible running times. For example, reassigning 1000 mobile base stations would require about 30 minutes, whereas our algorithm takes less than 3 seconds. On the other hand, our proposed solution gives only an 8% average error with respect to the optimal Minimum Weight Matching (see Fig. 9). Moreover, the maximum distance, corresponding to the time needed for the simultaneous base station move, is lower than the one returned by the Minimum Weight Matching (see Fig. 10). Fig. 9 also shows that the total weight computed by the optimal Bottleneck Matching algorithm is much higher than that of our proposed algorithm.

4. Dealing with Unequal Numbers of Base Stations and Destinations.

In this section we describe how to deal with the more general case where the number of base stations is not equal to the number of base station destinations. This may arise in two main cases:

1. The number of final clusters is less than the number of available base stations. This means that we have more base stations than needed. In this case we perform our proposed Closest Match algorithm (see Fig. 3) until all destination points are matched. The residual unassigned base stations are put in a set Unused. As before, the two procedures Remove Long Edges and Remove Intersection are executed sequentially. Of course, at each backbone reassignment stage both used and unused base stations will be included in the input of the next step.

2. The number of destination clusters is larger than the number of available stations. In this case two options are possible:
[Fig. 9: four plots of the average and maximum total-weight match error vs. the number of stations (100–500) for BNM, CM+RI, CM+RLE+RI and CM+moreRLE+RI.]

Fig. 9. In Fig. (a) the E(ε_M^Weight) of the proposed algorithm vs. the BNM algorithm is depicted; in Fig. (b) the M_ε^Weight of the proposed algorithm vs. the BNM algorithm is depicted. 2D AntipoleClustering Pseudo-Centers output (upper plots) and randomly generated points (lower plots).
[Fig. 10: two plots of the average maximum-distance error vs. the number of stations (100–500) for MWM, CM+RLE+RI and CM+moreRLE+RI.]

Fig. 10. The figure depicts the E(ε_M^Max) of the proposed algorithm vs. the MWM algorithm. 2D AntipoleClustering Pseudo-Centers output (a) and randomly generated points (b).
(a) Introduce new base stations in order to match the number of destinations;

(b) No more bases are available, but the stations must be put in the most convenient positions. In this case we make use of the nice hierarchical structure of the Antipole Tree by merging adjacent clusters using the following minimum Antipole Pseudo-Diameter criterion: we merge the two clusters whose parent node has the lowest Antipole Pseudo-Diameter (Fig. 1, line 5). The Pseudo-Center C of the merged clusters is approximated by the middle point of the Antipole. This merging process is iterated until the number of clusters equals the number of base stations.

5. Conclusions and Future Developments.

This paper describes a Mobile Wireless Network application in which both clustering and computational geometry algorithms are required. However, existing computational geometry algorithms are infeasible for large numbers of mobile nodes. We propose practical approximations that merge ideas from computational geometry and clustering. Our approach is to group mobile nodes into clusters of bounded radius using the two-dimensional Euclidean version of the Antipole Tree data structure. Mobile base stations occupy the Pseudo-Centers of the clusters and are moved according to a fast practical bipartite matching algorithm which tries to minimize both total and maximum distance. A demo of our proposed matching algorithm, which allows comparison with the other methods, is available at the web site http://alpha.dmi.unict.it/~ctnyu/. Applications of the hierarchical structure of the Antipole Tree clustering to other mobile wireless (Ad-Hoc) networks and wired networks will be the subject of future investigation.

References

1. P. K. Agarwal, A. Efrat, and M. Sharir, Vertical Decomposition of Shallow Levels in 3-Dimensional Arrangements and its Applications, SIAM Journal on Computing, 1999.

2. P. Agarwal and J. Erickson, Geometric Range Searching and its Relatives, in B. Chazelle, J. Goodman and R. Pollack, editors, Advances in Discrete and Computational Geometry, AMS Press, Providence, RI, 1998.

3. M. Atallah, A Matching Problem in the Plane, Journal of Computer and System Sciences, 31, pp. 63-70, 1985.

4. A. Borodin and R. El-Yaniv, Online Computation and Competitive Analysis, Cambridge University Press, 1998.

5. D. Cantone, A. Ferro, T. Maugeri, A. Pulvirenti, and D. Shasha, Antipole Clustering to Support Range Search in Dynamic Metric Spaces with Applications to Multiple Sequence Alignment, Technical Report, University of Catania, 2002.

6. W. J. Cook, W. H. Cunningham, W. R. Pulleyblank, and A. Schrijver, Combinatorial Optimization, Wiley-Interscience Series in Discrete Mathematics and Optimization, 1998.

7. H. Edelsbrunner, L. J. Guibas, and J. Stolfi, Optimal Point Location in a Monotone Subdivision, SIAM Journal on Computing, 15(2):317-340, 1986.

8. A. Efrat and A. Itai, Improvements on Bottleneck Matching and Related Problems Using Geometry, Proceedings of the 12th ACM Symposium on Computational Geometry, pages 301-310, 1996.
9. A. Efrat, A. Itai, and M. J. Katz, Geometry Helps in Bottleneck Matching and Related Problems, Algorithmica, 31(1), 1-28, 2001.

10. M. Gerla and Kai Shin Xu, Minuteman: Forward Projection of Unmanned Agents Using the Airborne Internet, IEEE Aerospace Conference, 2002.

11. S. Guha, R. Rastogi, and K. Shim, CURE: An Efficient Clustering Algorithm for Large Databases, Proceedings of the ACM SIGMOD International Conference on Management of Data, 1998.

12. M. Junger and W. Pulleyblank, Geometric Duality and Combinatorial Optimization, Jahrbuch Uberblicke Mathematik, Vieweg, Braunschweig/Wiesbaden, 1993, pp. 1-24.

13. J. S. B. Mitchell and J. O'Rourke, Computational Geometry Column 42, International Journal of Computational Geometry and Applications, to appear. Also in SIGACT News 32(3):63-72 (2001), Issue 120.

14. F. P. Preparata and M. I. Shamos, Computational Geometry: An Introduction, Springer-Verlag, 1995.

15. E. M. Royer and C.-K. Toh, A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks, IEEE Personal Communications Magazine, Vol. 6, 46-55, April 1999.

16. K. R. Varadarajan and P. K. Agarwal, Approximation Algorithms for Bipartite and Non-Bipartite Matching in the Plane, Proceedings of the 10th ACM-SIAM Symposium on Discrete Algorithms, 1999.

17. T. Zhang, R. Ramakrishnan, and M. Livny, BIRCH: An Efficient Data Clustering Method for Very Large Databases, Proceedings of the ACM SIGMOD International Conference on Management of Data, 1996.