
Effect of 3D Topographical Surfaces for the Performance Evaluation of Wireless Sensor Networks

M. Gökçen Arslan
Cryptology Department, TÜBİTAK UEKAE, 41470 Kocaeli, Turkey
Email: [email protected]

Tolga Önel and Cem Ersoy
Netlab, Computer Engineering Department, Boğaziçi University, 34342 Istanbul, Turkey
Email: {onel, ersoy}@boun.edu.tr

Abstract—The soundness of the evaluation model used for wireless sensor networks (WSN) affects the soundness of the results. Most models proposed in the literature assume distance-based sensing and 2D free-space communication for a randomly deployed WSN scenario. However, random sensor deployment commonly takes place in 3D inaccessible terrains. In this study, we investigate the incorporation of a realistic 3D terrain model into the performance evaluation of a target tracking WSN. Our observation is that the unrealistic and contradictory 2D terrain assumptions, in which communication and sensing are not blocked because topographic formations are neglected, result in optimistic and unrealistic WSN performance figures. The performance evaluations compare the mean target localization error for a given target route in various artificially generated yet realistic terrains. The effect of the terrain parameters on the mean error is also investigated. Our simulations show that performance predictions made at the paper-design stage can be misleading due to unrealistic assumptions about the WSN deployment region.

I. INTRODUCTION

The performance of a wireless sensor network (WSN) can be examined either by analytical methods or by simulations. However, the performance of most models proposed in the literature is evaluated on planar surfaces assuming a random sensor deployment scenario. These models, for the sake of simplicity, rely on 2D free-space communication and a distance-based sensing model, in which there are no obstacles blocking the paths between nodes, or between nodes and the target. In reality, random deployment of a WSN takes place on 3D terrain, where signals are blocked due to the loss of line of sight (LOS), resulting in a degradation of the communication and sensing tasks. Therefore, a more realistic model which considers the 3D terrain effects on both the sensing and communication tasks needs to be built into WSN performance evaluation. The aim of this work is to analyze the effect of 3D terrain on both sensing and communication tasks in a WSN target tracking application. The performance comparison is carried out among various 3D terrains and the 2D surface. The terrains that

we use in our scenarios are artificially generated yet realistic terrains, including hilly, ledgy, and smooth terrains.

In the literature, only a few researchers have considered 3D terrain effects while solving WSN problems such as routing, localization, and sensing coverage. In [1], the authors optimize the number and the placement of sensors over a given area for a target detection system. The aim is to minimize the cost by reducing the number of sensors while still providing sufficient sensor coverage by taking environmental effects (obstacles such as buildings and trees) into account. Here, the obstacles are random and are modeled by altering the detection probabilities of grid pairs. A polynomial-time solution for the 3D coverage problem is given in [2]. A field is said to be α-covered if every point in the sensing field is covered by at least α sensors, assuming that the sensing range of a sensor is modeled by a 3D ball; α-coverage is then determined by how many neighbouring balls intersect a sensor's ball. A beacon-less localization scheme for non-flat terrain is tackled in [3]. The work is based on a deployment model where sensors are deployed in groups at regular points and the distribution of sensors around these points follows a Gaussian distribution. Moreover, the nodes roll to their final positions (e.g., valleys) based on the surface characteristics. A similar work introduced a location assistant (LA) to tackle the WSN localization problem in 3D [4]. The location-unaware sensor nodes estimate their positions with the aid of a mobile LA. In our study, we assume that the sensor node coordinates are known. Sensor node localization is a problem of its own, and we keep it beyond the scope of this paper.

The paper is organized as follows: the next section presents a background on collaborative target tracking and defines our architecture. In Section III, terrain modeling and the simulation terrains are described. The simulation setup, scenarios, and experimental results are provided in Section IV. We give the conclusions and future work in Section V.

II. TARGET TRACKING

This work is supported by the State Planning Organization of Turkey under the grant number 03K120250, and by TUBITAK under the grant number 106E082.


Target tracking is defined as the processing of measurements obtained from a target to get an estimate of its current state.

Target tracking has applications in Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) in the military [5]. The spatial coverage and the multiplicity in the sensing task facilitate tracking targets in coordination and improve the target localization accuracy. In collaborative target tracking, not all sensors provide equally useful information for the target state estimation. The process of updating the target state estimate requires collecting measurements from neighbouring sensors. Here, a decision has to be made in selecting an optimal subset of the sensors and an optimal way of incorporating these measurements into the current belief. The criterion for deciding on the optimal subset mostly uses the sensor position and sensing modality information. Based on this information, a sensor may refrain from transmitting data to the other sensors [6]. In [5], the performance of collaborative target tracking based on maximum mutual information sensor selection has been evaluated for the Kalman and information filters. The mutual information between the target state and the observation measures how much the current observation tells about the current target state. The authors defined a mutual information gain, J. If J is sufficiently large, the sensor shares its information with the neighbouring nodes; otherwise it does not transmit. The work also adapts an Information Controlled Transmission Power adjustment (ICTP) scheme in which the communication transmission power of a sensor is regulated according to its information content. Our study uses a collaborative target tracking application in the performance evaluation of the WSN. Moreover, our model also considers the 3D topographical surfaces of the sensor deployment region.

A. Unbiased Target Tracking

In target tracking applications, Cartesian coordinates are the most suitable form for representing target motion. However, in most systems, measurements of the target position are reported in polar coordinates (range, bearing, elevation angle) relative to the observing sensor. The classical conversion from polar to Cartesian coordinates results in biased and inconsistent estimates in the presence of cross-range measurement error. The debiased conversion for 3D target tracking and the derivations for the Gaussian noise used in our model are presented in [7].
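Before moving on to the process and observation models, a brief aside on the mutual-information-based sensor selection of [5] summarized earlier in this section: when the target belief is Gaussian and kept in information form (as in Section II-D), the information gain of an observation reduces to a log-determinant ratio. The sketch below is a minimal Python illustration; the threshold value and all function and variable names are ours, not taken from [5].

```python
import numpy as np

def mutual_information_gain(Y_prior: np.ndarray, I_obs: np.ndarray) -> float:
    """J = 0.5 * ln(det(Y_prior + I_obs) / det(Y_prior)) for a Gaussian belief
    with information matrix Y_prior and an observation contributing I_obs."""
    _, logdet_post = np.linalg.slogdet(Y_prior + I_obs)
    _, logdet_prior = np.linalg.slogdet(Y_prior)
    return 0.5 * (logdet_post - logdet_prior)

def should_broadcast(Y_prior, I_obs, threshold=0.1):
    """Share the local observation only if it is informative enough
    (the threshold is illustrative)."""
    return mutual_information_gain(Y_prior, I_obs) > threshold

# Example: a 4x4 information matrix and a position-only observation contribution.
Y = np.diag([4.0, 4.0, 1.0, 1.0])
I_obs = np.diag([2.0, 2.0, 0.0, 0.0])
print(should_broadcast(Y, I_obs))  # True for this illustrative threshold
```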

B. Process Model

The target process state is a four-dimensional vector that consists of the two-dimensional position of the target, $(\xi, \eta)$, and the velocity of the target, $(\dot{\xi}, \dot{\eta})$. The target process state vector is defined by
$$\mathbf{x} = [\xi \;\; \eta \;\; \dot{\xi} \;\; \dot{\eta}]^T, \qquad (1)$$
and it evolves in time according to
$$\mathbf{x}(k+1) = \mathbf{F}\mathbf{x}(k) + \mathbf{v}(k), \qquad (2)$$
[5], where $\mathbf{x}(k)$ is the real target state vector at time instant $k$ as in (1), $\mathbf{F}$ is the process transition matrix that offsets the target position with respect to the velocity, and $\mathbf{v}$ is the zero-mean process transition noise that adds randomness to the route.

C. Observation Model in 3D

Sensors can only observe the first two dimensions of the process; the velocity of the target is not observable by the sensors. Furthermore, sensors collect range and bearing data, but they cannot observe the coordinates of the target directly. The observations in 3D are [7]
$$r_m = r + \nu_r, \qquad \beta_m = \beta + \nu_\beta, \qquad \varepsilon_m = \varepsilon + \nu_\varepsilon,$$
where $r$ is the true range, $\beta$ is the bearing angle, $\varepsilon$ is the elevation angle, and $\nu_r$, $\nu_\beta$, $\nu_\varepsilon$ are independent zero-mean noises with standard deviations $\sigma_r$, $\sigma_\beta$ and $\sigma_\varepsilon$, respectively. The classical transformation from polar to Cartesian coordinates is given by
$$x_m = r_m \cos\beta_m \cos\varepsilon_m, \qquad y_m = r_m \sin\beta_m \cos\varepsilon_m.$$
However, as pointed out in [7], this conversion can give biased estimates due to the nonlinear transformation of the noisy bearing and elevation. Thus, bias compensation factors $(\lambda_\beta, \lambda_\varepsilon)$ are used in the conversion. Consequently, the unbiased conversions are
$$x_m^u = \lambda_\beta^{-1}\lambda_\varepsilon^{-1}\, r_m \cos\beta_m \cos\varepsilon_m, \qquad y_m^u = \lambda_\beta^{-1}\lambda_\varepsilon^{-1}\, r_m \sin\beta_m \cos\varepsilon_m,$$
while the covariances of the observation errors in 3D are
$$R_m^{11} = \frac{1}{4}\lambda_\beta^{-2}\lambda_\varepsilon^{-2}(r_m^2 + 2\sigma_r^2)\left(1 + (\lambda'_\beta)^2\cos 2\beta_m\right)\left(1 + (\lambda'_\varepsilon)^2\cos 2\varepsilon_m\right) - \frac{1}{4}(r_m^2 + \sigma_r^2)\left(1 + \lambda'_\beta\cos 2\beta_m\right)\left(1 + \lambda'_\varepsilon\cos 2\varepsilon_m\right),$$
$$R_m^{22} = \frac{1}{4}\lambda_\beta^{-2}\lambda_\varepsilon^{-2}(r_m^2 + 2\sigma_r^2)\left(1 - (\lambda'_\beta)^2\cos 2\beta_m\right)\left(1 + (\lambda'_\varepsilon)^2\cos 2\varepsilon_m\right) - \frac{1}{4}(r_m^2 + \sigma_r^2)\left(1 - \lambda'_\beta\cos 2\beta_m\right)\left(1 + \lambda'_\varepsilon\cos 2\varepsilon_m\right),$$
$$R_m^{12} = \frac{1}{4}\lambda_\beta^{-2}\lambda_\varepsilon^{-2}(\lambda'_\beta)^2(r_m^2 + 2\sigma_r^2)\sin 2\beta_m\left(1 + (\lambda'_\varepsilon)^2\cos 2\varepsilon_m\right) - \frac{1}{4}\lambda'_\beta(r_m^2 + \sigma_r^2)\sin 2\beta_m\left(1 + \lambda'_\varepsilon\cos 2\varepsilon_m\right),$$
$$R_m^{21} = R_m^{12},$$
and the compensation factors are
$$\lambda_\beta = E[\cos\nu_\beta] = e^{-\sigma_\beta^2/2}, \qquad \lambda'_\beta = E[\cos 2\nu_\beta] = e^{-2\sigma_\beta^2} = \lambda_\beta^4,$$
$$\lambda_\varepsilon = E[\cos\nu_\varepsilon] = e^{-\sigma_\varepsilon^2/2}, \qquad \lambda'_\varepsilon = E[\cos 2\nu_\varepsilon] = e^{-2\sigma_\varepsilon^2} = \lambda_\varepsilon^4.$$
It should be noted that the covariance matrix for 3D is a three-by-three matrix, consisting of the elements $R_m^{11}$, $R_m^{22}$, $R_m^{33}$, $R_m^{12}$, $R_m^{13}$, $R_m^{23}$. However, in this study only the coordinates $(x, y)$ are of importance, not the $z$ coordinate. The target tracking process continues in 2D, since the $z$ coordinate is bound to the terrain height at the corresponding $(x, y)$ coordinate and need not be observed. The observed parameters $(r_m, \beta_m, \varepsilon_m)$ are mapped to 2D coordinates. As a result, we only consider the $R_m^{11}$, $R_m^{22}$ and $R_m^{12}$ elements of the covariance matrix.
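To make the conversion concrete, the following Python sketch implements the unbiased polar-to-Cartesian conversion and the $R_m^{11}$, $R_m^{22}$, $R_m^{12}$ covariance elements as given above (reconstructed from [7]); the function and variable names are ours.

```python
import numpy as np

def unbiased_conversion(r_m, beta_m, eps_m, sigma_r, sigma_b, sigma_e):
    """Unbiased polar-to-Cartesian conversion (x, y only) and its 2x2 covariance."""
    # Compensation factors
    lb  = np.exp(-sigma_b**2 / 2.0)   # lambda_beta
    lbp = lb**4                        # lambda'_beta = exp(-2*sigma_b^2)
    le  = np.exp(-sigma_e**2 / 2.0)   # lambda_epsilon
    lep = le**4                        # lambda'_epsilon

    # Unbiased converted position
    x_u = r_m * np.cos(beta_m) * np.cos(eps_m) / (lb * le)
    y_u = r_m * np.sin(beta_m) * np.cos(eps_m) / (lb * le)

    # Covariance elements R11, R22, R12 (R21 = R12)
    a = (r_m**2 + 2 * sigma_r**2) / (4 * lb**2 * le**2)
    b = (r_m**2 + sigma_r**2) / 4.0
    ce = 1 + lep**2 * np.cos(2 * eps_m)
    de = 1 + lep * np.cos(2 * eps_m)
    R11 = a * (1 + lbp**2 * np.cos(2 * beta_m)) * ce - b * (1 + lbp * np.cos(2 * beta_m)) * de
    R22 = a * (1 - lbp**2 * np.cos(2 * beta_m)) * ce - b * (1 - lbp * np.cos(2 * beta_m)) * de
    R12 = a * lbp**2 * np.sin(2 * beta_m) * ce - b * lbp * np.sin(2 * beta_m) * de
    return np.array([x_u, y_u]), np.array([[R11, R12], [R12, R22]])

# Example: 10 m range, 30 deg bearing, 5 deg elevation (values illustrative)
pos, R = unbiased_conversion(10.0, np.radians(30), np.radians(5),
                             sigma_r=0.1, sigma_b=np.radians(2), sigma_e=np.radians(2))
```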

D. Distributed Data Fusion Architecture

The information state $\hat{\mathbf{y}}$ and the information matrix $\mathbf{Y}$ associated with a state estimate $\hat{\mathbf{x}}$ and its covariance $\mathbf{P}$ at time instant $k$ are given by [8]
$$\hat{\mathbf{y}}(k) = \mathbf{P}^{-1}(k)\hat{\mathbf{x}}(k), \qquad \mathbf{Y}(k) = \mathbf{P}^{-1}(k).$$
In [8], it is shown that, by means of sufficient statistics, an observation $\boldsymbol{\varphi}$ contributes $\mathbf{i}(k)$ to the information state $\hat{\mathbf{y}}$ and $\mathbf{I}(k)$ to the information matrix $\mathbf{Y}$, where
$$\mathbf{i}(k) = \mathbf{H}^T\mathbf{R}^{-1}(k)\boldsymbol{\varphi}(k), \qquad \mathbf{I}(k) = \mathbf{H}^T\mathbf{R}^{-1}(k)\mathbf{H}, \qquad (3)$$
and $\mathbf{H}$ is the observation matrix of the sensor. Instead of sharing the measurements related to the target state among the collaborating sensors, sharing the information form of the observations results in a simple additive fusion framework that can be run on each of the tiny sensing devices. The distributed data fusion equations are [5]
$$\hat{\mathbf{y}}(k \mid k) = \hat{\mathbf{y}}(k \mid k-1) + \sum_{i=1}^{N}\mathbf{i}_i(k), \qquad (4)$$
$$\mathbf{Y}(k \mid k) = \mathbf{Y}(k \mid k-1) + \sum_{i=1}^{N}\mathbf{I}_i(k), \qquad (5)$$
where $N$ is the total number of sensors participating in the fusion process and $\hat{\mathbf{y}}(k \mid k-1)$ represents the information state estimate at time $k$ given the observations up to and including time $k-1$. Before the data at time $k$ are collected, with the observations up to time instant $k-1$, the predicted information state and information matrix at time $k$ can be calculated from [5]
$$\hat{\mathbf{y}}(k \mid k-1) = \mathbf{Y}(k \mid k-1)\mathbf{F}\mathbf{Y}^{-1}(k-1 \mid k-1)\hat{\mathbf{y}}(k-1 \mid k-1), \qquad (6)$$
$$\mathbf{Y}(k \mid k-1) = \left[\mathbf{F}\mathbf{Y}^{-1}(k-1 \mid k-1)\mathbf{F}^T + \mathbf{Q}\right]^{-1}, \qquad (7)$$
where $\mathbf{Q}$ is the state transition covariance. The state estimate of the target at any time instant $k$ can be found from [5]
$$\hat{\mathbf{x}}(k \mid k) = \mathbf{Y}^{-1}(k \mid k)\hat{\mathbf{y}}(k \mid k). \qquad (8)$$

E. Collaborative Target Tracking Algorithm

The algorithm employed by a sensor for tracking targets in a collaborative manner within the distributed data fusion framework is depicted in Algorithm 1. The information state and information matrix contributions associated with the current observation are defined by (3). The predicted information state and information matrix are computed by (6) and (7), respectively. The information state and information matrix predictions are updated with the local contribution values by
$$\hat{\mathbf{y}}(k \mid k) = \hat{\mathbf{y}}(k \mid k-1) + \mathbf{i}(k), \qquad \mathbf{Y}(k \mid k) = \mathbf{Y}(k \mid k-1) + \mathbf{I}(k).$$
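Before the full pseudocode in Algorithm 1, the prediction, local update, and fusion steps of (3)–(8) are illustrated below in a short Python sketch; the constant-velocity form of F and all numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],      # constant-velocity transition (assumed form)
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
Q = 0.01 * np.eye(4)               # process noise covariance (illustrative)
H = np.array([[1, 0, 0, 0],        # only the position (x, y) is observed
              [0, 1, 0, 0]], dtype=float)

def predict(y_prev, Y_prev):
    """Eqs. (6)-(7): propagate the information state and matrix one step."""
    Y_pred = np.linalg.inv(F @ np.linalg.inv(Y_prev) @ F.T + Q)
    y_pred = Y_pred @ F @ np.linalg.inv(Y_prev) @ y_prev
    return y_pred, Y_pred

def local_contribution(z, R):
    """Eq. (3): information contribution of one converted measurement z with covariance R."""
    Rinv = np.linalg.inv(R)
    return H.T @ Rinv @ z, H.T @ Rinv @ H

def fuse(y_pred, Y_pred, contributions):
    """Eqs. (4)-(5): additive fusion of the contributions received from N sensors."""
    y, Y = y_pred.copy(), Y_pred.copy()
    for i_k, I_k in contributions:
        y, Y = y + i_k, Y + I_k
    return y, Y

# One illustrative step with two sensors reporting the same target position.
y, Y = np.zeros(4), np.eye(4)                       # prior in information form
y, Y = predict(y, Y)
contribs = [local_contribution(np.array([10.0, 5.0]), 0.5 * np.eye(2)),
            local_contribution(np.array([10.2, 4.9]), 0.5 * np.eye(2))]
y, Y = fuse(y, Y, contribs)
x_hat = np.linalg.inv(Y) @ y                        # Eq. (8): recover the state estimate
```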

Algorithm 1 Pseudocode of the collaborative target tracking algorithm run on a sensor. Modified for the 3D experiments from [5].
Require: Observations rm(k), θm(k) from the target at time instant k, information states ii(k) and information matrices Ii(k) at time instant k from the neighbouring sensors i, system variables F, Q, H, R.
Ensure: Target position estimate x̂(k | k) at time instant k.
1: ϕ(k) ⇐ f(rm, θm)
2: i(k) ⇐ H^T R^{-1}(k) ϕ(k)
3: I(k) ⇐ H^T R^{-1}(k) H
4: ŷ(k | k−1) ⇐ Y(k | k−1) F Y^{-1}(k−1 | k−1) ŷ(k−1 | k−1)
5: Y(k | k−1) ⇐ [F Y^{-1}(k−1 | k−1) F^T + Q]^{-1}
6: ŷ(k | k) ⇐ ŷ(k | k−1) + i(k)
7: Y(k | k) ⇐ Y(k | k−1) + I(k)
8: ŷ(k | k) = ŷ(k | k−1) + Σ_{i=1}^{N} ii(k)
9: Y(k | k) = Y(k | k−1) + Σ_{i=1}^{N} Ii(k)
10: x̂(k | k) = Y^{-1}(k | k) ŷ(k | k)

III. TERRAIN MODELING

A. Terrain Generation

The terrain is a part of our simulation environment. Our terrains are generated in the Geofrac environment in heightmap format [9]. The resulting heightmaps are then exported to the Terragen terrain format [10] for visual purposes, e.g., rendering, and for further processing by the sensor network simulator program. Heightmaps are 2D representations of terrains in grid form, consisting of a two-dimensional array of height values. The x and y coordinates are associated with z values, which represent the terrain height. To simulate a random continuous terrain at rendering time, points between grid points are interpolated. During processing of the heightmap, three points with coordinates (x, y, z) define a triangle in 3D space. The LOS calculation between any two points is carried out by applying a ray–triangle intersection algorithm to every triangle in the grid squares between those two points [11].

Terrain generation in Geofrac is achieved with a 2D Perlin Noise scheme. Harmonics, frequency, and amplitude are the three parameters associated with Perlin Noise terrain generation. In the Geofrac environment, harmonics is the number of Perlin Noise functions. The frequency parameter denotes the highest frequency with the lowest vertical scale in the terrain, namely the last noise wave with the smallest amplitude. The harmonics and the frequency together determine the turbulence of the terrain, whereas the amplitude determines the total vertical scale of the heightmap grid points. The effect of the terrain generation parameters on the mean target localization error in WSN target tracking is investigated in Section IV-B.
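As a concrete illustration of the LOS test described above, the sketch below casts a ray between two points and checks it against the two triangles of every grid cell using the Möller–Trumbore intersection test [11]. The heightmap layout, cell size, function names, and brute-force traversal are our assumptions for illustration; the paper restricts the test to the cells along the segment.

```python
import numpy as np

def ray_triangle_intersect(orig, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test: return the ray parameter t of the hit, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

def has_los(heightmap, cell, p_src, p_dst):
    """True if the segment p_src -> p_dst is not blocked by any terrain triangle.
    heightmap[i, j] holds the height at (x = j*cell, y = i*cell)."""
    direction = p_dst - p_src
    rows, cols = heightmap.shape
    for i in range(rows - 1):          # brute force over all grid cells
        for j in range(cols - 1):      # the two triangles of grid square (i, j)
            a = np.array([j * cell,       i * cell,       heightmap[i, j]])
            b = np.array([(j + 1) * cell, i * cell,       heightmap[i, j + 1]])
            c = np.array([j * cell,       (i + 1) * cell, heightmap[i + 1, j]])
            d = np.array([(j + 1) * cell, (i + 1) * cell, heightmap[i + 1, j + 1]])
            for tri in ((a, b, c), (b, d, c)):
                t = ray_triangle_intersect(p_src, direction, *tri)
                if t is not None and t < 1.0:   # hit before reaching the destination
                    return False
    return True

# Example: antenna height 0.1 m at the source, target height 1.5 m at the destination.
hm = np.zeros((5, 5)); hm[2, 2] = 10.0            # a single 10 m peak in the middle
src = np.array([0.0, 0.0, hm[0, 0] + 0.1])
dst = np.array([4.0, 4.0, hm[4, 4] + 1.5])
print(has_los(hm, cell=1.0, p_src=src, p_dst=dst))  # False: the peak blocks the path
```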

B. Simulation Terrains

The parameters associated with our simulation terrains are shown in Table I, and the resulting terrains are shown in Figures 2(a) through 2(d). As seen in Figure 2, Terrain 1 and Terrain 3 are hilly terrains with peaks, although Terrain 1 is of a more mountainous type in the sense that its surface is smoother than that of Terrain 3. The reason for this is that Terrain 3 is generated with a higher frequency for the same harmonics value (see Table I). Terrain 2 is not a hilly (slopy) terrain but has small-sized ledges. Although generated with a higher frequency, Terrain 2 is not hilly because of its small vertical scale; the high frequency that would otherwise produce peaks has instead caused small ledges. Terrain 4 is not ledgy-surfaced but relatively slopy, in the sense that it has more wide-scale slopes than Terrain 2. On the other hand, it has fewer ledges, which makes it smoother. The reason behind this is the smallest frequency and the relatively smaller amplitude associated with Terrain 4.

TABLE I
PERLIN NOISE PARAMETERS

Terrain Number   Harmonics   Frequency   Amplitude
Terrain 1        6           0.15        50
Terrain 2        6           1           5
Terrain 3        6           0.3         20
Terrain 4        7           0.1         20

IV. SIMULATIONS

Our simulations have been carried out with the heightmaps of the terrains explained in Section III-B. The heightmap processing involves LOS calculation for both the communication and sensing tasks of the WSN. For the LOS calculation in the communication scenario, the two z points in the heightmap are increased by the antenna height (0.1 m). In the sensing scenario, the height of the source point (sensor) is increased by 0.1 m, whereas that of the destination point is increased by the assumed target height (e.g., vehicle, human) of 1.50 m.

We run Monte Carlo simulations to examine the performance of the WSN target tracking application based on the distributed data fusion architecture. We examine the mean target localization error in four terrains and with four different scenarios, each having a different number of sensors. Each of these 16 simulations is run in a 200 m × 200 m area. All data points in the graphs represent the means of ten runs in which not only the communication uncertainties but also the random sensor deployment change. A target moves in the area according to the process model described in Section II-B, (2). We utilize the TWR-ISM-002-I radar detection model [12] with pseudorandom signaling, whose typical range is 18 meters. The target is tracked locally using the information filter as described in Section II-D. In collaborative information fusion, if a sensor is able to communicate with other sensors to share its belief about the target state, it broadcasts its information state and information matrix. Communication may not be possible due to the non-LOS condition between the nodes in the terrain. Sensors update their belief about the target state with these received data, as described in Section II-D. In our simulations, we use the shadow-fading radio propagation model for the communication signals, whose parameters are given in Table II [13].

TABLE II
SHADOW FADING COMMUNICATION MODEL PARAMETERS

Carrier frequency                   1.8 GHz
Path loss exponent                  2
TX & RX antenna height              0.1 m
Shadow fading standard deviation    4
Sensor transmission power           -30 dB
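The following Python sketch shows one way to combine the Table II parameters into a link check under log-normal shadowing; the reference distance, reference path loss, receiver sensitivity, and the LOS gating rule are our assumptions for illustration, not values taken from the paper or from [13].

```python
import numpy as np

# Parameters from Table II
PATH_LOSS_EXPONENT = 2.0
SHADOWING_SIGMA_DB = 4.0       # shadow fading standard deviation
TX_POWER_DB = -30.0            # sensor transmission power

# Illustrative assumptions (not specified in the paper)
D0 = 1.0                       # reference distance in meters
PL_D0_DB = 40.0                # path loss at the reference distance
RX_SENSITIVITY_DB = -110.0     # minimum received power for a successful link

rng = np.random.default_rng(0)

def received_power_db(distance_m: float) -> float:
    """Log-distance path loss with log-normal shadowing."""
    path_loss = PL_D0_DB + 10.0 * PATH_LOSS_EXPONENT * np.log10(distance_m / D0)
    shadowing = rng.normal(0.0, SHADOWING_SIGMA_DB)
    return TX_POWER_DB - path_loss + shadowing

def link_up(distance_m: float, line_of_sight: bool) -> bool:
    """A link succeeds only if LOS holds (see the LOS sketch above) and the
    received power exceeds the assumed sensitivity."""
    if not line_of_sight:
        return False
    return received_power_db(distance_m) >= RX_SENSITIVITY_DB

print(link_up(20.0, line_of_sight=True))
```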

A. Analysis of the Mean Target Localization Error for Collaborative Target Tracking

Fig. 1. Mean error comparison for the cooperative information filter.

In Figure 1, the mean target localization error for the cooperative information filter is compared among the various terrain types and the 2D surface. Terrain 2 has the highest mean error, although it has relatively fewer slopes than Terrain 1 and Terrain 3. However, Terrain 2 has many ledges on its surface; although these peaks are small in size, they are enough to block the communication among sensors. Terrain 1 and Terrain 3 have similar mean errors since they both have hilly surfaces; being more hilly, Terrain 1 has a higher mean error. Among all the terrains, Terrain 4 has the lowest mean error, since it has neither a hilly surface nor any ledges. Even in the most dense scenario and in comparison with the smoothest terrain (Terrain 4), there is a huge gap between the mean errors: the mean error in the 2D scenario is 0.34 m, whereas it is 3.46 m in Terrain 4. In the other terrains, the mean error for the 300 sensors scenario is approximately 6.1 m, which makes the tracking application impractical.

Fig. 2. 3D terrains used in the performance evaluation: (a) Terrain 1 (mountainous type), (b) Terrain 2 (ledgy type), (c) Terrain 3 (hilly type), (d) Terrain 4 (smooth type).

Fig. 3. Target localization errors for the cooperative information filter (75 sensors scenario): (a) 2D and 3D (Terrain 1) localization errors of a sensor; (b) 3D (Terrain 1, Terrain 2, Terrain 4) localization errors of a sensor.

The target tracking errors using cooperative information filtering are compared with those of the 2D case for Terrain 1 in Figure 3(a) as an example. The evaluations are done for a single deployment scenario to visualize the real target route and the route estimated by the WSN application. The results are evaluated from the viewpoint of the sensor marked with a star (location (190, 140)). The differences in the target location estimates are clearer in Figure 3(b), which focuses on a more refined segment of the target's route from the viewpoint of the same sensor as in Figure 3(a). As also seen in the mean error graph (Figure 1), the mean errors of Terrain 1 and Terrain 2 are very close to each other, resulting in similar target location estimates. Unlike Terrains 1 and 2, Terrain 4

achieves a small target localization error, and the target position estimate is close to the original target position.

Fig. 4. Effect of the terrain generation parameters (amplitude, frequency, and harmonics, respectively) on the mean error (75 sensors scenario): (a) harmonics = 5 and frequency = 0.3; (b) harmonics = 5 and amplitude = 20; (c) frequency = 0.12 and amplitude = 10.

B. Analysis of Terrain Generation Parameters

The mean error of the target tracking application has been analyzed with respect to the different Perlin Noise terrain generation parameters, namely the amplitude, frequency, and harmonics. As seen in Figure 4(a), the mean error increases as the amplitude of the terrain is increased, at fixed harmonics and frequency values. The behaviour of the mean error comes from

the height of the peaks on the terrain surface. With a small amplitude value, the formations on the surface become ledges and with a high amplitude the formations become peaks. The amplitude factor both affects communication and sensing tasks, resulting in a degradation in the tracking accuracy. Figure 4(b) depicts the effect of frequency on the mean error at fixed harmonics and amplitude values. As the frequency increases, the turbulence of the terrain increases resulting in a higher mean error. When the frequency increases, incremental mean error decreases. This is the result of different scale noise functions in Perlin Noise. Since at fixed amplitude and harmonics, the frequency of the last noise function would form ledges, the sensing task would not be affected as would be effected in lower frequencies. Figure 4(c) denotes the effect of number of harmonics on the mean error at fixed frequency and amplitude values. As the number of harmonics increases, smaller frequencies are used in the terrain generation which means the terrain gets smoother, and eventually the mean error is decreased. It can be seen from three of the figures that the mean error slightly changes, as the parameters are altered. This is due to the fact that an obstacle, regardless of its size (whether it is a peak or a ledge), blocks the communication among nodes. Thus, although a node has sensed a target nearby, it can not communicate the information to other nodes in case of any blockage, which leads to an increase in tracking error. V. C ONCLUSIONS We explore the problem of WSN performance evaluation for a collaborative target tracking application on 3D topographic surfaces. We generate artificial but realistic terrains via a terrain generation tool. We use four terrains each generated with different terrain parameters in our simulations. The goal is to experiment with a collaborative target tracking application in different types of terrains and see the topographic effects on the tracking accuracy, which is a combined task of communication and sensing. Our simulations show that the tracking accuracy in a collaborative target tracking application in a realistic terrain environment highly deviates from that in a 2D assumed environment, which most of the time, makes the tracking application impractical. We also investigate the effect of the mean error on the terrain generation parameters in detail. We conclude that the amplitude and frequency parameters, via

creating peaks and ledges in the terrain, increase the mean target localization error among sensor nodes. On the other hand, large harmonics values generate smoother terrains and thus improve the tracking accuracy. As future work, a deployment/redeployment scheme based on the terrain topography could be developed to minimize the target localization error of the sensor nodes. Specific locations could be picked to enhance the communication and the sensing in that area. Natural formations, e.g. river, ponds, can be incorporated to our terrains, so that sensors randomly deployed onto these regions, do not function. Furthermore, targets mobility can also be modeled according to the topography of the terrain. R EFERENCES [1] S. S. Dhillon and K. Chakrabarty, “Sensor placement for effective coverage and surveillance in distributed sensor networks,” in Proceedings of IEEE Wireless Communications and Networking, March 2003. [2] C. F. Huang, Y. C. Tseng, and L.-C. Lo, “The coverage problem in three-dimensional wireless sensor networks,” in Proceedings of IEEE GLOBECOM, November 2004. [3] S. Velagapalli and H. Fu, “Bracon-less location detection in wireless sensor networks for non-flat terrain,” Future Generation Communication and Networking, vol. 1, pp. 528–534, 2007. [4] L. Zhang, X. Zhou, and Q. Cheng, “Landscape-3d: A robust localization scheme for sensor networks over complex 3d terrains,” in Proceedings of IEEE Conference on Local Computers, November 2006. [5] T. Onel, C. Ersoy, and H. Delic, “An information controlled transmission power adjustment scheme for collaborative target tracking,” in Proceedings of IEEE Symposium on Computers and Communications, June 2006. [6] F. Zhao, J. Shin, and J. Reich, “Information-driven dynamic sensor collaboration,” IEEE Signal Processing Magazine, vol. 19, no. 2, pp. 61–72, 2002. [7] L. Mo, X. Song, Z. Sun, and Y. Bar-Shalom, “Unbiased converted measurements for tracking,” IEEE Transactions on Aerospace and Electronic Systems, vol. 34, no. 3, pp. 1023–1026, 1998. [8] B. Grocholsky, A. Makarenko, and H. Durrant-Whyte, “Informationtheoretic coordinated control of multiple sensor platforms,” in Proceedings of the IEEE International Conference on Robotics & Automation, September 2003. [9] J. W. LaSor. (2005) Geofrac. [Online]. Available: http://www.geofrac2000.com [10] (2005) Terragen photorealistic rendering software. [Online]. Available: http://www.planetside.co.uk/terragen/productmain.shtml [11] T. Moller and B. Trumbore, “Fast, minimum storage ray / triangle intersection,” Journal of Graphics Tools, vol. 2, no. 1, pp. 21–28, 1997. [12] (2006) Advantaca. [Online]. Available: http://www.advantaca.com [13] D. Kotz, C. Newport, R. S. Gray, J. Liu, Y. Yuan, and C. Elliott, “Experimental evaluation of wireless simulation assumptions,” in Proceedings of the ACM/IEEE Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems, October 2004.