Applied Mathematics and Computation 162 (2005) 65–79
www.elsevier.com/locate/amc

Recursive estimators of signals from measurements with stochastic delays using covariance information

S. Nakamori a,*, R. Caballero-Aguila b, A. Hermoso-Carazo c, J. Linares-Perez c

a Department of Technology, Faculty of Education, Kagoshima University, 1-20-6 Kohrimoto, Kagoshima 890-0065, Japan
b Dpto. Estadística e I.O., Universidad de Jaén, Jaén, Spain
c Dpto. Estadística e I.O., Universidad de Granada, Granada, Spain
Abstract

Least-squares linear one-stage prediction, filtering and fixed-point smoothing algorithms for signal estimation using measurements with stochastic delays contaminated by additive white noise are derived. The delay is considered to be random and is modelled by a binary white noise taking the values zero or one; these values indicate whether the measurements arrive on time or are delayed by one sampling time. Recursive estimation algorithms are obtained without requiring the state-space model generating the signal, using only covariance information about the signal and the additive noise in the observations, together with the delay probabilities.
© 2004 Elsevier Inc. All rights reserved.

Keywords: Least-squares estimation; Innovation process; Stochastic systems; Randomly delayed observations; Covariance information
* Corresponding author.
E-mail addresses: [email protected] (S. Nakamori), [email protected] (R. Caballero-Aguila), [email protected] (A. Hermoso-Carazo), [email protected] (J. Linares-Perez).

0096-3003/$ - see front matter © 2004 Elsevier Inc. All rights reserved.
doi:10.1016/j.amc.2003.12.066
1. Introduction

Usually, in state estimation it is assumed that, at each time instant, the measurements contain information about the current state of the plant. However, in some practical situations, such as communication networks, depending on the scheme used, the arrival of measurements can be delayed, so the measurement available to estimate the state may not be up-to-date. Errors due to transmission delays are sometimes treated as measurement errors, and estimation is carried out by supposing that each measurement contains the current information; however, the solution thus obtained might inaccurately represent the actual system. Sometimes the delay is interpreted as a prescribed constant or as a known deterministic function of time (e.g. [1]) but, due to the numerous sources of uncertainty, this assumption is not always valid. One possible way to model uncertainty in the knowledge of the delay is to interpret it as a stochastic process, including its statistical properties in the system model (e.g. [2]). The importance of considering time-varying delays is well understood in engineering applications, and models treating time-delays as random processes are common. Such models may also arise in certain economic applications; for example, the delay between submitting a patent application and obtaining the patent is frequently random.

In recent years, many authors have investigated system models with randomly varying time-delays. Ray et al. [3] present a modification of the conventional minimum variance state estimator to accommodate the effects of randomly varying delays in the arrival of sensor data at the controller terminal. In Yaz [4] and Yaz and Ray [5], the state estimation problem for a model involving randomly varying bounded sensor delays is treated by reformulating it as an estimation problem in systems with stochastic parameters. In Yaz et al. [6], state estimation in linear discrete-time state-space models with stochastic parameters is treated using linear matrix inequalities, and the results are applied to the problem of state estimation with a random sensor delay. Su and Lu [7] design an extended Kalman filtering algorithm which provides optimal estimates of interconnected network states for systems in which some or all measurements are delayed. More recently, Matveev and Savkin [8] propose a recursive minimum variance state estimator for linear discrete-time partially observed systems perturbed by white noises, when the observations are transmitted via communication channels with random transmission times and the various measurement signals may incur independent delays.

Besides the state estimation problem from delayed measurements, systems with random delays are also considered in some control applications. Nilsson et al. [9] discuss modelling and analysis of systems subject to random time-delays in the communication network and present a method for different control schemes. Kolmanovsky and Maizenberg [10] consider a finite horizon
optimal control problem for a class of linear systems with a randomly varying time-delay, affecting only the state variable, which is modelled by a Markov process with a finite number of states. In Orlov et al. [11], an identifiability analysis is developed for linear time-delay systems with delayed states, control input and measurement output. Also, mean square stochastic stability criteria for some kinds of discrete and continuous systems with stochastic delays have been investigated in [12,13].

In this paper, using covariance information, we consider the estimation problem of a signal from delayed measurements which are perturbed by additive white noise. It is assumed that the measurements either arrive on time or are delayed by one sampling time, and the delays are modelled by independent Bernoulli random variables. Special cases of such models have been considered, for example, in [14–16] in an uncertain measurement context where, at some samples, the sensor data is reduced to the zero-mean noise associated with the measurements. However, this is not realistic for some applications, because the previous measurements may contain significant information about the signal and therefore should be used in the absence of any new sensor data arrival at a given sample. In contrast with the analysis of the estimation problem performed in [4–7], our study is not based on a state-space model, but only on the knowledge of the covariance functions of the signal and noise, as well as the parameters of the Bernoulli variables modelling the delays. Recursive algorithms for the least-squares linear one-stage predictor, filter and fixed-point smoother of the signal are obtained from an innovation approach.
2. Delayed observation model

We consider a delayed observation model specified by
$$\tilde{y}_k = z_k + v_k, \quad k \ge 0,$$
$$y_k = (1 - \gamma_k)\tilde{y}_k + \gamma_k \tilde{y}_{k-1}, \quad k \ge 1.$$

The $n \times 1$ signal process $\{z_k;\, k \ge 0\}$ has zero mean and its autocovariance function is expressed in a semi-degenerate kernel form, that is,
$$K_z(k,s) = E[z_k z_s^T] = \begin{cases} A_k B_s^T, & 0 \le s \le k,\\ B_k A_s^T, & 0 \le k \le s,\end{cases}$$
where $A$ and $B$ are known $n \times M$ matrix functions.

The noise process $\{v_k;\, k \ge 0\}$ is a zero-mean white sequence with known autocovariance function $E[v_k v_s^T] = R_k \delta_K(k-s)$, $\delta_K$ being the Kronecker delta function.
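As a computational aside (not part of the original formulation), the semi-degenerate kernel structure can be encoded directly. The minimal Python sketch below uses, for concreteness, the scalar kernel functions that appear later in the simulation example of Section 4; the function names are ours.

```python
# Scalar illustration of a semi-degenerate kernel, borrowing the functions of
# Section 4: A_k = 1.025641 * 0.95**k, B_k = 0.95**(-k).
def A(k):
    return 1.025641 * 0.95 ** k

def B(k):
    return 0.95 ** (-k)

def Kz(k, s):
    """Signal autocovariance: A_k * B_s for s <= k, and B_k * A_s for k <= s."""
    return A(k) * B(s) if s <= k else B(k) * A(s)

# Sanity check against the closed form 1.025641 * 0.95**|k - s| of Section 4.
assert abs(Kz(7, 3) - 1.025641 * 0.95 ** 4) < 1e-12
assert abs(Kz(3, 7) - 1.025641 * 0.95 ** 4) < 1e-12
```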
The noise $\{\gamma_k;\, k \ge 0\}$ is a sequence of independent Bernoulli random variables with $P[\gamma_k = 1] = p_k$. If $\gamma_k = 1$, then $y_k = \tilde{y}_{k-1}$ and the measurement is delayed by one sampling period; otherwise, if $\gamma_k = 0$, then $y_k = \tilde{y}_k$, which means that the measurement is up-to-date. So the value $p_k$ represents the probability of a delay in the measurement $y_k$. Usually, in communication network applications, the noise $\{\gamma_k;\, k \ge 0\}$ represents the random delay from sensor to controller, and the assumption of a one-step sensor delay is based on the reasonable supposition that the induced data latency from the sensor to the controller does not exceed the sampling period [5].

We assume that $\{\gamma_k;\, k \ge 0\}$, $\{z_k;\, k \ge 0\}$ and $\{v_k;\, k \ge 0\}$ are mutually independent.

Let us denote
$$Z_k = \begin{pmatrix} z_k \\ z_{k-1} \end{pmatrix}, \quad V_k = \begin{pmatrix} v_k \\ v_{k-1} \end{pmatrix}, \quad H_k = [(1-\gamma_k) I_n,\; \gamma_k I_n],$$
$I_n$ being the $n \times n$ identity matrix. Then, a compact representation of the measurement system is given by
$$y_k = H_k Z_k + H_k V_k, \quad k \ge 1.$$
Let us note that the process $\{V_k;\, k \ge 1\}$ is not a white noise; specifically, we have
$$E[V_k V_k^T] = \begin{pmatrix} R_k & 0 \\ 0 & R_{k-1} \end{pmatrix}, \qquad E[V_k V_{k-1}^T] = \begin{pmatrix} 0 & 0 \\ R_{k-1} & 0 \end{pmatrix},$$
and $E[V_k V_s^T] = 0$ for $s \ne k,\, k-1$.

In this paper, we treat the least-squares (LS) linear estimation problem of the signal, $z_k$, based on the randomly delayed observations up to time $L$, $\{y_1, \ldots, y_L\}$. More specifically, our aim is to obtain the one-stage predictor ($L = k-1$), the filter ($L = k$) and the fixed-point smoother ($L > k$) from recursive algorithms. For this purpose, we will use an innovation approach; the advantage of considering this approach to address the LS estimation problem comes from the fact that the innovations constitute a white process. If $\hat{y}_{k,k-1}$ denotes the LS linear estimator of $y_k$ based on the observations $\{y_1, \ldots, y_{k-1}\}$, the innovation $\nu_k = y_k - \hat{y}_{k,k-1}$ represents the new information provided by $y_k$ after its estimation from the above observations. It is known [17] that the LS linear estimator of $z_k$ based on $\{y_1, \ldots, y_L\}$, which is denoted by $\hat{z}_{k,L}$, is equal to the LS linear estimator based on the innovations $\{\nu_1, \ldots, \nu_L\}$; then, by denoting $S_{k,i} = E[z_k \nu_i^T]$ and $P_i = E[\nu_i \nu_i^T]$, the Orthogonal Projection Lemma (OPL) leads to
$$\hat{z}_{k,L} = \sum_{i=1}^{L} S_{k,i} P_i^{-1} \nu_i. \tag{1}$$
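Before turning to the estimation algorithms, the observation mechanism itself is easy to simulate. The sketch below is ours, not the authors' code; the AR(1) signal is borrowed from the example of Section 4 purely for illustration, and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100       # number of time steps (illustrative)
p = 0.2       # delay probability P[gamma_k = 1] (illustrative value)
R = 0.9       # observation-noise variance Var[v_k]

# Scalar signal; an AR(1) sequence is used here purely for illustration
# (it anticipates the simulation example of Section 4).
z = np.zeros(N)
for k in range(1, N):
    z[k] = 0.95 * z[k - 1] + rng.normal(scale=np.sqrt(0.1))

v = rng.normal(scale=np.sqrt(R), size=N)   # additive white observation noise
y_tilde = z + v                            # undelayed measurements y~_k, k >= 0

gamma = rng.binomial(1, p, size=N)         # Bernoulli delay indicators gamma_k
y = np.empty(N)
y[0] = y_tilde[0]                          # y_k is only defined for k >= 1
for k in range(1, N):
    # y_k = (1 - gamma_k) * y~_k + gamma_k * y~_{k-1}
    y[k] = (1 - gamma[k]) * y_tilde[k] + gamma[k] * y_tilde[k - 1]
```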
In the next section, we will derive an explicit formula for the innovations and the recursive algorithms for obtaining the one-stage prediction, filtering and fixed-point smoothing estimators.
3. Recursive algorithms for signal estimation

When the observations of the signal are not delayed, all the information prior to time $k$ which is required to estimate $y_k$, that is, to obtain the one-stage predictor $\hat{y}_{k,k-1}$, is contained in $\hat{z}_{k,k-1}$. However, due to the possible delay in the measurements, this is not true for the problem at hand. Therefore, in order to obtain the current innovation $\nu_k$, we first need to find the one-stage predictor
$$\hat{y}_{k,k-1} = \sum_{i=1}^{k-1} E[y_k \nu_i^T] P_i^{-1} \nu_i, \quad k \ge 2. \tag{2}$$

Taking into account the model hypotheses and denoting $\overline{H}_k = E[H_k] = [(1-p_k)I_n,\; p_k I_n]$, we have
$$E[y_k \nu_i^T] = \overline{H}_k E[Z_k \nu_i^T], \quad i \le k-2,$$
and
$$E[y_k \nu_{k-1}^T] = \overline{H}_k E[Z_k \nu_{k-1}^T] + \overline{H}_k E[V_k V_{k-1}^T] \overline{H}_{k-1}^T = \overline{H}_k E[Z_k \nu_{k-1}^T] + p_k (1-p_{k-1}) R_{k-1}.$$

Substituting the above expressions in (2), we obtain
$$\hat{y}_{k,k-1} = \overline{H}_k \sum_{i=1}^{k-1} E[Z_k \nu_i^T] P_i^{-1} \nu_i + p_k (1-p_{k-1}) R_{k-1} P_{k-1}^{-1} \nu_{k-1}$$
and, noting that
$$\widehat{Z}_{k,k-1} = \begin{pmatrix} \hat{z}_{k,k-1} \\ \hat{z}_{k-1,k-1} \end{pmatrix} = \sum_{i=1}^{k-1} E[Z_k \nu_i^T] P_i^{-1} \nu_i, \tag{3}$$
we conclude that
$$\hat{y}_{k,k-1} = \overline{H}_k \widehat{Z}_{k,k-1} + p_k (1-p_{k-1}) R_{k-1} P_{k-1}^{-1} \nu_{k-1}. \tag{4}$$

Therefore, the innovation $\nu_k$ is obtained as a linear combination of the new observation $y_k$, the predictor $\hat{z}_{k,k-1}$, the filter $\hat{z}_{k-1,k-1}$ and the previous innovation $\nu_{k-1}$; specifically,
$$\nu_k = y_k - (1-p_k)\hat{z}_{k,k-1} - p_k \hat{z}_{k-1,k-1} - p_k (1-p_{k-1}) R_{k-1} P_{k-1}^{-1} \nu_{k-1}, \quad k \ge 2; \qquad \nu_1 = y_1. \tag{5}$$

Hence, in order to determine $\nu_k$, we need to obtain the linear one-stage predictor and the filter of the signal. In Theorem 1 we propose recursive algorithms for the prediction and filtering estimates.

Theorem 1. If we consider the delayed observation model given in Section 2, the one-stage predictor and filter of the signal $z_k$, $\hat{z}_{k,k-1}$ and $\hat{z}_{k,k}$, respectively, are obtained as
$$\hat{z}_{k,k-1} = A_k O_{k-1}, \tag{6}$$
$$\hat{z}_{k,k} = A_k O_k, \tag{7}$$
where the vectors $O_k$ are recursively calculated from
$$O_k = O_{k-1} + J_k P_k^{-1} \nu_k, \quad O_0 = 0, \tag{8}$$
with
$$J_k = (1-p_k)[B_k^T - r_{k-1} A_k^T] + p_k [B_{k-1}^T - r_{k-1} A_{k-1}^T] - p_k (1-p_{k-1}) J_{k-1} P_{k-1}^{-1} R_{k-1}, \quad J_1 = (1-p_1) B_1^T + p_1 B_0^T. \tag{9}$$

The innovation satisfies
$$\nu_k = y_k - [(1-p_k) A_k + p_k A_{k-1}] O_{k-1} - p_k (1-p_{k-1}) R_{k-1} P_{k-1}^{-1} \nu_{k-1}, \quad \nu_1 = y_1, \tag{10}$$
and $P_k$, the covariance of the innovation $\nu_k$, is given by
$$\begin{aligned} P_k ={}& (1-p_k)(A_k B_k^T + R_k) + p_k (A_{k-1} B_{k-1}^T + R_{k-1}) \\ &- [(1-p_k) A_k + p_k A_{k-1}]\, r_{k-1}\, [(1-p_k) A_k + p_k A_{k-1}]^T \\ &- p_k (1-p_{k-1}) [(1-p_k) A_k + p_k A_{k-1}] J_{k-1} P_{k-1}^{-1} R_{k-1} \\ &- p_k (1-p_{k-1}) R_{k-1} P_{k-1}^{-1} J_{k-1}^T [(1-p_k) A_k + p_k A_{k-1}]^T \\ &- p_k^2 (1-p_{k-1})^2 R_{k-1} P_{k-1}^{-1} R_{k-1}, \end{aligned} \tag{11}$$
$$P_1 = (1-p_1)(A_1 B_1^T + R_1) + p_1 (A_0 B_0^T + R_0).$$

The covariance matrices $r_k$ of the vectors $O_k$ are recursively calculated from
$$r_k = r_{k-1} + J_k P_k^{-1} J_k^T, \quad r_0 = 0. \tag{12}$$
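To make the recursions concrete, the following Python sketch implements Theorem 1 for the scalar case ($n = 1$). It is our own illustration under the stated assumptions, not the authors' MATLAB program; the kernel functions $A$, $B$, the noise variance $R$ and the delay probabilities $p$ are supplied by the user as callables, and the observations are assumed to start at $k = 1$.

```python
import numpy as np

def predict_and_filter(y, A, B, R, p):
    """Scalar sketch of the Theorem 1 recursions (illustrative, not the authors' code).

    y    : observations y_1, ..., y_N stored in y[1], ..., y[N] (y[0] is unused)
    A, B : callables giving the kernel functions A_k, B_k of the signal covariance
    R    : callable giving the observation-noise variance R_k
    p    : callable giving the delay probability p_k
    Returns the one-stage predictions z_{k,k-1} and the filtered estimates z_{k,k}.
    """
    N = len(y) - 1
    zpred = np.zeros(N + 1)            # z_{k,k-1}
    zfilt = np.zeros(N + 1)            # z_{k,k}
    O, r = 0.0, 0.0                    # O_0 = 0, r_0 = 0
    J_prev = P_prev = nu_prev = None

    for k in range(1, N + 1):
        G = (1 - p(k)) * A(k) + p(k) * A(k - 1)
        if k == 1:
            nu = y[1]                                              # (10), initial value
            J = (1 - p(1)) * B(1) + p(1) * B(0)                    # (9), initial value
            P = (1 - p(1)) * (A(1) * B(1) + R(1)) \
                + p(1) * (A(0) * B(0) + R(0))                      # (11), initial value
        else:
            c = p(k) * (1 - p(k - 1))
            nu = y[k] - G * O - c * R(k - 1) * nu_prev / P_prev    # (10)
            J = (1 - p(k)) * (B(k) - r * A(k)) \
                + p(k) * (B(k - 1) - r * A(k - 1)) \
                - c * J_prev * R(k - 1) / P_prev                   # (9)
            P = (1 - p(k)) * (A(k) * B(k) + R(k)) \
                + p(k) * (A(k - 1) * B(k - 1) + R(k - 1)) \
                - G * r * G \
                - 2 * c * G * J_prev * R(k - 1) / P_prev \
                - c ** 2 * R(k - 1) ** 2 / P_prev                  # (11)
        zpred[k] = A(k) * O            # (6): prediction uses O_{k-1}
        O = O + J * nu / P             # (8)
        r = r + J ** 2 / P             # (12)
        zfilt[k] = A(k) * O            # (7): filter uses O_k
        J_prev, P_prev, nu_prev = J, P, nu

    return zpred, zfilt
```

With delayed observations generated as in Section 2 and the scalar functions of Section 4 (A(k) = 1.025641·0.95^k, B(k) = 0.95^{-k}, R(k) = 0.9, p(k) = p), this routine should produce prediction and filtering estimates of the kind discussed in the simulation section.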
Proof. From (1), the predictor and filter are given by
$$\hat{z}_{k,k-1} = \sum_{i=1}^{k-1} S_{k,i} P_i^{-1} \nu_i, \qquad \hat{z}_{k,k} = \sum_{i=1}^{k} S_{k,i} P_i^{-1} \nu_i.$$
Hence, we only need to calculate $S_{k,i} = E[z_k \nu_i^T]$ for $i \le k$. Using expression (4) for $\hat{y}_{i,i-1}$, it is clear that
$$S_{k,i} = E[z_k y_i^T] - E[z_k \widehat{Z}_{i,i-1}^T] \overline{H}_i^T - p_i (1-p_{i-1}) E[z_k \nu_{i-1}^T] P_{i-1}^{-1} R_{i-1}, \quad 2 \le i \le k,$$
and, taking into account the hypotheses on the model for $E[z_k y_i^T]$ and expression (3) for $E[z_k \widehat{Z}_{i,i-1}^T]$, we have
$$S_{k,i} = A_k \{(1-p_i) B_i^T + p_i B_{i-1}^T\} - \sum_{j=1}^{i-1} S_{k,j} P_j^{-1} E[\nu_j Z_i^T] \overline{H}_i^T - p_i (1-p_{i-1}) S_{k,i-1} P_{i-1}^{-1} R_{i-1}, \quad 2 \le i \le k,$$
$$S_{k,1} = A_k \{(1-p_1) B_1^T + p_1 B_0^T\}.$$
This expression for $S_{k,i}$ guarantees that
$$S_{k,i} = A_k J_i, \quad 1 \le i \le k, \tag{13}$$
where $J$ is a function satisfying
$$J_i = (1-p_i) B_i^T + p_i B_{i-1}^T - \sum_{j=1}^{i-1} J_j P_j^{-1} E[\nu_j Z_i^T] \overline{H}_i^T - p_i (1-p_{i-1}) J_{i-1} P_{i-1}^{-1} R_{i-1}, \quad 2 \le i \le k; \qquad J_1 = (1-p_1) B_1^T + p_1 B_0^T. \tag{14}$$
So, if we denote
$$O_k = \sum_{i=1}^{k} J_i P_i^{-1} \nu_i, \tag{15}$$
it is clear that the one-stage predictor, $\hat{z}_{k,k-1}$, and the filter, $\hat{z}_{k,k}$, satisfy (6) and (7), respectively. Also, from (15) the recursive relation (8) for the vectors $O_k$ is immediate. Substituting (6) and (7) in (5), we have that the innovation is given by (10).

Next, we proceed to establish expression (9) for $J_k$. By putting $i = k$ in (14) and taking into account (13), we obtain
$$J_k = (1-p_k) B_k^T + p_k B_{k-1}^T - (1-p_k) \sum_{j=1}^{k-1} J_j P_j^{-1} J_j^T A_k^T - p_k \sum_{j=1}^{k-1} J_j P_j^{-1} J_j^T A_{k-1}^T - p_k (1-p_{k-1}) J_{k-1} P_{k-1}^{-1} R_{k-1}.$$
Then, by denoting
$$r_k = E[O_k O_k^T] = \sum_{i=1}^{k} J_i P_i^{-1} J_i^T, \quad r_0 = 0, \tag{16}$$
expression (9) for $J_k$ is obtained. From (16), the functions $r$ satisfy the recursive relation (12).

Finally, we obtain expression (11) for the covariance
$$P_k = E[y_k y_k^T] - E[\hat{y}_{k,k-1} \hat{y}_{k,k-1}^T]$$
of the innovation $\nu_k$. From expressions (4), (6) and (7) for $\hat{y}_{k,k-1}$, $\hat{z}_{k,k-1}$ and $\hat{z}_{k-1,k-1}$, respectively, the hypotheses on the model together with (15) lead to
$$\begin{aligned} P_k ={}& (1-p_k)(A_k B_k^T + R_k) + p_k (A_{k-1} B_{k-1}^T + R_{k-1}) \\ &- [(1-p_k) A_k + p_k A_{k-1}]\, r_{k-1}\, [(1-p_k) A_k + p_k A_{k-1}]^T \\ &- p_k (1-p_{k-1}) [(1-p_k) A_k + p_k A_{k-1}] E[O_{k-1} \nu_{k-1}^T] P_{k-1}^{-1} R_{k-1} \\ &- p_k (1-p_{k-1}) R_{k-1} P_{k-1}^{-1} E[\nu_{k-1} O_{k-1}^T] [(1-p_k) A_k + p_k A_{k-1}]^T \\ &- p_k^2 (1-p_{k-1})^2 R_{k-1} P_{k-1}^{-1} R_{k-1}. \end{aligned}$$
Using the recursive relation (8) for $O_{k-1}$, since $O_{k-2}$ is orthogonal to $\nu_{k-1}$, we have that $E[O_{k-1} \nu_{k-1}^T] = J_{k-1}$ and, consequently, expression (11) is deduced. □

In the following theorem we present the recursive formulas for the fixed-point smoothers $\hat{z}_{k,L}$, $L > k$.

Theorem 2. Let us consider the delayed observation model given in Section 2. Then, the fixed-point smoothing estimates of the signal $z_k$ verify
$$\hat{z}_{k,L} = \hat{z}_{k,L-1} + S_{k,L} P_L^{-1} \nu_L, \quad L > k, \tag{17}$$
where the innovation $\nu_L$ is given in Theorem 1. The matrices $S_{k,L}$ are calculated from
$$S_{k,L} = [B_k - E_{k,L-1}][(1-p_L) A_L + p_L A_{L-1}]^T - p_L (1-p_{L-1}) S_{k,L-1} P_{L-1}^{-1} R_{L-1}, \quad L > k; \qquad S_{k,k} = A_k J_k, \tag{18}$$
where $E_{k,L}$ satisfy
$$E_{k,L} = E_{k,L-1} + S_{k,L} P_L^{-1} J_L^T, \quad L > k; \qquad E_{k,k} = A_k r_k, \tag{19}$$
the matrix functions $P$, $J$ and $r$ being given in Theorem 1.
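As with Theorem 1, a scalar sketch may help fix ideas. The routine below (our own naming, not the authors' code) applies the recursions (17)–(19), assuming the quantities $\nu_L$, $J_L$, $P_L$, $r_k$ and the filtering estimate $\hat{z}_{k,k}$ have already been produced by the Theorem 1 recursions.

```python
def fixed_point_smoother(k, L_max, z_filt_k, nu, J, P, r, A, B, R, p):
    """Scalar sketch of the fixed-point smoothing recursions (17)-(19).

    k        : fixed time point to be smoothed
    L_max    : time of the last available observation
    z_filt_k : filtering estimate z_{k,k} from the Theorem 1 recursions
    nu, J, P : arrays (indexed from 1) of innovations and of the J_L, P_L values
    r        : array (indexed from 0) of the r_k values, with r[0] = 0
    A, B     : kernel functions of the signal covariance; R, p as in Theorem 1
    Returns the list of smoothers z_{k,k+1}, ..., z_{k,L_max}.
    """
    z = z_filt_k
    S = A(k) * J[k]          # S_{k,k} = A_k J_k, initial condition of (18)
    E = A(k) * r[k]          # E_{k,k} = A_k r_k, initial condition of (19)
    smoothers = []
    for L in range(k + 1, L_max + 1):
        G = (1 - p(L)) * A(L) + p(L) * A(L - 1)
        S = (B(k) - E) * G - p(L) * (1 - p(L - 1)) * S * R(L - 1) / P[L - 1]   # (18)
        z = z + S * nu[L] / P[L]                                               # (17)
        E = E + S * J[L] / P[L]                                                # (19)
        smoothers.append(z)
    return smoothers
```

Only the two scalars S and E need to be stored per fixed point k, so the memory cost of updating a given smoother does not grow with L, which is the appeal of the fixed-point formulation.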
Proof. The recursive relation (17) for the fixed-point smoother, $\hat{z}_{k,L}$, $L > k$, is immediate from the general expression (1) of the linear estimators. Then, we only need to prove (18) for $S_{k,L} = E[z_k \nu_L^T]$ and (19) for $E_{k,L}$. Using (10) for $\nu_L$, and since, for $L > k$, $E[z_k y_L^T] = B_k [(1-p_L) A_L + p_L A_{L-1}]^T$, we obtain
$$S_{k,L} = B_k [(1-p_L) A_L + p_L A_{L-1}]^T - E[z_k O_{L-1}^T][(1-p_L) A_L + p_L A_{L-1}]^T - p_L (1-p_{L-1}) E[z_k \nu_{L-1}^T] P_{L-1}^{-1} R_{L-1}.$$
This expression leads to (18), just by denoting $E_{k,L} = E[z_k O_L^T]$. From (13), the initial condition in (18) is immediate. Finally, the recursive relation (19) is proven from (8); its initial condition is immediate from the expression of the filter and (16), taking into account that, from the OPL, $E[z_k O_k^T] = E[\hat{z}_{k,k} O_k^T]$. □

The performance of the one-stage prediction, filtering and fixed-point smoothing estimates can be measured by the estimation errors $z_k - \hat{z}_{k,L}$, with $L = k-1$, $L = k$ and $L > k$, respectively; more specifically, by the covariance matrices of these errors,
$$P(k,L) = E[\{z_k - \hat{z}_{k,L}\}\{z_k - \hat{z}_{k,L}\}^T].$$
Clearly, since the error $z_k - \hat{z}_{k,L}$ is orthogonal to the estimator $\hat{z}_{k,L}$, it is immediate that
$$P(k,L) = K_z(k,k) - E[\hat{z}_{k,L} \hat{z}_{k,L}^T].$$
Recursive formulas to obtain $P(k,L)$ for $L = k-1$ and $L = k$ are immediately obtained from expressions (6) and (7) for the one-stage predictor and filter, respectively:
$$P(k,k-1) = A_k [B_k^T - r_{k-1} A_k^T], \qquad P(k,k) = A_k [B_k^T - r_k A_k^T].$$
On the other hand, taking into account that $\nu_L$ and $\hat{z}_{k,L-1}$ are orthogonal, expression (17) for $\hat{z}_{k,L}$ in Theorem 2 leads to
$$P(k,L) = P(k,L-1) - S_{k,L} P_L^{-1} S_{k,L}^T, \quad L > k.$$
4. Computer simulation results

This section shows a numerical simulation example to illustrate the application of the recursive algorithms proposed in Theorems 1 and 2. We consider a scalar signal $\{z_k;\, k \ge 0\}$ which is generated by the following first-order autoregressive model
$$z_{k+1} = 0.95\, z_k + w_k,$$
where $\{w_k;\, k \ge 0\}$ is a zero-mean white Gaussian noise with $\mathrm{Var}[w_k] = 0.1$, for all $k$. The autocovariance function of this signal is
$$K_z(k,s) = 1.025641 \times 0.95^{\,k-s}, \quad 0 \le s \le k,$$
and, in accordance with the theoretical study, it is given in a semi-degenerate kernel form, with
$$A_k = 1.025641 \times 0.95^{\,k}, \qquad B_k = 0.95^{\,-k}.$$

The delayed observation model is given by
$$\tilde{y}_k = z_k + v_k, \quad k \ge 0,$$
$$y_k = (1 - \gamma_k)\tilde{y}_k + \gamma_k \tilde{y}_{k-1}, \quad k \ge 1,$$
where $\{v_k;\, k \ge 0\}$ is supposed to be a zero-mean white Gaussian noise with $\mathrm{Var}[v_k] = 0.9$, for all $k$, and $\{\gamma_k;\, k \ge 0\}$ is a sequence of independent Bernoulli random variables with $P[\gamma_k = 1] = p$, for all $k$; that is, we assume that the probability of a delay in the measurement is constant over time. Finally, the mutual independence of the signal, $\{z_k;\, k \ge 0\}$, and the noises, $\{v_k;\, k \ge 0\}$ and $\{\gamma_k;\, k \ge 0\}$, imposed in the theoretical study, is also assumed.

In order to show the effectiveness of the algorithms proposed in this paper, we have written a MATLAB program which simulates the signal value at each iteration and provides the prediction, filtering and fixed-point smoothing estimates, as well as the corresponding error variances.

Firstly, the performance of the proposed linear estimators, measured by the error variances, has been calculated for different values of the probability of delay; specifically, for $p = 0.8$, $0.4$ and $0.2$. The prediction and filtering error variances are displayed in Fig. 1, which shows, as could be expected, that the prediction and filtering error variances are smaller (and, consequently, the performance of the estimators is better) as $p$ decreases. Also, from this figure it can be seen that the estimation accuracy of the filters is superior to that of the corresponding predictors, and even that the filter for $p = 0.8$ is better than the predictor for $p = 0.2$.

Fig. 1. Error variances of the filter and predictor when p = 0.8 [(a'), (a)], p = 0.4 [(b'), (b)] and p = 0.2 [(c'), (c)].

Fig. 2 displays the error variances of the filter $\hat{z}_{k,k}$ and the fixed-point smoothers $\hat{z}_{k,k+1}$ and $\hat{z}_{k,k+2}$, considering $p = 0.2$. From this figure it can be seen that the estimation accuracy of the smoothers is superior to that of the filters and also that the performance of the smoothers improves as the number of available observations increases.

Fig. 2. Error variances of the filter $\hat{z}_{k,k}$ and fixed-point smoothers $\hat{z}_{k,k+1}$ and $\hat{z}_{k,k+2}$ when p = 0.2.

The simulated signal and the delayed observations of it for $p = 0.2$ and $p = 0.8$ are displayed in Figs. 3 and 5, respectively. Figs. 4 and 6 display the signal, the smoothing estimate $\hat{z}_{k,k+2}$, the filtering estimate $\hat{z}_{k,k}$ and the prediction estimate $\hat{z}_{k,k-1}$, for $p = 0.2$ and $p = 0.8$, respectively. These figures show again how, in both cases, the signal evolution is followed more accurately by the smoothing estimates and how the estimates are better when the probability of delay is smaller.
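Since the error variances $P(k,k-1) = A_k[B_k^T - r_{k-1}A_k^T]$ and $P(k,k) = A_k[B_k^T - r_kA_k^T]$ depend only on $A_k$, $B_k$, $R_k$ and the delay probability (not on the simulated data), the qualitative behaviour described for Fig. 1 can be checked with a short self-contained script. The sketch below is ours, written for the scalar model of this section; the numbers it prints are only indicative.

```python
import numpy as np

def error_variances(p, N=30, Rv=0.9):
    """Prediction and filtering error variances for the Section 4 example
    (a scalar sketch of the recursions (9), (11), (12); our own code)."""
    A = lambda k: 1.025641 * 0.95 ** k     # kernel functions of the AR(1) signal
    B = lambda k: 0.95 ** (-k)
    pred_var = np.zeros(N + 1)             # P(k, k-1)
    filt_var = np.zeros(N + 1)             # P(k, k)
    r = 0.0                                # r_0 = 0
    J_prev = P_prev = None
    for k in range(1, N + 1):
        G = (1 - p) * A(k) + p * A(k - 1)
        if k == 1:
            J = (1 - p) * B(1) + p * B(0)
            P = (1 - p) * (A(1) * B(1) + Rv) + p * (A(0) * B(0) + Rv)
        else:
            c = p * (1 - p)                # p_k (1 - p_{k-1}) with constant p
            J = (1 - p) * (B(k) - r * A(k)) + p * (B(k - 1) - r * A(k - 1)) \
                - c * J_prev * Rv / P_prev
            P = (1 - p) * (A(k) * B(k) + Rv) + p * (A(k - 1) * B(k - 1) + Rv) \
                - G * r * G - 2 * c * G * J_prev * Rv / P_prev \
                - c ** 2 * Rv ** 2 / P_prev
        pred_var[k] = A(k) * (B(k) - r * A(k))     # P(k, k-1) uses r_{k-1}
        r = r + J ** 2 / P                         # (12)
        filt_var[k] = A(k) * (B(k) - r * A(k))     # P(k, k) uses r_k
        J_prev, P_prev = J, P
    return pred_var[1:], filt_var[1:]

# As in Fig. 1, both variances should decrease as the delay probability p decreases,
# and the filtering variances should stay below the prediction variances.
for prob in (0.8, 0.4, 0.2):
    pv, fv = error_variances(prob)
    print(f"p = {prob}: prediction variance {pv[-1]:.4f}, filtering variance {fv[-1]:.4f}")
```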
Fig. 3. Signal and observations when p = 0.2.

Fig. 4. Signal $z_k$, predictor $\hat{z}_{k,k-1}$, filter $\hat{z}_{k,k}$ and smoother $\hat{z}_{k,k+2}$ when p = 0.2.
Fig. 5. Signal and observations when p = 0.8.

Fig. 6. Signal $z_k$, predictor $\hat{z}_{k,k-1}$, filter $\hat{z}_{k,k}$ and smoother $\hat{z}_{k,k+2}$ when p = 0.8.

Finally, the filtering estimates of a simulated signal from delayed observations with delay probabilities $p = 0.2$ and $p = 0.8$ are represented together in Fig. 7; the conclusion, in agreement with the previous comments, is that the performance of the filter improves as $p$ decreases.
Fig. 7. Signal and filtering estimates when p = 0.2 and p = 0.8.
5. Conclusions

In this paper, using covariance information, the linear one-stage predictor, filter and fixed-point smoother are derived from randomly delayed measurements. The random delay is modelled by a sequence of independent Bernoulli random variables whose parameters, which represent the probabilities of delay, are known. If a Bernoulli variable takes the value zero, the corresponding measurement is up-to-date; otherwise, if it takes the value one, the measurement is delayed by one sampling period. The proposed estimators do not require knowledge of the state-space model; they use as information the second-order moments of the processes involved and the probability of delay in the observations. Recursive algorithms for the linear estimators of the signal are derived by an innovation approach, using an invariant imbedding method and the Orthogonal Projection Lemma.
Acknowledgements

This work is partially supported by the 'Ministerio de Ciencia y Tecnología' under contract BFM2002-00932.
References

[1] M. Boutayeb, M. Darouach, Observers for discrete-time systems with multiple delays, IEEE Transactions on Automatic Control 46 (2001) 746–750.
[2] J.S. Evans, V. Krishnamurthy, Hidden Markov model state estimation with randomly delayed observations, IEEE Transactions on Signal Processing 47 (1999) 2157–2166.
[3] A. Ray, L.W. Liou, J.H. Shen, State estimation using randomly delayed measurements, Journal of Dynamic Systems, Measurement, and Control 115 (1993) 19–26.
[4] E.E. Yaz, Harmonic estimation with random sensor delay, in: Proceedings of the 36th International Conference on Decision & Control, 1997, pp. 1524–1525.
[5] E. Yaz, A. Ray, Linear unbiased state estimation under randomly varying bounded sensor delay, Applied Mathematics Letters 11 (1998) 27–32.
[6] Y.I. Yaz, E.E. Yaz, M.J. Mohseni, LMI-based state estimation for some discrete-time stochastic models, in: Proceedings of the 1998 IEEE International Conference on Control Applications, 1998, pp. 456–460.
[7] C.L. Su, C.N. Lu, Interconnected network state estimation using randomly delayed measurements, IEEE Transactions on Power Systems 16 (2001) 870–878.
[8] A.S. Matveev, A.V. Savkin, The problem of state estimation via asynchronous communication channels with irregular transmission times, IEEE Transactions on Automatic Control 48 (2003) 670–676.
[9] J. Nilsson, B. Bernhardsson, B. Wittenmark, Stochastic analysis and control of real-time systems with random time delays, Automatica 34 (1998) 57–64.
[10] I.V. Kolmanovsky, T.L. Maizenberg, Optimal control of continuous-time linear systems with a time-varying, random delay, Systems & Control Letters 44 (2001) 119–126.
[11] Y. Orlov, L. Belkoura, J.P. Richard, M. Dambrine, Identifiability analysis of linear time-delay systems, in: Proceedings of the 40th IEEE Conference on Decision and Control, 2001, pp. 4776–4781.
[12] I. Kolmanovsky, T.L. Maizenberg, Stochastic stability of a class of nonlinear systems with randomly varying time-delay, in: Proceedings of the American Control Conference, 2000, pp. 4304–4308.
[13] V.B. Kolmanovskii, T.L. Maizenberg, J.P. Richard, Mean square stability of difference equations with a stochastic delay, Nonlinear Analysis 52 (2003) 795–804.
[14] S. Nakamori, R. Caballero-Aguila, A. Hermoso-Carazo, J. Linares-Perez, Linear recursive discrete-time estimators using covariance information under uncertain observations, Signal Processing 83 (2003) 1553–1559.
[15] S. Nakamori, R. Caballero-Aguila, A. Hermoso-Carazo, J. Linares-Perez, New design of estimators using covariance information with uncertain observations in linear discrete-time systems, Applied Mathematics and Computation 135 (2003) 429–441.
[16] S. Nakamori, R. Caballero-Aguila, A. Hermoso-Carazo, J. Linares-Perez, Linear estimation from uncertain observations with white plus coloured noises using covariance information, Digital Signal Processing 13 (2003) 552–568.
[17] T. Kailath, Lectures on Linear Least-Squares Estimation, Springer-Verlag, New York, 1976.