43rd IEEE Conference on Decision and Control, December 14-17, 2004, Atlantis, Paradise Island, Bahamas
Polynomial Filtering for Stochastic non-Gaussian Descriptor Systems

Alfredo Germani, Costanzo Manes, Pasquale Palumbo
Abstract—Stochastic descriptor systems, also named singular systems, have been widely investigated, and many important results in linear filtering theory have been achieved in the framework of Gaussian processes. Nevertheless, such results can be far from optimal, especially in the case of highly asymmetrical non-Gaussian noises. This paper presents a polynomial solution for filtering singular systems affected by non-Gaussian noises. The performance of polynomial filters can be improved by increasing their degree. Simulation results support the theoretical results.

Index Terms—Descriptor systems, singular systems, Kalman filtering, non-Gaussian noise.
I. INTRODUCTION

Since the first works of Luenberger [1, 2], a growing literature has been concerned with descriptor systems, also named singular systems. This paper is focused on the filtering problem for stochastic descriptor systems
$$J(k+1)x(k+1) = A(k)x(k) + B(k)u(k) + f_k, \qquad y(k) = C(k)x(k) + g_k, \quad (1)$$
where $x(k) \in \mathbb{R}^n$ is the descriptor vector, $u(k) \in \mathbb{R}^p$ is a deterministic input, $y(k) \in \mathbb{R}^q$ is the measured output, and $f_k \in \mathbb{R}^m$ and $g_k \in \mathbb{R}^q$ are independent noise sequences. $J(k), A(k)$ are matrices in $\mathbb{R}^{m\times n}$, $B(k) \in \mathbb{R}^{m\times p}$ and $C(k) \in \mathbb{R}^{q\times n}$. Our main interest is focused on the cases where $J(k)$ is rectangular, with $m < n$, or square and singular, so that the descriptor form (1) cannot be reduced to a standard linear system [3, 4]. Early papers on the filtering problem are due to Dai (see [5, 6]) for the case in which $J(k)$ is square and singular and the noises are Gaussian. Systems with rectangular $J(k)$ ($m < n$) are considered in [7, 8], where only second-order noise moments are used to achieve the descriptor vector estimate through the minimization of a cost function, under a suitable estimability condition (matrix $[J^T\ C^T]^T$ full column rank). In [9] a filter is

This work is supported by MIUR (Italian Ministry for Education and Research), project 2003090328-003, and by CNR (Italian National Research Council). A. Germani and C. Manes are with the Dipartimento di Ingegneria Elettrica, Università degli Studi dell'Aquila, Poggio di Roio, 67040 L'Aquila, Italy,
[email protected], [email protected]. P. Palumbo is with Istituto di Analisi ed Informatica del CNR "A. Ruberti" (National Research Council of Italy), Viale Manzoni 30, 00185 Roma, Italy, [email protected]. The extended version of this work has been published in the IEEE Trans. on Circuits and Systems-I: Regular Papers, Vol. 51, No. 8, August 2004.
0-7803-8682-5/04/$20.00 ©2004 IEEE
proposed for the Gaussian case, following the maximum likelihood approach. A recent contribution [10] further extends the class of estimable descriptor systems. This paper deals with the filtering problem for descriptor systems in the presence of non-Gaussian noises, following the polynomial approach [11, 12]. Preliminary results have been presented in [13], where the polynomial filter of degree one was presented. The paper is organized as follows: in Section II the use of projections in estimation problems is illustrated; in Section III the structure of the descriptor system and its fundamental properties concerning estimability are studied; in Section IV the minimum variance solution of the filtering problem is introduced and the polynomial estimation algorithm is presented. Simulation results are reported in Section V.

II. ESTIMATION AS A PROJECTION

Consider the pair of random variables $(x(k), Y_k)$, where $x(k)$ is the descriptor vector of system (1) and $Y_k$ is the vector of measurements from 0 to $k$:
$$Y_k = [\, y^T(0)\ \ y^T(1)\ \cdots\ y^T(k)\,]^T \in \mathbb{R}^{(k+1)q}. \quad (2)$$
It is well known that the optimal estimate (minimum error variance) of $x(k)$ is given by the conditional mean $\mathrm{E}\{x(k)\,|\,Y_k\}$, and coincides with the projection of $x(k)$ onto the space of all Borel functions of the observations $Y_k$ with finite $L^2$ norm, i.e.
$$\mathrm{E}\{x(k)\,|\,Y_k\} = \Pi\bigl(x(k)\,|\,L^2(Y_k)\bigr). \quad (3)$$
The computation of the conditional mean requires the joint distribution of $(x(k), Y_k)$, and in general does not admit a recursive finite-dimensional implementation. Polynomial filters provide the recursive computation of the estimate with minimum error variance in a subspace of polynomial functions of the measured output. For a given integer $\nu$, consider a sequence of extended measurements defined as
$$Y_\nu(k) = \begin{bmatrix} y(k) \\ y^{[2]}(k) \\ \vdots \\ y^{[\nu]}(k) \end{bmatrix}, \quad (4)$$
where the superscript $[i]$ denotes the Kronecker power, defined for a given matrix $D$ by
$$D^{[0]} = 1, \qquad D^{[i]} = D \otimes D^{[i-1]}, \quad i \ge 1, \quad (5)$$
with $\otimes$ the standard Kronecker product. Consider the vector $Y_k^\nu$ of all the extended measurements from 0 to $k$:
$$Y_k^\nu = [\, Y_\nu^T(0)\ \ Y_\nu^T(1)\ \cdots\ Y_\nu^T(k)\,]^T. \quad (6)$$
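The Kronecker powers in (5) and the extended vectors built from them are straightforward to realize numerically. A minimal sketch (the function names are ours, not from the paper):

```python
import numpy as np

def kron_power(v, i):
    """Kronecker power v^[i]: v^[0] = 1, v^[i] = v (x) v^[i-1]."""
    out = np.ones(1)
    for _ in range(i):
        out = np.kron(v, out)
    return out

def extended_measurement(y, nu):
    """Stack y, y^[2], ..., y^[nu] into the extended vector of (4)."""
    return np.concatenate([kron_power(y, i) for i in range(1, nu + 1)])
```

For $y \in \mathbb{R}^q$ the stacked vector has dimension $q + q^2 + \cdots + q^\nu$; for a square matrix $D$ the same loop, started from the $1\times 1$ identity, realizes $D^{[i]}$.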
Denote by $L(Y_k^\nu)$ the set of all affine functions of $Y_k^\nu$. This paper presents a recursive filter that provides the optimal estimate of the descriptor vector among all functions in $L(Y_k^\nu)$. Note that $L(Y_k^\nu)$ is a subspace of the space of all the $\nu$-degree polynomials of $Y_k$. In [14] a larger class of polynomials is considered: the set of all affine functions of a vector $Y_k^{\Delta,\nu}$ that contains all the components of $Y_k^\nu$ and also some of the products between outputs at different time instants. For brevity, this paper considers only the case $\Delta = 0$, so that $Y_k^{\Delta,\nu} = Y_k^\nu$.

III. SOLVABLE LINEAR SINGULAR SYSTEMS
Consider a descriptor system of the form
$$J(k+1)x(k+1) = A(k)x(k) + F(k)v_k, \quad (7a)$$
$$y(k) = C(k)x(k) + G(k)w_k, \quad (7b)$$
where $v_k \in \mathbb{R}^p$ and $w_k \in \mathbb{R}^r$ can be either deterministic or stochastic sequences.

Definition 1: (Luenberger [1]) A descriptor system of the form (7) is said to be solvable if
$$\forall \{v_k,\ k \ge 0\},\ \exists \{x(k),\ k \ge 0\}:\ J(k+1)x(k+1) = A(k)x(k) + F(k)v_k,\ \forall k \ge 0. \quad (8)$$
Solvable descriptor systems have been defined in [1] for the case $J(k) \in \mathbb{R}^{n\times n}$, with a slightly different notation. For a correct definition of stochastic descriptor systems it is useful to define systems solvable for any initial value of the descriptor vector.

Definition 2: A descriptor system of the form (7) is said to be solvable for any $x(0)$ if
$$\forall x_0,\ \forall \{v_k,\ k \ge 0\},\ \exists \{x(k),\ k \ge 0\}:\ J(k+1)x(k+1) = A(k)x(k) + F(k)v_k,\ \forall k \ge 0,\ \text{with } x(0) = x_0. \quad (9)$$

Definition 3: Consider a descriptor system solvable for any $x(0)$ of the form (7). A regular system described by equations of the type
$$\xi(k+1) = M(k)\xi(k) + N(k)v_k + T(k)y(k+1) + S(k)w_{k+1}, \qquad \xi(0) = \bar{x}, \quad (10)$$
for $k \ge 0$, with $\xi(k) \in \mathbb{R}^n$ and defined by a sequence of matrices
$$M(k),\ N(k),\ T(k),\ S(k), \quad (11)$$
is called a Complete Regular System (CRS) for (1) if $\forall \bar{x}$, $\forall \{v_k,\ k \ge 0\}$, $\forall \{w_k,\ k \ge 0\}$, it results, $\forall k \ge 0$:
$$\text{i)}\ \ J(k+1)\xi(k+1) = A(k)\xi(k) + F(k)v_k, \qquad \text{ii)}\ \ y(k) = C(k)\xi(k) + G(k)w_k. \quad (12)$$

The class of singular systems considered in this paper is characterized by the following definition, proposed in [8] for the case of time-invariant systems with $v_k = w_k = 0$.

Definition 4: Let (7) be a singular linear system solvable for any $x(0)$. Such a system is said to be estimable from the measurements if the evolution of $x(k)$ is univocally determined by the output sequence $\{y(k),\ k \ge 0\}$.

The sequence of matrices
$$H(k) = \begin{bmatrix} J(k+1) \\ C(k+1) \end{bmatrix}, \qquad k \ge 0, \quad (13)$$
plays a central role in the estimability from the measurements of descriptor systems.

Theorem 5: A singular causally solvable linear system (7) is estimable from the measurements if and only if the matrices $H(k)$ have full column rank for all $k \ge 0$ ($\mathrm{rank}\{H(k)\} = n$, $\forall k \ge 0$).

Theorem 5, proved in [8] for time-invariant systems, can be cast in the framework of Complete Regular Systems as follows:

Proposition 6: For each CRS (10), associated to a linear singular system (1), the evolution of $\xi(k)$ is invariant with respect to each sequence of matrices (11) if and only if the system is estimable from the measurements. In this case $\xi(k)$ is equal to the unique evolution of the descriptor vector.

Proof: Estimability from the measurements guarantees that there is only one sequence $\{\xi(k)\}$ compatible with the singular recursive equation i) and the measurement equation ii) of (12), and it necessarily coincides with the exact evolution of the singular system. □

Theorem 7: [14] The class of CRS's associated to a linear singular system, solvable for any initial condition and estimable from the measurements, is given by the following sequence of matrices:
$$M(k) = H^+(k)\begin{bmatrix} A(k) \\ O_{q\times n} \end{bmatrix}, \quad N(k) = H^+(k)\begin{bmatrix} F(k) \\ O_{q\times p} \end{bmatrix}, \quad T(k) = H^+(k)\begin{bmatrix} O_{m\times q} \\ I_q \end{bmatrix}, \quad S(k) = H^+(k)\begin{bmatrix} O_{m\times r} \\ -G(k+1) \end{bmatrix}, \quad (14)$$
where $H^+(k)$ is any sequence of left-inverses of $H(k)$ (i.e. $H^+(k)H(k) = I_n$).

IV. POLYNOMIAL FILTERING

Consider the singular system (1), assumed solvable for any initial $x(0)$. Assume that the state and measurement noise sequences $f_k$ and $g_k$ are independent and white, with finite moments of order $i = 1, \ldots, 2\nu$:
$$\mathrm{E}\bigl[f_k^{[i]}\bigr] = \zeta_f^i(k) \in \mathbb{R}^{m^i}, \qquad \mathrm{E}\bigl[g_k^{[i]}\bigr] = \zeta_g^i(k) \in \mathbb{R}^{q^i}.$$
Assume also that $x_0$ is independent of the noises and
$$\mathrm{E}[x_0] = \bar{x}, \qquad \mathrm{E}\bigl[(x_0 - \bar{x})^{[i]}\bigr] = \zeta_0^i \in \mathbb{R}^{n^i}, \quad i = 1, \ldots, 2\nu.$$
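The estimability test of Theorem 5 is a pointwise rank check on $H(k)$. A small numerical sketch (a time-invariant pair $J, C$ is assumed for brevity; the rank is evaluated via SVD):

```python
import numpy as np

def is_estimable(J, C):
    """Estimability condition of Theorem 5: H = [J; C] must have
    full column rank n (checked here for a time-invariant pair)."""
    H = np.vstack([J, C])
    return np.linalg.matrix_rank(H) == H.shape[1]
```

For instance, the pair $J = [\,I_3\ \ {-\widetilde{B}}\,]$, $C = [\,\widetilde{C}\ \ 0\,]$ used in the simulation section passes this test.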
Theorem 8: [14] Let the stochastic system (1) be estimable from the measurements. Then a class of CRS's associated to (1) is given by the following:
$$x(k+1) = \mathcal{A}(k)x(k) + \mathcal{B}(k)u(k) + D(k+1)y(k+1) + N_k^f, \qquad y(k) = C(k)x(k) + N_k^g, \quad k \ge 0, \qquad x(0) = x_0, \quad (15)$$
$$\mathcal{A}(k) = H^+(k)\begin{bmatrix} A(k) \\ O_{q\times n} \end{bmatrix}, \qquad \mathcal{B}(k) = H^+(k)\begin{bmatrix} B(k) \\ O_{q\times p} \end{bmatrix}, \qquad D(k+1) = H^+(k)\begin{bmatrix} O_{m\times q} \\ I_q \end{bmatrix}, \quad (16)$$
$$N_k^f = H^+(k)\begin{bmatrix} f_k \\ -g_{k+1} \end{bmatrix}, \qquad N_k^g = g_k, \quad (17)$$
where $H^+(k)$ is any sequence of left-inverses of $H(k)$ in (13).

Remark 9: The new state noise sequence $N_k^f$, although white, is no more independent of the measurement noise sequence $N_k^g$. The regular system described in (15) is not strictly causal. Nevertheless it admits an interesting decomposition, described below.

Proposition 10: System (15) can be split into two subsystems providing the descriptor variable as follows:
$$x(k) = x_{nc}(k) + x_c(k), \qquad y(k) = y_{nc}(k) + y_c(k), \quad (18)$$
$$x_{nc}(k+1) = \mathcal{A}(k)x_{nc}(k) + \mathcal{B}(k)u(k) + D(k+1)y(k+1), \qquad y_{nc}(k+1) = C(k)x_{nc}(k), \qquad x_{nc}(0) = \bar{x}, \quad (19)$$
$$x_c(k+1) = \mathcal{A}(k)x_c(k) + N_k^f, \qquad y_c(k) = C(k)x_c(k) + N_k^g, \qquad x_c(0) = x_0 - \bar{x}. \quad (20)$$

Proof: The proof easily follows from linearity. □

Remark 11: $x_{nc}(k)$ and $y_{nc}(k)$ are the non strictly causal components of the descriptor vector and of the output, respectively, and can be computed from measurements and inputs. $x_c(k)$ and $y_c(k)$ are the causal components of $x(k)$ and $y(k)$, and evolve according to the stochastic equations (20). Note that $x_{nc}(k)$ is a computable linear function of the observations, and therefore only the component $x_c(k)$ needs to be estimated.

Definition 12: A $\nu$-degree polynomial estimate for the singular system (1) is intended to be the following:
$$\hat{x}^\nu(k) = x_{nc}(k) + \hat{x}_c^\nu(k), \quad (21)$$
where $\hat{x}_c^\nu(k) = \Pi\bigl[x_c(k)\,\big|\,L\bigl(Y_{c,k}^\nu\bigr)\bigr]$.

Lemma 13: [14] The strictly causal system (20) can be put in the following extended state form:
$$X_e(k+1) = A_e(k)X_e(k) + F_e(k)N_k, \quad k \ge 0, \qquad X_e(0) = \bar{X}_e = \begin{bmatrix} x_c(0) \\ g_0 \end{bmatrix}, \qquad y_c(k) = C_e(k)X_e(k), \quad (22)$$
with extended state and extended noise defined as
$$X_e(k) = \begin{bmatrix} x_c(k) \\ N_k^g \end{bmatrix} \in \mathbb{R}^\eta, \quad \eta = n + q, \quad (23)$$
$$N_k = \begin{bmatrix} f_k \\ g_{k+1} \end{bmatrix} \in \mathbb{R}^\varrho, \quad \varrho = m + q, \quad (24)$$
and the matrices $A_e(k) \in \mathbb{R}^{\eta\times\eta}$, $C_e(k) \in \mathbb{R}^{q\times\eta}$ and $F_e(k) \in \mathbb{R}^{\eta\times\varrho}$ defined as
$$A_e(k) = \begin{bmatrix} \mathcal{A}(k) & O_{n\times q} \\ O_{q\times n} & O_{q\times q} \end{bmatrix}, \qquad C_e(k) = \begin{bmatrix} C(k) & I_q \end{bmatrix}, \quad (25)$$
$$F_e(k) = \begin{bmatrix} H^+(k)\begin{bmatrix} I_m \\ O_{q\times m} \end{bmatrix} & H^+(k)\begin{bmatrix} O_{m\times q} \\ -I_q \end{bmatrix} \\ O_{q\times m} & I_q \end{bmatrix}. \quad (26)$$
Moreover, $\{N_k,\ k \ge 0\}$ is a white sequence, whose moments up to the $2\nu$-th order are given by
$$\mathrm{E}\bigl[N_k^{[i]}\bigr] = \sum_{r=0}^{i} M_{m,q}^{r,i}\bigl(\zeta_f^r(k) \otimes \zeta_g^{i-r}(k+1)\bigr), \quad (27)$$
with matrices $M_{m,q}^{r,i} \in \mathbb{R}^{\varrho^i\times(m^r q^{i-r})}$
$$M_{m,q}^{r,i} = M_r^i(\varrho)\left(\begin{bmatrix} I_m \\ O_{q\times m} \end{bmatrix}^{[r]} \otimes \begin{bmatrix} O_{m\times q} \\ I_q \end{bmatrix}^{[i-r]}\right), \quad (28)$$
where $M_r^i(\varrho) \in \mathbb{R}^{\varrho^i\times\varrho^i}$ are the binomial coefficient matrices of a Kronecker power (see Lemma A.1 in the Appendix). □

In order to construct an algorithm that recursively computes the estimate $\hat{x}_c^\nu(k) = \Pi\bigl[x_c(k)\,\big|\,L\bigl(Y_{c,k}^\nu\bigr)\bigr]$, a linear model that generates the sequence of extended observation vectors $Y_c^\nu(k)$ should be computed. Define the extended observation vector $Y_c^\nu(k)$ as
$$Y_c^\nu(k) = \begin{bmatrix} y_c(k) \\ y_c^{[2]}(k) \\ \vdots \\ y_c^{[\nu]}(k) \end{bmatrix} \in \mathbb{R}^{q_\nu}, \qquad q_\nu = q + q^2 + \cdots + q^\nu. \quad (29)$$
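The extended-state matrices of Lemma 13 are simple block compositions of the CRS matrices of Theorem 8. A sketch, assuming the CRS state matrix $\mathcal{A}$ ($n\times n$), the output matrix $C$ ($q\times n$) and a left-inverse `H_pinv` of $H(k)$ are available:

```python
import numpy as np

def extended_matrices(A, C, H_pinv, m, q):
    """Block matrices A_e, C_e, F_e of (25)-(26).
    A: n x n CRS state matrix, C: q x n, H_pinv: n x (m+q) left-inverse of H."""
    n = A.shape[0]
    Ae = np.block([[A, np.zeros((n, q))],
                   [np.zeros((q, n)), np.zeros((q, q))]])
    Ce = np.hstack([C, np.eye(q)])
    Ef = np.vstack([np.eye(m), np.zeros((q, m))])    # injects f_k
    Eg = np.vstack([np.zeros((m, q)), -np.eye(q)])   # injects -g_{k+1}
    Fe = np.block([[H_pinv @ Ef, H_pinv @ Eg],
                   [np.zeros((q, m)), np.eye(q)]])
    return Ae, Ce, Fe
```

By construction, $F_e N_k$ stacks $H^+(k)[\,f_k^T\ \ {-g_{k+1}^T}\,]^T$ over $g_{k+1}$, which is exactly the noise entering (22).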
Define also the extended state vector $X_e^\nu(k)$ as
$$X_e^\nu(k) = \begin{bmatrix} X_e(k) \\ X_e^{[2]}(k) \\ \vdots \\ X_e^{[\nu]}(k) \end{bmatrix} \in \mathbb{R}^{\eta_\nu}, \qquad \eta_\nu = \eta + \eta^2 + \cdots + \eta^\nu. \quad (30)$$

Theorem 14: The processes $\{X_e^\nu(k),\ k \ge 0\}$ and $\{Y_c^\nu(k),\ k \ge 0\}$, defined in (30) and (29), satisfy the following equations:
$$X_e^\nu(k+1) = A_e^\nu(k)X_e^\nu(k) + U_e^\nu(k) + \xi_e^\nu(k), \qquad X_e^\nu(0) = \bar{X}_e^\nu, \qquad Y_c^\nu(k) = C_e^\nu(k)X_e^\nu(k), \quad k \ge 0, \quad (31)$$
where the matrices $A_e^\nu(k) \in \mathbb{R}^{\eta_\nu\times\eta_\nu}$ and $C_e^\nu(k) \in \mathbb{R}^{q_\nu\times\eta_\nu}$ are defined as
$$A_e^\nu(k) = \begin{bmatrix} H_{1,1}(k) & O_{\eta\times\eta^2} & \cdots & O_{\eta\times\eta^\nu} \\ H_{2,1}(k) & H_{2,2}(k) & \cdots & O_{\eta^2\times\eta^\nu} \\ \vdots & \vdots & \ddots & \vdots \\ H_{\nu,1}(k) & H_{\nu,2}(k) & \cdots & H_{\nu,\nu}(k) \end{bmatrix}, \quad (32)$$
$$C_e^\nu(k) = \begin{bmatrix} C_e(k) & O_{q\times\eta^2} & \cdots & O_{q\times\eta^\nu} \\ O_{q^2\times\eta} & C_e^{[2]}(k) & \cdots & O_{q^2\times\eta^\nu} \\ \vdots & \vdots & \ddots & \vdots \\ O_{q^\nu\times\eta} & O_{q^\nu\times\eta^2} & \cdots & C_e^{[\nu]}(k) \end{bmatrix}, \quad (33)$$
with $H_{i,l}(k) \in \mathbb{R}^{\eta^i\times\eta^l}$ defined as
$$H_{i,l}(k) = M_{i-l}^i(\eta)\bigl(F_e^{[i-l]}(k) \otimes A_e^{[l]}(k)\bigr)\bigl(\mathrm{E}\bigl[N_k^{[i-l]}\bigr] \otimes I_{\eta^l}\bigr). \quad (34)$$
$U_e^\nu(k), \xi_e^\nu(k) \in \mathbb{R}^{\eta_\nu}$ are the deterministic and stochastic input sequences, respectively, whose expressions are given by
$$U_e^\nu(k) = \begin{bmatrix} H_{1,0}(k) \\ H_{2,0}(k) \\ \vdots \\ H_{\nu,0}(k) \end{bmatrix}, \qquad \xi_e^\nu(k) = \begin{bmatrix} \phi_1(k) \\ \phi_2(k) \\ \vdots \\ \phi_\nu(k) \end{bmatrix}, \quad (35)$$
$$\phi_i(k) = \sum_{l=0}^{i-1} M_{i-l}^i(\eta)\bigl(F_e^{[i-l]}(k) \otimes A_e^{[l]}(k)\bigr)\Bigl(\bigl(N_k^{[i-l]} - \mathrm{E}\bigl[N_k^{[i-l]}\bigr]\bigr) \otimes I_{\eta^l}\Bigr)X_e^{[l]}(k). \quad (36)$$
Moreover $\xi_e^\nu(k)$ is a zero-mean, white sequence, whose covariance matrix $Q(k) = \mathrm{E}\bigl[\xi_e^\nu(k)\xi_e^{\nu T}(k)\bigr]$ is given by
$$Q(k) = \begin{bmatrix} Q_{1,1}(k) & Q_{1,2}(k) & \cdots & Q_{1,\nu}(k) \\ Q_{2,1}(k) & Q_{2,2}(k) & \cdots & Q_{2,\nu}(k) \\ \vdots & \vdots & \ddots & \vdots \\ Q_{\nu,1}(k) & Q_{\nu,2}(k) & \cdots & Q_{\nu,\nu}(k) \end{bmatrix}, \quad (37)$$
with $Q_{r,s}(k) \in \mathbb{R}^{\eta^r\times\eta^s}$, $1 \le r, s \le \nu$, defined as
$$Q_{r,s}(k) = \sum_{l=0}^{r-1}\sum_{m=0}^{s-1} M_{r-l}^r(\eta)\bigl(F_e^{[r-l]}(k) \otimes A_e^{[l]}(k)\bigr)\, P_{l,m}^{r,s}(k)\, \bigl(F_e^{[s-m]}(k) \otimes A_e^{[m]}(k)\bigr)^T\bigl(M_{s-m}^s(\eta)\bigr)^T, \quad (38)$$
and $P_{l,m}^{r,s}(k) = \mathrm{st}^{-1}\bigl(\widetilde{P}_{l,m}^{r,s}(k)\bigr)$, where
$$\widetilde{P}_{l,m}^{r,s}(k) = \bigl(I_{\varrho^{s-m}} \otimes C_{\varrho^{r-l},\eta^m}^T \otimes I_{\eta^l}\bigr)\bigl(\widetilde{N}_k^{s-m,r-l} \otimes C_{1,\eta^m} \otimes I_{\eta^l}\bigr)\,\mathrm{E}\bigl[X_e^{[l+m]}(k)\bigr], \quad (39)$$
with
$$\widetilde{N}_k^{s-m,r-l} = \mathrm{E}\bigl[N_k^{[s-m+r-l]}\bigr] - \mathrm{E}\bigl[N_k^{[s-m]}\bigr] \otimes \mathrm{E}\bigl[N_k^{[r-l]}\bigr].$$
Proof: See [14]. □
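Theorem 14 reduces the problem to an ordinary linear system driven by white noise, so the classical Kalman predictor/corrector applies verbatim; the only peculiarity is the Moore-Penrose pseudo-inverse in the gain, needed because the innovation covariance may be singular. A generic one-step sketch (the matrix arguments are placeholders for $A_e^\nu(k)$, $C_e^\nu(k)$, $Q(k)$ and $U_e^\nu(k)$):

```python
import numpy as np

def kalman_step(x_pred, P_pred, y, A, C, Q, U):
    """One corrector/predictor step for x(k+1) = A x(k) + U + xi, y = C x,
    with Cov(xi) = Q; the pseudo-inverse handles a singular innovation
    covariance."""
    S = C @ P_pred @ C.T                       # innovation covariance
    K = P_pred @ C.T @ np.linalg.pinv(S)       # filter gain
    x_filt = x_pred + K @ (y - C @ x_pred)     # correction
    P_filt = (np.eye(len(x_pred)) - K @ C) @ P_pred
    x_next = A @ x_filt + U                    # one-step prediction
    P_next = A @ P_filt @ A.T + Q
    return x_filt, P_filt, x_next, P_next
```

Iterating this step over $k$, with the prediction initialized at the a priori mean and covariance of the extended state, reproduces the recursive filter.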
Remark 15: In (34), (36) and (39) appear the moments of the extended noise $N_k$, defined in (24), up to the $2\nu$-th order. These can be computed using (27) and (28) in Lemma 13. The formulas for the computation of the mean values of $X_e^{[i]}(k)$, $i = 0, \ldots, 2\nu$, can be found in [14].

System (31) can be filtered using the Kalman algorithm, which provides the best estimate of the vector $X_e^\nu(k)$ among all linear transformations of the measurements $Y_c^\nu(0), \ldots, Y_c^\nu(k)$ or, which is the same, among all the $\nu$-degree polynomials of the measurements $y_c(0), \ldots, y_c(k)$. The best $\nu$-degree polynomial estimate of $X_e(k)$, i.e. $\Pi\bigl[X_e(k)\,|\,L(Y_{c,k}^\nu)\bigr]$, is given by the first $\eta$ components of $\widehat{X}_e^\nu(k)$. Due to the Kalman filter structure, such a polynomial estimate is computed by the recursive algorithm presented below.

Theorem 16: Consider the following vectors:
$$\widetilde{X}_e^\nu(k) = \begin{bmatrix} x_{nc}(k) \\ \widehat{X}_e^\nu(k) \end{bmatrix} \in \mathbb{R}^{n+\eta_\nu}, \quad (40)$$
$$\widetilde{Y}^\nu(k) = \begin{bmatrix} y(k) \\ Y_c^\nu(k) \end{bmatrix} \in \mathbb{R}^{q+q_\nu}, \quad (41)$$
where $x_{nc}(k)$ is the non strictly causal component of the descriptor vector and $\widehat{X}_e^\nu(k) = \Pi\bigl[X_e^\nu(k)\,|\,L(Y_{c,k}^\nu)\bigr]$. Then $\hat{x}^\nu(k)$, the best $\nu$-degree polynomial estimate of $x(k)$ as in Definition 12, is given by the output of the following system:
$$\begin{aligned} &\widetilde{X}_e^\nu(k+1|k) = \widetilde{A}_e^\nu(k)\widetilde{X}_e^\nu(k) + \widetilde{U}_e^\nu(k), \\ &\widetilde{X}_e^\nu(k+1) = \widetilde{X}_e^\nu(k+1|k) + \widetilde{K}_e^\nu(k+1)\bigl(\widetilde{Y}^\nu(k+1) - \widetilde{C}_e^\nu(k+1)\widetilde{X}_e^\nu(k+1|k)\bigr), \quad k \ge 0, \\ &\hat{x}^\nu(k) = R_e^\nu\,\widetilde{X}_e^\nu(k), \end{aligned} \quad (42)$$
with initial prediction $\widetilde{X}_e^\nu(0|-1) = \begin{bmatrix} \bar{x} \\ \bar{X}_e^\nu \end{bmatrix}$, and
$$\widetilde{A}_e^\nu(k) = \begin{bmatrix} \mathcal{A}(k) & O_{n\times\eta_\nu} \\ O_{\eta_\nu\times n} & A_e^\nu(k) \end{bmatrix} \in \mathbb{R}^{(n+\eta_\nu)\times(n+\eta_\nu)}, \quad (43)$$
$$\widetilde{C}_e^\nu(k) = \begin{bmatrix} O_{q\times n} & O_{q\times\eta_\nu} \\ O_{q_\nu\times n} & C_e^\nu(k) \end{bmatrix} \in \mathbb{R}^{(q+q_\nu)\times(n+\eta_\nu)}, \quad (44)$$
$$\widetilde{U}_e^\nu(k) = \begin{bmatrix} \mathcal{B}(k)u(k) \\ U_e^\nu(k) \end{bmatrix} \in \mathbb{R}^{n+\eta_\nu}, \quad (45)$$
$$\widetilde{K}_e^\nu(k) = \begin{bmatrix} D(k) & O_{n\times q_\nu} \\ O_{\eta_\nu\times q} & K_e^\nu(k) \end{bmatrix} \in \mathbb{R}^{(n+\eta_\nu)\times(q+q_\nu)}, \quad (46)$$
and the output matrix $R_e^\nu = [\,I_n\ \ I_n\ \ O_{n\times(\eta_\nu-n)}\,] \in \mathbb{R}^{n\times(n+\eta_\nu)}$, with the gain matrix $K_e^\nu(k)$ computed recursively as
$$K_e^\nu(k) = P_P^\nu(k)C_e^{\nu T}(k)\bigl(C_e^\nu(k)P_P^\nu(k)C_e^{\nu T}(k)\bigr)^\dagger, \qquad P_e^\nu(k) = \bigl(I_{\eta_\nu} - K_e^\nu(k)C_e^\nu(k)\bigr)P_P^\nu(k), \quad k \ge 0, \quad (47)$$
$$P_P^\nu(k+1) = A_e^\nu(k)P_e^\nu(k)A_e^{\nu T}(k) + Q(k), \qquad P_P^\nu(0) = \mathrm{Cov}\bigl(\bar{X}_e^\nu\bigr). \quad (48)$$

Proof: It comes from applying the Kalman filter to system (31) to get the estimate of $X_e^\nu(k)$ and adding the non-causal component $x_{nc}(k)$ to the first $n$ components of $\widehat{X}_e^\nu(k)$. □

V. SIMULATIONS

This section presents computer simulations that show the improvement of a quadratic filter ($\nu = 2$) with respect to a linear one ($\nu = 1$). The simulated singular system derives from a regular system with unknown input; the singularity models the lack of knowledge of the input (see [15, 16]). The regular system considered takes the form
$$\tilde{x}(k+1) = \widetilde{A}\tilde{x}(k) + \widetilde{B}u(k) + f_k, \qquad y(k) = \widetilde{C}\tilde{x}(k) + g_k, \quad (49)$$
$$\widetilde{A} = \begin{bmatrix} 0.7 & 0.1 & 0 \\ 0 & 0.4 & 0.5 \\ 0 & 0 & 0.8 \end{bmatrix}, \qquad \widetilde{B} = \begin{bmatrix} 1 \\ -1.5 \\ 2 \end{bmatrix}, \qquad \widetilde{C} = \begin{bmatrix} 1 & 2 & 0 \\ 0 & -1 & 0 \end{bmatrix}, \quad (50)$$
where the deterministic input $u(k)$ is unknown. The white noise sequences $f_k$ and $g_k$ are chosen with singular covariance matrices (a very common case for singular systems [9]). In particular
$$f_k = [\,0\ \ f_{2,k}\ \ f_{3,k}\,]^T, \qquad g_k = [\,g_{1,k}\ \ 0\,]^T, \quad (51)$$
with discrete distributions
$$P\{f_{2,k} = -\sqrt{6}\} = 0.4, \qquad P\{f_{2,k} = 2\sqrt{2/3}\} = 0.6,$$
$$P\{f_{3,k} = -\sqrt{5/6}\} = 0.8, \qquad P\{f_{3,k} = 4\sqrt{5/6}\} = 0.2,$$
$$P\{g_{1,k} = -\sqrt{5/3}\} = 0.9, \qquad P\{g_{1,k} = 3\sqrt{5/3}\} = 0.1. \quad (52)$$

Systems with unknown inputs can be treated as singular systems through the definition of an extended state $x(k)$ that contains the unknown input:
$$x(k) = \begin{bmatrix} \tilde{x}(k) \\ u(k-1) \end{bmatrix}.$$
With this position, system (49) can be written in the descriptor form (1), with
$$J = \begin{bmatrix} I_3 & -\widetilde{B} \end{bmatrix}, \qquad A = \begin{bmatrix} \widetilde{A} & 0_{3\times 1} \end{bmatrix}, \qquad B = 0_{3\times 1}, \qquad C = \begin{bmatrix} \widetilde{C} & 0_{2\times 1} \end{bmatrix}. \quad (53)$$
The 4th component of the descriptor vector $x(k)$ is the unknown input $u(k-1)$. By construction, the singular system turns out to be solvable for any initial condition and estimable from the measurements, so that the filtering approach proposed in this paper can be applied. Fig. 1 reports the 2nd component of the true and estimated descriptor vectors, while Fig. 2 reports the 4th component, which coincides with the unknown deterministic input $u(k)$ applied to the original system (49). The asymptotic values of the error covariances show the improvement of the quadratic filter with respect to the linear one. The diagonal elements of the error covariance matrices $P_{\nu=2}$ and $P_{\nu=1}$ are
$$\mathrm{diag}[P_{\nu=1}] = [\,0.6174\ \ 0.7684\ \ 0.7684\ \ 0.3814\,], \qquad \mathrm{diag}[P_{\nu=2}] = [\,0.5612\ \ 0.5835\ \ 0.5835\ \ 0.3525\,]. \quad (54)$$

VI. CONCLUSIONS

This paper presents some of the results of the work [14], concerning the filtering problem for stochastic descriptor systems in a general non-Gaussian setting. Following a geometric approach, the optimal polynomial filter of a chosen degree is derived. Such a filter provides the minimum error variance estimate of the descriptor vector among all the recursive polynomial functions of the measurements of the chosen degree.
Fig. 1. Linear and quadratic estimates of $x_2$.

Fig. 2. Linear and quadratic estimates of $x_4$ (the unknown input $u$).

REFERENCES

[1] D. G. Luenberger, "Dynamic Equations in Descriptor Form," IEEE Trans. Automat. Contr., No. 3, pp. 312-321, 1977.
[2] D. G. Luenberger, "Time-Invariant Descriptor Systems," Automatica, No. 14, pp. 473-480, 1978.
[3] L. Dai, Singular Control Systems, Springer-Verlag, 1989.
[4] F. L. Lewis, "A survey of linear singular systems," Circuits, Syst., Signal Process., No. 5, pp. 3-36, 1986.
[5] L. Dai, "State estimation schemes in singular systems," in Proc. 10th IFAC World Congress, Munich, 1987, Vol. 9, pp. 211-215.
[6] L. Dai, "Filtering and LQG problems for discrete-time stochastic singular systems," IEEE Trans. Automat. Contr., No. 34, pp. 1105-1108, 1989.
[7] M. Darouach and M. Zasadzinski, "State estimation for a class of singular systems," Int. J. Syst. Sci., Vol. 23, No. 4, pp. 517-530, 1992.
[8] M. Darouach, M. Zasadzinski and D. Mehdi, "State estimation of stochastic singular linear systems," Int. J. Syst. Sci., Vol. 2, No. 2, pp. 345-354, 1993.
[9] R. Nikoukhah, B. C. Levy and A. S. Willsky, "Kalman filtering and Riccati equations for descriptor systems," IEEE Trans. Automat. Contr., No. 9, pp. 1325-1342, 1992.
[10] R. Nikoukhah, S. L. Campbell and F. Delebecque, "Kalman filtering for general discrete-time linear systems," IEEE Trans. Automat. Contr., Vol. 44, No. 10, pp. 1829-1839, 1999.
[11] F. Carravetta, A. Germani and M. Raimondi, "Polynomial filtering for linear discrete-time non-Gaussian systems," SIAM J. Control and Optim., Vol. 34, No. 5, pp. 1666-1690, 1996.
[12] F. Carravetta, A. Germani and M. Raimondi, "Polynomial filtering of discrete-time stochastic linear systems with multiplicative state noise," IEEE Trans. Automat. Contr., Vol. 42, No. 8, pp. 1106-1126, 1997.
[13] A. Germani, C. Manes and P. Palumbo, "Optimal Linear Filtering for Stochastic non-Gaussian Descriptor Systems," in Proc. 40th IEEE Conf. on Decision and Control (CDC'01), Orlando, FL, pp. 2514-2519, 2001.
[14] A. Germani, C. Manes and P. Palumbo, "Polynomial Filtering for Stochastic non-Gaussian Descriptor Systems," IEEE Trans. on Circuits and Systems-I: Regular Papers, Vol. 51, No. 8, August 2004.
[15] M. Darouach, M. Zasadzinski, A. Basson Onana and S. Nowakowski, "Kalman filtering with unknown inputs via optimal state estimation of singular systems," Int. J. Syst. Sci., Vol. 26, No. 10, pp. 2015-2028, 1995.
[16] A. Germani, C. Manes and P. Palumbo, "Optimal linear filtering for bilinear stochastic differential systems with unknown inputs," IEEE Trans. Automat. Contr., Vol. 47, No. 10, pp. 1726-1730, 2002.

APPENDIX

Lemma A.1: (Binomial formula for Kronecker powers) Let $a, b \in \mathbb{R}^n$. For any integer $h \ge 0$ there exist $h+1$ matrices $M_k^h(n) \in \mathbb{R}^{n^h\times n^h}$, $k = 0, 1, \ldots, h$, such that
$$(a+b)^{[h]} = \sum_{k=0}^{h} M_k^h(n)\bigl(a^{[k]} \otimes b^{[h-k]}\bigr). \quad (A.1)$$
The computation of the matrices $M_j^h(n)$ is reported in the Appendix of [11].
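For $h = 2$, Lemma A.1 can be made explicit: $M_0^2(n) = M_2^2(n) = I_{n^2}$ and $M_1^2(n) = I_{n^2} + K_n$, where $K_n$ is the commutation matrix ($K_n(a \otimes b) = b \otimes a$). A brute-force numerical check of this special case (an illustration only, not the general construction of [11]):

```python
import numpy as np

def commutation_matrix(n):
    """K such that K @ kron(a, b) == kron(b, a) for a, b in R^n."""
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[j * n + i, i * n + j] = 1.0
    return K

n = 3
rng = np.random.default_rng(1)
a, b = rng.normal(size=n), rng.normal(size=n)

lhs = np.kron(a + b, a + b)                    # (a+b)^[2]
M1 = np.eye(n * n) + commutation_matrix(n)     # M_1^2(n)
rhs = np.kron(a, a) + M1 @ np.kron(a, b) + np.kron(b, b)
```

The identity holds because $(a+b)\otimes(a+b)$ expands into the four cross terms, and $K_n$ folds $b \otimes a$ onto $a \otimes b$.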