Applied Mathematics and Computation 187 (2007) 250–265 www.elsevier.com/locate/amc

Kronecker operational matrices for fractional calculus and some applications

Adem Kilicman *, Zeyad Abdel Aziz Al Zhour

Department of Mathematics, University Putra Malaysia (UPM), 43400 Serdang, Selangor, Malaysia
Institute for Mathematical Research, University Putra Malaysia (UPM), 43400 Serdang, Selangor, Malaysia
Institute of Advanced Technology, University Putra Malaysia (UPM), 43400 Serdang, Selangor, Malaysia

Dedicated to Professor H.M. Srivastava on the occasion of his 65th birthday

Abstract

The problems of systems identification, analysis and optimal control have recently been studied using orthogonal functions. The specific orthogonal functions used up to now are the Walsh, block-pulse, Laguerre, Legendre, Haar and many other functions. In the present paper, several operational matrices for integration and differentiation are studied. We introduce the Kronecker convolution product and extend it to the Riemann–Liouville fractional integral of matrices. For some applications it is often not necessary to compute exact solutions; approximate solutions are sufficient, because computational effort can grow rapidly with the size of the matrix functions. Our method is extended to find exact and approximate solutions of general systems of matrix convolution differential equations; a transformation exists that converts the coupled matrix differential equations into forms for which solutions may be readily computed. Finally, several systems are solved by the new and other approaches, and illustrative examples are considered.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Kronecker product; Convolution product; Kronecker convolution product; Vector operator; Operational matrix; Laplace transform

1. Introduction

One of the principal reasons is that matrices arise naturally in communication systems, economic planning, statistics, control theory and other fields of pure and applied mathematics [2,4,7]. Another is that an m × n matrix can be viewed as representing a linear map from an n-dimensional vector space to an m-dimensional vector space and, conversely, every such map can be represented by an m × n matrix. Operational matrices for integration and differentiation have been studied by many authors [1,3,6,10]. For example, Mouroutsos and

* Corresponding author. Address: Institute of Advanced Technology, University Putra Malaysia (UPM), 43400 Serdang, Selangor, Malaysia. E-mail addresses: [email protected] (A. Kilicman), [email protected] (Z.A.A. Al Zhour).

0096-3003/$ - see front matter © 2006 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2006.08.122


Sparis [6] solved the control problem using Taylor series operational matrices for integration, Chen et al. [3] solved distributed systems using Walsh operational matrices, and Wang [10] introduced the inversion of rational and irrational transfer functions using the generalized block pulse operational matrices for differentiation and integration. The Kronecker convolution product is an interesting area of current research and in fact plays an important role in applications; for example, Sumita [9] established the matrix Laguerre transform to calculate matrix convolutions and evaluated a matrix renewal function. In the present paper, several operational matrices for integration and differentiation are studied. We introduce the Kronecker convolution product and extend it to the Riemann–Liouville fractional integral of matrices. For some applications it is often not necessary to compute exact solutions; approximate solutions are sufficient, because computational effort can grow rapidly with the size of the matrix functions. Our method is extended to find exact and numerical solutions of general systems of matrix convolution differential equations; a transformation exists that converts the coupled matrix differential equations into forms for which solutions may be readily computed. Finally, several systems are solved by the new and other approaches, and illustrative examples are considered.

As usual, the notations A^{{-1}}(t) and det A(t) are the inverse and determinant of the matrix function A(t), respectively, with respect to convolution. The notations A^T(t) and Vec A(t) are the transpose and vector-operator of the matrix function A(t), respectively; "Vec A(t)" transforms the matrix A(t) into a vector by stacking its columns one underneath the other. The notations δ(t) and Q_n(t) = I_n δ(t) are the Dirac delta function and the Dirac identity matrix, respectively, where I_n is the identity scalar matrix of order n × n. The notations A(t) * B(t) and A(t) ~ B(t) are the convolution and Kronecker convolution products, respectively. Finally, A ⊗ B stands for the Kronecker product, A ⊗ B = [a_{ij} B]_{ij}.

2. Operational matrices for integration and differentiation

2.1. Taylor series operational matrix for integration

A function f(t) that is analytic in a neighborhood of the point t_0 = 0 can be expanded as

    f(t) = \sum_{n=0}^{\infty} a_n u_n(t),    (1)

where a_n = \frac{1}{n!} \frac{d^n f(0)}{dt^n} and u_n(t) = t^n. To obtain an approximate expression of the analytic function f(t) we may truncate the series (1) at the (r + 1)th term:

    f(t) \simeq \sum_{n=0}^{r} a_n u_n(t).    (2)

By defining the coefficient vector

    c^T = (a_0, \ldots, a_r)    (3)

and the power series basis vector

    u^T(t) = (u_0(t), \ldots, u_r(t)),    (4)

Eq. (2) can be written in the compact form

    f(t) \simeq c^T u(t).    (5)

The basis functions u_n(t) satisfy the relation

    u_n(t) = t\, u_{n-1}(t).    (6)

Now, one can easily show that

    \int_0^t u_n(x)\,dx = \frac{t^{n+1}}{n+1} = \frac{1}{n+1} u_{n+1}(t).    (7)


On the basis of Eq. (2), the definite integral of the analytic function f(t) over the interval (0, t) may be approximated as

    \int_0^t f(x)\,dx \simeq \sum_{n=0}^{r} \frac{a_n}{n+1} u_{n+1}(t) = \sum_{n=0}^{r} b_n u_n(t),    (8)

where b_0 = 0, b_1 = a_0, b_2 = a_1/2, \ldots, b_r = a_{r-1}/r. In this equation the term b_{r+1} u_{r+1} has been truncated. Eq. (8) can also be written as

    \int_0^t f(x)\,dx \simeq c^T P u(t),    (9)

where P is called the Taylor series operational matrix for integration:

    P = \begin{bmatrix}
        0 & 1 & 0 & 0 & \cdots & 0 \\
        0 & 0 & 1/2 & 0 & \cdots & 0 \\
        0 & 0 & 0 & 1/3 & \cdots & 0 \\
        \vdots & & & \ddots & \ddots & \vdots \\
        0 & 0 & 0 & 0 & \cdots & 1/r \\
        0 & 0 & 0 & 0 & \cdots & 0
        \end{bmatrix}_{(r+1)\times(r+1)}.    (10)

The above expression indicates that the Taylor series operational matrix for integration is uniquely simple. If we integrate the function f(t) twice over the interval (0, t), we have

    \int_0^t \int_0^t f(x)(dx)^2 \simeq \sum_{n=0}^{r} \frac{a_n}{(n+1)(n+2)} u_{n+2}(t) = \sum_{n=0}^{r} c_n u_n(t),    (11)

where c_0 = c_1 = 0, c_2 = a_0/2, c_3 = a_1/6, \ldots, c_r = a_{r-2}/((r-1)r). Therefore, Eq. (11) can also be written as

    \int_0^t \int_0^t f(x)(dx)^2 \simeq c^T Q u(t)    (12)

and Q is called the operational matrix for double integration:

    Q = \begin{bmatrix}
        0 & 0 & 1/2 & 0 & \cdots & 0 \\
        0 & 0 & 0 & 1/6 & \cdots & 0 \\
        0 & 0 & 0 & 0 & \ddots & \vdots \\
        \vdots & & & & \ddots & \frac{1}{(r-1)r} \\
        0 & 0 & 0 & 0 & \cdots & 0 \\
        0 & 0 & 0 & 0 & \cdots & 0
        \end{bmatrix}_{(r+1)\times(r+1)}.    (13)

It is easy to show that Q = P^2. Therefore, Eq. (12) may now be written as

    \int_0^t \int_0^t f(x)(dx)^2 \simeq c^T P^2 u(t).    (14)


Now we can extend Eq. (14) to the case of k multiple integrations:

    \underbrace{\int_0^t \cdots \int_0^t}_{k} f(x)(dx)^k \simeq c^T P^k u(t).    (15)
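The construction above is easy to check numerically. The following sketch (Python with NumPy; the helper name `taylor_P` is ours, not the paper's) builds the matrix P of Eq. (10), confirms that P² reproduces the double-integration matrix of Eq. (13), and uses c^T P u(t) of Eq. (9) to approximate the integral of e^t:

```python
import numpy as np
from math import factorial

def taylor_P(r):
    """Taylor series operational matrix for integration, Eq. (10):
    an (r+1) x (r+1) matrix with 1/(n+1) on the superdiagonal."""
    P = np.zeros((r + 1, r + 1))
    for n in range(r):
        P[n, n + 1] = 1.0 / (n + 1)
    return P

r = 8
P = taylor_P(r)

# Q = P^2 has entries 1/((n+1)(n+2)) two places above the diagonal, Eq. (13).
Q = P @ P
assert abs(Q[0, 2] - 1 / 2) < 1e-12 and abs(Q[1, 3] - 1 / 6) < 1e-12

# Approximate the integral of f(t) = e^t on (0, t) via Eq. (9).
c = np.array([1.0 / factorial(n) for n in range(r + 1)])  # Taylor coefficients
t = 0.5
u = np.array([t ** n for n in range(r + 1)])              # basis u_n(t) = t^n
approx = c @ P @ u
print(approx)   # close to e^0.5 - 1
```

With r = 8 the truncation error of the integrated series is already far below 10⁻⁶ at t = 0.5.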

2.2. Generalized block pulse operational matrix for integration

The block pulse functions form a complete set of orthogonal functions [10] defined on the interval [0, b) by

    w_i(t) = \begin{cases} 1, & \frac{i-1}{m} b \le t < \frac{i}{m} b, \\ 0, & \text{elsewhere} \end{cases}    (16)

for i = 1, 2, \ldots, m. It is also known that any absolutely integrable function f(t) on [0, b) can be expanded in block pulse functions:

    f(t) \simeq \xi^T \phi_m(t),    (17)

where the mean-square error of the approximation is minimized, \xi^T = [f_1, f_2, \ldots, f_m] and \phi_m(t) = [w_1(t), w_2(t), \ldots, w_m(t)]^T. Here

    f_i = \frac{m}{b} \int_0^b f(t) w_i(t)\,dt = \frac{m}{b} \int_{\frac{i-1}{m}b}^{\frac{i}{m}b} f(t)\,dt.    (18)

There are many different starting points for the discussion of classical fractional calculus [8]. One begins with a generalization of repeated integration. If f(t) is absolutely integrable on [0, b), it can be found in [8,10] that

    \int_0^t \int_0^{t_n} \cdots \int_0^{t_2} f(t_1)\,dt_1\,dt_2 \cdots dt_n = \frac{1}{(n-1)!} \int_0^t (t - t_1)^{n-1} f(t_1)\,dt_1 = \frac{1}{(n-1)!}\, t^{n-1} * f(t),    (19)

where n = 1, 2, \ldots and 0 \le t < b. On writing \Gamma(n) = (n-1)!, an immediate generalization is the operator I^a, defined for a > 0 by

    (I^a f)(t) = \frac{1}{\Gamma(a)} \int_0^t (t - t_1)^{a-1} f(t_1)\,dt_1 = \frac{1}{\Gamma(a)}\, t^{a-1} * f(t), \quad 0 \le t < b,    (20)

where \Gamma(a) is the Gamma function and t^{a-1} * f(t) = \int_0^t f(t - t_1)\, t_1^{a-1}\,dt_1 is the convolution product of t^{a-1} and f(t). Eq. (20) is called the Riemann–Liouville fractional integral of order a of the function f(t). Now if f(t) is expanded in block pulse functions, as in Eq. (17), the Riemann–Liouville fractional integral becomes

    (I^a f)(t) = \frac{1}{\Gamma(a)}\, t^{a-1} * f(t) \simeq \frac{1}{\Gamma(a)}\, \xi^T \{ t^{a-1} * \phi_m(t) \}.    (21)

Thus if t^{a-1} * \phi_m(t) can be integrated and then expanded in block pulse functions, the Riemann–Liouville fractional integral is computed via the block pulse functions. Let f_1(t) = t^{a-1} and f_2(t) = w_i(t), a > 0; then

    \frac{1}{\Gamma(a)}\, t^{a-1} * w_i(t) = \frac{1}{\Gamma(a)} f_1(t) * f_2(t).    (22)

Taking the Laplace transform of Eq. (22) gives

    £\left\{ \frac{1}{\Gamma(a)}\, t^{a-1} * w_i(t) \right\} = \frac{1}{\Gamma(a)} £\{ f_1(t) * f_2(t) \} = \frac{1}{\Gamma(a)} F_1(s) F_2(s),    (23)


where

    F_1(s) = £\{ t^{a-1} \} = \frac{\Gamma(a)}{s^a}, \qquad
    F_2(s) = £\{ w_i(t) \} = \frac{1}{s} \left( e^{-\frac{i-1}{m} b s} - e^{-\frac{i}{m} b s} \right).    (24)

Since \Gamma(a+1) = a\Gamma(a), Eq. (23) can be written as

    £\left\{ \frac{1}{\Gamma(a)}\, t^{a-1} * w_i(t) \right\} = \frac{1}{\Gamma(a+1)} \cdot \frac{\Gamma(a+1)}{s^{a+1}} \left( e^{-\frac{i-1}{m} b s} - e^{-\frac{i}{m} b s} \right).    (25)

Taking the inverse Laplace transform of Eq. (25) yields

    \frac{1}{\Gamma(a)} \int_0^t (t - t_1)^{a-1} w_i(t_1)\,dt_1 = \frac{1}{\Gamma(a+1)} \left[ \left( t - \tfrac{i-1}{m} b \right)^a u\!\left( t - \tfrac{i-1}{m} b \right) - \left( t - \tfrac{i}{m} b \right)^a u\!\left( t - \tfrac{i}{m} b \right) \right],    (26)

where u(t) is the unit step function. Further, it can be derived from Eq. (18) that

    t^a u(t) \simeq [c_1, c_2, \ldots, c_m] \phi_m(t) = C^T \phi_m(t),    (27)

where

    c_i = \left( \frac{b}{m} \right)^a \frac{1}{a+1} \left[ i^{a+1} - (i-1)^{a+1} \right].    (28)

Moreover, from the disjoint property of the block pulse functions [5],

    w_i(t) w_j(t) = \begin{cases} w_i(t), & i = j, \\ 0, & i \ne j. \end{cases}    (29)

It is obvious that

    \left( t - \tfrac{i-1}{m} b \right)^a u\!\left( t - \tfrac{i-1}{m} b \right) \simeq [0, \ldots, 0, c_1, c_2, \ldots, c_{m-i+1}] \phi_m(t)  \quad (i-1 \text{ leading zeros}),    (30)

    \left( t - \tfrac{i}{m} b \right)^a u\!\left( t - \tfrac{i}{m} b \right) \simeq [0, \ldots, 0, c_1, c_2, \ldots, c_{m-i}] \phi_m(t)  \quad (i \text{ leading zeros}).    (31)

Thus Eq. (26) becomes

    \frac{1}{\Gamma(a)} \int_0^t (t - t_1)^{a-1} w_i(t_1)\,dt_1 \simeq \frac{1}{\Gamma(a+1)} [0, \ldots, 0, c_1, c_2 - c_1, \ldots, c_{m-i+1} - c_{m-i}] \phi_m(t)    (32)

for i = 1, 2, \ldots, m. Here,

    c_r - c_{r-1} = \left( \frac{b}{m} \right)^a \frac{1}{a+1} \left[ r^{a+1} - 2(r-1)^{a+1} + (r-2)^{a+1} \right]    (33)

for r = 2, 3, \ldots, m - i + 1, and c_1 = \left( \frac{b}{m} \right)^a \frac{1}{a+1}. Now we can also write Eq. (32) as

    \frac{1}{\Gamma(a)} \int_0^t (t - t_1)^{a-1} w_i(t_1)\,dt_1 \simeq \left( \frac{b}{m} \right)^a \frac{1}{\Gamma(a+2)} [0, \ldots, 0, \xi_1, \xi_2, \ldots, \xi_{m-i+1}] \phi_m(t),    (34)

where

    \xi_1 = 1, \qquad \xi_p = p^{a+1} - 2(p-1)^{a+1} + (p-2)^{a+1} \quad (p = 2, 3, \ldots, m - i + 1).    (35)

Finally, for i = 1, 2, \ldots, m, Eq. (34) can be written as

    \frac{1}{\Gamma(a)} \int_0^t (t - t_1)^{a-1} \phi_m(t_1)\,dt_1 \simeq F_a \phi_m(t),    (36)


where

    F_a = \left( \frac{b}{m} \right)^a \frac{1}{\Gamma(a+2)}
    \begin{bmatrix}
    1 & \xi_2 & \xi_3 & \cdots & \xi_m \\
    0 & 1 & \xi_2 & \cdots & \xi_{m-1} \\
    0 & 0 & 1 & \cdots & \xi_{m-2} \\
    \vdots & & & \ddots & \vdots \\
    0 & 0 & 0 & \cdots & 1
    \end{bmatrix}    (37)

is called the generalized block pulse operational matrix for integration. For example, if a = b = 1, we then have \xi_1 = 1, \xi_p = p^2 - 2(p-1)^2 + (p-2)^2 = 2 for all p = 2, 3, \ldots, m, and

    \int_0^t \phi_m(t_1)\,dt_1 \simeq F_1 \phi_m(t) = \frac{1}{2m}
    \begin{bmatrix}
    1 & 2 & 2 & \cdots & 2 \\
    0 & 1 & 2 & \cdots & 2 \\
    0 & 0 & 1 & \cdots & 2 \\
    \vdots & & & \ddots & \vdots \\
    0 & 0 & 0 & \cdots & 1
    \end{bmatrix} \phi_m(t).
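A short check of this special case (our own sketch in Python): take f(t) = t on [0, 1), whose block pulse coefficients (18) are the subinterval midpoints, and compare \xi^T F_1 \phi_m(t) with the exact integral t²/2 at those midpoints:

```python
import numpy as np

m, b = 32, 1.0
# F_1 = (1/(2m)) * upper-triangular Toeplitz with first row 1, 2, 2, ..., 2
F1 = np.triu(np.full((m, m), 2.0))
np.fill_diagonal(F1, 1.0)
F1 /= 2 * m

i = np.arange(m)
mid = (i + 0.5) * (b / m)      # subinterval midpoints
xi = mid                       # block pulse coefficients of f(t) = t, Eq. (18)
int_coeffs = xi @ F1           # coefficients of the integral, should be ~ t^2/2
err = np.max(np.abs(int_coeffs - mid ** 2 / 2))
print(err)                     # small: 1/(8 m^2) at the midpoints
```

A little algebra shows the discrepancy at the midpoints is exactly 1/(8m²), so it shrinks quadratically as the number m of block pulses grows.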

The generalized block pulse operational matrices D_a for differentiation can be derived simply by inverting the F_a matrices. Let

    D_a = F_a^{-1} = \left( \frac{m}{b} \right)^a \Gamma(a+2)
    \begin{bmatrix}
    1 & \xi_2 & \xi_3 & \cdots & \xi_m \\
    0 & 1 & \xi_2 & \cdots & \xi_{m-1} \\
    \vdots & & \ddots & \ddots & \vdots \\
    0 & 0 & 0 & \cdots & 1
    \end{bmatrix}^{-1}
    = \left( \frac{m}{b} \right)^a \Gamma(a+2)
    \begin{bmatrix}
    d_1 & d_2 & d_3 & \cdots & d_m \\
    0 & d_1 & d_2 & \cdots & d_{m-1} \\
    \vdots & & \ddots & \ddots & \vdots \\
    0 & 0 & 0 & \cdots & d_1
    \end{bmatrix}.    (38)

Thus

    \begin{bmatrix}
    1 & \xi_2 & \xi_3 & \cdots & \xi_m \\
    0 & 1 & \xi_2 & \cdots & \xi_{m-1} \\
    \vdots & & \ddots & \ddots & \vdots \\
    0 & 0 & 0 & \cdots & 1
    \end{bmatrix}
    \begin{bmatrix}
    d_1 & d_2 & d_3 & \cdots & d_m \\
    0 & d_1 & d_2 & \cdots & d_{m-1} \\
    \vdots & & \ddots & \ddots & \vdots \\
    0 & 0 & 0 & \cdots & d_1
    \end{bmatrix} = I_m,    (39)

where I_m is the identity matrix of order m × m. Equating both sides of Eq. (39), we get

    d_1 = 1, \quad d_2 = -\xi_2 d_1, \quad \ldots, \quad d_m = -\sum_{i=2}^{m} \xi_i d_{m-i+1}.    (40)

Note that if a = 0, then F_0 = D_0 = I_m, which corresponds to s^0 = 1 in the Laplace domain. It is obvious that the F_a and D_a matrices act as s^{-a} and s^{a} in the Laplace domain, i.e., as fractional (operational) integrators and differentiators in the time domain.

For example, let a = 0.5, m = 4, b = 1. The operational matrices F_{0.5} and D_{0.5}, which correspond to s^{-0.5} and s^{0.5}, are computed below:

    \xi_1 = 1, \quad \xi_2 = 2\sqrt{2} - 2 = 0.8284, \quad \xi_3 = 3\sqrt{3} - 4\sqrt{2} + 1 = 0.5393, \quad \xi_4 = 8 - 6\sqrt{3} + 2\sqrt{2} = 0.4361,
    d_1 = 1, \quad d_2 = -0.8284, \quad d_3 = 0.1470, \quad d_4 = -0.1111, \quad \Gamma(2.5) = 0.75\,\Gamma(0.5) = 0.75\sqrt{\pi} = 1.3293,

and


    F_{0.5} = \begin{bmatrix}
    0.3761 & 0.3116 & 0.2028 & 0.1640 \\
    0 & 0.3761 & 0.3116 & 0.2028 \\
    0 & 0 & 0.3761 & 0.3116 \\
    0 & 0 & 0 & 0.3761
    \end{bmatrix},

    D_{0.5} = \begin{bmatrix}
    2.6587 & -2.2025 & 0.3908 & -0.2954 \\
    0 & 2.6587 & -2.2025 & 0.3908 \\
    0 & 0 & 2.6587 & -2.2025 \\
    0 & 0 & 0 & 2.6587
    \end{bmatrix}.
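The numbers above are easy to regenerate. The sketch below (Python; the function name `block_pulse_Fa` is ours) builds F_a from Eq. (37) and obtains D_a by matrix inversion, as in Eq. (38):

```python
import numpy as np
from math import gamma

def block_pulse_Fa(a, m, b):
    """Generalized block pulse operational matrix F_a of Eq. (37):
    (b/m)^a / Gamma(a+2) times the upper-triangular Toeplitz matrix
    with first row 1, xi_2, ..., xi_m, where
    xi_p = p^(a+1) - 2(p-1)^(a+1) + (p-2)^(a+1)."""
    xi = np.ones(m)
    for p in range(2, m + 1):
        xi[p - 1] = p ** (a + 1) - 2 * (p - 1) ** (a + 1) + (p - 2) ** (a + 1)
    F = np.zeros((m, m))
    for i in range(m):
        F[i, i:] = xi[: m - i]
    return (b / m) ** a / gamma(a + 2) * F

F05 = block_pulse_Fa(0.5, 4, 1.0)
D05 = np.linalg.inv(F05)        # D_a = F_a^{-1}, Eq. (38)
print(np.round(F05, 4))
print(np.round(D05, 4))
```

Running this reproduces the matrices above, e.g. the diagonal entry 0.3761 of F_{0.5} and the entry -2.2025 of D_{0.5}; since the inverse of an upper-triangular Toeplitz matrix is again upper-triangular Toeplitz, the recursion (40) and the direct inversion agree.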

3. Convolution product and Riemann–Liouville fractional integral of matrices

Definition 1. Let A(t) = [f_{ij}(t)] and B(t) = [g_{ij}(t)] be m × m absolutely integrable matrices on [0, b). The convolution and Kronecker convolution products are the matrix functions defined by

(i) Convolution product:

    A(t) * B(t) = [h_{ij}(t)], \quad h_{ij}(t) = \sum_{k=1}^{m} \int_0^t f_{ik}(t - t_1) g_{kj}(t_1)\,dt_1 = \sum_{k=1}^{m} f_{ik}(t) * g_{kj}(t).    (41)

(ii) Kronecker convolution product:

    A(t) ~ B(t) = [f_{ij}(t) * B(t)]_{ij}.    (42)

Here, f_{ij}(t) * B(t) is the ijth submatrix of order m × m, and A(t) ~ B(t) is of order m² × m².

Definition 2. Let A(t) = \left[ \frac{1}{\Gamma(a_{ij})} t^{a_{ij}-1} \right] and B(t) = [g_{ij}(t)] be m × m absolutely integrable matrices on [0, b). The Riemann–Liouville fractional integral of A(t) and B(t) is the matrix function defined by

    (I^{a_{ij}} B)(t) = A(t) * B(t), \quad a_{ij} > 0.    (43)

Definition 3. Let A(t) = [f_{ij}(t)] be an m × m absolutely integrable matrix on [0, b). The determinant, inverse and k-power of A(t) with respect to the convolution are defined by

(i) Determinant:

    det A(t) = \sum_{j=1}^{m} (-1)^{j+1} f_{1j}(t) * D_{1j}(t),    (44)

where D_{ij}(t) is the determinant of the (m-1) × (m-1) matrix function obtained from A(t) by deleting row i and column j of A(t). We call D_{ij}(t) the minor of A(t) corresponding to the entry f_{ij}(t) of A(t). For example, if A(t) = [f_{ij}(t)] is a 2 × 2 absolutely integrable matrix on [0, b), then

    det A(t) = f_{11}(t) * f_{22}(t) - f_{12}(t) * f_{21}(t).

(ii) Inversion:

    A^{{-1}}(t) = [q_{ij}(t)] = (det A(t))^{{-1}} * adj A(t).    (45)

Here (det A(t))^{{-1}} exists and satisfies (det A(t))^{{-1}} * det A(t) = δ(t), and note that A^{{-1}}(t) * A(t) = A(t) * A^{{-1}}(t) = Q_m(t), where Q_m(t) = I_m δ(t) is the Dirac identity matrix and δ(t) is the Dirac delta function.

(iii) k-power convolution product:

    A^{{k}}(t) = [f_{ij}^{{k}}(t)], \quad f_{ij}^{{k}}(t) = \sum_{r=1}^{m} f_{ir}(t) * f_{rj}^{{k-1}}(t).    (46)


Theorem 4. Let A(t), B(t) and C(t) be n × n absolutely integrable matrices on [0, b). Then for any scalars a and b:

    (i) [aA(t) + bB(t)] * C(t) = aA(t) * C(t) + bB(t) * C(t);    (47)
    (ii) (A(t) * B(t)) * C(t) = A(t) * (B(t) * C(t));    (48)
    (iii) A(t) * Q_n(t) = Q_n(t) * A(t) = A(t), where Q_n(t) = I_n δ(t);    (49)
    (iv) [A(t) * B(t)]^T = B^T(t) * A^T(t);    (50)
    (v) [A(t) * B(t)]^{{-1}} = B^{{-1}}(t) * A^{{-1}}(t), if A^{{-1}}(t) and B^{{-1}}(t) exist.    (51)

Proof. Straightforward from the definition of the convolution product of matrices. □

Lemma 5. Let A(t) = [f_{ij}(t)] and B(t) = [g_{ij}(t)] be m × m semi-Markov kernel matrices on [0, b). Then for fixed t,

    (A(t) * B(t))(i, j) \le (A(t)B(t))(i, j).    (52)

Proof. Since t ∈ [0, b) is fixed,

    (A(t) * B(t))(i, j) = \sum_{k=1}^{m} f_{ik}(t) * g_{kj}(t) \le \sum_{k=1}^{m} f_{ik}(t) g_{kj}(t) = (A(t)B(t))(i, j). □

Theorem 6. Let A(t) = [f_{ij}(t)] and B(t) = [g_{ij}(t)] be m × m semi-Markov kernel matrices on [0, b). Then for any matrix norm ||·|| and positive integer k,

    \|A(t) * B(t)\| \le \|A(t)\| \|B(t)\|,    (53)
    \|A^{{k}}(t)\| \le \|A(t)\|^k.    (54)

Proof. Since ||·|| is a matrix norm, by Eq. (52) we have

    \|A(t) * B(t)\| \le \|A(t)B(t)\| \le \|A(t)\| \|B(t)\|.

The other inequality follows by induction on k using Eq. (53):

    \|A^{{k}}(t)\| = \|A^{{k-1}}(t) * A(t)\| \le \|A^{{k-1}}(t)\| \|A(t)\| \le \cdots \le \|A(t)\|^k. □

Theorem 7. Let A(t) = [f_{ij}(t)] and B(t) = [g_{ij}(t)] be m × m Laplace transformable matrices. Then

    (i) £\{A(t) * B(t)\} = £\{A(t)\}\, £\{B(t)\};    (55)
    (ii) £\{A(t) ~ B(t)\} = £\{A(t)\} ⊗ £\{B(t)\}.    (56)

Proof. (i) Straightforward from the convolution theorem. (ii) Suppose that £\{f_{ij}(t)\} = F_{ij}(s) and £\{B(t)\} = G(s). Then by part (i) we have

    £\{A(t) ~ B(t)\} = £[f_{ij}(t) * B(t)]_{ij} = [F_{ij}(s) G(s)]_{ij} = £\{A(t)\} ⊗ £\{B(t)\}. □

Theorem 8. Let A(t), B(t), C(t) and D(t) be n × n absolutely integrable matrices on [0, b). Then

    [A(t) ~ B(t)] * [C(t) ~ D(t)] = [A(t) * C(t)] ~ [B(t) * D(t)].    (57)


Proof. Writing C(t) = [h_{ij}(t)], the (i, j)th block of [A(t) ~ B(t)] * [C(t) ~ D(t)] is obtained by taking the convolution product of the ith row block of A(t) ~ B(t) and the jth column block of C(t) ~ D(t), i.e.,

    [f_{i1}(t) * B(t) \;\; \cdots \;\; f_{in}(t) * B(t)] * \begin{bmatrix} h_{1j}(t) * D(t) \\ \vdots \\ h_{nj}(t) * D(t) \end{bmatrix} = \sum_{k=1}^{n} f_{ik}(t) * h_{kj}(t) * B(t) * D(t),

while the (i, j)th block of the right-hand side [A(t) * C(t)] ~ [B(t) * D(t)] is, by the definition of the Kronecker convolution product, x_{ij}(t) * (B(t) * D(t)), where x_{ij}(t) is the (i, j)th element of A(t) * C(t). But by the rule of the convolution product,

    x_{ij}(t) = \sum_{k=1}^{n} f_{ik}(t) * h_{kj}(t). □
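Theorem 8 can be checked numerically by replacing each matrix function with its samples on a uniform grid and each convolution with a truncated discrete convolution (a sketch of our own; the helpers `conv`, `mat_conv` and `kron_conv` are not from the paper):

```python
import numpy as np

def conv(f, g, dt):
    """Discrete approximation of the scalar convolution (f*g)(t)."""
    return np.convolve(f, g)[: len(f)] * dt

def mat_conv(A, B, dt):
    """Convolution product (41) of matrix functions, arrays (m, m, N)."""
    m, _, N = A.shape
    C = np.zeros((m, m, N))
    for i in range(m):
        for j in range(m):
            for k in range(m):
                C[i, j] += conv(A[i, k], B[k, j], dt)
    return C

def kron_conv(A, B, dt):
    """Kronecker convolution product (42), result (m^2, m^2, N)."""
    m, _, N = A.shape
    K = np.zeros((m * m, m * m, N))
    for i in range(m):
        for j in range(m):
            for p in range(m):
                for q in range(m):
                    K[i * m + p, j * m + q] = conv(A[i, j], B[p, q], dt)
    return K

rng = np.random.default_rng(0)
N, dt, m = 64, 0.01, 2
A, B, C, D = (rng.standard_normal((m, m, N)) for _ in range(4))
lhs = mat_conv(kron_conv(A, B, dt), kron_conv(C, D, dt), dt)
rhs = kron_conv(mat_conv(A, C, dt), mat_conv(B, D, dt), dt)
print(np.max(np.abs(lhs - rhs)))
```

Truncated discrete convolution is exactly multiplication in a commutative ring of truncated series, so the two sides agree to machine precision, mirroring the algebraic proof above.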

Lemma 9. Let u(t) and v(t) be m-vectors. Then

    Vec(u(t) * v^T(t)) = v(t) ~ u(t).    (58)

Proof. Straightforward from the definitions of the convolution and Kronecker convolution products. □

Theorem 10. Let A(t), X(t) and B(t) be m × m absolutely integrable matrices on [0, b). Then

    (i) Vec[A(t) * X(t) * B(t)] = [B^T(t) ~ A(t)] * Vec X(t);    (59)
    (ii) Vec[A(t) X(t) B(t)] = [B^T(t) ⊗ A(t)] Vec X(t).    (60)

Proof. Let X(t) consist of the vectors x_1(t), \ldots, x_m(t), each of order m × 1, and let e_j(t) be the m × 1 vector of zeros except for δ(t) in the jth position, for each j = 1, 2, \ldots, m; then X(t) = \sum_{j=1}^{m} x_j(t) * e_j^T(t). Now, since A(t) * x_j(t) and B^T(t) * e_j(t) are vectors of order m × 1, we have

    Vec\{A(t) * X(t) * B(t)\} = Vec\left\{ \sum_{j=1}^{m} A(t) * x_j(t) * e_j^T(t) * B(t) \right\}
        = \sum_{j=1}^{m} Vec\{ (A(t) * x_j(t)) * (B^T(t) * e_j(t))^T \}
        = \sum_{j=1}^{m} \{ [B^T(t) * e_j(t)] ~ (A(t) * x_j(t)) \}
        = \sum_{j=1}^{m} \{ [B^T(t) ~ A(t)] * (e_j(t) ~ x_j(t)) \}
        = [B^T(t) ~ A(t)] * \sum_{j=1}^{m} Vec(x_j(t) * e_j^T(t))
        = [B^T(t) ~ A(t)] * Vec X(t).

(ii) is proved similarly. □

4. Some applications

4.1. Solution of the state-space equations and the optimal control problem using Taylor series and the Kronecker product

The state-space equation is given by

    x'(t) = A x(t) + B u(t), \quad x(0) = x_0, \quad 0 \le t \le b,    (61)


where x(t) ∈ R^n and u(t) ∈ R^m are analytic and A, B are known constant matrices. The input vector u(t) may be expanded in Taylor series as

    u(t) = \begin{bmatrix} u_1(t) \\ u_2(t) \\ \vdots \\ u_m(t) \end{bmatrix}
         = \begin{bmatrix} h_{10} & h_{11} & \cdots & h_{1,r-1} \\ h_{20} & h_{21} & \cdots & h_{2,r-1} \\ \vdots & & & \vdots \\ h_{m0} & h_{m1} & \cdots & h_{m,r-1} \end{bmatrix}
           \begin{bmatrix} u_0(t) \\ u_1(t) \\ \vdots \\ u_{r-1}(t) \end{bmatrix} = H u(t),    (62)

where H is a known constant matrix and u_n(t) = t^n are the basis functions of Section 2.1. Similarly, the state vector x(t) may be expanded as

    x(t) = \begin{bmatrix} x_1(t) \\ \vdots \\ x_n(t) \end{bmatrix}
         = \begin{bmatrix} f_{10} & \cdots & f_{1,r-1} \\ \vdots & & \vdots \\ f_{n0} & \cdots & f_{n,r-1} \end{bmatrix}
           \begin{bmatrix} u_0(t) \\ \vdots \\ u_{r-1}(t) \end{bmatrix} = [f_0, f_1, \ldots, f_{r-1}]\, u(t) = F u(t),    (63)

where F contains the unknown coefficients of the Taylor expansion of the state vector x(t). Next, integrating the state-space equation (61), we have

    x(t) - x(0) = A \int_0^t x(\tau)\,d\tau + B \int_0^t u(\tau)\,d\tau.    (64)

Introducing Eqs. (62) and (63) into (64) and using the integration property of the operational matrix P expressed by (15), we obtain

    (F - AFP)\, u(t) = (BHP + E)\, u(t),    (65)

where E = [x_0 \; 0 \; 0 \cdots 0]. In Eq. (65) the matrix BHP + E is known; therefore we have the following system for the unknown matrix F:

    F - AFP = D, \quad D = BHP + E.    (66)

This system may be written in a simpler form by using the Kronecker product; indeed,

    Mf = (I_{nr} - P^T \otimes A)\, f = d,    (67)

where M = I_{nr} - P^T ⊗ A is an nr × nr matrix called the Kronecker operational matrix, I_{nr} is the identity matrix of order nr × nr, and

    f = Vec\, F = \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_{r-1} \end{bmatrix}, \quad d = Vec\, D = \begin{bmatrix} d_0 \\ d_1 \\ \vdots \\ d_{r-1} \end{bmatrix},

i.e., f_i and d_i are the ith columns of F and D, respectively. The solution of Eq. (67) is f = M^{-1} d = (I_{nr} - P^T ⊗ A)^{-1} d.


The main advantage of this approach is that the matrix M = I_{nr} - P^T ⊗ A, due to the form of the operational matrix P, is easily proven to be block lower triangular. Therefore one does not have to invert the matrix M to find the solution.

Example 11. Consider the state-space equation

    x'(t) = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t),

where x(0) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, u(t) = 2t + 3, 0 \le t \le b. If we choose r = 4 terms, then H = [3, 2, 0, 0] and u(t) = [u_0(t), u_1(t), u_2(t), u_3(t)]^T = [1, t, t^2, t^3]^T. Now

    D = BHP + E = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 3 & 1 & 0 \end{bmatrix}, \quad d = Vec\, D = [0, 1, 0, 3, 0, 1, 0, 0]^T,

    M = I_8 - P^T \otimes A = \begin{bmatrix} I_2 & 0 & 0 & 0 \\ -A & I_2 & 0 & 0 \\ 0 & -\tfrac{1}{2}A & I_2 & 0 \\ 0 & 0 & -\tfrac{1}{3}A & I_2 \end{bmatrix}.

Now f = M^{-1} d = [0, 1, 1, 0, 0, 0, 0, 0]^T. Therefore

    x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = [f_0, f_1, f_2, f_3]\, u(t) = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} u(t) = \begin{bmatrix} t \\ 1 \end{bmatrix}.
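Example 11 can be reproduced in a few lines (our own sketch; `numpy.kron` supplies the Kronecker operational matrix and column-major flattening plays the role of the Vec operator):

```python
import numpy as np

# x'(t) = A x + B u, x(0) = (0, 1)^T, u(t) = 2t + 3, with r = 4 Taylor
# terms, solved via M f = d, M = I - P^T (kron) A, as in Eq. (67).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
H = np.array([[3.0, 2.0, 0.0, 0.0]])    # Taylor coefficients of u(t)
x0 = np.array([[0.0], [1.0]])
n, r = 2, 4

P = np.zeros((r, r))                    # operational matrix, Eq. (10)
for k in range(r - 1):
    P[k, k + 1] = 1.0 / (k + 1)

E = np.zeros((n, r))
E[:, :1] = x0
D = B @ H @ P + E                       # Eq. (66)
M = np.eye(n * r) - np.kron(P.T, A)     # Kronecker operational matrix
f = np.linalg.solve(M, D.flatten(order="F"))   # Vec stacks columns
F = f.reshape((n, r), order="F")
print(F)   # Taylor coefficients of x(t); expect x(t) = (t, 1)^T
```

Since M is block lower triangular, `solve` amounts to block forward substitution, matching the remark above that no explicit inversion is needed.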

The optimal control problem for the system in Eq. (61), with the performance index

    Y = \frac{1}{2} \int_0^b [x^T Q x + u^T R u]\,d\tau,    (68)

has the well-known solution

    u = -R^{-1} B^T \lambda(t),    (69)

where \lambda(t) satisfies the canonical equation

    \begin{bmatrix} x'(t) \\ \lambda'(t) \end{bmatrix} = \begin{bmatrix} A & -BR^{-1}B^T \\ -Q & -A^T \end{bmatrix} \begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix},    (70)

with the boundary conditions x(0) = x_0 and \lambda(b) = 0. To simplify the solution of the system (70), we change the independent variable t to s = b - t. We now have

    \begin{bmatrix} x'(s) \\ \lambda'(s) \end{bmatrix} = -N \begin{bmatrix} x(s) \\ \lambda(s) \end{bmatrix}, \quad N = \begin{bmatrix} A & -BR^{-1}B^T \\ -Q & -A^T \end{bmatrix},    (71)

with the boundary conditions x(t = 0) = x(s = b) = x_0 and \lambda(t = b) = \lambda(s = 0) = 0. Integration of Eq. (71) from 0 to s yields

    \begin{bmatrix} x(s) \\ \lambda(s) \end{bmatrix} - \begin{bmatrix} x(0) \\ \lambda(0) \end{bmatrix} = -N \begin{bmatrix} \int_0^s x(\tau)\,d\tau \\ \int_0^s \lambda(\tau)\,d\tau \end{bmatrix}.    (72)


If we assume that the functions x(s) and \lambda(s) are analytic, we may expand them in Taylor series:

    \begin{bmatrix} x(s) \\ \lambda(s) \end{bmatrix} = W u(s).    (73)

The matrix W is the 2n × r Taylor coefficient matrix that must be determined. With the use of Eq. (73), Eq. (72) can be written as

    W u(s) + N W P u(s) = S u(s),    (74)

where P is the operational matrix expressed by (15). By equating the corresponding terms of the Taylor series, we obtain the matrix equation

    W + NWP = S.    (75)

Applying the Vec-notation and the Kronecker product, we have

    Vec\, W + Vec(NWP) = Vec\, W + (P^T \otimes N)\, Vec\, W = Vec\, S.    (76)

Now, Eq. (76) may be written as

    Tw = s, \quad T = I + P^T \otimes N,    (77)

where T is a 2nr × 2nr square matrix. Also,

    S = \begin{bmatrix} x(s = 0) & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \end{bmatrix}

and

    w = \begin{bmatrix} w_0 \\ w_1 \\ \vdots \\ w_{r-1} \end{bmatrix}, \quad s = \begin{bmatrix} s_0 \\ s_1 \\ \vdots \\ s_{r-1} \end{bmatrix}

are the vectors formed by the columns w_i, s_i of the matrices W and S, respectively. Due to the structure of the operational matrix of integration P, the matrix T, formed from the Kronecker product of P^T and N, is easily proved to be block lower triangular. This fact greatly simplifies the solution of the system in Eq. (77), which in general has 2nr equations.

4.2. Solution of the general coupled convolution matrix differential equations using Kronecker products

The general coupled convolution matrix differential equations are given by

    X_1'(t) = A_{11}(t) * X_1(t) * B_{11}(t) + A_{12}(t) * X_2(t) * B_{12}(t) + \cdots + A_{1p}(t) * X_p(t) * B_{1p}(t),
    X_2'(t) = A_{21}(t) * X_1(t) * B_{21}(t) + A_{22}(t) * X_2(t) * B_{22}(t) + \cdots + A_{2p}(t) * X_p(t) * B_{2p}(t),
    \vdots
    X_p'(t) = A_{p1}(t) * X_1(t) * B_{p1}(t) + A_{p2}(t) * X_2(t) * B_{p2}(t) + \cdots + A_{pp}(t) * X_p(t) * B_{pp}(t),    (78)

where the A_{ij}(t), B_{ij}(t) are given m × m matrices whose entries belong to a restricted set of functions on [0, b) that is closed under convolution and isomorphic, via the Laplace transform, to the proper rational functions with their natural multiplication; the X_i(t) are unknown matrix functions to be solved for, with X_i(0) = C_i (i, j = 1, 2, \ldots, p). If we apply the Vec-notation and the Kronecker convolution product, as in Eq. (59), to the system (78), we have

    \begin{bmatrix} Vec\, X_1'(t) \\ Vec\, X_2'(t) \\ \vdots \\ Vec\, X_p'(t) \end{bmatrix} =
    \begin{bmatrix}
    B_{11}^T(t) ~ A_{11}(t) & B_{12}^T(t) ~ A_{12}(t) & \cdots & B_{1p}^T(t) ~ A_{1p}(t) \\
    B_{21}^T(t) ~ A_{21}(t) & B_{22}^T(t) ~ A_{22}(t) & \cdots & B_{2p}^T(t) ~ A_{2p}(t) \\
    \vdots & \vdots & \ddots & \vdots \\
    B_{p1}^T(t) ~ A_{p1}(t) & B_{p2}^T(t) ~ A_{p2}(t) & \cdots & B_{pp}^T(t) ~ A_{pp}(t)
    \end{bmatrix} *
    \begin{bmatrix} Vec\, X_1(t) \\ Vec\, X_2(t) \\ \vdots \\ Vec\, X_p(t) \end{bmatrix}.    (79)


This system may be written in the simpler form

    x'(t) = H(t) * x(t), \quad x(0) = c,    (80)

where

    x'(t) = \begin{bmatrix} Vec\, X_1'(t) \\ \vdots \\ Vec\, X_p'(t) \end{bmatrix}, \quad x(t) = \begin{bmatrix} Vec\, X_1(t) \\ \vdots \\ Vec\, X_p(t) \end{bmatrix},    (81)

    H(t) = \begin{bmatrix}
    B_{11}^T(t) ~ A_{11}(t) & \cdots & B_{1p}^T(t) ~ A_{1p}(t) \\
    \vdots & \ddots & \vdots \\
    B_{p1}^T(t) ~ A_{p1}(t) & \cdots & B_{pp}^T(t) ~ A_{pp}(t)
    \end{bmatrix}.    (82)

Now, applying the Laplace transform to Eq. (80), we have

    £\{x'(t)\} = £\{H(t) * x(t)\}.    (83)

Let Y(s) = £\{x(t)\} and G(s) = £\{H(t)\}; then

    Y(s) = \{sI - G(s)\}^{-1} c,    (84)

provided sI - G(s) is invertible. Taking the inverse Laplace transform of Eq. (84) yields

    x(t) = £^{-1}\left\{ \left[ s\left( I - \frac{G(s)}{s} \right) \right]^{-1} c \right\}
         = £^{-1}\left\{ \frac{1}{s} \left( I - \frac{G(s)}{s} \right)^{-1} c \right\}
         = I * (I - I * H(t))^{{-1}} * c,    (85)

where I is an identity scalar matrix (so that convolution with I acts as integration, corresponding to 1/s). Suppose that Q(t) = I * H(t); then (I - I * H(t))^{{-1}} = (I - Q(t))^{{-1}} can be obtained in two ways: either by a truncated series development or by explicit inversion within the convolution algebra described above. In the first case we have

    (I - Q(t))^{{-1}} = I + \sum_{k=1}^{n} Q^{{k}}(t) + R_n(t).    (86)

Here,

    Q_{ij}^{{n+1}}(t) = \sum_{k} \int_0^t Q_{ik}^{{n}}(u)\, Q_{kj}(t - u)\,du    (87)

and Q^{{1}}(t) = Q(t). The total error is given by

    R_n(t) = \sum_{k=n+1}^{\infty} Q^{{k}}(t) = Q^{{n+1}}(t) * (I + Q(t) + Q^{{2}}(t) + \cdots) = Q^{{n+1}}(t) * (I - Q(t))^{{-1}}.    (88)

Now, for any matrix norm satisfying

    \|Q^{{n}}(t)\| \le \|Q(t)\|^n,    (89)

the total error due to the above truncation is also bounded: if \|Q(t)\| < 1, then

    \|R_n(t)\| \le \frac{\|Q(t)\|^{n+1}}{1 - \|Q(t)\|}.    (90)


Thus, for a given maximum error, say ε, it suffices to take

    n + 1 \ge \frac{\ln\{\varepsilon (1 - \|Q(t)\|)\}}{\ln \|Q(t)\|}.    (91)

In the direct inversion method, we calculate (I - Q(t))^{{-1}} directly:

    (I - Q(t))^{{-1}} = (\det(I - Q(t)))^{{-1}} * \operatorname{adj}(I - Q(t)).    (92)
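The bound (90) is the familiar Neumann series estimate. The sketch below illustrates it in the ordinary matrix algebra (our own simplification: ordinary matrix powers in place of convolution powers, which obey the same submultiplicative inequality (89)):

```python
import numpy as np

# (I - Q)^{-1} = I + Q + Q^2 + ..., with remainder after the Q^n term
# bounded by ||Q||^{n+1} / (1 - ||Q||) whenever ||Q|| < 1, as in (90).
rng = np.random.default_rng(1)
Q = rng.standard_normal((4, 4))
Q *= 0.4 / np.linalg.norm(Q, 2)        # scale so that ||Q||_2 = 0.4 < 1

exact = np.linalg.inv(np.eye(4) - Q)
n = 6
partial = sum(np.linalg.matrix_power(Q, k) for k in range(n + 1))
err = np.linalg.norm(exact - partial, 2)
bound = 0.4 ** (n + 1) / (1 - 0.4)     # the estimate (90)
print(err, bound)
```

The computed error sits safely below the a-priori bound, which is exactly the behaviour the stopping criterion (91) relies on.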

Example 12. The coupled matrix convolution equations

    A(t) * X(t) + Y(t) * B(t) = C(t),
    D(t) * X(t) + Y(t) * E(t) = F(t)

have a unique solution if and only if the 2mn × 2mn matrix

    H = \begin{bmatrix} Q_n(t) ~ A(t) & B^T(t) ~ Q_m(t) \\ Q_n(t) ~ D(t) & E^T(t) ~ Q_m(t) \end{bmatrix}

is non-singular; in this case the unique solution is given by

    \begin{bmatrix} Vec\, X(t) \\ Vec\, Y(t) \end{bmatrix} = H^{{-1}}(t) * \begin{bmatrix} Vec\, C(t) \\ Vec\, F(t) \end{bmatrix},

and the corresponding homogeneous matrix convolution equations

    A(t) * X(t) + Y(t) * B(t) = 0,
    D(t) * X(t) + Y(t) * E(t) = 0

have only the trivial solution X(t) = Y(t) = 0. Indeed, if we use the Vec-notation and the Kronecker convolution product, we have

    \begin{bmatrix} Q_n(t) ~ A(t) & B^T(t) ~ Q_m(t) \\ Q_n(t) ~ D(t) & E^T(t) ~ Q_m(t) \end{bmatrix} * \begin{bmatrix} Vec\, X(t) \\ Vec\, Y(t) \end{bmatrix} = \begin{bmatrix} Vec\, C(t) \\ Vec\, F(t) \end{bmatrix},

and this system has a unique solution if and only if the matrix H is non-singular.

4.3. Inverting rational and irrational transfer functions

We will use the generalized block pulse operational matrices F_a and D_a defined in Eqs. (37) and (38) to obtain the time functions of rational and irrational transfer functions. The irrational transfer functions are derived from distributed systems; see [3]. Consider any transfer function G(s) whose time function is denoted by g(t). Let

    G(s) = \frac{1}{s} Z(s),    (93)

where 1/s is the Laplace transform of the unit step function. The meaning of (93) is that g(t) can be derived by applying a unit step function to a new system whose transfer function is Z(s) = sG(s). Given the rational transfer function

    G(s) = \frac{b_1 s^{n-1} + b_2 s^{n-2} + \cdots + b_{n-1} s + b_n}{s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n},    (94)

then

    Z(s) = \frac{b_1 s^n + b_2 s^{n-1} + \cdots + b_{n-1} s^2 + b_n s}{s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n} = g_1 + g_2 s^{-1} + g_3 s^{-2} + \cdots + g_{k+1} s^{-k} + \cdots,    (95)

where the g_i can be derived by cross-multiplication in (95), which yields the linear recursive equations:

g 2 ¼ b 2  a1 g 1 ;

g n ¼ bn 

n1 X

ani gi ;

i¼1

gnþ1 ¼ 

n X i¼1

anþ1i gi ;

gnþr ¼ 

nþr1 X i¼1

anþri gi :

ð96Þ
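The recursion (96) is immediate to implement; the sketch below (the function name is ours) reproduces the coefficients quoted in Example 13 below:

```python
def maclaurin_coeffs(b, a, nterms):
    """g_1, g_2, ... in Z(s) = g_1 + g_2 s^{-1} + ..., via the
    cross-multiplication recursion of Eq. (96); a = [a_1, ..., a_n],
    b = [b_1, ..., b_n] (coefficients beyond their length are 0)."""
    g = []
    for k in range(nterms):            # k is the 0-based index of g_{k+1}
        val = b[k] if k < len(b) else 0
        for i in range(k):             # subtract a_{k-i} * g_{i+1}
            j = k - i
            if j <= len(a):
                val -= a[j - 1] * g[i]
        g.append(val)
    return g

# Example 13: Z(s) = s G(s) has numerator s^5 + 3s^4 + 2s^3 + 5s^2 + s
coeffs = maclaurin_coeffs([1, 3, 2, 5, 1], [5, 3, 7, 1, 1], 5)
print(coeffs)   # [1, -2, 9, -41, 192]
```

The alternating growth of the coefficients shows why only a few terms of the series, paired with the strongly contracting F_a matrices, are needed in practice.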


Eq. (95) is the expansion of Z(s) in powers of s^{-1}. For irrational transfer functions G(s), Z(s) can always be expanded in such a series, as the following examples show. After associating each negative power s^{-a} with F_a and each positive power s^{a} with D_a (a > 0), the approximation of g(t) via the block pulse functions can be immediately derived; see [3,10].

Example 13. Given the rational transfer function

    G(s) = \frac{s^4 + 3s^3 + 2s^2 + 5s + 1}{s^5 + 5s^4 + 3s^3 + 7s^2 + s + 1} = \frac{1}{s}\left( 1 - 2s^{-1} + 9s^{-2} - 41s^{-3} + 192s^{-4} - \cdots \right),

we get

    g(t) \simeq [1, 1, \ldots, 1]\left( I - 2F_1 + 9F_2 - 41F_3 + 192F_4 - \cdots \right) \phi_m(t).

For b = 5, m = 10,

    g(t) \simeq [0.7325, 0.5878, 0.6187, 0.6847, 0.7374, 0.7396, 0.6679, 0.5185, 0.3081, 0.0681]\, \phi_{10}(t).

Example 14. The Laplace transform of the Bessel function of the first kind and zero order, J_0(t), is

    G(s) = \frac{1}{\sqrt{s^2 + 1}} = \frac{1}{s\sqrt{1 + s^{-2}}} = \frac{1}{s}\left( 1 - \frac{1}{2} s^{-2} + \frac{3}{8} s^{-4} - \frac{5}{16} s^{-6} + \frac{35}{128} s^{-8} - \cdots \right),

    g(t) \simeq [1, 1, \ldots, 1]\left( I - \frac{1}{2} F_2 + \frac{3}{8} F_4 - \frac{5}{16} F_6 + \frac{35}{128} F_8 - \cdots \right) \phi_m(t).

For b = 8, m = 8,

    g(t) \simeq [0.9197, 0.5060, -0.0382, -0.3628, -0.3094, -0.0090, 0.2484, 0.2561]\, \phi_8(t).

Example 15. The Laplace transform of the generalized exponential function is

    G(s) = \frac{1}{\sqrt{s}(\sqrt{s} + 1)} = \frac{1}{s}\left( 1 + s^{-0.5} \right)^{-1} = \frac{1}{s}\left( 1 - s^{-0.5} + s^{-1} - s^{-1.5} + s^{-2} - s^{-2.5} + \cdots \right),

    g(t) \simeq [1, 1, \ldots, 1]\left( I - F_{0.5} + F_1 - F_{1.5} + F_2 - F_{2.5} + \cdots \right) \phi_m(t).

For b = 2.4, m = 4,

    g(t) \simeq [0.6201, 0.4448, 0.3742, 0.3306]\, \phi_4(t).

Example 16. The Laplace transform of the following irrational transfer function is

    G(s) = \frac{1}{\sqrt{s}}\, e^{-\sqrt{s}} = \frac{1}{s}\left( \sqrt{s}\, e^{-\sqrt{s}} \right) = \frac{1}{s}\left( s^{0.5} - s + \frac{1}{2} s^{1.5} - \frac{1}{6} s^2 + \frac{1}{24} s^{2.5} - \frac{1}{120} s^3 + \cdots \right),

    g(t) \simeq [1, 1, \ldots, 1]\left( D_{0.5} - D_1 + \frac{1}{2} D_{1.5} - \frac{1}{6} D_2 + \frac{1}{24} D_{2.5} - \frac{1}{120} D_3 + \cdots \right) \phi_m(t).

For b = 5, m = 5,

    g(t) \simeq [0.3782, 0.3425, 0.1824, 0.0840, 0.1413]\, \phi_5(t).

Acknowledgements

The present research has been partially supported by University Putra Malaysia (UPM) under Grant IRPA09-02-04-0259-EA001.

References

[1] Z. Al Zhour, A. Kilicman, Some applications of the convolution and Kronecker products of matrices, in: Proceedings of the International Conference on Mathematics 2005, UUM, Kedah, Malaysia, 2005, pp. 551–562.


[2] T. Chen, B.A. Francis, Optimal Sampled-Data Control Systems, Springer, London, 1995.
[3] C.F. Chen, Y.T. Tsay, T.T. Wu, Walsh operational matrices for fractional calculus and their applications to distributed systems, J. Franklin Inst. 303 (3) (1977) 267–284.
[4] A.E. Gilmour, Circulant Matrix Methods for the Numerical Solution of Partial Differential Equations by FFT Convolutions, New Zealand, 1987.
[5] K. Maleknejad, M. Shaherezaee, H. Khatami, Numerical solution of integral equations systems of second kind by block pulse functions, Appl. Math. Comput. 166 (2005) 15–24.
[6] S.G. Mouroutsos, P.D. Sparis, Taylor series approach to system identification, analysis and optimal control, J. Franklin Inst. 319 (3) (1985) 359–371.
[7] L. Nikolaos, Dependability analysis of semi-Markov systems, Reliab. Eng. Syst. Safety 55 (1997) 203–207.
[8] B. Ross, Fractional Calculus and its Applications, Springer-Verlag, Berlin, 1975.
[9] H. Sumita, The matrix Laguerre transform, Appl. Math. Comput. 15 (1984) 1–28.
[10] C.-H. Wang, On the generalization of block pulse operational matrices for fractional and operational calculus, J. Franklin Inst. 315 (2) (1983) 91–102.