Mathematical and Computer Modelling 47 (2008) 363–371 www.elsevier.com/locate/mcm

The representation and approximation for the weighted Minkowski inverse in Minkowski space

Adem Kılıçman a,∗, Zeyad Al Zhour b

a Department of Mathematics, University Malaysia Terengganu, 21030 UMT, Kuala Terengganu, Terengganu, Malaysia
b Department of Mathematics, Zarqa Private University (ZPU), P.O. Box 2000, Zarqa 1311, Jordan

Received 2 August 2006; accepted 19 March 2007

Abstract

This paper extends some results for the weighted Moore–Penrose inverse A⁺_{M,N} in Hilbert space to the so-called weighted Minkowski inverse A⊕_{M,N} of an arbitrary rectangular matrix A ∈ M_{m,n} in Minkowski space µ. Four methods are also used for approximating the weighted Minkowski inverse A⊕_{M,N}: the Borel summable, Euler–Knopp summable, Newton–Raphson and Tikhonov methods.
© 2008 Published by Elsevier Ltd

Keywords: Weighted Moore–Penrose inverse; Weighted Minkowski inverses; Group inverse; Matrix norm; Weighted SVD; Minkowski space; Convergence; Sequence; Weighted range symmetric matrix; Positive definite matrix

1. Introduction and preliminaries

Throughout, we consider matrices over the field of complex numbers C or real numbers R. The set of m-by-n complex matrices is denoted by M_{m,n}(C) = C^{m×n}; for simplicity we write M_{m,n} instead of M_{m,n}(C), and when m = n we write M_n instead of M_{n,n}. The notations A^T, A^*, A^+, rank(A), range(A), N(A), ‖A‖ and σ(A) stand, respectively, for the transpose, conjugate transpose, Moore–Penrose inverse, rank, range, null space, spectral norm and set of all eigenvalues of a matrix A.

Let us recall some concepts that will be used below. Let C^n be the space of complex n-tuples; we shall index the components of a complex vector in C^n from 0 to n−1, that is, u = (u_0, u_1, u_2, ..., u_{n−1}). Let G be the Minkowski metric tensor defined by

    Gu = (u_0, −u_1, −u_2, ..., −u_{n−1}).    (1.1)

Clearly, the Minkowski metric matrix is given by

    G = [1 0; 0 −I_{n−1}] ∈ M_n,  G^* = G,  G^2 = I_n.    (1.2)

∗ Corresponding author.
E-mail addresses: [email protected] (A. Kılıçman), [email protected], [email protected] (Z. Al Zhour).
doi:10.1016/j.mcm.2007.03.031
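As a small numerical sketch (not from the paper's text), the Minkowski metric matrix G of (1.2) and the Minkowski conjugate transpose Ã = G A^* G can be checked with numpy; the matrix A below and the helper names are illustrative choices:

```python
# Sketch: the Minkowski metric matrix G = diag(1, -1, ..., -1) of (1.2) and
# the Minkowski conjugate transpose A~ = G A* G (square case, M = N = I).
import numpy as np

def minkowski_metric(n):
    """G of order n: G* = G and G @ G = I_n."""
    return np.diag([1.0] + [-1.0] * (n - 1))

def mink_conj(A):
    """A~ = G A* G for a square matrix A."""
    G = minkowski_metric(A.shape[0])
    return G @ A.conj().T @ G

A = np.array([[2.0, 1.0], [3.0, 4.0]])
G = minkowski_metric(2)
assert np.allclose(G @ G, np.eye(2))          # G^2 = I_n

At = mink_conj(A)                             # here A~ = [2 -3; -1 4]
# A is mu-symmetric iff A = A~, equivalently iff G A is Hermitian:
print(np.allclose(A, At), np.allclose(G @ A, (G @ A).conj().T))  # False False
# sigma(A~) = sigma(A), as claimed in the text:
print(np.allclose(sorted(np.linalg.eigvals(A)), sorted(np.linalg.eigvals(At))))
```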


In [11,12], the Minkowski inner product on C^n is defined by (u, v) = [u, Gv], where [·,·] denotes the conventional Hilbert (unitary) space inner product. A space with Minkowski inner product is called a Minkowski space and denoted by µ. For A ∈ M_n and x, y ∈ C^n, using (1.2) we have

    (Ax, y) = [Ax, Gy] = [x, A^* G y] = [x, G(G A^* G)y] = [x, G Ã y] = (x, Ã y),    (1.3)

where Ã = G A^* G. The matrix Ã is called the Minkowski conjugate transpose of A in µ. Naturally, the matrix A ∈ M_n is called µ-symmetric in µ if A = Ã. From the definition Ã = G A^* G we have the following equivalences: A is µ-symmetric if and only if AG is Hermitian, if and only if GA is Hermitian. It is also easy to verify that G̃ = G = G^{−1} and σ(Ã) = σ(A). More generally, if A ∈ M_{m,n}, then Ã = G_1 A^* G_2, where G_1 and G_2 are Minkowski metric matrices of order n × n and m × m, respectively.

A matrix A ∈ M_{m,n} is said to be range symmetric in unitary space (equivalently, A is said to be EP) if N(A) = N(A^*) (or A A^+ = A^+ A). For further properties of EP matrices one may refer to [3,4,10,11].

The weighted Moore–Penrose inverse (WMPI) is viewed as a generalization of the Moore–Penrose inverse, and is widely used in control system analysis, statistics, singular differential and difference equations, Markov chains, iterative methods, weighted least-squares problems, perturbation theory, neural network problems and many other subjects found in the literature (see, e.g., [1,2,6–9,14,19]). The WMPI of a matrix A ∈ M_{m,n} with respect to the two positive definite matrices M ∈ M_m and N ∈ M_n is defined to be the unique solution X of the following four matrix equations (see, e.g., [15–17,20]):

    AXA = A,  XAX = X,  (MAX)^* = MAX,  (NXA)^* = NXA,    (1.4)

and is often denoted by X = A⁺_{M,N}. In particular, when M = I_m and N = I_n, the matrix satisfying (1.4) is called the Moore–Penrose inverse and is denoted by X = A^+. The usual weighted Euclidean norm is used to obtain the weighted matrix norm for A ∈ M_{m,n} with respect to the two positive definite matrices M ∈ M_m and N ∈ M_n as follows [18]:

    ‖A‖_{M,N} = ‖M^{1/2} A N^{−1/2}‖_2,    (1.5)
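The four equations (1.4) can be checked numerically via the standard reduction A⁺_{M,N} = N^{−1/2}(M^{1/2} A N^{−1/2})^+ M^{1/2}, which is consistent with (1.6)–(1.7) below though not stated explicitly here. A sketch with numpy; `sqrtm_pd` and the example matrices are illustrative choices:

```python
# Sketch: weighted Moore-Penrose inverse via the standard reduction
# A+_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})+ M^{1/2}, verifying (1.4).
import numpy as np

def sqrtm_pd(P):
    """Square root of a symmetric positive definite matrix via eigh."""
    w, V = np.linalg.eigh(P)
    return (V * np.sqrt(w)) @ V.T

def weighted_mp(A, M, N):
    Mh, Nh = sqrtm_pd(M), sqrtm_pd(N)
    B = Mh @ A @ np.linalg.inv(Nh)
    return np.linalg.inv(Nh) @ np.linalg.pinv(B) @ Mh

A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])   # 3 x 2, rank 2
M = np.diag([2.0, 1.0, 3.0])                          # positive definite weights
N = np.diag([1.0, 5.0])
X = weighted_mp(A, M, N)

assert np.allclose(A @ X @ A, A) and np.allclose(X @ A @ X, X)
assert np.allclose(M @ A @ X, (M @ A @ X).T)          # (MAX)* = MAX
assert np.allclose(N @ X @ A, (N @ X @ A).T)          # (NXA)* = NXA
```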

where ‖·‖_2 is the Frobenius norm. The WMPI A⁺_{M,N} can be explicitly expressed from the weighted singular value decomposition (WSVD) due to Van Loan [13]. For any rectangular matrix A ∈ M_{m,n} with rank(A) = r, there exist U ∈ M_m and V ∈ M_n satisfying U^* M U = I_m and V^* N^{−1} V = I_n such that

    A = U [D 0; 0 0] V^*.    (1.6)

Then A⁺_{M,N} can be represented as

    A⁺_{M,N} = N^{−1} V [D^{−1} 0; 0 0] U^* M,    (1.7)

where D = diag(δ_1, δ_2, ..., δ_r) ∈ M_r, δ_1 ≥ δ_2 ≥ ··· ≥ δ_r > 0, and δ_i² (i = 1, 2, ..., r) are the non-zero eigenvalues of N^{−1} A^* M A.

As for another important generalized inverse, the group inverse A_g of A ∈ M_n with closed range is the unique solution of the following equations [2]:

    A A_g A = A,  A_g A A_g = A_g,  A A_g = A_g A.    (1.8)

In this case, the matrix A satisfies range(A) = range(A²) (or N(A) = N(A²)), that is, index(A) = 1.

In this paper, we present a unified representation theorem for the so-called weighted Minkowski inverse A⊕_{M,N} of a matrix A ∈ M_{m,n} in Minkowski space µ. Specific expressions and computational procedures for A⊕_{M,N} in Minkowski space can be uniformly derived.


2. Representation of weighted Minkowski inverse in Minkowski space

The weighted Minkowski inverse of an arbitrary matrix A (including singular and rectangular), analogous to the weighted Moore–Penrose inverse of A, is defined as follows:

Definition 1. Let A ∈ M_{m,n} be a matrix in µ, and let M ∈ M_m and N ∈ M_n be positive definite matrices. Then A⊕_{M,N} is the weighted Minkowski inverse of A if

    A A⊕_{M,N} A = A,  A⊕_{M,N} A A⊕_{M,N} = A⊕_{M,N},

and M A A⊕_{M,N} and N A⊕_{M,N} A are µ-symmetric matrices. In particular, when M = I_m and N = I_n, A⊕_{M,N} reduces to the Minkowski inverse and is denoted by A⊕.

Definition 2. Let M ∈ M_m and N ∈ M_n be positive definite matrices. Given A ∈ M_{m,n} in µ, the weighted Minkowski conjugate transpose matrix A≈ of A is defined as

    A≈ = N^{−1} Ã M = N^{−1} G_1 A^* G_2 M ∈ M_{n,m},    (2.1)

where G_1 and G_2 are Minkowski metric matrices of order n × n and m × m, respectively. Obviously, A≈ satisfies the following nice properties: if A, B ∈ M_{m,n} and C ∈ M_{n,l}, then (A + B)≈ = A≈ + B≈, (AC)≈ = C≈ A≈, (A≈)≈ = A and (A≈)^* = (A^*)≈.

Definition 3. A matrix A ∈ M_{m,n} is said to be a weighted range symmetric matrix in µ if and only if

    N(A) = N(A≈),    (2.2)

or equivalently

    A⊕_{M,N} A = A A⊕_{M,N}.    (2.3)
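The algebraic properties listed after (2.1) can be checked numerically. A sketch with numpy in the square case with diagonal weights M = N (an assumption made here so that the weights commute with G); the example matrices are illustrative:

```python
# Sketch: the weighted Minkowski conjugate transpose (2.1),
# A~~ = N^{-1} G A* G M, in the square case with diagonal weights M = N.
import numpy as np

G = np.diag([1.0, -1.0])
N = np.diag([2.0, 5.0])
M = N                                        # single weight for the square case

def wmink(X):
    """A~~ = N^{-1} G X* G M, cf. (2.1) with G1 = G2 = G."""
    return np.linalg.inv(N) @ G @ X.conj().T @ G @ M

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])
C = np.array([[2.0, 0.0], [1.0, 3.0]])

assert np.allclose(wmink(A + B), wmink(A) + wmink(B))   # (A+B)~~ = A~~ + B~~
assert np.allclose(wmink(A @ C), wmink(C) @ wmink(A))   # (AC)~~ = C~~ A~~
assert np.allclose(wmink(wmink(A)), A)                  # (A~~)~~ = A
```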

Theorem 1 in [12] can easily be generalized to the case of the weighted Minkowski conjugate transpose as follows:

Theorem 4. The matrix A ∈ M_{m,n}, with respect to the positive definite matrices M ∈ M_m and N ∈ M_n, can be written in the form A = U Q V^* with U^* M U = I_m, V^* N^{−1} V = I_n and Q diagonal, if and only if the following conditions hold:
(i) σ(A≈ A) consists of non-negative real numbers;
(ii) A≈ A is diagonalizable;
(iii) N(A≈ A) = N(A),
where A≈ is determined by (2.1).

If only assumption (i) of Theorem 4 is violated, but (ii) and (iii) hold, we can still get a weighted singular value decomposition. We note that all the conditions (i)–(iii) of Theorem 4 are always true in Hilbert (Euclidean) space. But in Minkowski space each of the assumptions can fail even if the other two hold. This is illustrated by the following three counterexamples, where M = I_m and N = I_n.

1. Let A = [−1 1; 1 1]. Then A≈ = Ã = G A^* G = [−1 −1; −1 1], and

    A≈ A = [0 −2; 2 0],

which has eigenvalues ±2i.

2. Let A = [1.5 1; 0.5 1]. Then A≈ = [1.5 −0.5; −1 1], and

    A≈ A = [2 1; −1 0],

which has a double eigenvalue 1 and cannot be diagonalized.

3. Let A = [1 −1; −1 1]. Then A≈ = [1 1; 1 1] and A≈ A = 0.

Now we derive a representation theorem for the weighted Minkowski inverse in Minkowski space µ, which may be viewed as an application of the classical theory of summability to the representation of generalized inverses. The key to the representation theorem is the following lemma, analogous to Wei's result [20, Corollary 2.3].

Lemma 5. Let A ∈ M_{m,n} be a matrix in µ, and let M ∈ M_m and N ∈ M_n be positive definite matrices. Then

    A⊕_{M,N} = (A∝)^{−1} A≈,    (2.4)

where A∝ = (A≈ A)|_{range(A≈)} is the restriction of A≈ A to range(A≈) and A≈ is determined by (2.1).

Now we are in a position to present the representation theorem for the weighted Minkowski inverse as follows:

Theorem 6. Let A ∈ M_{m,n} in µ, A∝ = (A≈ A)|_{range(A≈)} and A≈ = N^{−1} G_1 A^* G_2 M (where M ∈ M_m and N ∈ M_n are positive definite matrices, and G_1 and G_2 are Minkowski metric matrices of order n × n and m × m, respectively). Suppose Ω is an open set such that σ(A∝) ⊂ Ω ⊂ (0, ∞). Let {S_n(x)} be a family of continuous real-valued functions on Ω with lim_{n→∞} S_n(x) = 1/x uniformly on σ(A∝). Then

    A⊕_{M,N} = lim_{n→∞} S_n(A∝) A≈.    (2.5)

Furthermore,

    ‖S_n(A∝) A≈ − A⊕_{M,N}‖_{N,M} ≤ sup_{x∈σ(A∝)} |S_n(x) x − 1| · ‖A⊕_{M,N}‖_{N,M}.    (2.6)

Proof. Since by assumption σ(A∝) ⊂ (0, ∞), the use of Theorem 10.27 in [14] gives

    lim_{n→∞} S_n(A∝) = (A∝)^{−1}

uniformly on σ(A∝). It follows from Lemma 5 that

    lim_{n→∞} S_n(A∝) A≈ = (A∝)^{−1} A≈ = A⊕_{M,N}.    (2.7)

To obtain the error bound we note from (2.7) that A≈ = A∝ A⊕_{M,N}, and therefore

    S_n(A∝) A≈ − A⊕_{M,N} = S_n(A∝) A∝ A⊕_{M,N} − A⊕_{M,N} = (S_n(A∝) A∝ − I) A⊕_{M,N}.

It follows that N^{1/2} S_n(A∝) A∝ N^{−1/2} is self-adjoint (such is evidently the case for polynomials with real coefficients). The spectral radius formula for self-adjoint operators now gives

    ‖S_n(A∝) A∝ − I‖_{N,N} = ‖N^{1/2} S_n(A∝) A∝ N^{−1/2} − I‖
                            = |σ(N^{1/2} {S_n(A∝) A∝ − I} N^{−1/2})| = |σ(S_n(A∝) A∝ − I)|.


But σ(S_n(A∝) A∝ − I) = {S_n(x) x − 1 : x ∈ σ(A∝)} follows from the spectral mapping theorem. Therefore we see that

    ‖S_n(A∝) A∝ − I‖_{N,N} = sup_{x∈σ(A∝)} |S_n(x) x − 1|.

Hence,

    ‖S_n(A∝) A≈ − A⊕_{M,N}‖_{N,M} ≤ ‖S_n(A∝) A∝ − I‖_{N,N} ‖A⊕_{M,N}‖_{N,M}
                                   = sup_{x∈σ(A∝)} |S_n(x) x − 1| · ‖A⊕_{M,N}‖_{N,M}.

This completes the proof of the theorem. □
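Returning to the three counterexamples above, the failure of each condition of Theorem 4 can be verified numerically; a sketch with numpy (M = I_2, N = I_2, so A≈ = G A^T G for real A, with G = diag(1, −1)):

```python
# Numerical check of the three counterexamples to Theorem 4 (M = N = I).
import numpy as np

G = np.diag([1.0, -1.0])
mink = lambda A: G @ A.T @ G          # A~~ = G A* G for real matrices

# 1. sigma(A~~A) not real non-negative: eigenvalues are +-2i.
A1 = np.array([[-1.0, 1.0], [1.0, 1.0]])
print(np.linalg.eigvals(mink(A1) @ A1))          # eigenvalues +-2i

# 2. A~~A = [2 1; -1 0] has the double eigenvalue 1, but rank(A~~A - I) = 1,
#    so it is not diagonalizable.
A2 = np.array([[1.5, 1.0], [0.5, 1.0]])
B2 = mink(A2) @ A2
print(np.linalg.eigvals(B2), np.linalg.matrix_rank(B2 - np.eye(2)))

# 3. N(A~~A) strictly larger than N(A): A~~A = 0 although A != 0.
A3 = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(np.allclose(mink(A3) @ A3, 0))             # True
```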

In order to use the error estimate above in specific approximation procedures, it is convenient to have lower and upper bounds for σ(A∝); these are established next.

Corollary 7. Let A ∈ M_{m,n} in µ, A∝ = (A≈ A)|_{range(A≈)} and A≈ = N^{−1} G_1 A^* G_2 M (where M ∈ M_m and N ∈ M_n are positive definite matrices, and G_1 and G_2 are Minkowski metric matrices of order n × n and m × m, respectively) such that σ(A∝) ⊂ (0, ∞). Then

    ‖A⊕_{M,N}‖^{−2}_{N,M} ≤ λ ≤ ‖A‖²_{N,M}    (2.8)

for each λ ∈ σ(A∝).

Proof. Let λ ∈ σ(A∝). Notice by assumption that λ is a positive real number and

    0 < λ ∈ σ(A∝) ⊆ σ(A≈ A).

It is obvious that index(A≈ A) = 1, and

    1/λ ∈ σ{(A≈ A)_g} = σ{N^{1/2} (A≈ A)_g N^{−1/2}} = σ{(N^{1/2} A≈ A N^{−1/2})_g} = σ{(N^{1/2} A≈ A N^{−1/2})⊕}.

Thus

    1/λ ≤ |σ{(N^{1/2} A≈ A N^{−1/2})⊕}| ≤ ‖(N^{1/2} A≈ A N^{−1/2})⊕‖ = ‖(A≈ A)⊕‖_{N,N} = ‖A⊕_{M,N}‖²_{N,M},

and we arrive, for each λ ∈ σ(A∝), at λ ≥ ‖A⊕_{M,N}‖^{−2}_{N,M}.

On the other hand, since ‖A≈ A‖_{N,N} ≥ ‖(A≈ A)|_{range(A≈)}‖_{N,N}, one gets for each λ ∈ σ(A∝)

    λ ≤ ‖A∝‖_{N,N} ≤ ‖A≈ A‖_{N,N} = ‖A‖²_{N,N}.

This completes the proof of the corollary. □

3. Approximation of weighted Minkowski inverse in Minkowski space

As is well known, the inverse of an invertible operator can be calculated by interpolating the function 1/x; in a similar manner we will approximate the weighted Minkowski inverse by interpolating the function 1/x and using Theorem 6 and Corollary 7. One way to produce a family of functions {S_n(x)} which is suitable for use in the theorems above is to employ the classical Borel summability transform of the geometric series:

    Σ_{n=0}^{∞} (1 − x)^n = 1 + (1 − x) + (1 − x)² + ···.    (3.1)


An infinite series Σ_{n=0}^{∞} a_n is said to be Borel summable to the value a if lim_{t→∞} S_t = a, where

    S_t = ∫_0^t e^{−y} Σ_{n=0}^{∞} (a_n y^n / n!) dy.    (3.2)

For our purposes it is sufficient to consider the function

    f(x) = 1/x = Σ_{n=0}^{∞} (1 − x)^n,  |1 − x| < 1.    (3.3)

Note that the Borel polygon of f contains (0, ∞), and the Borel transform of the geometric series expansion of f is S_t(x) = ∫_0^t e^{−xy} dy. It is trivial that

    lim_{t→∞} ∫_0^t e^{−xy} dy = lim_{t→∞} (1 − e^{−tx})/x = 1/x    (3.4)

uniformly on any compact subset of (0, ∞). Therefore we may apply Theorem 6 to obtain the integral representation of the weighted Minkowski inverse as follows:

Corollary 8. Let A ∈ M_{m,n} in µ and A≈ = N^{−1} G_1 A^* G_2 M (where M ∈ M_m and N ∈ M_n are positive definite matrices, and G_1 and G_2 are Minkowski metric matrices of order n × n and m × m, respectively) such that σ(A∝) ⊂ (0, ∞). Then

    A⊕_{M,N} = ∫_0^∞ e^{−A≈Ay} A≈ dy.    (3.5)
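The integral representation (3.5) can be sketched numerically. Below we take M = N = I_2 and an invertible example matrix A (chosen so that A≈A is diagonalizable with positive spectrum, in which case A⊕ is just A^{−1}); since A≈A is diagonalizable here, the truncated integral has a closed form in its eigenbasis:

```python
# Sketch of the integral representation (3.5), A+ = \int_0^infty e^{-A~~A y} A~~ dy,
# for M = N = I and a diagonalizable A~~A with positive spectrum.
import numpy as np

G = np.diag([1.0, -1.0])
A = np.array([[3.0, 0.0], [1.0, 1.0]])
Amink = G @ A.T @ G                    # A~~

B = Amink @ A
w, V = np.linalg.eig(B)                # positive spectrum: integral converges
assert np.all(w.real > 0)

def truncated_integral(t):
    """\int_0^t e^{-By} dy A~~ = V diag((1 - e^{-w t}) / w) V^{-1} A~~."""
    D = np.diag((1.0 - np.exp(-w * t)) / w)
    return (V @ D @ np.linalg.inv(V)).real @ Amink

X = truncated_integral(50.0)
# A is invertible here, so the Minkowski inverse is the ordinary inverse:
print(np.allclose(X, np.linalg.inv(A)))   # True
```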

Setting A⊕_{M,N}(t) = ∫_0^t e^{−A≈Ay} A≈ dy, we present the error bound for this method. From Theorem 6 and Corollary 7 we see that

    S_t(x) x − 1 = x ∫_0^t e^{−xy} dy − 1 = −e^{−xt},    (3.6)

and hence by Corollary 7, for x ∈ σ(A∝) we have

    |S_t(x) x − 1| ≤ exp(−‖A⊕_{M,N}‖^{−2}_{N,M} t),    (3.7)

and the error bound follows:

    ‖A⊕_{M,N} − A⊕_{M,N}(t)‖_{N,M} ≤ exp(−‖A⊕_{M,N}‖^{−2}_{N,M} t) ‖A⊕_{M,N}‖_{N,M}.    (3.8)

Another well-known summability method is the Euler–Knopp method. A series Σ_{n=0}^{∞} a_n is said to be Euler–Knopp summable with parameter α > 0 to the value a if the sequence defined by

    S_n = α Σ_{k=0}^{n} Σ_{j=0}^{k} \binom{k}{j} (1 − α)^{k−j} α^j a_j    (3.9)

converges to a. If a_k = (1 − x)^k for k = 0, 1, 2, ..., then we obtain, as the Euler–Knopp transform of the series Σ_{k=0}^{∞} (1 − x)^k, the sequence given by

    S_n(x) = α Σ_{k=0}^{n} (1 − αx)^k.    (3.10)

Clearly lim_{n→∞} S_n(x) = 1/x uniformly on any compact subset of the set

    E_α = {x : |1 − αx| < 1} = {x : 0 < x < 2/α}.    (3.11)


Therefore we have the following corollary:

Corollary 9. Let A ∈ M_{m,n} in µ and A≈ = N^{−1} G_1 A^* G_2 M (where M ∈ M_m and N ∈ M_n are positive definite matrices, and G_1 and G_2 are Minkowski metric matrices of order n × n and m × m, respectively) such that σ(A∝) ⊂ (0, ∞). Then the sequence {A_n} defined by

    A_0 = α A≈,  A_{n+1} = (I − α A≈ A) A_n + α A≈    (3.12)

converges to A⊕_{M,N} if 0 < α < 2‖A‖^{−2}_{M,N}. Furthermore, the error estimate is given by

    ‖A_n − A⊕_{M,N}‖_{N,M} ≤ β^{n+1} ‖A⊕_{M,N}‖_{N,M},    (3.13)

where 0 < β < 1.

Proof. It follows from σ(A∝) ⊆ [‖A⊕_{M,N}‖^{−2}_{N,M}, ‖A‖²_{M,N}] that σ(A∝) ⊂ (0, ‖A‖²_{M,N}], and hence we may apply Theorem 6 if we choose the parameter α in such a way that (0, ‖A‖²_{M,N}] ⊆ E_α, where E_α is determined by (3.11). We may choose α such that 0 < α < 2‖A‖^{−2}_{M,N}. In fact, we may regard 1/x as a fixed point of the function

    S(y) = (1 − αx) y + α,  α > 0.    (3.14)

In order to approximate this fixed point we may use the sequence of successive approximations defined by

    S_0(x) = α,  S_{n+1}(x) = (1 − αx) S_n(x) + α.    (3.15)

It is easy to verify that lim_{n→∞} S_n(x) = 1/x uniformly on any compact subset of E_α. Hence if 0 < α < 2‖A‖^{−2}_{M,N}, then applying Theorem 6 we get

    lim_{n→∞} S_n(A∝) A≈ = A⊕_{M,N}.

But it is easy to see from (3.15) that S_n(A∝) A≈ = A_n, where A_n is given by (3.12). This is surely the case if 0 < α < 2‖A‖^{−2}_{M,N}; then for such α we have the representation

    A⊕_{M,N} = α Σ_{j=0}^{∞} (I − α A≈ A)^j A≈.    (3.16)

Note that if we set

    A_n = α Σ_{j=0}^{n} (I − α A≈ A)^j A≈,    (3.17)

then we get (3.12). To derive an error estimate for the Euler–Knopp method, suppose that 0 < α < 2‖A‖^{−2}_{M,N}. If the sequence S_n(x) is determined by (3.15), then

    S_{n+1}(x) x − 1 = (1 − αx)(S_n(x) x − 1).    (3.18)

Therefore, since S_0 = α,

    |S_n(x) x − 1| = |1 − αx|^{n+1}.    (3.19)

Since ‖A⊕_{M,N}‖^{−2}_{N,M} ≤ x ≤ ‖A‖²_{M,N} for x ∈ σ(A∝) and 0 < α < 2‖A‖^{−2}_{M,N}, it follows that |1 − αx| < β, where β is given by

    β = max{|1 − α‖A‖²_{M,N}|, |1 − α‖A⊕_{M,N}‖^{−2}_{N,M}|}.    (3.20)

Clearly,

    2 > α‖A‖²_{M,N} ≥ α‖A⊕_{M,N}‖^{−2}_{N,M} > 0,

and therefore 0 < β < 1. From Theorem 6, we establish the error estimate (3.13). □
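The Euler–Knopp iteration (3.12) is immediate to implement; a numpy sketch with M = N = I_2 and an illustrative invertible A (so that A⊕ = A^{−1}):

```python
# Sketch of the Euler-Knopp iteration (3.12):
# A_0 = alpha A~~,  A_{n+1} = (I - alpha A~~A) A_n + alpha A~~,  M = N = I.
import numpy as np

G = np.diag([1.0, -1.0])
A = np.array([[3.0, 0.0], [1.0, 1.0]])
Amink = G @ A.T @ G                                 # A~~

B = Amink @ A
# alpha inside (0, 2/rho(A~~A)) so that |1 - alpha*x| < 1 on sigma(A~~A):
alpha = 1.0 / np.max(np.abs(np.linalg.eigvals(B)))
X = alpha * Amink                                   # A_0
for _ in range(200):
    X = (np.eye(2) - alpha * B) @ X + alpha * Amink

print(np.allclose(X, np.linalg.inv(A)))             # True: A+ = A^{-1} here
```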


To develop another iterative method, we regard 1/x as the root of the function

    s(y) = y^{−1} − x.    (3.21)

The Newton–Raphson method can be used to approximate this root. This is done by generating a sequence y_n, where

    y_{n+1} = y_n − s(y_n)/s′(y_n) = y_n (2 − x y_n)    (3.22)

for a suitable y_0. Suppose that for α > 0 we define a sequence of functions {S_n(x)} by

    S_0(x) = α,  S_{n+1}(x) = S_n(x)(2 − x S_n(x)).    (3.23)

In fact,

    x S_{n+1}(x) − 1 = −(x S_n(x) − 1)².    (3.24)

Iterating on this equality, it follows that if x is confined to a compact subset of E_α = {x : 0 < x < 2/α}, there is a constant β (defined on this compact set) with 0 < β < 1 and

    |x S_n(x) − 1| = |αx − 1|^{2^n} ≤ β^{2^n} → 0  (n → ∞).    (3.25)

The great attraction of the Newton–Raphson method is the generally quadratic nature of the convergence, which is displayed in (3.25). Using this in conjunction with Theorem 6, we see that the sequence {S_n(A∝)} defined by

    S_0(A∝) = α I,  S_{n+1}(A∝) = S_n(A∝)(2I − A≈ A S_n(A∝))    (3.26)

has the property that lim_{n→∞} S_n(A∝) A≈ = A⊕_{M,N}. If we set A_n = S_n(A∝) A≈, then

    A_0 = α A≈,  A_{n+1} = A_n (2I − A A_n).    (3.27)

If x ∈ σ(A∝) and 0 < α < 2‖A‖^{−2}_{M,N}, then we see that |1 − αx| < β, where β is given by (3.20). It follows as in (3.25), and hence from Theorem 6, that the error bound is given by

    ‖A_n − A⊕_{M,N}‖_{N,M} ≤ β^{2^n} ‖A⊕_{M,N}‖_{N,M}.    (3.28)

Therefore we have proved the following corollary:

Corollary 10. Let A ∈ M_{m,n} in µ and A≈ = N^{−1} G_1 A^* G_2 M (where M ∈ M_m and N ∈ M_n are positive definite matrices, and G_1 and G_2 are Minkowski metric matrices of order n × n and m × m, respectively) such that σ(A∝) ⊂ (0, ∞). Then the sequence {A_n} determined by (3.27) converges to A⊕_{M,N}, and the error estimate is given by (3.28).

In fact a much stronger result holds, for if we set

    S_t(x) = (t + x)^{−1},  t > 0,    (3.29)

in the representation theorem and use the idea of Tikhonov's regularization [5], we obtain the following nice corollary:

Corollary 11. Let A ∈ M_{m,n} in µ and A≈ = N^{−1} G_1 A^* G_2 M (where M ∈ M_m and N ∈ M_n are positive definite matrices, and G_1 and G_2 are Minkowski metric matrices of order n × n and m × m, respectively) such that σ(A∝) ⊂ (0, ∞). Then

    A⊕_{M,N} = lim_{t→0+} (tI + A≈ A)^{−1} A≈.    (3.30)

Furthermore, the error estimate is given by

    ‖(tI + A≈ A)^{−1} A≈ − A⊕_{M,N}‖_{N,M} ≤ { t / (‖A⊕_{M,N}‖^{−2}_{N,M} + t) } ‖A⊕_{M,N}‖_{N,M}.    (3.31)
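The Tikhonov-type representation (3.30) amounts to solving a single regularized linear system; a numpy sketch with M = N = I_2 and an illustrative invertible A (so A⊕ = A^{−1}):

```python
# Sketch of the Tikhonov representation (3.30):
# A+ = lim_{t -> 0+} (tI + A~~A)^{-1} A~~,  M = N = I.
import numpy as np

G = np.diag([1.0, -1.0])
A = np.array([[3.0, 0.0], [1.0, 1.0]])
Amink = G @ A.T @ G                                  # A~~

def tikhonov(t):
    """Solve (tI + A~~A) X = A~~ rather than forming the inverse."""
    return np.linalg.solve(t * np.eye(2) + Amink @ A, Amink)

X = tikhonov(1e-10)
print(np.allclose(X, np.linalg.inv(A)))              # True: A+ = A^{-1} here
```

The error of this one-shot method decays linearly in t, in line with (3.31), rather than geometrically in an iteration count.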


4. Concluding remarks

The representation and approximation of the weighted Moore–Penrose inverse A⁺_{M,N} of a matrix A ∈ M_{m,n} in Hilbert space are discussed by Wei and Wu [17] and Wei [18]. In this paper, we have developed the representation and approximation of the so-called weighted Minkowski inverse A⊕_{M,N} of a matrix A ∈ M_{m,n} in Minkowski space µ. In our opinion, it is worth establishing necessary and sufficient conditions for the reverse-order rule for the weighted Minkowski inverse of two and multiple matrix products, and deriving conditions for products of weighted range symmetric matrices to be weighted range symmetric in Minkowski space µ.

References

[1] Z. Al Zhour, A. Kılıçman, Extensions and generalization inequalities involving the Khatri–Rao product of several positive matrices, J. Inequal. Appl. (2006) 1–21. Article ID 80878.
[2] Z. Al Zhour, A. Kılıçman, M. Abu Hassan, New representations for weighted Drazin inverse of matrices, Int. J. Math. Anal. 1 (15) (2007) 697–708.
[3] T.S. Baskett, I.J. Katz, Theorems on products of EP_r matrices, Linear Algebra Appl. 2 (1969) 87–103.
[4] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, Wiley-Interscience, New York, 1974.
[5] C.W. Groetsch, Generalized Inverses of Linear Operators, Marcel Dekker, New York, 1977.
[6] M. Gulliksson, X.-Q. Jin, Y. Wei, Perturbation bounds for constrained and weighted least squares problems, Linear Algebra Appl. 349 (2002) 221–232.
[7] A. Kılıçman, Z. Al Zhour, The general common exact solutions of coupled linear matrix and matrix differential equations, J. Anal. Comput. 1 (1) (2005) 15–30.
[8] A. Kılıçman, Z. Al Zhour, New algebraic method for solving the axial n-index transportation problem based on the Kronecker product, Matematika 20 (2) (2005) 113–123.
[9] A. Kılıçman, Z. Al Zhour, Iterative solutions of coupled matrix convolution equations, Soochow J. Math. 33 (1) (2007) 167–180.
[10] J.J. Koliha, A simple proof of the product theorem for EP matrices, Linear Algebra Appl. 294 (1999) 213–215.
[11] A.R. Meenakshi, D. Krishnaswamy, Product of range symmetric block matrices in Minkowski space, Bull. Malays. Math. Sci. Soc. 29 (1) (2006) 59–68.
[12] M. Renardy, Singular value decomposition in Minkowski space, Linear Algebra Appl. 236 (1996) 53–58.
[13] C.F. Van Loan, Generalizing the singular value decomposition, SIAM J. Numer. Anal. 13 (1976) 76–83.
[14] W. Rudin, Functional Analysis, McGraw-Hill, New York, 1973.
[15] D. Wang, Some topics on weighted Moore–Penrose inverse, weighted least squares and weighted regularized Tikhonov problems, Appl. Math. Comput. 157 (2004) 243–267.
[16] Y. Wei, Recurrent neural networks for computing weighted Moore–Penrose inverse, Appl. Math. Comput. 116 (2000) 279–287.
[17] Y. Wei, H. Wu, The representation and approximation for the weighted Moore–Penrose inverse, Appl. Math. Comput. 121 (2001) 17–28.
[18] Y. Wei, The representation and approximation for the weighted Moore–Penrose inverse in Hilbert space, Appl. Math. Comput. 136 (2003) 475–486.
[19] Y. Wei, Perturbation bound of singular linear systems, Appl. Math. Comput. 105 (1999) 211–220.
[20] Y. Wei, A characterization and representation for the generalized inverse A^{(2)}_{T,S} and its applications, Linear Algebra Appl. 280 (1998) 87–96.