Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2014, Article ID 515082, 6 pages. http://dx.doi.org/10.1155/2014/515082
Research Article

A Test Matrix for an Inverse Eigenvalue Problem

G. M. L. Gladwell,¹ T. H. Jones,² and N. B. Willms²

¹ Department of Civil and Environmental Engineering, University of Waterloo, Waterloo, ON, Canada N2L 3G1
² Department of Mathematics, Bishop's University, Sherbrooke, QC, Canada J1M 2H2

Correspondence should be addressed to N. B. Willms; [email protected]

Received 21 February 2014; Accepted 30 April 2014; Published 26 May 2014

Academic Editor: K. C. Sivakumar

Copyright © 2014 G. M. L. Gladwell et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We present a real symmetric tridiagonal matrix of order $n$ whose eigenvalues are $\{2k\}_{k=0}^{n-1}$ and which also satisfies the additional condition that its leading principal submatrix has the uniformly interlaced spectrum $\{2k+1\}_{k=0}^{n-2}$. The matrix entries are explicit functions of the size $n$, and so the matrix can be used as a test matrix for eigenproblems, both forward and inverse. An explicit solution of a spring-mass inverse problem incorporating the test matrix is provided.
1. Introduction

We are motivated by the following inverse eigenvalue problem, first studied by Hochstadt in 1967 [1]. Given two strictly interlaced sequences of real values,

$$(\lambda_i)_1^n, \qquad (\mu_i)_1^{n-1}, \tag{1}$$

with

$$\lambda_1 < \mu_1 < \lambda_2 < \mu_2 < \cdots < \lambda_{n-1} < \mu_{n-1} < \lambda_n, \tag{2}$$

find the $n \times n$ real, symmetric, tridiagonal matrix $B$ such that $\sigma(B) = (\lambda_i)_1^n$ are the eigenvalues of $B$, while $\sigma(B^n) = (\mu_i)_1^{n-1}$ are the eigenvalues of the leading principal submatrix $B^n$, obtained from $B$ by deleting the last row and column. The condition (2) on the dataset is both necessary and sufficient for the existence of a unique Jacobian matrix solution to the problem (see [2], Section 4.3, or [3], Section 1.2, for a history of the problem, and Section 3 of this paper for additional background theory). A number of different constructive procedures to produce the exact solution of this inverse problem have been developed [4–9], but none provides an explicit characterization of the entries of the solution matrix $B$ in terms of the dataset (2). Computer implementation of these procedures introduces floating point error and associated numerical stability
issues. Loss of significant figures due to accumulation of round-off error makes some of the known solution procedures undesirable. Determining the extent of round-off error in a numerical solution $\tilde{B}$ computed from a given dataset requires a priori knowledge of the exact solution $B$. In the absence of this knowledge, an additional numerical computation of the forward problem, to find the spectra $\sigma(\tilde{B})$ and $\sigma(\tilde{B}^n)$, allows comparison with the original data. Test matrices, with known entries and known spectra, are therefore helpful in comparing the efficacy of the various solution algorithms with regard to stability. It is particularly helpful when test matrices can be produced at arbitrary size. However, some existing test matrices given as a function of matrix size $n$ suffer from the following trait: when ordered by size, the minimum spacing between consecutive eigenvalues is a decreasing function of $n$. This trait is potentially undesirable, since the reciprocal of this minimum separation between eigenvalues can be thought of as a condition number on the sensitivity of the eigenvectors (invariant subspaces) to perturbation (see [10], Theorem 8.1.12). Some of the algorithms for the inverse problem seem to suffer from this form of ill-conditioning. To avoid confounding the numerical stability issue with potential increased ill-conditioning of the dataset as a function of $n$, the authors developed a test matrix which has equally spaced and uniformly interlaced simple eigenvalues.
In Section 2 we provide the explicit entries of such a matrix, $A(n)$. We claim that its eigenvalues are equally spaced,

$$\sigma(A(n)) = \{0, 2, 4, \ldots, 2n-2\}, \tag{3}$$

while its leading principal submatrix $A^n(n)$ has eigenvalues uniformly interlaced with those of $A(n)$, namely,

$$\sigma(A^n(n)) = \{1, 3, 5, \ldots, 2n-3\}. \tag{4}$$

A short proof verifies the claims. In Section 3 we present some background theory concerning Jacobian matrices, and in Section 4 we apply our test matrix to a model of a physical spring-mass system, an application which leads naturally to Jacobian matrices.

2. Main Result

Let $A(n)$ be the $n \times n$ real symmetric tridiagonal matrix with entries

$$a_{jj} = n-1, \quad j = 1, 2, \ldots, n; \qquad a_{j,j+1} = a_{j+1,j} = -\tfrac{1}{2}\sqrt{j(2n-j-1)}, \quad j = 1, 2, \ldots, n-2; \qquad a_{n-1,n} = a_{n,n-1} = -\sqrt{\frac{n(n-1)}{2}}, \tag{5}$$

and let $A^n(n)$ be the leading principal submatrix of $A(n)$, that is, the $(n-1) \times (n-1)$ matrix obtained from $A(n)$ by deleting the last row and column.

Theorem 1. $A(n)$ has eigenvalues $\{0, 2, \ldots, 2n-2\}$ and $A^n(n)$ has eigenvalues $\{1, 3, \ldots, 2n-3\}$.

Proof. By induction. When $n = 2$,

$$A(2) = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} \tag{6}$$

has eigenvalues $0, 2$, and $A^2(2)$ has eigenvalue $1$. Assume the result holds for $n$, so $A(n)$ has eigenvalues $\{0, 2, \ldots, 2n-2\}$. Let $B = A^{n+1}(n+1) - nI$ and $A = A(n) - (n-1)I$. Then $B$ and $A$ are similar via $BR = RA$, where $R$ is upper triangular, with entries

$$r_{ij} = \begin{cases} s\sqrt{\dfrac{(j-1)!\,(2n-j-1)!}{(i-1)!\,(2n-i+1)!}}, & i, j \text{ of the same parity and } j \ge i, \\ 0, & \text{otherwise}, \end{cases} \tag{7}$$

$$s = \begin{cases} 1/\sqrt{2}, & j = n, \\ 1, & j < n. \end{cases} \tag{8}$$

Therefore $A^{n+1}(n+1)$ has eigenvalues $\{1, 3, \ldots, 2n-1\}$.

Now we show that $A(n+1)$ has eigenvalues $\{2n\} \cup \{\text{eigenvalues of } A(n)\}$. Let $C = A(n+1) - 2nI$. Factorize $C = -LL^T$, where $L$ is lower bidiagonal. We find

$$l_{jj} = -\sqrt{\frac{2n-j+1}{2}}, \quad l_{j+1,j} = -\sqrt{\frac{j}{2}}, \quad j = 1, 2, \ldots, n-1; \qquad l_{nn} = -\sqrt{\frac{n+1}{2}}; \qquad l_{n+1,n} = -\sqrt{n}; \qquad l_{n+1,n+1} = 0. \tag{9}$$

Therefore $C$ has eigenvalue $0$, and thus $A(n+1)$ has eigenvalue $2n$. Define $D = 2nI - L^TL$; so

$$D = \begin{bmatrix} D^n & \mathbf{0} \\ \mathbf{0}^T & 2n \end{bmatrix} \tag{10}$$

with

$$d_{jj} = \frac{2n-1}{2}, \quad j = 1, \ldots, n-1; \qquad d_{nn} = \frac{n-1}{2}; \qquad d_{j,j+1} = d_{j+1,j} = -\frac{1}{2}\sqrt{j(2n-j)}, \quad j = 1, \ldots, n-1. \tag{11}$$

Since $LL^T$ and $L^TL$ have the same eigenvalues, $A(n+1) = 2nI - LL^T$ has the same spectrum as $D$. Now $D^n$ has the same eigenvalues as $A(n)$, since they are similar matrices via $SD^n = A(n)S$, where $S$ is upper triangular with entries

$$s_{jj} = \sqrt{2n-j}, \quad s_{j,j+1} = \sqrt{j}, \quad j = 1, 2, \ldots, n-1; \qquad s_{nn} = \sqrt{2n}; \qquad s_{ij} = 0 \quad \text{otherwise}. \tag{12}$$

Therefore $A(n+1)$ has eigenvalues $\{2n\} \cup \{\text{eigenvalues of } A(n)\} = \{0, 2, \ldots, 2n\}$, which completes the induction.
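Theorem 1 is easy to check numerically. The sketch below (a minimal illustration assuming NumPy is available; the function name `A` is ours) builds $A(n)$ from the entries (5) and verifies both spectra:

```python
import numpy as np

def A(n):
    """Test matrix A(n): diagonal entries n - 1, off-diagonals
    -(1/2)*sqrt(j*(2n-j-1)) for j = 1..n-2 and -sqrt(n(n-1)/2) for
    the last pair; the off-diagonal signs do not affect the spectra."""
    M = np.diag(np.full(n, n - 1.0))
    for j in range(1, n - 1):                       # j = 1, ..., n-2
        M[j - 1, j] = M[j, j - 1] = -0.5 * np.sqrt(j * (2 * n - j - 1))
    M[n - 2, n - 1] = M[n - 1, n - 2] = -np.sqrt(n * (n - 1) / 2.0)
    return M

n = 10
evals = np.linalg.eigvalsh(A(n))                # expect 0, 2, ..., 2n-2
sub_evals = np.linalg.eigvalsh(A(n)[:-1, :-1])  # expect 1, 3, ..., 2n-3
print(np.round(evals, 10))
print(np.round(sub_evals, 10))
```

Because the entries are exact functions of $n$, runs at increasing $n$ also expose how quickly a given eigensolver loses accuracy on a uniformly spaced spectrum.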
3. Discussion

A real, symmetric, $n \times n$ tridiagonal matrix $B$ is called a Jacobian matrix when its off-diagonal elements are nonzero ([2], page 46). We write

$$B = \begin{bmatrix} a_1 & -b_1 & 0 & \cdots & 0 \\ -b_1 & a_2 & -b_2 & \ddots & \vdots \\ 0 & -b_2 & a_3 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & -b_{n-1} \\ 0 & \cdots & 0 & -b_{n-1} & a_n \end{bmatrix}. \tag{13}$$

The similarity transformation $\tilde{B} = S^{-1}BS$, where $S = S^{-1}$ is the alternating sign matrix $S = \operatorname{diag}(1, -1, 1, -1, \ldots, (-1)^{n-1})$, produces a Jacobian matrix $\tilde{B}$ with entries the same as $B$ except for the signs of the off-diagonal elements, which are all reversed. If instead we use the self-inverse sign matrix $S^{(i)} = \operatorname{diag}(\underbrace{1, \ldots, 1}_{i}, \underbrace{-1, \ldots, -1}_{n-i})$ to transform $B$, then $\tilde{B}$ is a Jacobian matrix identical to $B$ except for a switched sign on the $i$th off-diagonal element. In regard to the spectrum of
the matrix, there is therefore no loss of generality in accepting the convention that a Jacobian matrix is expressed with negative off-diagonal elements; that is, $b_i > 0$ for all $i = 1, \ldots, n-1$ in (13). While Cauchy's interlace theorem [11] guarantees that the eigenvalues of any square, real, symmetric (or even Hermitian) matrix will interlace those of its leading (or trailing) principal submatrix, the interlacing cannot be strict, in general [12]. However, specializing to the case of Jacobian matrices restricts the interlacing to strict inequalities. That is, Jacobian matrices possess distinct eigenvalues, and the eigenvalues of the leading (or trailing) principal submatrix are also distinct and strictly interlace those of the original matrix (see [2], Theorems 3.1.3 and 3.1.4; see also [10], exercise P8.4.1, page 475: when a tridiagonal matrix has algebraically multiple eigenvalues, the matrix fails to be Jacobian). The inverse problem is also well-posed: there is a unique (up to the signs of the off-diagonal elements) Jacobian matrix $B$ having given spectra specified as per (2) (see [2], Theorem 4.2.1, noting that the interlaced spectrum of $n-1$ eigenvalues $(\mu_i)_1^{n-1}$ can be used to calculate the last components of each of the $n$ orthonormalized eigenvectors of $B$ via equation 4.3.31). Therefore, the matrix $A(n)$ in Theorem 1 is the unique Jacobian matrix with eigenvalues equally spaced by two, starting with smallest eigenvalue zero, whose leading principal submatrix has eigenvalues also equally spaced by two, starting with smallest eigenvalue one. As a consequence of the theorem, we now have the following.

Corollary 2. The eigenvalues of the real, symmetric, $n \times n$ tridiagonal matrix

$$Q_n = \begin{bmatrix} \mu & -\delta\sqrt{\frac{n-1}{2}} & 0 & \cdots & & 0 \\ -\delta\sqrt{\frac{n-1}{2}} & \mu & -\delta\sqrt{\frac{2n-3}{2}} & 0 & \cdots & 0 \\ 0 & -\delta\sqrt{\frac{2n-3}{2}} & \mu & -\delta\sqrt{\frac{3n-6}{2}} & \ddots & \vdots \\ \vdots & 0 & \ddots & \ddots & \ddots & 0 \\ & \vdots & \ddots & -\delta\sqrt{\frac{(n-2)(n+1)}{4}} & \mu & -\delta\sqrt{\frac{n(n-1)}{2}} \\ 0 & 0 & \cdots & 0 & -\delta\sqrt{\frac{n(n-1)}{2}} & \mu \end{bmatrix} \tag{14}$$

form the arithmetic sequence

$$\sigma(Q_n) = \{\lambda_1 + 2\delta(j-1)\}_{j=1}^{n}, \tag{15}$$

while the eigenvalues of its leading principal submatrix $Q_n^n$ form the uniformly interlaced sequence

$$\sigma(Q_n^n) = \{\lambda_1 + \delta + 2\delta(j-1)\}_{j=1}^{n-1}, \quad \text{where } \mu = \lambda_1 + \delta(n-1). \tag{16}$$
The form and properties of $Q_n$ were first hypothesised by the third author while programming Fortran algorithms to reconstruct band matrices from spectral data [3]. Initial attempts to prove the spectral properties of $Q_n$, by both him and his graduate supervisor (the first author), failed. Later, the first author produced the short induction argument of Theorem 1, in July 1996. Alas, the fax on which the argument was communicated to the third author was lost in a cross-border academic move, and so the matter languished until recently. In the summer of 2013, the second and third authors assigned the problem of this paper as a summer undergraduate research project: "hypothesize, and then verify, if possible, the explicit entries of an $n \times n$ symmetric, tridiagonal matrix with eigenvalues (15), such that the eigenvalues of its principal submatrix are (16)." Meanwhile the misplaced fax containing the first author's proof was found during an office cleaning. The student, A. De Serre-Rothney, was able to complete both parts of the problem. His proof is now found in [13]. Though longer than the one presented here, his proof utilizes the spectral properties of another tridiagonal (nonsymmetric) matrix, the so-called Kac-Sylvester matrix $K_n$ of size $(n+1) \times (n+1)$, with eigenvalues $\sigma(K_n) = \{2k - n\}_{k=0}^{n}$ [14–17]:

$$K_n = \begin{bmatrix} 0 & n & 0 & 0 & \cdots & 0 \\ 1 & 0 & n-1 & 0 & \cdots & 0 \\ 0 & 2 & 0 & n-2 & \ddots & \vdots \\ 0 & 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & & \ddots & n-1 & 0 & 1 \\ 0 & 0 & \cdots & 0 & n & 0 \end{bmatrix}. \tag{17}$$

The referee has pointed out the connection between the spectra (3) and (4) and the classical orthogonal Hahn polynomials of a discrete variable [18]. Using (3) as nodes $\lambda_j$ with weights

$$w_j = \frac{\prod_{i=1}^{n-1} (\lambda_j - \mu_i)}{\prod_{1 \le i \le n,\; i \ne j} (\lambda_j - \lambda_i)}, \qquad j = 1, \ldots, n, \tag{18}$$

where $(\mu_i)_1^{n-1}$ are the interlaced values (4), determine the Hahn polynomials $h_j^{(-1/2,-1/2)}(x/2, n)$, $j = 0, 1, \ldots, n-1$, whose three-term recurrence coefficients are the entries of a Jacobi matrix with eigenvalues (3), hence similar to our $A(n)$.
4. A Spring-Mass Model Problem

One simple problem where symmetric tridiagonal matrices arise naturally is the inverse problem for the spring-mass system shown in Figure 1. In this case the squares of the natural frequencies of free vibration for system (a) are the eigenvalues of a Jacobi matrix $B$, while those for system (b) are the eigenvalues of its principal minor $B^n$.

[Figure 1: Spring-mass system: (a) right-hand end free; (b) right-hand end fixed.]

Specifically, let $C$ be the stiffness matrix, and let $M$ be the mass (inertia) matrix for the system in Figure 1(a):

$$C = \begin{bmatrix} k_1 + k_2 & -k_2 & & & \\ -k_2 & k_2 + k_3 & -k_3 & & \\ & \ddots & \ddots & \ddots & \\ & & -k_{n-1} & k_{n-1} + k_n & -k_n \\ & & & -k_n & k_n \end{bmatrix}, \qquad M = \operatorname{diag}(m_1, m_2, \ldots, m_{n-1}, m_n). \tag{19}$$

Then the squares of the natural frequencies of the systems in Figure 1 satisfy $(C - \lambda M)\mathbf{x} = \mathbf{0}$ and $(C^n - \lambda^n M^n)\mathbf{x}^n = \mathbf{0}$, where $C^n$ is obtained from $C$ by deleting the last row and column. The solutions can be ordered $0 < \lambda_1 < \mu_1 < \lambda_2 < \cdots < \lambda_{n-1} < \mu_{n-1} < \lambda_n$. We can also rewrite the systems as $(B - \lambda I)\mathbf{u} = \mathbf{0}$ and $(B^n - \lambda^n I)\mathbf{u}^n = \mathbf{0}$, where $B = M^{-1/2}CM^{-1/2}$ and $\mathbf{u} = M^{1/2}\mathbf{x}$. Note that the squares of the natural frequencies of the systems are the eigenvalues of $B$ and $B^n$.

Suppose that the matrix $B(n) := A(n) + I$ were to arise from a spring-mass system as in Figure 1; that is, we are considering the system whose squared natural frequencies are the equally spaced values $\{1, 3, \ldots, 2n-1\}$ for system (a) and $\{2, 4, \ldots, 2n-2\}$ for system (b). The system in Figure 1 is the simplest possible discrete model for a rod vibrating in longitudinal motion, and it more closely approximates the continuous system as $n \to \infty$. In a physical system we expect clustering of frequencies. The test matrix $B(n)$ does not share this phenomenon, and so we expect the stiffnesses and masses associated with it to become unrealistic as $n \to \infty$. To demonstrate this, we will explicitly solve for the stiffnesses and masses associated with $B(n)$. With $B(n) = A(n) + I$ we note that

$$B_{jj} = a_j = n, \quad j = 1, \ldots, n; \qquad B_{j,j+1} = -b_j = -\tfrac{1}{2}\sqrt{j(2n-j-1)}, \quad j = 1, \ldots, n-2; \qquad B_{n-1,n} = -b_{n-1} = -\sqrt{\frac{n(n-1)}{2}}, \tag{20}$$

with eigenvalues $\{2j+1\}_{j=0}^{n-1}$, while $B^n(n)$ has eigenvalues $\{2j\}_{j=1}^{n-1}$.

Let $\mathbf{u} = \langle m_1^{1/2}, \ldots, m_n^{1/2} \rangle^T$ with $m_j > 0$ for all $j$, and let $m = \sum_{j=1}^{n} m_j = \mathbf{u}^T\mathbf{u}$ be the total mass. We wish to solve

$$B(n)\,\mathbf{u} = \langle m_1^{-1/2}k_1, 0, \ldots, 0 \rangle^T \tag{21}$$

for $(m_j)_{j=1}^n$ and $k_1$. (This is the correct right-hand side: every row sum of $C$ vanishes except the first, which equals $k_1$, so $C\mathbf{e} = k_1\mathbf{e}_1$ for $\mathbf{e} = \langle 1, \ldots, 1 \rangle^T$, and $B\mathbf{u} = M^{-1/2}C\mathbf{e} = k_1 m_1^{-1/2}\mathbf{e}_1$ for $\mathbf{u} = M^{1/2}\mathbf{e}$.) The bottom, $n$th, equation is

$$m_{n-1}^{1/2} = \frac{n\,m_n^{1/2}}{b_{n-1}} = \sqrt{2}\left(\frac{n}{n-1}\right)^{1/2}\alpha, \tag{22}$$

where we choose $m_n^{1/2} = \alpha$. We will thus be able to express $m_j^{1/2}$ in terms of the scaling parameter $\alpha$. The $(n-1)$th equation is

$$m_{n-2}^{1/2} = \frac{n\,m_{n-1}^{1/2} - b_{n-1}\,m_n^{1/2}}{b_{n-2}} = \alpha\,\frac{n\sqrt{2}\sqrt{n/(n-1)} - \sqrt{n(n-1)/2}}{\tfrac{1}{2}\sqrt{(n-2)(n+1)}} = \alpha\sqrt{2}\left(\frac{n(n+1)}{(n-1)(n-2)}\right)^{1/2}. \tag{23}$$

The $j$th equation, for $j \ne 1, n-1, n$, is

$$-b_{j-1}m_{j-1}^{1/2} + n\,m_j^{1/2} - b_j m_{j+1}^{1/2} = 0. \tag{24}$$

Then

$$m_{j-1}^{1/2} = \frac{2n\,m_j^{1/2} - (j(2n-j-1))^{1/2}\,m_{j+1}^{1/2}}{((j-1)(2n-j))^{1/2}}. \tag{25}$$

Now suppose that, for $i = 1, 2, \ldots, n-1$,

$$m_{n-i}^{1/2} = \alpha\sqrt{2}\left(\frac{n(n+1)\cdots(n+i-1)}{(n-1)(n-2)\cdots(n-i)}\right)^{1/2}. \tag{26}$$
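The closed factorial forms for the masses and stiffnesses, (30)–(33), can be checked numerically: building $M$ and the stiffness matrix $C$ of (19) from them, the matrix $B = M^{-1/2}CM^{-1/2}$ should reproduce the spectra $\{1, 3, \ldots, 2n-1\}$ and $\{2, 4, \ldots, 2n-2\}$. A minimal sketch (NumPy assumed; $\alpha = 1$; the function name is ours):

```python
import numpy as np
from math import factorial

def masses_stiffnesses(n, alpha=1.0):
    """Closed-form solution of the spring-mass inverse problem:
    m_n = alpha^2, m_{n-i} = 2*alpha^2*(n+i-1)!*(n-i-1)!/((n-1)!)^2,
    and k_{i+1} = alpha^2 * i!*(2n-i-1)!/((n-1)!)^2 (i = 0 gives k_1)."""
    a2, f2 = alpha ** 2, factorial(n - 1) ** 2
    m = [2 * a2 * factorial(n + i - 1) * factorial(n - i - 1) / f2
         for i in range(n - 1, 0, -1)] + [a2]        # m_1, ..., m_n
    k = [a2 * factorial(i) * factorial(2 * n - i - 1) / f2
         for i in range(n)]                           # k_1, ..., k_n
    return np.array(m), np.array(k)

n = 6
m, k = masses_stiffnesses(n)
C = np.zeros((n, n))                  # stiffness matrix of (19)
for j in range(n):
    C[j, j] = k[j] + (k[j + 1] if j + 1 < n else 0.0)
    if j + 1 < n:
        C[j, j + 1] = C[j + 1, j] = -k[j + 1]
Mh = np.diag(m ** -0.5)
B = Mh @ C @ Mh                       # should equal B(n) = A(n) + I
evals = np.linalg.eigvalsh(B)                 # expect 1, 3, ..., 2n-1
sub_evals = np.linalg.eigvalsh(B[:-1, :-1])   # expect 2, 4, ..., 2n-2
```

The superexponential growth of the mass and stiffness ratios with $n$ is visible directly in the arrays `m` and `k`.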
The cases $i = 1, 2$ are already verified, and the strong inductive assumption applied in (25), with $j - 1 = n - (i+1)$, that is, $j = n - i$, gives

$$m_{n-i-1}^{1/2} = \frac{2n\,m_{n-i}^{1/2} - ((n-i)(n+i-1))^{1/2}\,m_{n-i+1}^{1/2}}{((n-i-1)(n+i))^{1/2}}.$$

By (26), $m_{n-i+1}^{1/2} = m_{n-i}^{1/2}\left(\frac{n-i}{n+i-1}\right)^{1/2}$, so the numerator equals

$$\alpha\sqrt{2}\left(\frac{n(n+1)\cdots(n+i-1)}{(n-1)(n-2)\cdots(n-i)}\right)^{1/2}\bigl(2n - (n-i)\bigr),$$

and therefore

$$m_{n-i-1}^{1/2} = \alpha\sqrt{2}\left(\frac{n(n+1)\cdots(n+i-1)(n+i)}{(n-1)(n-2)\cdots(n-i)(n-i-1)}\right)^{1/2}, \tag{27}$$

which verifies, by strong induction, the closed form for $m_{n-i}^{1/2}$ given by (26). Finally, the first equation of (21) is

$$n\,m_1^{1/2} - b_1 m_2^{1/2} = m_1^{-1/2}k_1, \tag{28}$$

and so

$$k_1 = n\,m_1 - b_1 (m_1 m_2)^{1/2}. \tag{29}$$

We note that the values $m_{n-i}$ can be written as

$$m_{n-i} = 2\alpha^2\,\frac{(n+i-1)!\,(n-i-1)!}{((n-1)!)^2}, \tag{30}$$

for $i = 1, \ldots, n-1$, and

$$m_n = \alpha^2\,\frac{(n-1)!\,(n-1)!}{((n-1)!)^2} = \alpha^2. \tag{31}$$

Since $C = M^{1/2}B(n)M^{1/2}$, the stiffnesses are

$$k_{i+1} = -C_{i,i+1} = -m_i^{1/2}B_{i,i+1}m_{i+1}^{1/2} = \alpha^2\,\frac{i!\,(2n-i-1)!}{((n-1)!)^2}, \tag{32}$$

for $i = 1, \ldots, n-1$, while (29) and (30) give

$$k_1 = \alpha^2\,\frac{(2n-1)!}{((n-1)!)^2}. \tag{33}$$

From (30) we have $m_1/m_n = 2(2n-2)!/((n-1)!)^2$, which goes to infinity as $n \to \infty$, and from (32) and (33) we see that $k_1/k_n = (2n-1)!/((n-1)!\,n!)$, which also goes to infinity as $n \to \infty$. This is not a model of a physical rod, as expected.

5. Conclusion

A family of $n \times n$ symmetric tridiagonal matrices $Q_n$, whose eigenvalues are simple and uniformly spaced and whose leading principal submatrix has uniformly interlaced, simple eigenvalues, has been presented (14). Members of the family are characterized by a specified smallest eigenvalue $\lambda_1$ and gap size $\delta$ between eigenvalues. The matrices are termed Jacobian, since the off-diagonal entries are all nonzero. The matrix entries are explicit functions of the size $n$, $\lambda_1$, and $\delta$, so the matrices can be used as test matrices for eigenproblems, both forward and inverse. The matrix $Q_n$ for specified smallest eigenvalue $\lambda_1$ and gap $\delta$ is unique up to the signs of the off-diagonal elements. In Section 4, the form of $Q_n$ was used to give an explicit solution of a spring-mass vibration model (Figure 1), and the inverse problem to determine the lumped masses and spring stiffnesses was solved explicitly. Both the lumped masses $m_{n-i}$ given by (30) and the spring stiffnesses $k_{i+1}$ given by (32) show superexponential growth. Consequently $m_n/m_1$ and $k_n/k_1$ become vanishingly small as $n \to \infty$. As a result, the spring-mass system of Figure 1 cannot be used as a discretized model for a physical rod in longitudinal vibration, as the model becomes unrealistic in the limit as $n \to \infty$.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] H. Hochstadt, "On some inverse problems in matrix theory," Archiv der Mathematik, vol. 18, pp. 201–207, 1967.
[2] G. M. L. Gladwell, Inverse Problems in Vibration, vol. 9 of Monographs and Textbooks on Mechanics of Solids and Fluids: Mechanics. Dynamical Systems, Martinus Nijhoff Publishers, Dordrecht, The Netherlands, 1986.
[3] N. B. Willms, Some matrix inverse eigenvalue problems [M.S. thesis], University of Waterloo, Ontario, Canada, 1988.
[4] F. W. Biegler-König, "Construction of band matrices from spectral data," Linear Algebra and Its Applications, vol. 40, pp. 79–87, 1981.
[5] C. de Boor and G. H. Golub, "The numerically stable reconstruction of a Jacobi matrix from spectral data," Linear Algebra and Its Applications, vol. 21, no. 3, pp. 245–260, 1978.
[6] G. M. L. Gladwell and N. B. Willms, "A discrete Gel'fand-Levitan method for band-matrix inverse eigenvalue problems," Inverse Problems, vol. 5, no. 2, pp. 165–179, 1989.
[7] D. Boley and G. H. Golub, "Inverse eigenvalue problems for band matrices," in Numerical Analysis (Proc. 7th Biennial Conf., Univ. Dundee, Dundee, 1977), G. A. Watson, Ed., vol. 630 of Lecture Notes in Mathematics, pp. 23–31, Springer, Berlin, Germany, 1978.
[8] O. H. Hald, "Inverse eigenvalue problems for Jacobi matrices," Linear Algebra and Its Applications, vol. 14, no. 1, pp. 63–85, 1976.
[9] H. Hochstadt, "On the construction of a Jacobi matrix from spectral data," Linear Algebra and Its Applications, vol. 8, pp. 435–446, 1974.
[10] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, Md, USA, 4th edition, 2013.
[11] S.-G. Hwang, "Cauchy's interlace theorem for eigenvalues of Hermitian matrices," The American Mathematical Monthly, vol. 111, no. 2, pp. 157–159, 2004.
[12] S. Fisk, "A very short proof of Cauchy's interlace theorem for eigenvalues of Hermitian matrices," The American Mathematical Monthly, vol. 112, no. 2, p. 118, 2005.
[13] A. De Serre Rothney, "Eigenvalues of a special tridiagonal matrix," 2013, http://www.ubishops.ca/fileadmin/bishops documents/natural sciences/mathematics/files/2013-De Serre.pdf.
[14] P. A. Clement, "A class of triple-diagonal matrices for test purposes," SIAM Review, vol. 1, pp. 50–52, 1959.
[15] A. Edelman and E. Kostlan, "The road from Kac's matrix to Kac's random polynomials," in Proceedings of the SIAM Applied Linear Algebra Conference, pp. 503–507, Philadelphia, Pa, USA, 1994.
[16] T. Muir, A Treatise on the Theory of Determinants, Dover, New York, NY, USA, 1960.
[17] O. Taussky and J. Todd, "Another look at a matrix of Mark Kac," Linear Algebra and Its Applications, vol. 150, pp. 341–360, 1991.
[18] A. F. Nikiforov, S. K. Suslov, and V. B. Uvarov, Classical Orthogonal Polynomials of a Discrete Variable, Springer Series in Computational Physics, Springer, Berlin, Germany, 1991.