Neural Networks 49 (2014) 11–18
Exponential passivity of memristive neural networks with time delays

Ailong Wu a,b,c, Zhigang Zeng c,∗

a College of Mathematics and Statistics, Hubei Normal University, Huangshi 435002, China
b Institute for Information and System Science, Xi'an Jiaotong University, Xi'an 710049, China
c School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China

∗ Corresponding author. Tel.: +86 13871412009. E-mail addresses: [email protected] (A. Wu), [email protected] (Z. Zeng).
Article history: Received 29 November 2012; Received in revised form 6 September 2013; Accepted 8 September 2013.

Keywords: Hybrid systems; Memristive neural networks; Exponential passivity
Abstract: Memristive neural networks are studied across many fields of science. To uncover their structural design principles, this paper introduces a general class of memristive neural networks with time delays. Passivity analysis is conducted by constructing a suitable Lyapunov functional, drawing on results from the theories of nonsmooth analysis and linear matrix inequalities. A numerical example is provided to illustrate the effectiveness and reduced conservatism of the proposed results. Crown Copyright © 2013 Published by Elsevier Ltd. All rights reserved.
1. Introduction

In recent years, memristive neural networks have attracted phenomenal worldwide attention (Cantley, Subramaniam, Stiegler, Chapman, & Vogel, 2011, 2012; Itoh & Chua, 2009; Kim, Sah, Yang, Roska, & Chua, 2012; Pershin & Di Ventra, 2010; Wang et al., 2012; Wen & Zeng, 2012; Wen, Zeng, & Huang, 2012; Wu & Zeng, 2012a, 2012b). Research has shown that memristive neural networks possess brain-like computing capability (Pershin & Di Ventra, 2010). For the development of concurrent applications, memristive neural networks are especially valuable where neuromorphic computing is required. From a physical standpoint, a memristive neural network is a memristor bridge weighting circuit that can perform analog multiplication; that is, it is a neuromorphic computing system consisting of identical memristors. Recently, Wu and Zeng (2012a, 2012b) formulated a general class of memristive neural networks and investigated the hysteresis behavior of memristive neurodynamic systems, and Wen and Zeng (2012) and Wen et al. (2012) discussed the stability analysis of memristor-based recurrent neural networks.

To better capture how memristive synapses may emulate their biological counterparts, we consider, from the viewpoint of system theory, a class of memristive neural networks described by the following delay differential equations:
$$\dot{x}_i(t) = -x_i(t) + \sum_{j=1}^{n} a_{ij}(x_i(t))\, f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(x_i(t))\, f_j(x_j(t-\tau_j)) + u_i(t), \quad t \ge 0,\ i = 1, 2, \ldots, n, \qquad (1)$$

$$z_i(t) = f_i(x_i(t)) + f_i(x_i(t-\tau_i)) + u_i(t), \quad t \ge 0,\ i = 1, 2, \ldots, n, \qquad (2)$$
where $x_i(t)$ is the voltage of the capacitor $C_i$, $\tau_j$ is the time delay satisfying $0 \le \tau_j \le \tau$ ($\tau \ge 0$ is a constant), $u_i(t)$ denotes the input, $z_i(t)$ is the output of the network, $f_j(\cdot)$ is a feedback function satisfying $f_j(0) = 0$, and $a_{ij}(x_i(t))$ and $b_{ij}(x_i(t))$ represent the memristor-based weights

$$a_{ij}(x_i(t)) = \frac{W_{ij}}{C_i} \times \operatorname{sgin}_{ij}, \qquad b_{ij}(x_i(t)) = \frac{M_{ij}}{C_i} \times \operatorname{sgin}_{ij}, \qquad \operatorname{sgin}_{ij} = \begin{cases} 1, & i \neq j, \\ -1, & i = j, \end{cases}$$

in which $W_{ij}$ and $M_{ij}$ denote the memductances of memristors $R_{ij}$ and $F_{ij}$, respectively. Here $R_{ij}$ represents the memristor between the feedback function $f_i(x_i(t))$ and $x_i(t)$, and $F_{ij}$ represents the memristor between the feedback function $f_i(x_i(t-\tau_i))$ and $x_i(t)$. According to the feature of the memristor and its current–voltage characteristic,

$$a_{ij}(x_i(t)) = \begin{cases} \hat{a}_{ij}, & \operatorname{sgin}_{ij}\,\dfrac{\mathrm{d}f_j(x_j(t))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_i(t)}{\mathrm{d}t} \le 0, \\[2mm] \check{a}_{ij}, & \operatorname{sgin}_{ij}\,\dfrac{\mathrm{d}f_j(x_j(t))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_i(t)}{\mathrm{d}t} > 0, \end{cases} \qquad (3)$$
$$b_{ij}(x_i(t)) = \begin{cases} \hat{b}_{ij}, & \operatorname{sgin}_{ij}\,\dfrac{\mathrm{d}f_j(x_j(t-\tau_j))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_i(t)}{\mathrm{d}t} \le 0, \\[2mm] \check{b}_{ij}, & \operatorname{sgin}_{ij}\,\dfrac{\mathrm{d}f_j(x_j(t-\tau_j))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_i(t)}{\mathrm{d}t} > 0, \end{cases} \qquad (4)$$

for $i, j = 1, 2, \ldots, n$, where $\hat{a}_{ij}$, $\check{a}_{ij}$, $\hat{b}_{ij}$, and $\check{b}_{ij}$ are constants. A code sketch of this switching rule is given below.
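To make the state-dependent switching in (3) and (4) concrete, here is a minimal sketch in Python; it is not part of the original paper, and the weight pair, sampled trajectories, and finite-difference derivative estimates are hypothetical placeholders.

```python
import numpy as np

def switched_weight(a_hat, a_check, sgin_ij, df_j, dx_i):
    """Select the memristive weight per rule (3)/(4):
    a_hat if sgin_ij * df_j/dt - dx_i/dt <= 0, else a_check."""
    return a_hat if sgin_ij * df_j - dx_i <= 0.0 else a_check

# Hypothetical trajectories; derivatives approximated by finite differences.
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
x_i = np.sin(2 * np.pi * t)           # placeholder state x_i(t)
f_j = np.tanh(np.cos(2 * np.pi * t))  # placeholder f_j(x_j(t))

dx_i = np.gradient(x_i, dt)
df_j = np.gradient(f_j, dt)

sgin_ij = 1                    # i != j per the definition above
a_hat, a_check = 0.07, -0.03   # placeholder weight pair

a_ij = np.array([switched_weight(a_hat, a_check, sgin_ij, dfj, dxi)
                 for dfj, dxi in zip(df_j, dx_i)])
print("number of weight switches:", np.count_nonzero(np.diff(a_ij)))
```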
for i, j = 1, 2, . . . , n, where aˆ ij , aˇ ij , bˆ ij , and bˇ ij are constants. Obviously, the memristive neural network model is basically a state-dependent switched nonlinear systems. Switched systems have drawn considerable attention in the past decade. Passivity is the most important issue in the study of switched systems. Passivity is part of a broader and a general theory of dissipativeness. The main point of passivity theory is that the passive properties of systems can keep the systems internally stable. Over recent years, as a powerful tool, passivity has played an important role in the network control (Ji, Koo, Won, Lee, & Park, 2011), system analysis and design (Balasubramaniam & Nagamani, 2010a, 2010b, 2011, 2012; Balasubramaniam, Nagamani, & Rakkiyappan, 2011; Fu, Zhang, Ma, & Zhang, 2010; Kwon, Park, Lee, & Cha, 2011; Li, Gao, & Shi, 2010; Li, Lam, & Cheung, 2012; Li, Wang, Shi, & Gao, 2010; Mathiyalagan, Sakthivel, & Anthoni, 2012; Sakthivel, Arunkumar, Mathiyalagan, & Anthon, 2011; Wang, Wu, & Guo, 2011; Wu, Park, Su, & Chu, 2012a, 2012b; Wu, Shi, Su, & Chu, 2011; Zeng, He, Wu, & Xiao, 2011; Zhang, Mou, Lam, & Gao, 2009; Zhu & Shen, 2011; Zhu, Shen, & Chen, 2010; Zhu, Zhang, & Yuan, 2010), and so on. These are the main reasons why the passivity theory has become a very hot topic across many fields, and much investigative attention has been focused on this topic. However, there are few results on the passivity analysis of state-dependent switched nonlinear systems, especially with memristive neural networks, since there is no suitable approach to deal with memristive system, where it consists of too many subsystems. We have also noticed that the attention has mainly been focused on the passivity of conventional neural networks (Balasubramaniam & Nagamani, 2010a, 2010b, 2011, 2012; Balasubramaniam et al., 2011; Fu et al., 2010; Ji et al., 2011; Kwon et al., 2011; Li, Gao et al., 2010; Li et al., 2012; Li, Wang et al., 2010; Mathiyalagan et al., 2012; Sakthivel et al., 2011; Wang et al., 2011; Wu et al., 2012a, 2012b, 2011; Zeng et al., 2011; Zhang et al., 2009; Zhu & Shen, 2011; Zhu, Shen et al., 2010; Zhu, Zhang et al., 2010), where several novel research results have been proposed. In fact, one remarkable feature of passivity is that the passive system utilizes the product of input and output as the energy provision, and embodies the energy attenuation character. Passive system only burns energy, without energy production, i.e., passivity represents the property of energy consumption of the system. On the other hand, the passivity analysis for memristive neural networks can help us understand the complex brain functionalities, an important step forward to improve upon analog computing with the adoption of memristor-MOS technology designs. However, to the best of the authors’ knowledge, in most of the existing works on the passivity, this topic on the memristive neural networks has not been addressed. Currently, the results on qualitative analysis of memristive neural networks are mainly in classical stability theory (Wen & Zeng, 2012; Wen et al., 2012; Wu & Zeng, 2012a, 2012b). Thus, it is important and interesting to study the passivity of memristive neural networks, which partly motivates our present work. Actually, passivity is a higher abstraction level of stability. This paper studies the exponential passivity for a general class of memristive neural networks of (1)–(2). 
A systematic approach to constructing a suitable Lyapunov functional is also presented via the theories of differential inclusions and set-valued maps. It is worth noting that the proposed method can be applied to general nonlinear hybrid systems.

The rest of this paper is organized as follows. In Section 2, some preliminaries are described. The main results are given in Section 3. In Section 4, a numerical example is provided to illustrate the effectiveness of the proposed results. Concluding remarks are stated in Section 5.
2. Preliminaries

In a similar way to Wu et al. (2012a), Zhang et al. (2009), and Zhu, Shen et al. (2010), we assume that each feedback function $f_i$ is bounded and satisfies

$$0 \le \frac{f_i(\hat{\chi}) - f_i(\breve{\chi})}{\hat{\chi} - \breve{\chi}} \le k_i, \quad i = 1, 2, \ldots, n,\ \forall\, \hat{\chi}, \breve{\chi} \in \Re,\ \hat{\chi} \neq \breve{\chi}, \qquad (5)$$

where each constant $k_i > 0$.
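For instance, the piecewise-linear activation used in Section 4,
$$f_i(\rho) = \frac{|\rho + 1| - |\rho - 1|}{2},$$
is bounded by $1$ in absolute value and satisfies (5) with $k_i = 1$, since its difference quotients lie in $[0, 1]$.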
Throughout this paper, solutions of all the systems considered are intended in Filippov's sense. $\Re^n$ is the $n$-dimensional Euclidean space. $C([-\tau, 0], \Re^n)$ is the Banach space of all continuous functions mapping $[-\tau, 0]$ into $\Re^n$. $\|\cdot\|$ denotes the Euclidean norm of a vector and its induced norm of a matrix. For any vector $\pi = (\pi_1, \pi_2, \ldots, \pi_m)^T$, $|\pi| = (|\pi_1|, |\pi_2|, \ldots, |\pi_m|)^T$. $\operatorname{co}\{\underline{\Pi}, \overline{\Pi}\}$ denotes the closure of the convex hull generated by real numbers $\underline{\Pi}$ and $\overline{\Pi}$ or by real matrices $\underline{\Pi}$ and $\overline{\Pi}$. Let $\overline{a}_{ij} = \max\{\hat{a}_{ij}, \check{a}_{ij}\}$, $\underline{a}_{ij} = \min\{\hat{a}_{ij}, \check{a}_{ij}\}$, $\overline{b}_{ij} = \max\{\hat{b}_{ij}, \check{b}_{ij}\}$, $\underline{b}_{ij} = \min\{\hat{b}_{ij}, \check{b}_{ij}\}$, $\tilde{a}_{ij} = \max\{|\hat{a}_{ij}|, |\check{a}_{ij}|\}$, and $\tilde{b}_{ij} = \max\{|\hat{b}_{ij}|, |\check{b}_{ij}|\}$, for $i, j = 1, 2, \ldots, n$. Denote $K = \operatorname{diag}(k_1, k_2, \ldots, k_n)$, $A^* = (\tilde{a}_{ij})_{n \times n}$, $B^* = (\tilde{b}_{ij})_{n \times n}$. For a symmetric matrix $S$, $S > 0$ ($S < 0$) means that $S$ is positive definite (negative definite). The matrices $I$ and $0$ represent the identity matrix and a zero matrix of appropriate dimensions, respectively. For matrices $M = (m_{ij})_{m \times n}$ and $N = (n_{ij})_{m \times n}$, $M \gg N$ ($M \ll N$) means that $m_{ij} > n_{ij}$ ($m_{ij} < n_{ij}$) for $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, n$. The interval matrix $[M, N]$ presumes $M \ll N$, and $L = (l_{ij})_{m \times n} \in [M, N]$ means $M \ll L \ll N$, i.e., $m_{ij} < l_{ij} < n_{ij}$ for all $i, j$. $\lambda_{\max}(Q)$ and $\lambda_{\min}(Q)$ denote the maximum and minimum eigenvalues of the matrix $Q$, respectively, and $Q^T$, $Q^{-1}$ represent its transpose and inverse. The symmetric term in a symmetric matrix is denoted by $*$, i.e.,

$$\begin{pmatrix} W_1 & W_2 \\ * & W_3 \end{pmatrix} = \begin{pmatrix} W_1 & W_2 \\ W_2^T & W_3 \end{pmatrix}.$$

Matrices whose dimensions are not explicitly stated are assumed to have compatible dimensions for algebraic operations.

In addition, the initial conditions of system (1) are assumed to be $x_i(t) = \phi_i(t)$, $-\tau \le t \le 0$, $i = 1, 2, \ldots, n$, where $\phi(\vartheta) = (\phi_1(\vartheta), \phi_2(\vartheta), \ldots, \phi_n(\vartheta))^T \in C([-\tau, 0], \Re^n)$. Without loss of generality, $z^T(t) u(t)$ is taken as the supply rate, i.e., $\int_0^t z^T(s) u(s)\, ds$ exists for all $t \ge 0$, where $z(t) = (z_1(t), z_2(t), \ldots, z_n(t))^T$ and $u(t) = (u_1(t), u_2(t), \ldots, u_n(t))^T$.

By the theories of differential inclusions and set-valued maps, from (1) and (2) it follows that

$$\dot{x}_i(t) \in -x_i(t) + \sum_{j=1}^{n} \operatorname{co}\{\hat{a}_{ij}, \check{a}_{ij}\}\, f_j(x_j(t)) + \sum_{j=1}^{n} \operatorname{co}\{\hat{b}_{ij}, \check{b}_{ij}\}\, f_j(x_j(t-\tau_j)) + u_i(t), \quad t \ge 0,\ i = 1, 2, \ldots, n, \qquad (6)$$

$$z_i(t) = f_i(x_i(t)) + f_i(x_i(t-\tau_i)) + u_i(t), \quad t \ge 0,\ i = 1, 2, \ldots, n, \qquad (7)$$
or equivalently, for $i, j = 1, 2, \ldots, n$, there exist $Z_{ij} \in \operatorname{co}\{\hat{a}_{ij}, \check{a}_{ij}\}$ and $H_{ij} \in \operatorname{co}\{\hat{b}_{ij}, \check{b}_{ij}\}$ such that

$$\dot{x}_i(t) = -x_i(t) + \sum_{j=1}^{n} Z_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} H_{ij} f_j(x_j(t-\tau_j)) + u_i(t), \quad t \ge 0,\ i = 1, 2, \ldots, n, \qquad (8)$$

$$z_i(t) = f_i(x_i(t)) + f_i(x_i(t-\tau_i)) + u_i(t), \quad t \ge 0,\ i = 1, 2, \ldots, n. \qquad (9)$$

Clearly, for $i, j = 1, 2, \ldots, n$,

$$\operatorname{co}\{\hat{a}_{ij}, \check{a}_{ij}\} = [\underline{a}_{ij}, \overline{a}_{ij}], \qquad \operatorname{co}\{\hat{b}_{ij}, \check{b}_{ij}\} = [\underline{b}_{ij}, \overline{b}_{ij}].$$

Of course, the parameters $Z_{ij}$ and $H_{ij}$ ($i, j = 1, 2, \ldots, n$) in (8) depend on the initial condition of system (1) and on time $t$. A solution $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T$ (in the sense of Filippov) of system (1) with initial conditions $x_i(t) = \phi_i(t)$, $-\tau \le t \le 0$, is absolutely continuous on any compact interval of $[0, +\infty)$ and satisfies, for $i = 1, 2, \ldots, n$,

$$\dot{x}_i(t) \in -x_i(t) + \sum_{j=1}^{n} \operatorname{co}\{\hat{a}_{ij}, \check{a}_{ij}\}\, f_j(x_j(t)) + \sum_{j=1}^{n} \operatorname{co}\{\hat{b}_{ij}, \check{b}_{ij}\}\, f_j(x_j(t-\tau_j)) + u_i(t), \quad t \ge 0.$$

For convenience, (6) and (7) are transformed into the compact form

$$\dot{x}(t) \in -x(t) + \operatorname{co}\{\hat{A}, \check{A}\}\, f(x(t)) + \operatorname{co}\{\hat{B}, \check{B}\}\, f(x(t-\tau)) + u(t), \quad t \ge 0, \qquad (10)$$

$$z(t) = f(x(t)) + f(x(t-\tau)) + u(t), \quad t \ge 0, \qquad (11)$$

or equivalently, there exist $A \in \operatorname{co}\{\hat{A}, \check{A}\}$ and $B \in \operatorname{co}\{\hat{B}, \check{B}\}$ such that

$$\dot{x}(t) = -x(t) + A f(x(t)) + B f(x(t-\tau)) + u(t), \quad t \ge 0, \qquad (12)$$

$$z(t) = f(x(t)) + f(x(t-\tau)) + u(t), \quad t \ge 0, \qquad (13)$$

where $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T$, $f(x(t)) = (f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t)))^T$, $f(x(t-\tau)) = (f_1(x_1(t-\tau_1)), f_2(x_2(t-\tau_2)), \ldots, f_n(x_n(t-\tau_n)))^T$, $u(t) = (u_1(t), u_2(t), \ldots, u_n(t))^T$, $z(t) = (z_1(t), z_2(t), \ldots, z_n(t))^T$, $\hat{A} = (\hat{a}_{ij})_{n \times n}$, $\check{A} = (\check{a}_{ij})_{n \times n}$, $\hat{B} = (\hat{b}_{ij})_{n \times n}$, $\check{B} = (\check{b}_{ij})_{n \times n}$, $\operatorname{co}\{\hat{A}, \check{A}\} = [\underline{A}, \overline{A}]$, $\operatorname{co}\{\hat{B}, \check{B}\} = [\underline{B}, \overline{B}]$, $\overline{A} = (\overline{a}_{ij})_{n \times n}$, $\underline{A} = (\underline{a}_{ij})_{n \times n}$, $\overline{B} = (\overline{b}_{ij})_{n \times n}$, $\underline{B} = (\underline{b}_{ij})_{n \times n}$.

Definition 1. The system in (1) and (2) is said to be exponentially passive from input $u(t)$ to output $z(t)$ if there exist a Lyapunov functional $V$ defined on $\Re^n$ and a constant $\varrho > 0$ such that, for all $u(t)$, the following inequality holds:

$$\dot{V}(x(t)) + \varrho V(x(t)) \le 2 z^T(t) u(t), \quad t \ge 0. \qquad (14)$$

3. Main results

Theorem 1. The system in (1) and (2) is exponentially passive if there exist symmetric matrices $P_1 > 0$, $P_2 > 0$, $P_3 > 0$ and diagonal matrices $X_1 > 0$, $X_2 > 0$, $X_3 > 0$ such that

$$\begin{pmatrix}
\Omega_{11} & P_3 & 0 & \Omega_{14} & \Omega_{15} & \Omega_{16} & -\tau P_3 \\
* & \Omega_{22} & P_3 & 0 & \Omega_{25} & 0 & 0 \\
* & * & \Omega_{33} & 0 & 0 & 0 & 0 \\
* & * & * & \Omega_{44} & 0 & -I & \tau A^{*T} P_3 \\
* & * & * & * & \Omega_{55} & -I & \tau B^{*T} P_3 \\
* & * & * & * & * & -2I & \tau P_3 \\
* & * & * & * & * & * & -P_3
\end{pmatrix} < 0, \qquad (15)$$

where

$\Omega_{11} = -2(P_1 + K) + P_2 + K X_1 K - P_3 + I$, $\Omega_{14} = (P_1 + K) A^* + K X_2 + I$, $\Omega_{15} = (P_1 + K) B^*$, $\Omega_{16} = P_1 + K$, $\Omega_{22} = -K X_1 K - 2 P_3 - I$, $\Omega_{25} = K X_3 - I$, $\Omega_{33} = -P_2 - P_3$, $\Omega_{44} = -X_1 - 2 X_2 + I$, $\Omega_{55} = X_1 - 2 X_3 - I$.

Proof. From (5), for any diagonal matrices $X_1 > 0$, $X_2 > 0$, $X_3 > 0$, one has

$$x^T(t) K X_1 K x(t) - f^T(x(t)) X_1 f(x(t)) \ge 0, \qquad f^T(x(t)) X_2 \big[K x(t) - f(x(t))\big] \ge 0, \qquad (16)$$

$$f^T(x(t-\tau)) X_3 \big[K x(t-\tau) - f(x(t-\tau))\big] \ge 0. \qquad (17)$$

On the other hand, applying the Schur complement to (15) yields

$$\begin{pmatrix}
\Omega_{11} & P_3 & 0 & \Omega_{14} & \Omega_{15} & \Omega_{16} \\
* & \Omega_{22} & P_3 & 0 & \Omega_{25} & 0 \\
* & * & \Omega_{33} & 0 & 0 & 0 \\
* & * & * & \Omega_{44} & 0 & -I \\
* & * & * & * & \Omega_{55} & -I \\
* & * & * & * & * & -2I
\end{pmatrix} + \tau^2\, \Gamma P_3 \Gamma^T < 0, \qquad \Gamma = \begin{pmatrix} -I \\ 0 \\ 0 \\ A^{*T} \\ B^{*T} \\ I \end{pmatrix}.$$

Clearly, one can then choose sufficiently small positive constants $\epsilon_1 > 0$ and $\epsilon_2 > 0$ such that

$$\Delta = \begin{pmatrix}
\Omega_{11} + \tau^2 \epsilon_1 I & P_3 & 0 & \Omega_{14} & \Omega_{15} & \Omega_{16} \\
* & \Omega_{22} & P_3 & 0 & \Omega_{25} & 0 \\
* & * & \Omega_{33} & 0 & 0 & 0 \\
* & * & * & \Omega_{44} & 0 & -I \\
* & * & * & * & \Omega_{55} & -I \\
* & * & * & * & * & -2I
\end{pmatrix} + \tau^2\, \Gamma (P_3 + \epsilon_2 I) \Gamma^T < 0. \qquad (18)$$
Consider the following Lyapunov functional:

$$V(t) = V_1(t) + V_2(t) + V_3(t) + V_4(t),$$

where

$$V_1(t) = x^T(t)(P_1 + K) x(t), \qquad V_2(t) = \int_{t-\tau}^{t} \big[\eta^T(s) \eta(s) + x^T(s) P_2 x(s)\big]\, ds,$$

$$V_3(t) = \int_{t-\tau}^{t} \big[x^T(s) K X_1 K x(s) - f^T(x(s)) X_1 f(x(s))\big]\, ds,$$

$$V_4(t) = \tau \int_{-\tau}^{0} \int_{t+\vartheta}^{t} \big[\epsilon_1 x^T(s) x(s) + \dot{x}^T(s)(P_3 + \epsilon_2 I) \dot{x}(s)\big]\, ds\, d\vartheta,$$

in which $\eta(t) = (x^T(t), f^T(x(t)))^T$. Evaluating the time derivative of $V$ along the trajectory of (10) or (12) gives

$$\dot{V}_1(t) = 2 x^T(t)(P_1 + K) \dot{x}(t), \qquad (19)$$

$$\dot{V}_2(t) = \eta^T(t) \eta(t) - \eta^T(t-\tau) \eta(t-\tau) + x^T(t) P_2 x(t) - x^T(t-\tau) P_2 x(t-\tau), \qquad (20)$$

$$\dot{V}_3(t) = x^T(t) K X_1 K x(t) - f^T(x(t)) X_1 f(x(t)) - x^T(t-\tau) K X_1 K x(t-\tau) + f^T(x(t-\tau)) X_1 f(x(t-\tau)), \qquad (21)$$

$$\dot{V}_4(t) = \tau^2 \epsilon_1 x^T(t) x(t) + \tau^2 \dot{x}^T(t)(P_3 + \epsilon_2 I) \dot{x}(t) - \epsilon_1 \tau \int_{t-\tau}^{t} \|x(s)\|^2 ds - \epsilon_2 \tau \int_{t-\tau}^{t} \|\dot{x}(s)\|^2 ds - \tau \int_{t-\tau}^{t} \dot{x}^T(s) P_3 \dot{x}(s)\, ds, \qquad (22)$$

where $x(t-\tau) = (x_1(t-\tau_1), x_2(t-\tau_2), \ldots, x_n(t-\tau_n))^T$ and $\eta(t-\tau) = (x^T(t-\tau), f^T(x(t-\tau)))^T$. Together with (16), (17), and (19)–(22), then

$$\dot{V}(t) - 2 z^T(t) u(t) \le \xi^T(t) \Delta \xi(t) - \epsilon_1 \tau \int_{t-\tau}^{t} \|x(s)\|^2 ds - \epsilon_2 \tau \int_{t-\tau}^{t} \|\dot{x}(s)\|^2 ds \le \lambda_{\max}(\Delta) \|x(t)\|^2 - \epsilon_1 \tau \int_{t-\tau}^{t} \|x(s)\|^2 ds - \epsilon_2 \tau \int_{t-\tau}^{t} \|\dot{x}(s)\|^2 ds,$$

where $\xi(t) = \big(x^T(t), x^T(t-\tau), x^T(t-\tau), f^T(x(t)), f^T(x(t-\tau)), u^T(t)\big)^T$.

On the other side, it is easy to check that

$$V_1(t) \le \|P_1 + K\|\, \|x(t)\|^2, \qquad V_2(t) \le \int_{t-\tau}^{t} \big[\|x(s)\|^2 + \|K\|^2 \|x(s)\|^2 + \|P_2\|\, \|x(s)\|^2\big]\, ds,$$

$$V_3(t) \le \int_{t-\tau}^{t} 2 \|K\|^2 \|X_1\|\, \|x(s)\|^2\, ds, \qquad V_4(t) \le \int_{t-\tau}^{t} \tau^2 \epsilon_1 \|x(s)\|^2\, ds + \int_{t-\tau}^{t} \tau^2 \|P_3 + \epsilon_2 I\|\, \|\dot{x}(s)\|^2\, ds,$$

and hence

$$V(t) \le \|P_1 + K\|\, \|x(t)\|^2 + \big(1 + \|K\|^2 + \|P_2\| + 2\|K\|^2\|X_1\| + \tau^2 \epsilon_1\big) \int_{t-\tau}^{t} \|x(s)\|^2 ds + \tau^2 \|P_3 + \epsilon_2 I\| \int_{t-\tau}^{t} \|\dot{x}(s)\|^2 ds.$$

Choose a sufficiently small positive constant $\alpha > 0$ such that

$$\alpha \|P_1 + K\| + \lambda_{\max}(\Delta) < 0, \qquad \alpha \big(1 + \|K\|^2 + \|P_2\| + 2\|K\|^2\|X_1\| + \tau^2 \epsilon_1\big) - \epsilon_1 \tau < 0, \qquad \alpha \tau^2 \|P_3 + \epsilon_2 I\| - \epsilon_2 \tau < 0;$$

then it easily yields

$$\dot{V}(t) - 2 z^T(t) u(t) \le -\alpha \|P_1 + K\|\, \|x(t)\|^2 - \alpha \big(1 + \|K\|^2 + \|P_2\| + 2\|K\|^2\|X_1\| + \tau^2 \epsilon_1\big) \int_{t-\tau}^{t} \|x(s)\|^2 ds - \alpha \tau^2 \|P_3 + \epsilon_2 I\| \int_{t-\tau}^{t} \|\dot{x}(s)\|^2 ds \le -\alpha V(t).$$

Thus, it follows that

$$\dot{V}(t) + \alpha V(t) \le 2 z^T(t) u(t), \quad t \ge 0.$$

The proof is completed.

Theorem 2. The system in (1) and (2) is exponentially passive if there exist symmetric matrices $Q_1 > 0$, $Q_2 > 0$, $Q_3 > 0$, $Q_4 > 0$, $Q_5 > 0$, $Q_6 > 0$, diagonal matrices $X_1 > 0$, $X_2 > 0$, $X_3 > 0$, and a matrix $M = [M_1, M_2, M_3, M_4, M_5, M_6] \gg 0$ such that

$$\begin{pmatrix} \Xi & \sqrt{\tau}\, I & \sqrt{\tau}\, I \\ * & -Q_6 & 0 \\ * & * & -Q_6 \end{pmatrix} < 0, \qquad (23)$$

where

$$\Xi = \begin{pmatrix}
\Xi_{11} & \Xi_{12} & \Xi_{13} & \Xi_{14} & \Xi_{15} & -M_6 \\
* & \Xi_{22} & \Xi_{23} & -M_4 & \Xi_{25} & -M_6 - I \\
* & * & \Xi_{33} & A^{*T} M_4 & \Xi_{35} & A^{*T} M_6 - I \\
* & * & * & \Xi_{44} & \Xi_{45} & -I \\
* & * & * & * & \Xi_{55} & B^{*T} M_6 - I \\
* & * & * & * & * & -Q_4 - 2I
\end{pmatrix}$$

with

$\Xi_{11} = Q_3 + Q_4 + \tau Q_5 - M_1^T - M_1 + 2I$, $\Xi_{12} = Q_1 - M_1^T - M_2 + I$, $\Xi_{13} = M_1^T A^* + K X_2 - M_3 + I$, $\Xi_{14} = -M_4 + I$, $\Xi_{15} = M_1^T B^* + I - M_5$, $\Xi_{22} = 2\tau Q_6 - M_2 - M_2^T$, $\Xi_{23} = M_2^T A^* - M_3 + I$, $\Xi_{25} = M_2^T B^* - M_5$, $\Xi_{33} = Q_2 - 2 X_2 + M_3^T A^* + A^{*T} M_3$, $\Xi_{35} = A^{*T} M_5 + M_3^T B^*$, $\Xi_{44} = X_1 - Q_3$, $\Xi_{45} = K X_3 + M_4^T B^*$, $\Xi_{55} = -2 X_3 - X_1 \operatorname{diag}\big(\tfrac{1}{k_1^2}, \tfrac{1}{k_2^2}, \ldots, \tfrac{1}{k_n^2}\big) - Q_2 + M_5^T B^* + B^{*T} M_5$.
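Both proofs invoke the Schur complement; for completeness, the standard form of the lemma assumed here is: for a symmetric block matrix with $S_{22} < 0$,

$$\begin{pmatrix} S_{11} & S_{12} \\ * & S_{22} \end{pmatrix} < 0 \iff S_{22} < 0 \ \text{ and } \ S_{11} - S_{12} S_{22}^{-1} S_{12}^T < 0.$$

Applying it to (23) with $S_{12} = [\sqrt{\tau}\, I, \sqrt{\tau}\, I]$ and $S_{22} = \operatorname{diag}(-Q_6, -Q_6)$ gives exactly the reduced condition $\Xi + 2\tau Q_6^{-1} < 0$ used as (27) in the proof below.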
Proof. From (5), for any diagonal matrices $X_1 > 0$, $X_2 > 0$, $X_3 > 0$, one has

$$x^T(t-\tau) X_1 x(t-\tau) - f^T(x(t-\tau)) X_1 \operatorname{diag}\Big(\frac{1}{k_1^2}, \frac{1}{k_2^2}, \ldots, \frac{1}{k_n^2}\Big) f(x(t-\tau)) \ge 0, \qquad (24)$$

$$f^T(x(t)) X_2 \big[K x(t) - f(x(t))\big] \ge 0, \qquad (25)$$

$$f^T(x(t-\tau)) X_3 \big[K x(t-\tau) - f(x(t-\tau))\big] \ge 0, \qquad (26)$$

where $x(t-\tau) = (x_1(t-\tau_1), x_2(t-\tau_2), \ldots, x_n(t-\tau_n))^T$. On the other hand, from (23), by using the Schur complement,

$$\tilde{\Delta} = \Xi + 2\tau Q_6^{-1} < 0. \qquad (27)$$

Notice that for any matrix $M = [M_1, M_2, M_3, M_4, M_5, M_6] \gg 0$, we have

$$\nu^T(t) M^T \big[\dot{x}(t) + x(t) - A^* |f(x(t))| - B^* |f(x(t-\tau))| - u(t)\big] \le 0, \qquad (28)$$

where

$$\nu(t) = \big(|x(t)|, |\dot{x}(t)|, |f(x(t))|, |x(t-\tau)|, |f(x(t-\tau))|, |x(t-\tau)|\big)^T.$$

Consider the following Lyapunov functional:

$$V(t) = V_1(t) + V_2(t) + V_3(t),$$

where

$$V_1(t) = x^T(t) Q_1 x(t) + 2 \sum_{i=1}^{n} \int_{0}^{x_i(t)} f_i(s)\, ds, \qquad V_2(t) = \int_{t-\tau}^{t} \big[x^T(s)(Q_3 + Q_4) x(s) + f^T(x(s)) Q_2 f(x(s))\big]\, ds,$$

$$V_3(t) = \int_{-\tau}^{0} \int_{t+\vartheta}^{t} \big[x^T(s) Q_5 x(s) + 2 \dot{x}^T(s) Q_6 \dot{x}(s)\big]\, ds\, d\vartheta.$$

Evaluating the time derivative of $V$ along the trajectory of (10) or (12) gives

$$\dot{V}_1(t) = 2 x^T(t) Q_1 \dot{x}(t) + 2 f^T(x(t)) \dot{x}(t), \qquad (29)$$

$$\dot{V}_2(t) = x^T(t)(Q_3 + Q_4) x(t) - x^T(t-\tau)(Q_3 + Q_4) x(t-\tau) + f^T(x(t)) Q_2 f(x(t)) - f^T(x(t-\tau)) Q_2 f(x(t-\tau)), \qquad (30)$$

$$\dot{V}_3(t) \le \tau x^T(t) Q_5 x(t) + 2\tau \dot{x}^T(t) Q_6 \dot{x}(t) - \lambda_{\min}(Q_5) \int_{t-\tau}^{t} \|x(s)\|^2 ds - \lambda_{\min}(Q_6) \int_{t-\tau}^{t} \|\dot{x}(s)\|^2 ds, \qquad (31)$$

where $x(t-\tau) = (x_1(t-\tau_1), x_2(t-\tau_2), \ldots, x_n(t-\tau_n))^T$. Together with (24)–(26) and (28)–(31), then

$$\dot{V}(t) - 2 z^T(t) u(t) \le \lambda_{\max}(\tilde{\Delta}) \|x(t)\|^2 - \lambda_{\min}(Q_5) \int_{t-\tau}^{t} \|x(s)\|^2 ds - \lambda_{\min}(Q_6) \int_{t-\tau}^{t} \|\dot{x}(s)\|^2 ds.$$

On the other side, it is easy to check that

$$V_1(t) \le (\|Q_1\| + \|K\|) \|x(t)\|^2, \qquad V_2(t) \le \int_{t-\tau}^{t} \big[\|Q_3 + Q_4\|\, \|x(s)\|^2 + \|Q_2\|\, \|K\|^2 \|x(s)\|^2\big]\, ds,$$

$$V_3(t) \le \int_{t-\tau}^{t} \tau \|Q_5\|\, \|x(s)\|^2\, ds + \int_{t-\tau}^{t} 2\tau \|Q_6\|\, \|\dot{x}(s)\|^2\, ds,$$

and hence

$$V(t) \le (\|Q_1\| + \|K\|) \|x(t)\|^2 + \big(\|Q_3 + Q_4\| + \|Q_2\|\, \|K\|^2 + \tau \|Q_5\|\big) \int_{t-\tau}^{t} \|x(s)\|^2 ds + 2\tau \|Q_6\| \int_{t-\tau}^{t} \|\dot{x}(s)\|^2 ds.$$

Choose a sufficiently small positive constant $\alpha > 0$ such that

$$\alpha (\|Q_1\| + \|K\|) + \lambda_{\max}(\tilde{\Delta}) < 0, \qquad \alpha \big(\|Q_3 + Q_4\| + \|Q_2\|\, \|K\|^2 + \tau \|Q_5\|\big) - \lambda_{\min}(Q_5) < 0, \qquad 2\tau \alpha \|Q_6\| - \lambda_{\min}(Q_6) < 0;$$

then it easily yields

$$\dot{V}(t) - 2 z^T(t) u(t) \le -\alpha (\|Q_1\| + \|K\|) \|x(t)\|^2 - \alpha \big(\|Q_3 + Q_4\| + \|Q_2\|\, \|K\|^2 + \tau \|Q_5\|\big) \int_{t-\tau}^{t} \|x(s)\|^2 ds - 2\tau \alpha \|Q_6\| \int_{t-\tau}^{t} \|\dot{x}(s)\|^2 ds \le -\alpha V(t).$$

Thus, it follows that

$$\dot{V}(t) + \alpha V(t) \le 2 z^T(t) u(t), \quad t \ge 0.$$

The proof is completed.

Remark 1. When $u(t) = 0$, Theorems 1 and 2 directly yield exponential stability conditions for system (1) in terms of linear matrix inequalities. From this contrast, it is easier to see that passivity is a higher abstraction level of stability.

Remark 2. The passivity conditions derived in Theorems 1 and 2 may not be the sharpest possible. However, in many applications one first needs to know whether the designed network has the desired properties, such as exponential passivity. In Theorems 1 and 2, criteria for exponential passivity are established in terms of linear matrix inequalities, and such conditions are convex constraints. A computationally efficient algorithm for solving problems of this sort is the interior point method, which replaces the constrained optimization problem with an unconstrained one solvable by Newton's method. Consequently, reducing a control design problem to a linear matrix inequality can be considered a practical solution; a small feasibility-checking sketch is given below.
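As an illustration of Remark 2, the following is a minimal sketch, not part of the original paper, of how an LMI feasibility problem can be checked with the CVXPY modeling library (assumed available together with the bundled SCS solver). The block structure and numerical data below are simplified stand-ins, not the full matrices of Theorem 1 or 2.

```python
import cvxpy as cp
import numpy as np

# Placeholder problem data (illustrative only, not the paper's example).
n = 2
A_star = np.array([[0.07, 0.0], [0.0, 0.07]])  # assumed bound matrix A*
K = np.eye(n)                                  # sector bounds k_i = 1

P = cp.Variable((n, n), symmetric=True)        # Lyapunov matrix P > 0
xd = cp.Variable(n)                            # diagonal of multiplier X > 0
X = cp.diag(xd)

# A toy passivity-style LMI in block form (a simplified shape standing in
# for the larger matrices of Theorems 1 and 2):
#   [ -2P + K X K + I    P A*  ]
#   [   (P A*)^T          -X   ]  < 0
lmi = cp.bmat([[-2 * P + K @ X @ K + np.eye(n), P @ A_star],
               [(P @ A_star).T, -X]])

eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n),
                   xd >= eps,
                   lmi << -eps * np.eye(2 * n)])
prob.solve(solver=cp.SCS)
print("LMI feasible:", prob.status in ("optimal", "optimal_inaccurate"))
```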
Remark 3. In Theorems 1 and 2, a core idea is to employ the theories of differential inclusions and set-valued maps. Generally speaking, nonsmooth analysis is suitable for analyzing switched nonlinear systems, while smooth analysis is suitable for analyzing continuous nonlinear systems.

Remark 4. It is worth pointing out that results obtained in some earlier publications may not be applicable to the passivity analysis of memristive neural networks, since the memristive neural network model is basically a state-dependent switched nonlinear system; the proposed passivity conditions thus improve and generalize some existing results.

Remark 5. Within the mathematical framework of the Filippov solution, the analysis techniques of Theorems 1 and 2 are similar to those used in Wu et al. (2012a) and Zhu, Shen et al. (2010). However, it is worth observing that a memristive neural network is a hybrid system, whereas the network models discussed in Wu et al. (2012a) and Zhu, Shen et al. (2010) are continuous systems, which are only a special case of the model proposed in this paper. Therefore, the obtained results apply in a wider scope. What is more, the logical structure of the argument shows that the obtained results possess a certain robustness, which may also help in tackling external disturbances. Meanwhile, the method in this paper may be applied to other classes of switched neural networks or to other complex switched nonlinear systems.

4. An illustrative example

In this section, we discuss an example to illustrate the results.

Example 1. Consider the two-dimensional memristive neural network model
$$\begin{cases}
\dot{x}_1(t) = -x_1(t) + a_{11}(x_1(t)) f_1(x_1(t)) + b_{11}(x_1(t)) f_1(x_1(t-0.01)) + b_{12}(x_1(t)) f_2(x_2(t-0.01)) + u_1(t), \\
\dot{x}_2(t) = -x_2(t) + a_{22}(x_2(t)) f_2(x_2(t)) + b_{21}(x_2(t)) f_1(x_1(t-1)) + b_{22}(x_2(t)) f_2(x_2(t-1)) + u_2(t), \\
z_1(t) = f_1(x_1(t)) + f_1(x_1(t-0.01)) + u_1(t), \\
z_2(t) = f_2(x_2(t)) + f_2(x_2(t-1)) + u_2(t),
\end{cases} \qquad (32)$$

for $t \ge 0$, where $f(\rho) = f_1(\rho) = f_2(\rho) = \frac{|\rho + 1| - |\rho - 1|}{2}$ and the memristor-based weights switch according to

$$a_{11}(x_1(t)) = \begin{cases} 0.07, & \dfrac{\mathrm{d}f_1(x_1(t))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_1(t)}{\mathrm{d}t} \le 0, \\[1mm] -0.03, & \dfrac{\mathrm{d}f_1(x_1(t))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_1(t)}{\mathrm{d}t} > 0, \end{cases} \qquad
b_{11}(x_1(t)) = \begin{cases} 0.05, & \dfrac{\mathrm{d}f_1(x_1(t-0.01))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_1(t)}{\mathrm{d}t} \le 0, \\[1mm] -0.01, & \dfrac{\mathrm{d}f_1(x_1(t-0.01))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_1(t)}{\mathrm{d}t} > 0, \end{cases}$$

$$b_{12}(x_1(t)) = \begin{cases} -0.005, & \dfrac{\mathrm{d}f_2(x_2(t-0.01))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_1(t)}{\mathrm{d}t} \le 0, \\[1mm] -0.05, & \dfrac{\mathrm{d}f_2(x_2(t-0.01))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_1(t)}{\mathrm{d}t} > 0, \end{cases} \qquad
a_{22}(x_2(t)) = \begin{cases} 0.07, & \dfrac{\mathrm{d}f_2(x_2(t))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_2(t)}{\mathrm{d}t} \le 0, \\[1mm] -0.03, & \dfrac{\mathrm{d}f_2(x_2(t))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_2(t)}{\mathrm{d}t} > 0, \end{cases}$$

$$b_{21}(x_2(t)) = \begin{cases} -0.04, & \dfrac{\mathrm{d}f_1(x_1(t-1))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_2(t)}{\mathrm{d}t} \le 0, \\[1mm] -0.03, & \dfrac{\mathrm{d}f_1(x_1(t-1))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_2(t)}{\mathrm{d}t} > 0, \end{cases} \qquad
b_{22}(x_2(t)) = \begin{cases} 0.05, & \dfrac{\mathrm{d}f_2(x_2(t-1))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_2(t)}{\mathrm{d}t} \le 0, \\[1mm] -0.01, & \dfrac{\mathrm{d}f_2(x_2(t-1))}{\mathrm{d}t} - \dfrac{\mathrm{d}x_2(t)}{\mathrm{d}t} > 0. \end{cases}$$

Fig. 1. The state curves of system (32) with input $u(t) = (2 + \sin(t), 2 - \sin(t))^T$.

Solving the linear matrix inequality (15) with the MATLAB toolbox, a feasible solution is obtained as follows:

$$P_1 = \begin{pmatrix} 1.6511 & 0.0295 \\ 0.0295 & 1.6292 \end{pmatrix}, \qquad P_2 = \begin{pmatrix} 0.1231 & 0.0029 \\ 0.0029 & 0.1402 \end{pmatrix}, \qquad P_3 = \begin{pmatrix} 1.1068 & 0.0249 \\ 0.0249 & 1.0703 \end{pmatrix},$$

$$X_1 = \operatorname{diag}(0.1607, 0.2427), \qquad X_2 = \operatorname{diag}(1.436, 1.4272), \qquad X_3 = \operatorname{diag}(3.6962, 4.1918).$$
According to Theorem 1, exponential passivity is achieved. By standard arguments similar to the above, solving the linear matrix inequality (23) via the MATLAB toolbox to obtain a feasible solution shows, according to Theorem 2, that exponential passivity is likewise achieved; we do not repeat the details here.

Fig. 2. The state curves of system (32) with input $u(t) = (0, 0)^T$.

Figs. 1 and 2 show the state curves of system (32) with inputs $u(t) = (2 + \sin(t), 2 - \sin(t))^T$ and $u(t) = (0, 0)^T$, respectively. From Fig. 1, we see that system (32) with input $u(t) = (2 + \sin(t), 2 - \sin(t))^T$ remains internally stable. From Fig. 2, it follows that system (32) with input $u(t) = (0, 0)^T$ is exponentially stable. By this contrast, one can see again that passivity is a higher abstraction level of stability. It is worth pointing out that the results of Example 1 cannot be obtained using any existing results, since the research object is a state-dependent nonlinear hybrid system. A simulation sketch for reproducing these state curves is given below.
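To connect the figures with the model, the following is a minimal simulation sketch, not from the original paper, using explicit Euler integration with history buffers for the two delays; the step size, horizon, constant initial history, and the finite-difference surrogates for the switching conditions are all assumptions made for illustration.

```python
import numpy as np

def f(rho):
    """Piecewise-linear activation f(rho) = (|rho + 1| - |rho - 1|) / 2."""
    return (np.abs(rho + 1.0) - np.abs(rho - 1.0)) / 2.0

def weight(pair, df, dx):
    """Select the first value when df/dt - dx/dt <= 0, else the second,
    mirroring the switching rules of system (32) as reconstructed above."""
    return pair[0] if df - dx <= 0.0 else pair[1]

# Weight pairs as reconstructed above; solver settings are assumptions.
a11, a22 = (0.07, -0.03), (0.07, -0.03)
b11, b12 = (0.05, -0.01), (-0.005, -0.05)
b21, b22 = (-0.04, -0.03), (0.05, -0.01)
dt, T = 1e-3, 30.0
n_steps = int(T / dt)
d1, d2 = int(0.01 / dt), int(1.0 / dt)  # delays 0.01 and 1 in steps
s = d2 + 1                              # buffer index of t = 0

u = lambda t: np.array([2.0 + np.sin(t), 2.0 - np.sin(t)])  # Fig. 1 input
x = np.full((s + n_steps + 1, 2), 0.4)  # state buffer incl. assumed history
prev_dx = np.zeros(2)                   # surrogate for dx_i/dt

def dfdt(i, k, lag):
    # finite-difference estimate of d f_i(x_i(t - lag)) / dt
    return (f(x[k - lag][i]) - f(x[k - lag - 1][i])) / dt

for k in range(s, s + n_steps):
    t = (k - s) * dt
    dx1 = (-x[k][0]
           + weight(a11, dfdt(0, k, 0), prev_dx[0]) * f(x[k][0])
           + weight(b11, dfdt(0, k, d1), prev_dx[0]) * f(x[k - d1][0])
           + weight(b12, dfdt(1, k, d1), prev_dx[0]) * f(x[k - d1][1])
           + u(t)[0])
    dx2 = (-x[k][1]
           + weight(a22, dfdt(1, k, 0), prev_dx[1]) * f(x[k][1])
           + weight(b21, dfdt(0, k, d2), prev_dx[1]) * f(x[k - d2][0])
           + weight(b22, dfdt(1, k, d2), prev_dx[1]) * f(x[k - d2][1])
           + u(t)[1])
    prev_dx = np.array([dx1, dx2])
    x[k + 1] = x[k] + dt * prev_dx      # explicit Euler step

print("final state:", x[s + n_steps])
```

Replacing the input with `u = lambda t: np.zeros(2)` corresponds to the zero-input case of Fig. 2.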
5. Conclusion

In this paper, the exponential passivity of a general class of memristive neural networks has been studied, and sufficient conditions in terms of linear matrix inequalities have been obtained to guarantee exponential passivity. This article is a preliminary study of memristive systems in passivity analysis. Indeed, we note that robustness issues of memristive neurodynamic systems are rarely considered; it is desirable to design robust criteria that can deal with parameter perturbations and model errors. In addition, further investigations will include how to apply the proposed approach to specific memristive neurodynamic systems (e.g., memristive cellular neurodynamic systems) and how to extend it to other cases (e.g., memristive fuzzy neurodynamic systems).

Acknowledgments

The authors thank the Action Editor and the referees for their valuable suggestions to improve the article. The work is supported by the Natural Science Foundation of China under Grant 61304057. The work of A. Wu was done with the School of Automation, Huazhong University of Science and Technology, Wuhan, China.

References
Balasubramaniam, P., & Nagamani, G. (2010a). Passivity analysis of neural networks with Markovian jumping parameters and interval time-varying delays. Nonlinear Analysis: Hybrid Systems, 4(4), 853–864.
Balasubramaniam, P., & Nagamani, G. (2010b). Passivity analysis for uncertain stochastic neural networks with discrete interval and distributed time-varying delays. Journal of Systems Engineering and Electronics, 21(4), 688–697.
Balasubramaniam, P., & Nagamani, G. (2011). A delay decomposition approach to delay-dependent passivity analysis for interval neural networks with time-varying delay. Neurocomputing, 74(10), 1646–1653.
Balasubramaniam, P., & Nagamani, G. (2012). Global robust passivity analysis for stochastic fuzzy interval neural networks with time-varying delays. Expert Systems with Applications, 39(1), 732–742.
Balasubramaniam, P., Nagamani, G., & Rakkiyappan, R. (2011). Passivity analysis for neural networks of neutral type with Markovian jumping parameters and time delay in the leakage term. Communications in Nonlinear Science and Numerical Simulations, 16(11), 4422–4437.
Cantley, K. D., Subramaniam, A., Stiegler, H. J., Chapman, R. A., & Vogel, E. M. (2011). Hebbian learning in spiking neural networks with nanocrystalline silicon TFTs and memristive synapses. IEEE Transactions on Nanotechnology, 10(5), 1066–1073.
Cantley, K. D., Subramaniam, A., Stiegler, H. J., Chapman, R. A., & Vogel, E. M. (2012). Neural learning circuits utilizing nano-crystalline silicon transistors and memristors. IEEE Transactions on Neural Networks and Learning Systems, 23(4), 565–573.
Fu, J., Zhang, H. G., Ma, T. D., & Zhang, Q. L. (2010). On passivity analysis for stochastic neural networks with interval time-varying delay. Neurocomputing, 73(4–6), 795–801.
Itoh, M., & Chua, L. O. (2009). Memristor cellular automata and memristor discrete-time cellular neural networks. International Journal of Bifurcation and Chaos, 19(11), 3605–3656.
Ji, D. H., Koo, J. H., Won, S. C., Lee, S. M., & Park, J. H. (2011). Passivity-based control for Hopfield neural networks using convex representation. Applied Mathematics and Computation, 217(13), 6168–6175.
Kim, H., Sah, M. P., Yang, C. J., Roska, T., & Chua, L. O. (2012). Neural synaptic weighting with a pulse-based memristor circuit. IEEE Transactions on Circuits and Systems I: Regular Papers, 59(1), 148–158.
Kwon, O. M., Park, J. H., Lee, S. M., & Cha, E. J. (2011). A new augmented Lyapunov–Krasovskii functional approach to exponential passivity for neural networks with time-varying delays. Applied Mathematics and Computation, 217(24), 10231–10238.
Li, H. Y., Gao, H. J., & Shi, P. (2010). New passivity analysis for neural networks with discrete and distributed delays. IEEE Transactions on Neural Networks, 21(11), 1842–1847.
Li, H. Y., Lam, J., & Cheung, K. C. (2012). Passivity criteria for continuous-time neural networks with mixed time-varying delays. Applied Mathematics and Computation, 218(22), 11062–11074.
Li, H. Y., Wang, C., Shi, P., & Gao, H. J. (2010). New passivity results for uncertain discrete-time stochastic neural networks with mixed time delays. Neurocomputing, 73(16–18), 3291–3299.
Mathiyalagan, K., Sakthivel, R., & Anthoni, S. M. (2012). New robust passivity criteria for stochastic fuzzy BAM neural networks with time-varying delays. Communications in Nonlinear Science and Numerical Simulations, 17(3), 1392–1407.
Pershin, Y. V., & Di Ventra, M. (2010). Experimental demonstration of associative memory with memristive neural networks. Neural Networks, 23(7), 881–886.
Sakthivel, R., Arunkumar, A., Mathiyalagan, K., & Anthoni, S. M. (2011). Robust passivity analysis of fuzzy Cohen–Grossberg BAM neural networks with time-varying delays. Applied Mathematics and Computation, 218(7), 3799–3809.
Wang, F. Z., Helian, N., Wu, S. N., Yang, X., Guo, Y. K., Lim, G., et al. (2012). Delayed switching applied to memristor neural networks. Journal of Applied Physics, 111(7), 07E317-1–07E317-3.
Wang, J. L., Wu, H. N., & Guo, L. (2011). Passivity and stability analysis of reaction–diffusion neural networks with Dirichlet boundary conditions. IEEE Transactions on Neural Networks, 22(12), 2105–2116.
Wen, S. P., & Zeng, Z. G. (2012). Dynamics analysis of a class of memristor-based recurrent networks with time-varying delays in the presence of strong external stimuli. Neural Processing Letters, 35(1), 47–59.
Wen, S. P., Zeng, Z. G., & Huang, T. W. (2012). Exponential stability analysis of memristor-based recurrent neural networks with time-varying delays. Neurocomputing, 97, 233–240.
Wu, Z. G., Park, J. H., Su, H. Y., & Chu, J. (2012a). New results on exponential passivity of neural networks with time-varying delays. Nonlinear Analysis: Real World Applications, 13(4), 1593–1599.
Wu, Z. G., Park, J. H., Su, H. Y., & Chu, J. (2012b). Passivity analysis of Markov jump neural networks with mixed time-delays and piecewise-constant transition rates. Nonlinear Analysis: Real World Applications, 13(5), 2423–2431.
Wu, Z. G., Shi, P., Su, H. Y., & Chu, J. (2011). Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays. IEEE Transactions on Neural Networks, 22(10), 1566–1575.
Wu, A. L., & Zeng, Z. G. (2012a). Exponential stabilization of memristive neural networks with time delays. IEEE Transactions on Neural Networks and Learning Systems, 23(12), 1919–1929.
Wu, A. L., & Zeng, Z. G. (2012b). Dynamic behaviors of memristor-based recurrent neural networks with time-varying delays. Neural Networks, 36, 1–10.
Zeng, H. B., He, Y., Wu, M., & Xiao, S. P. (2011). Passivity analysis for neural networks with a time-varying delay. Neurocomputing, 74(5), 730–734.
Zhang, Z. X., Mou, S. S., Lam, J., & Gao, H. J. (2009). New passivity criteria for neural networks with time-varying delay. Neural Networks, 22(7), 864–868.
Zhu, S., & Shen, Y. (2011). Passivity analysis of stochastic delayed neural networks with Markovian switching. Neurocomputing, 74(10), 1754–1761.
Zhu, S., Shen, Y., & Chen, G. C. (2010). Exponential passivity of neural networks with time-varying delay and uncertainty. Physics Letters A, 375(2), 136–142.
Zhu, J., Zhang, Q. L., & Yuan, Z. H. (2010). Delay-dependent passivity criterion for discrete-time delayed standard neural network model. Neurocomputing, 73(7–9), 1384–1393.