IEEE TRANSACTIONS ON CYBERNETICS


Receding Horizon Stabilization and Disturbance Attenuation for Neural Networks with Time-Varying Delay

Choon Ki Ahn, Senior Member, IEEE, Peng Shi, Fellow, IEEE, and Ligang Wu, Senior Member, IEEE

Abstract—This paper is concerned with the problems of receding horizon stabilization and disturbance attenuation for neural networks with time-varying delay. New delay-dependent conditions on the terminal weighting matrices of a new finite horizon cost functional for receding horizon stabilization are established for neural networks with time-varying or time-invariant delays using single- and double-integral Wirtinger-type inequalities. Based on these results, delay-dependent sufficient conditions for the receding horizon disturbance attenuation are given to guarantee the infinite horizon H∞ performance of neural networks with time-varying or time-invariant delays. Three numerical examples are provided to illustrate the effectiveness of the proposed approach.

Index Terms—Receding horizon stabilization, disturbance attenuation, cost functional, neural network, time delay.

Manuscript received September 17, 2014. Choon Ki Ahn is with the School of Electrical Engineering, Korea University, Seoul, 136-701, Korea (Corresponding author, e-mail: [email protected]). Peng Shi is with the College of Automation, Harbin Engineering University, Harbin 150001, China; the College of Engineering and Science, Victoria University, Melbourne, VIC 8001, Australia; and the School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, SA 5005, Australia (e-mail: [email protected]). Ligang Wu is with the Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin, 150001, China (e-mail: [email protected]).

I. INTRODUCTION

The past three decades have seen significant advances in science and engineering for building artificial neural networks [1], [2], including recurrent neural networks implemented using digital hardware, which have been widely used in science and engineering, such as in image processing, voice analysis, satellite data transmission, and DNA micro-array analysis [3], [4], [5]. Many practical applications require that neural networks have a well-defined solution for all initial conditions. From a theoretical point of view, neural networks should have a stable and globally attractive equilibrium point. Thus, ensuring the stability or stabilization of neural networks is an important research topic for many applications [2].

The implementation of neural networks using very large-scale integration (VLSI) circuits creates inevitable time delays due to the inherent transmission time of neurons and the finite speed of information processing. This, in turn, leads to problems of greater complexity, such as oscillation, divergence, poor performance, and instability of neural networks [6]. For this reason, many important and interesting results have been reported in recent years on the analysis and synthesis of neural networks with time delays, including stability analysis, stabilization, filtering, and learning (for examples, see [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18]). The stability analysis problem has been studied for neural networks with time delay in, inter alia, [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], based on the linear matrix inequality (LMI) approach. In [29], [30], [31], [32], some state estimation problems were solved for neural networks with time delays, and novel algorithms were proposed to obtain the desired state estimators. The passivity analysis problem for neural networks with time delays was investigated in [33], [34], [35], [36], [37], and some delay-dependent conditions for passivity, passive learning, and passive filtering were established.

Receding horizon control has been recognized as a significant feedback strategy in various industrial fields, especially the process industries [38], [39], [40], [41], [42], [43], [44]. Receding horizon control has many advantages, such as guaranteed stability, good tracking performance, and adaptation to changing parameters. The receding horizon control idea was introduced in [45], [46], [47], [48] to solve new filtering and output feedback control problems. Recently, a receding horizon stabilizer was proposed for neural networks in [49]. Based on the linear differential inclusion of neural networks, the receding horizon stabilization problem for unknown nonlinear systems was solved in [50]. Ahn also presented a new receding horizon disturbance attenuator for fuzzy switched neural networks with disturbances in [51]. However, these approaches require the stringent assumption that the neural networks do not have time delays. Unfortunately, at present, no published literature has reported on receding horizon stabilization and disturbance attenuation for neural networks with time delays. Therefore, the main purpose of the present research is to fill this gap by providing the first solutions to the problems of receding horizon stabilization and disturbance attenuation for neural networks with time delays.

In this paper, we propose the first approach to receding horizon stabilization and disturbance attenuation for neural networks with time-varying delay. Based on integral Wirtinger-type inequalities and the lower bounds lemma proposed in [52], a new condition on the terminal weighting matrices of a new finite horizon cost functional for the receding horizon stabilization is established, ensuring the asymptotic stability of the given neural networks. The cost functional taken in this paper is different from the existing forms used in [49],


[50], [51], which considered the receding horizon stabilization and disturbance attenuation problems for neural networks. The cost functional in this paper has additional terminal weighting matrices on the delayed state and its time-derivative. Thus, a new sufficient LMI condition for the receding horizon disturbance attenuation is given to ascertain the infinite horizon H∞ performance of these networks. Based on the derived conditions, we also obtain results for the time-invariant delay and delay-free cases. The terminal weighting matrices can be obtained by solving the LMI problem, which can be done efficiently using standard convex optimization software [53].

The main contributions of this paper are as follows:
1) The receding horizon stabilization problem for neural networks with time-varying delay is formulated and, based on this formulation, a new sufficient condition on the terminal weighting matrices of a new cost functional is proposed.
2) An LMI condition for the receding horizon disturbance attenuation of neural networks with time-varying delay and disturbance is presented.
3) Results for the time-invariant delay and delay-free cases are provided.

This paper is organized as follows. In Section II, we present a new LMI condition for the receding horizon stabilization of neural networks with time-varying delay. In Section III, a new sufficient condition for the receding horizon disturbance attenuation of neural networks with time-varying delay and disturbance is proposed. In Section IV, three numerical examples are given. Finally, conclusions are presented in Section V.

Notation: The notation P > 0 (P ≥ 0) represents that P is a symmetric and positive definite (semi-definite) matrix. ⋆ denotes an entry that can be deduced from the symmetry of the matrix. diag{·} denotes a block-diagonal matrix. min_u J means the minimization of J with respect to u. Matrices whose dimensions are not explicitly mentioned are assumed to be compatible for algebraic operations.

II. RECEDING HORIZON STABILIZATION FOR NEURAL NETWORKS WITH TIME-VARYING DELAY

Consider the following neural network with time-varying delay:
$$\dot{x}(t) = Ax(t) + W_1\phi(x(t)) + W_2\phi(x(t-\tau(t))) + u(t), \quad (1)$$
where $x(t) = [x_1(t)\ \ldots\ x_n(t)]^T \in \mathbb{R}^n$ is the state vector, $A = \mathrm{diag}\{-a_1, \ldots, -a_n\} \in \mathbb{R}^{n\times n}$ ($a_k > 0$, $k = 1, \ldots, n$) is the self-feedback matrix, $W_i \in \mathbb{R}^{n\times n}$ ($i = 1, 2$) is the connection weight matrix, $\phi(x(t)) = [\phi_1(x_1(t))\ \ldots\ \phi_n(x_n(t))]^T \in \mathbb{R}^n$ is the activation function of the neurons, and $u(t) \in \mathbb{R}^n$ is the control input vector. The delay τ(t) is a time-varying continuous function satisfying 0 ≤ τ(t) ≤ τ and τ̇(t) ≤ λ < 1, where τ and λ are known constants. It is assumed that each activation function ϕᵢ(·) (i = 1, ..., n) in (1) is bounded and satisfies
$$l_i^- \leq \frac{\phi_i(s_1) - \phi_i(s_2)}{s_1 - s_2} \leq l_i^+, \quad s_1 \neq s_2 \in \mathbb{R}, \quad (2)$$
where $l_i^- \geq 0$ and $l_i^+ > 0$ are known real constants. Define $L^- = \mathrm{diag}\{l_1^-, l_2^-, \ldots, l_n^-\}$ and $L^+ = \mathrm{diag}\{l_1^+, l_2^+, \ldots, l_n^+\}$.
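For readers who want to experiment numerically, the following is a minimal simulation sketch of the open-loop dynamics (1) under a fixed-step Euler scheme with a history buffer for the delayed state. The step size, simulation horizon, zero input, and constant pre-history are our illustrative assumptions, not part of the paper; the network data are taken from Example 1 in Section IV.

```python
import numpy as np

# Minimal Euler simulation of the delayed neural network (1) with u(t) = 0.
# dt, T_end, and the constant pre-history x(s) = x(0) for s <= 0 are
# illustrative assumptions; the paper itself does not fix a scheme.
A  = np.diag([-3.34, -1.49])                      # self-feedback matrix (Example 1)
W1 = np.array([[0.34, 0.68], [-1.05, 0.06]])      # connection weight matrices
W2 = np.array([[1.51, -0.124], [0.54, 1.29]])
phi = np.tanh                                      # activation; slope in [0, 1], so L- = 0, L+ = I
tau = lambda t: 0.25 * (1.0 + np.cos(2.0 * t))     # time-varying delay, bounded by 0.5

dt, T_end = 1e-3, 10.0
n_steps = int(T_end / dt)
x = np.zeros((n_steps + 1, 2))
x[0] = [-0.4, 1.2]

for k in range(n_steps):
    t = k * dt
    kd = max(k - int(round(tau(t) / dt)), 0)       # index of the delayed sample
    dx = A @ x[k] + W1 @ phi(x[k]) + W2 @ phi(x[kd])  # u(t) = 0 here
    x[k + 1] = x[k] + dt * dx
```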

Consider the following new finite horizon cost functional associated with the neural network (1):
$$J(x(t_0), t_0, t_1) = \int_{t_0}^{t_1} \left[x^T(t)Qx(t) + u^T(t)Ru(t)\right] dt + \sum_{i=1}^{5} V_i(x(t_1)), \quad (3)$$
where
$$\begin{aligned}
V_1(x(t)) &= x^T(t)Px(t) + \int_{-\tau(t)}^{0} x^T(t+\sigma)\bar{R}x(t+\sigma)\,d\sigma + \int_{-\tau}^{0}\int_{t+\theta}^{t} x^T(\sigma)\bar{Q}x(\sigma)\,d\sigma\,d\theta, \\
V_2(x(t)) &= \int_{t-\tau(t)}^{t} \begin{bmatrix} x(\sigma) \\ \phi(x(\sigma)) \end{bmatrix}^T \begin{bmatrix} Z_1 & Z_2 \\ \star & Z_3 \end{bmatrix} \begin{bmatrix} x(\sigma) \\ \phi(x(\sigma)) \end{bmatrix} d\sigma, \\
V_3(x(t)) &= \begin{bmatrix} \int_{t-\tau}^{t} x(\sigma)\,d\sigma \\ \int_{-\tau}^{0}\int_{t+\theta}^{t} x(\sigma)\,d\sigma\,d\theta \end{bmatrix}^T \begin{bmatrix} U_1 & U_2 \\ \star & U_3 \end{bmatrix} \begin{bmatrix} \int_{t-\tau}^{t} x(\sigma)\,d\sigma \\ \int_{-\tau}^{0}\int_{t+\theta}^{t} x(\sigma)\,d\sigma\,d\theta \end{bmatrix}, \\
V_4(x(t)) &= \int_{-\tau}^{0}\int_{t+\theta}^{t} \dot{x}^T(\sigma)S\dot{x}(\sigma)\,d\sigma\,d\theta + \int_{-\tau}^{0}\int_{\mu}^{0}\int_{t+\theta}^{t} \dot{x}^T(\sigma)\bar{S}\dot{x}(\sigma)\,d\sigma\,d\theta\,d\mu, \\
V_5(x(t)) &= \tau\int_{-\tau}^{0}\int_{t+\theta}^{t} \dot{x}^T(\sigma)\hat{S}\dot{x}(\sigma)\,d\sigma\,d\theta,
\end{aligned}$$
$$\begin{bmatrix} Z_1 & Z_2 \\ \star & Z_3 \end{bmatrix} > 0, \quad \begin{bmatrix} U_1 & U_2 \\ \star & U_3 \end{bmatrix} > 0,$$
Q ≥ 0, R = Rᵀ > 0, P = Pᵀ > 0, Q̄ = Q̄ᵀ > 0, R̄ = R̄ᵀ > 0, U₁ = U₁ᵀ > 0, U₃ = U₃ᵀ > 0, Z₁ = Z₁ᵀ > 0, Z₃ = Z₃ᵀ > 0, S = Sᵀ > 0, S̄ = S̄ᵀ > 0, Ŝ = Ŝᵀ > 0, t₀ ≥ 0 is an initial time, and t₁ is a final time (t₁ > t₀).

The optimal control minimizing the cost functional (3) and the corresponding optimal cost are denoted by u*(t) (t₀ ≤ t ≤ t₁) and J*(x(t₀), t₀, t₁), respectively. We can then obtain the receding horizon stabilizer by minimizing the cost functional (3) with t₀ and t₁ replaced by the current time, t, and the future time, t + T, respectively, where T > 0 is a constant horizon size. In other words, the receding horizon stabilizer can be obtained by solving the following minimization problem:
$$\min_{u(t)} J(x(t), t, t+T) \quad (4)$$
subject to (1) at each time t.
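To make the receding horizon mechanism in (4) concrete, here is a minimal conceptual loop: at each sampling instant the finite horizon problem is solved, only the initial portion of the minimizing input is applied, and the horizon is shifted forward. The optimizer solve_finite_horizon_problem is a hypothetical placeholder; the paper characterizes only the terminal weighting matrices of (3), not a specific numerical solver.

```python
# Conceptual receding-horizon loop for (4); a sketch under the stated
# assumptions, not the paper's implementation.
def receding_horizon_stabilizer(x0, T, dt_apply, n_iters,
                                solve_finite_horizon_problem, plant_step):
    x = x0
    for _ in range(n_iters):
        # Minimize the cost (3) over [t, t + T] subject to (1); hypothetical solver.
        u_traj = solve_finite_horizon_problem(x, horizon=T)
        # Apply only the initial input, then re-solve at the next instant.
        x = plant_step(x, u_traj[0], dt_apply)
    return x
```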

Given the matrices Q and R, the aim of this section is to establish a new delay-dependent condition on the terminal weighting matrices, P, Q̄, R̄, Z₁, Z₂, Z₃, U₁, U₂, U₃, S, S̄, and Ŝ, of the cost functional (3) such that the neural network (1) with the receding horizon stabilizer is asymptotically stable. Before moving on, the following results are required:

Lemma 1: [28], [54] Let x(t) ∈ ℝⁿ have the continuous derivative ẋ(t) on the interval [t − τ, t]. For any matrices S > 0 and S̄ > 0 and any scalar τ > 0, the following inequalities hold:

(1) Single-Integral Wirtinger-Type Inequality:
$$\begin{aligned}
-\int_{t-\tau}^{t} \dot{x}^T(\sigma)S\dot{x}(\sigma)\,d\sigma \leq{}& -\frac{2}{\tau^3}\left[\int_{t-\tau}^{t} x(\sigma)\,d\sigma\right]^T S \left[\int_{t-\tau}^{t} x(\sigma)\,d\sigma\right] \\
& -\frac{2}{\tau}\,x^T(t-\tau)Sx(t-\tau) + \frac{4}{\tau^2}\left[\int_{t-\tau}^{t} x(\sigma)\,d\sigma\right]^T S\,x(t-\tau).
\end{aligned}$$

(2) Double-Integral Wirtinger-Type Inequality:
$$\begin{aligned}
-\int_{-\tau}^{0}\int_{t+\theta}^{t} \dot{x}^T(\sigma)\bar{S}\dot{x}(\sigma)\,d\sigma\,d\theta \leq{}& -\frac{2}{\tau^4}\left[\int_{-\tau}^{0}\int_{t+\theta}^{t} x(\sigma)\,d\sigma\,d\theta\right]^T \bar{S} \left[\int_{-\tau}^{0}\int_{t+\theta}^{t} x(\sigma)\,d\sigma\,d\theta\right] \\
& -\frac{2}{\tau^2}\left[\int_{t-\tau}^{t} x(\sigma)\,d\sigma\right]^T \bar{S} \left[\int_{t-\tau}^{t} x(\sigma)\,d\sigma\right] \\
& +\frac{4}{\tau^3}\left[\int_{-\tau}^{0}\int_{t+\theta}^{t} x(\sigma)\,d\sigma\,d\theta\right]^T \bar{S} \left[\int_{t-\tau}^{t} x(\sigma)\,d\sigma\right].
\end{aligned}$$

Lemma 2: [52], [25] For any scalars τ and τ(t) satisfying 0 ≤ τ(t) ≤ τ, the inequality
$$-\tau\int_{t-\tau}^{t} \dot{x}^T(\sigma)\hat{S}\dot{x}(\sigma)\,d\sigma \leq \begin{bmatrix} x(t) \\ x(t-\tau(t)) \\ x(t-\tau) \end{bmatrix}^T \begin{bmatrix} -\hat{S} & \hat{S}-\hat{Z} & \hat{Z} \\ \star & -2\hat{S}+\hat{Z}+\hat{Z}^T & -\hat{Z}+\hat{S} \\ \star & \star & -\hat{S} \end{bmatrix} \begin{bmatrix} x(t) \\ x(t-\tau(t)) \\ x(t-\tau) \end{bmatrix} \quad (5)$$
holds for any matrix $\begin{bmatrix} \hat{S} & \hat{Z} \\ \star & \hat{S} \end{bmatrix} \geq 0$.

Remark 1: Lemma 2 is a special case of Theorem 1 in [52]. The inequality (5) reduces to Jensen's inequality [55] for the time-invariant delay case.

In the following theorem, we obtain a sufficient condition for the asymptotic stability of the neural network (1) with the receding horizon stabilizer.

Theorem 1: For given matrices Q ≥ 0 and R = Rᵀ > 0, assume that there exist symmetric matrices P > 0, Q̄ > 0, R̄ > 0, U₁ > 0, U₃ > 0, Z₁ > 0, Z₃ > 0, S > 0, S̄ > 0, Ŝ > 0, M₁ > 0, and M₂ > 0, and matrices U₂, Z₂, Ẑ, and C such that
$$\begin{bmatrix} \Phi^a & \bar{\Phi}^a \\ \star & \tilde{\Phi}^a \end{bmatrix} < 0, \quad \begin{bmatrix} Z_1 & Z_2 \\ \star & Z_3 \end{bmatrix} > 0, \quad \begin{bmatrix} U_1 & U_2 \\ \star & U_3 \end{bmatrix} > 0, \quad \begin{bmatrix} \hat{S} & \hat{Z} \\ \star & \hat{S} \end{bmatrix} \geq 0, \quad (6)$$
where
$$\Phi^a = \begin{bmatrix}
\Phi^a_{1,1} & \Phi^a_{1,2} & \Phi^a_{1,3} & \Phi^a_{1,4} & \Phi^a_{1,5} & \Phi^a_{1,6} & \Phi^a_{1,7} \\
\star & \Phi^a_{2,2} & \Phi^a_{2,3} & 0 & \Phi^a_{2,5} & 0 & 0 \\
\star & \star & \Phi^a_{3,3} & 0 & 0 & \Phi^a_{3,6} & \Phi^a_{3,7} \\
\star & \star & \star & \Phi^a_{4,4} & 0 & 0 & 0 \\
\star & \star & \star & \star & \Phi^a_{5,5} & 0 & 0 \\
\star & \star & \star & \star & \star & \Phi^a_{6,6} & \Phi^a_{6,7} \\
\star & \star & \star & \star & \star & \star & \Phi^a_{7,7}
\end{bmatrix},$$
$$\begin{aligned}
\Phi^a_{1,1} &= Q + (PA + C) + (PA + C)^T + \tau\bar{Q} + \bar{R} + Z_1 - 2L^+M_1L^- - \hat{S}, \\
\Phi^a_{1,2} &= \hat{S} - \hat{Z}, \quad \Phi^a_{1,3} = \hat{Z}, \\
\Phi^a_{1,4} &= Z_2 + (L^+ + L^-)M_1 + PW_1, \quad \Phi^a_{1,5} = PW_2, \\
\Phi^a_{1,6} &= U_1 + \tau U_2^T, \quad \Phi^a_{1,7} = U_2 + \tau U_3, \\
\Phi^a_{2,2} &= -(1-\lambda)\bar{R} - 2L^+M_2L^- - (1-\lambda)Z_1 - 2\hat{S} + \hat{Z} + \hat{Z}^T, \\
\Phi^a_{2,3} &= -\hat{Z} + \hat{S}, \quad \Phi^a_{2,5} = (L^+ + L^-)M_2 - (1-\lambda)Z_2, \\
\Phi^a_{3,3} &= -\frac{2}{\tau}S - \hat{S}, \quad \Phi^a_{3,6} = -U_1 + \frac{2}{\tau^2}S, \quad \Phi^a_{3,7} = -U_2, \\
\Phi^a_{4,4} &= Z_3 - 2M_1, \quad \Phi^a_{5,5} = -2M_2 - (1-\lambda)Z_3, \\
\Phi^a_{6,6} &= -\frac{1}{\tau}\bar{Q} - U_2 - U_2^T - \frac{2}{\tau^3}S - \frac{2}{\tau^2}\bar{S}, \\
\Phi^a_{6,7} &= -U_3 + \frac{2}{\tau^3}\bar{S}, \quad \Phi^a_{7,7} = -\frac{2}{\tau^4}\bar{S}, \\
\bar{\Phi}^a &= \begin{bmatrix} \hat{\chi}_a^T & \tau\tilde{\chi}_a^T & \frac{\tau}{\sqrt{2}}\tilde{\chi}_a^T & \tau\tilde{\chi}_a^T \end{bmatrix}, \\
\hat{\chi}_a &= \begin{bmatrix} C & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \\
\tilde{\chi}_a &= \begin{bmatrix} PA + C & 0 & 0 & PW_1 & PW_2 & 0 & 0 \end{bmatrix}, \\
\tilde{\Phi}^a &= \mathrm{diag}\{-2P + R,\ -2P + S,\ -2P + \bar{S},\ -2P + \hat{S}\}.
\end{aligned}$$
The receding horizon stabilizer with the terminal weighting matrices P, Q̄, R̄, Z₁, Z₂, Z₃, U₁, U₂, U₃, S, S̄, and Ŝ then guarantees that the delayed neural network (1) is asymptotically stable.

Proof: Let û(·) and ū(·) be the optimal controls minimizing J(x(α), α, β + Δ) and J(x(α), α, β), respectively, where α is an initial time of J(x(α), α, β), β is a final time of J(x(α), α, β), and Δ is a small positive constant. Taking the partial derivative of J*(x(α), α, β) with respect to β, we have
$$\begin{aligned}
\frac{\partial J^*(x(\alpha), \alpha, \beta)}{\partial \beta}
&= \lim_{\Delta \to 0} \frac{J^*(x(\alpha), \alpha, \beta + \Delta) - J^*(x(\alpha), \alpha, \beta)}{\Delta} \\
&= \lim_{\Delta \to 0} \frac{1}{\Delta}\Bigg\{ \int_{\alpha}^{\beta} \left[\hat{x}^T(t)Q\hat{x}(t) + \hat{u}^T(t)R\hat{u}(t)\right] dt + J^*(\hat{x}(\beta), \beta, \beta + \Delta) \\
&\qquad - \int_{\alpha}^{\beta} \left[\bar{x}^T(t)Q\bar{x}(t) + \bar{u}^T(t)R\bar{u}(t)\right] dt - \sum_{i=1}^{5} V_i(\bar{x}(\beta)) \Bigg\}. \quad (7)
\end{aligned}$$

If û(·) is replaced by ū(·) up to β and û(t) = Kx̄(t) for t ≥ β, where K is a matrix, we then have
$$\begin{aligned}
\frac{\partial J^*(x(\alpha), \alpha, \beta)}{\partial \beta}
&\leq \lim_{\Delta \to 0} \frac{1}{\Delta}\Bigg\{ \int_{\alpha}^{\beta} \left[\bar{x}^T(t)Q\bar{x}(t) + \bar{u}^T(t)R\bar{u}(t)\right] dt + J(\bar{x}(\beta), \beta, \beta + \Delta) \\
&\qquad - \int_{\alpha}^{\beta} \left[\bar{x}^T(t)Q\bar{x}(t) + \bar{u}^T(t)R\bar{u}(t)\right] dt - \sum_{i=1}^{5} V_i(\bar{x}(\beta)) \Bigg\} \\
&= \lim_{\Delta \to 0} \frac{1}{\Delta} \int_{\beta}^{\beta+\Delta} \bar{x}^T(t)\left[Q + K^TRK\right]\bar{x}(t)\,dt + \lim_{\Delta \to 0} \frac{1}{\Delta} \sum_{i=1}^{5}\left[V_i(\bar{x}(\beta+\Delta)) - V_i(\bar{x}(\beta))\right] \\
&= \bar{x}^T(\beta)\left[Q + K^TRK\right]\bar{x}(\beta) + \sum_{i=1}^{5} \dot{V}_i(\bar{x}(\beta)). \quad (8)
\end{aligned}$$
Evaluating the derivative of Vᵢ(x̄(β)) (i = 1, 2, 3, 4, 5) with respect to β, we obtain
$$\begin{aligned}
\dot{V}_1(\bar{x}(\beta)) ={}& 2\bar{x}^T(\beta)P\dot{\bar{x}}(\beta) + \tau\bar{x}^T(\beta)\bar{Q}\bar{x}(\beta) - \int_{\beta-\tau}^{\beta} \bar{x}^T(\sigma)\bar{Q}\bar{x}(\sigma)\,d\sigma \\
& + \bar{x}^T(\beta)\bar{R}\bar{x}(\beta) - (1 - \dot{\tau}(\beta))\bar{x}^T(\beta-\tau(\beta))\bar{R}\bar{x}(\beta-\tau(\beta)) \\
\leq{}& \bar{x}^T(\beta)\left[P(A+K) + (A+K)^TP + \tau\bar{Q} + \bar{R}\right]\bar{x}(\beta) \\
& + 2\bar{x}^T(\beta)PW_1\phi(\bar{x}(\beta)) + 2\bar{x}^T(\beta)PW_2\phi(\bar{x}(\beta-\tau(\beta))) \\
& - \frac{1}{\tau}\left[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right]^T \bar{Q} \left[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right] - (1-\lambda)\bar{x}^T(\beta-\tau(\beta))\bar{R}\bar{x}(\beta-\tau(\beta)), \quad (9)
\end{aligned}$$
where Jensen's inequality [55] is applied,
$$\begin{aligned}
\dot{V}_2(\bar{x}(\beta)) \leq{}& \begin{bmatrix} \bar{x}(\beta) \\ \phi(\bar{x}(\beta)) \end{bmatrix}^T \begin{bmatrix} Z_1 & Z_2 \\ \star & Z_3 \end{bmatrix} \begin{bmatrix} \bar{x}(\beta) \\ \phi(\bar{x}(\beta)) \end{bmatrix} \\
& - (1-\lambda)\begin{bmatrix} \bar{x}(\beta-\tau(\beta)) \\ \phi(\bar{x}(\beta-\tau(\beta))) \end{bmatrix}^T \begin{bmatrix} Z_1 & Z_2 \\ \star & Z_3 \end{bmatrix} \begin{bmatrix} \bar{x}(\beta-\tau(\beta)) \\ \phi(\bar{x}(\beta-\tau(\beta))) \end{bmatrix}, \quad (10)
\end{aligned}$$
$$\begin{aligned}
\dot{V}_3(\bar{x}(\beta)) ={}& 2\left[\bar{x}(\beta) - \bar{x}(\beta-\tau)\right]^T U_1 \left[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right] + 2\left[\bar{x}(\beta) - \bar{x}(\beta-\tau)\right]^T U_2 \left[\int_{-\tau}^{0}\int_{\beta+\theta}^{\beta}\bar{x}(\sigma)\,d\sigma\,d\theta\right] \\
& + 2\left[\tau\bar{x}(\beta) - \int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right]^T U_2^T \left[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right] \\
& + 2\left[\tau\bar{x}(\beta) - \int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right]^T U_3 \left[\int_{-\tau}^{0}\int_{\beta+\theta}^{\beta}\bar{x}(\sigma)\,d\sigma\,d\theta\right], \quad (11)
\end{aligned}$$
$$\begin{aligned}
\dot{V}_4(\bar{x}(\beta)) ={}& \tau\dot{\bar{x}}^T(\beta)S\dot{\bar{x}}(\beta) - \int_{\beta-\tau}^{\beta}\dot{\bar{x}}^T(\sigma)S\dot{\bar{x}}(\sigma)\,d\sigma + \frac{\tau^2}{2}\dot{\bar{x}}^T(\beta)\bar{S}\dot{\bar{x}}(\beta) - \int_{-\tau}^{0}\int_{\beta+\theta}^{\beta}\dot{\bar{x}}^T(\sigma)\bar{S}\dot{\bar{x}}(\sigma)\,d\sigma\,d\theta \\
\leq{}& \dot{\bar{x}}^T(\beta)\left[\tau S + \frac{\tau^2}{2}\bar{S}\right]\dot{\bar{x}}(\beta) - \frac{2}{\tau^3}\left[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right]^T S \left[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right] \\
& - \frac{2}{\tau}\bar{x}^T(\beta-\tau)S\bar{x}(\beta-\tau) + \frac{4}{\tau^2}\left[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right]^T S\,\bar{x}(\beta-\tau) \\
& - \frac{2}{\tau^4}\left[\int_{-\tau}^{0}\int_{\beta+\theta}^{\beta}\bar{x}(\sigma)\,d\sigma\,d\theta\right]^T \bar{S} \left[\int_{-\tau}^{0}\int_{\beta+\theta}^{\beta}\bar{x}(\sigma)\,d\sigma\,d\theta\right] \\
& - \frac{2}{\tau^2}\left[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right]^T \bar{S} \left[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right] + \frac{4}{\tau^3}\left[\int_{-\tau}^{0}\int_{\beta+\theta}^{\beta}\bar{x}(\sigma)\,d\sigma\,d\theta\right]^T \bar{S} \left[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right], \quad (12)
\end{aligned}$$
where Lemma 1 is applied, and
$$\begin{aligned}
\dot{V}_5(\bar{x}(\beta)) ={}& \tau^2\dot{\bar{x}}^T(\beta)\hat{S}\dot{\bar{x}}(\beta) - \tau\int_{\beta-\tau}^{\beta}\dot{\bar{x}}^T(\sigma)\hat{S}\dot{\bar{x}}(\sigma)\,d\sigma \\
\leq{}& \tau^2\dot{\bar{x}}^T(\beta)\hat{S}\dot{\bar{x}}(\beta) + \begin{bmatrix} \bar{x}(\beta) \\ \bar{x}(\beta-\tau(\beta)) \\ \bar{x}(\beta-\tau) \end{bmatrix}^T \begin{bmatrix} -\hat{S} & \hat{S}-\hat{Z} & \hat{Z} \\ \star & -2\hat{S}+\hat{Z}+\hat{Z}^T & -\hat{Z}+\hat{S} \\ \star & \star & -\hat{S} \end{bmatrix} \begin{bmatrix} \bar{x}(\beta) \\ \bar{x}(\beta-\tau(\beta)) \\ \bar{x}(\beta-\tau) \end{bmatrix}, \quad (13)
\end{aligned}$$
where Lemma 2 is applied. On the other hand, by (2), we have
$$2\left[L^+\bar{x}(\beta) - \phi(\bar{x}(\beta))\right]^T M_1 \left[\phi(\bar{x}(\beta)) - L^-\bar{x}(\beta)\right] \geq 0, \quad (14)$$
$$2\left[L^+\bar{x}(\beta-\tau(\beta)) - \phi(\bar{x}(\beta-\tau(\beta)))\right]^T M_2 \left[\phi(\bar{x}(\beta-\tau(\beta))) - L^-\bar{x}(\beta-\tau(\beta))\right] \geq 0. \quad (15)$$
Thus, from (9)-(15), we have
$$\frac{\partial J^*(x(\alpha), \alpha, \beta)}{\partial \beta} \leq \xi^T(\beta)\left[\hat{\Phi}^a + \Lambda + \tau\chi_a^TS\chi_a + \frac{\tau^2}{2}\chi_a^T\bar{S}\chi_a + \tau^2\chi_a^T\hat{S}\chi_a\right]\xi(\beta),$$
where
$$\hat{\Phi}^a = \begin{bmatrix}
\hat{\Phi}^a_{1,1} & \Phi^a_{1,2} & \Phi^a_{1,3} & \Phi^a_{1,4} & \Phi^a_{1,5} & \Phi^a_{1,6} & \Phi^a_{1,7} \\
\star & \Phi^a_{2,2} & \Phi^a_{2,3} & 0 & \Phi^a_{2,5} & 0 & 0 \\
\star & \star & \Phi^a_{3,3} & 0 & 0 & \Phi^a_{3,6} & \Phi^a_{3,7} \\
\star & \star & \star & \Phi^a_{4,4} & 0 & 0 & 0 \\
\star & \star & \star & \star & \Phi^a_{5,5} & 0 & 0 \\
\star & \star & \star & \star & \star & \Phi^a_{6,6} & \Phi^a_{6,7} \\
\star & \star & \star & \star & \star & \star & \Phi^a_{7,7}
\end{bmatrix},$$
$$\begin{aligned}
\hat{\Phi}^a_{1,1} &= Q + P(A+K) + (A+K)^TP + \tau\bar{Q} + \bar{R} + Z_1 - 2L^+M_1L^- - \hat{S}, \\
\Lambda &= \mathrm{diag}\{K^TRK, 0, 0, 0, 0, 0, 0\}, \\
\xi(\beta) &= \Big[\bar{x}^T(\beta)\ \ \bar{x}^T(\beta-\tau(\beta))\ \ \bar{x}^T(\beta-\tau)\ \ \phi^T(\bar{x}(\beta))\ \ \phi^T(\bar{x}(\beta-\tau(\beta))) \\
&\qquad \Big[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\Big]^T\ \ \Big[\int_{-\tau}^{0}\int_{\beta+\theta}^{\beta}\bar{x}(\sigma)\,d\sigma\,d\theta\Big]^T\Big]^T, \\
\chi_a &= \begin{bmatrix} A+K & 0 & 0 & W_1 & W_2 & 0 & 0 \end{bmatrix}. \quad (16)
\end{aligned}$$
The condition
$$\hat{\Phi}^a + \Lambda + \tau\chi_a^TS\chi_a + \frac{\tau^2}{2}\chi_a^T\bar{S}\chi_a + \tau^2\chi_a^T\hat{S}\chi_a < 0 \quad (17)$$
then guarantees that ∂J*(x(α), α, β)/∂β < 0, and this nonincreasing monotonicity of the optimal cost ensures that the neural network (1) with the receding horizon stabilizer is asymptotically stable. With the change of variable C = PK, we have χ̃ₐ = Pχₐ and χ̂ₐ = P[K 0 ... 0]; since R > 0, S > 0, S̄ > 0, and Ŝ > 0, the bound −PX⁻¹P ≤ −2P + X and the Schur complement show that the LMI condition [Φᵃ Φ̄ᵃ; ⋆ Φ̃ᵃ] < 0 in (6) implies (17). This completes the proof.

Remark 2: If û(·) is replaced by ū(·) up to β and û(t) = Kx̄(t) for t ≥ β in (7), the first inequality of (8) comes from the fact that ū(·) up to β and Kx̄(t) for t ≥ β are not optimal controls for J*(x(α), α, β + Δ). Here, other forms of control (for example, K₁x̄(t) + K₂x̄(t − τ), where K₁ and K₂ are constant matrices) can also be used instead of Kx̄(t) for t ≥ β. In Theorem 1, we employed Kx̄(t) for simplicity. The control Kx̄(t) was introduced only for obtaining the terminal weighting matrices, P, Q̄, R̄, Z₁, Z₂, Z₃, U₁, U₂, U₃, S, S̄, and Ŝ, of the cost functional (3) in Theorem 1. Note that we do not use the control Kx̄(t) in the actual receding horizon stabilization. The actual receding horizon stabilizer is obtained by solving the minimization problem in (4) with the cost functional (3).

Remark 3: The condition proposed in Theorem 1 depends on the matrices Q and R. Given Q and R, solving the LMI condition in Theorem 1 gives the terminal weighting matrices P, Q̄, R̄, Z₁, Z₂, Z₃, U₁, U₂, U₃, S, S̄, and Ŝ of the cost functional (3). These terminal weighting matrices play a key role in stabilizing the delayed neural network (1). Q and R are usually selected as symmetric matrices. As the elements of the matrices Q and R increase, the feasibility of the condition in Theorem 1 decreases.
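Two small self-checks related to the proof above; both derivations are ours and are recorded only for the reader's convenience. First, Remark 1's claim that (5) collapses to Jensen's inequality when τ(t) ≡ τ; second, the completion-of-squares bound that produces the diagonal blocks of Φ̃ᵃ:

```latex
% (i) With \tau(t) \equiv \tau, write x_\tau := x(t-\tau) = x(t-\tau(t));
% the quadratic form on the right-hand side of (5) collapses:
\begin{aligned}
& -x^T\hat{S}x + 2x^T\big[(\hat{S}-\hat{Z})+\hat{Z}\big]x_\tau
  + x_\tau^T\big[(-2\hat{S}+\hat{Z}+\hat{Z}^T)+(-\hat{Z}+\hat{S})+(-\hat{Z}+\hat{S})^T-\hat{S}\big]x_\tau \\
&\quad = -x^T\hat{S}x + 2x^T\hat{S}x_\tau - x_\tau^T\hat{S}x_\tau
       = -(x - x_\tau)^T\hat{S}(x - x_\tau),
\end{aligned}
% which is exactly Jensen's bound for -\tau\int_{t-\tau}^{t}\dot{x}^T(\sigma)\hat{S}\dot{x}(\sigma)\,d\sigma.
%
% (ii) For any X = X^T \succ 0,
0 \preceq (P-X)X^{-1}(P-X) = PX^{-1}P - 2P + X
\;\Longrightarrow\; -PX^{-1}P \preceq -2P + X,
% applied with X \in \{R,\ S,\ \bar{S},\ \hat{S}\} to form \tilde{\Phi}^a.
```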

Remark 4: In this paper, we assume that the state variable x(t) of the neural network (1) is available. However, in many applications, the state variable x(t) is often not completely available from the network outputs. In this case, we can design a filter for the state estimation of the neural network (1) using the methods in [30], [32], [37], and define a new cost functional for an augmented system consisting of the neural network and the filter. Theorem 1 can then be extended to obtain a new output feedback receding horizon stabilization method for the delayed neural network (1). This extension remains a topic for future work.

Remark 5: Input and state constraints in neural networks can easily be incorporated into the proposed receding horizon stabilizer. For example, input and state constraints (−ū ≤ u(t) ≤ ū and −x̄ ≤ x(t) ≤ x̄, where ū ∈ ℝⁿ and x̄ ∈ ℝⁿ are known constant vectors) can be handled by adding some LMIs to the condition in Theorem 1 using the techniques in Lemma 3.4 of [50]. Thus, following the line of the proof and result of Theorem 1, we can further design a receding horizon stabilizer for delayed neural networks with constraints via the LMI approach.

Remark 6: The horizon size T in the cost functional J(x(t), t, t + T) can affect the performance of the receding horizon stabilizer. The performance of the receding horizon stabilizer can be improved by increasing the horizon size T, but this leads to a large computational burden. Thus, the horizon size T should be selected by a trade-off between the performance and the computational complexity of the receding horizon stabilizer.

Remark 7: The terminal state equality approach presented in [38] is a typical method for the closed-loop stability of the receding horizon control of linear systems. Applying the terminal state equality approach to the receding horizon stabilization and disturbance attenuation problems for neural networks with time-varying delays would make solving the minimization problem (4) subject to a terminal state equality constraint, and hence obtaining the receding horizon stabilizer, too demanding. For this reason, this paper employs the finite terminal weighting matrix approach to such problems for neural networks with time-varying delays. This approach does not require the terminal state equality constraint in the minimization problem (4).

Next, we consider the time-invariant delay case: τ(t) = τ ≥ 0. We then obtain the following result using a method similar to that used in Theorem 1:

Corollary 1: For given matrices Q ≥ 0 and R = Rᵀ > 0, assume that there exist symmetric matrices P > 0, Q̄ > 0, R̄ > 0, U₁ > 0, U₃ > 0, Z₁ > 0, Z₃ > 0, S > 0, S̄ > 0, Ŝ > 0, M₁ > 0, and M₂ > 0, and matrices U₂, Z₂, and C such that
$$\begin{bmatrix} \Phi^b & \bar{\Phi}^b \\ \star & \tilde{\Phi}^a \end{bmatrix} < 0, \quad \begin{bmatrix} Z_1 & Z_2 \\ \star & Z_3 \end{bmatrix} > 0, \quad \begin{bmatrix} U_1 & U_2 \\ \star & U_3 \end{bmatrix} > 0, \quad (20)$$

where
$$\Phi^b = \begin{bmatrix}
\Phi^a_{1,1} & \Phi^b_{1,2} & \Phi^a_{1,4} & \Phi^a_{1,5} & \Phi^a_{1,6} & \Phi^a_{1,7} \\
\star & \Phi^b_{2,2} & 0 & \Phi^b_{2,5} & \Phi^a_{3,6} & \Phi^a_{3,7} \\
\star & \star & \Phi^a_{4,4} & 0 & 0 & 0 \\
\star & \star & \star & \Phi^b_{5,5} & 0 & 0 \\
\star & \star & \star & \star & \Phi^a_{6,6} & \Phi^a_{6,7} \\
\star & \star & \star & \star & \star & \Phi^a_{7,7}
\end{bmatrix},$$
$$\begin{aligned}
\Phi^b_{1,2} &= \hat{S}, \\
\Phi^b_{2,2} &= -\bar{R} - 2L^+M_2L^- - Z_1 - \hat{S} - \frac{2}{\tau}S, \\
\Phi^b_{2,5} &= (L^+ + L^-)M_2 - Z_2, \quad \Phi^b_{5,5} = -2M_2 - Z_3, \\
\bar{\Phi}^b &= \begin{bmatrix} \hat{\chi}_b^T & \tau\tilde{\chi}_b^T & \frac{\tau}{\sqrt{2}}\tilde{\chi}_b^T & \tau\tilde{\chi}_b^T \end{bmatrix}, \\
\hat{\chi}_b &= \begin{bmatrix} C & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \quad \tilde{\chi}_b = \begin{bmatrix} PA + C & 0 & PW_1 & PW_2 & 0 & 0 \end{bmatrix},
\end{aligned}$$
and Φ̃ᵃ, Φᵃ₁,₁, Φᵃ₁,₄, Φᵃ₁,₅, Φᵃ₁,₆, Φᵃ₁,₇, Φᵃ₃,₆, Φᵃ₃,₇, Φᵃ₄,₄, Φᵃ₆,₆, Φᵃ₆,₇, and Φᵃ₇,₇ are defined in Theorem 1. The receding horizon stabilizer with the terminal weighting matrices P, Q̄, R̄, Z₁, Z₂, Z₃, U₁, U₂, U₃, S, S̄, and Ŝ then guarantees that the delayed neural network (1) with the time-invariant delay τ(t) = τ is asymptotically stable.

We now consider the neural network (1) without time delay (i.e., τ(t) = 0). The finite horizon cost functional (3) then becomes
$$J(x(t_0), t_0, t_1) = \int_{t_0}^{t_1} \left[x^T(t)Qx(t) + u^T(t)Ru(t)\right] dt + x^T(t_1)Px(t_1). \quad (21)$$
We can obtain the following simplified LMI condition on the terminal weighting matrix P of the cost functional (21):

Corollary 2: For given matrices Q ≥ 0 and R = Rᵀ > 0, assume that there exist symmetric matrices P > 0 and M₁ > 0, and a matrix C such that
$$\begin{bmatrix} \Theta^b_{1,1} & \Theta^b_{1,2} & \Theta^b_{1,3} \\ \star & \Theta^b_{2,2} & 0 \\ \star & \star & \Theta^b_{3,3} \end{bmatrix} < 0, \quad (22)$$
where
$$\begin{aligned}
\Theta^b_{1,1} &= Q + (PA + C) + (PA + C)^T - 2L^+M_1L^-, \\
\Theta^b_{1,2} &= (L^+ + L^-)M_1 + P(W_1 + W_2), \quad \Theta^b_{1,3} = C^T, \\
\Theta^b_{2,2} &= -2M_1, \quad \Theta^b_{3,3} = -2P + R.
\end{aligned}$$
The receding horizon stabilizer with the terminal weighting matrix P then guarantees that the neural network (1) with τ(t) = 0 is asymptotically stable.
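As Remark 3 notes, the terminal weighting matrices are obtained by an LMI feasibility problem that standard convex optimization software can solve [53]. The following is a minimal CVXPY sketch for the smallest instance, the delay-free LMI (22) of Corollary 2. The solver choice, the strict-inequality margin eps, and the 2-neuron tanh data (L⁻ = 0, L⁺ = I, taken from Example 1) are our illustrative assumptions, not part of the paper.

```python
import cvxpy as cp
import numpy as np

# Problem data: a 2-neuron network with tanh activations (L- = 0, L+ = I).
n = 2
A  = np.diag([-3.34, -1.49])
W1 = np.array([[0.34, 0.68], [-1.05, 0.06]])
W2 = np.array([[1.51, -0.124], [0.54, 1.29]])
Lp, Lm = np.eye(n), np.zeros((n, n))
Q, R = 0.1 * np.eye(n), 0.1 * np.eye(n)

# Decision variables of Corollary 2: P = P^T > 0, M1 = M1^T > 0, C free.
P  = cp.Variable((n, n), symmetric=True)
M1 = cp.Variable((n, n), symmetric=True)
C  = cp.Variable((n, n))

Theta11 = Q + (P @ A + C) + (P @ A + C).T - 2 * Lp @ M1 @ Lm
Theta12 = (Lp + Lm) @ M1 + P @ (W1 + W2)
Z = np.zeros((n, n))
Theta = cp.bmat([[Theta11,   Theta12, C.T        ],
                 [Theta12.T, -2 * M1, Z          ],
                 [C,         Z,       -2 * P + R ]])

eps = 1e-6  # small margin to emulate the strict inequalities
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n),
                   M1 >> eps * np.eye(n),
                   Theta << -eps * np.eye(3 * n)])
prob.solve()  # any SDP-capable solver, e.g., SCS
print(prob.status)
print("P =", P.value)
```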

III. RECEDING HORIZON DISTURBANCE ATTENUATION FOR NEURAL NETWORKS WITH TIME-VARYING DELAY AND DISTURBANCE

Consider the following neural network with time-varying delay and disturbance:
$$\dot{x}(t) = Ax(t) + W_1\phi(x(t)) + W_2\phi(x(t-\tau(t))) + u(t) + w(t), \quad (23)$$
where $w(t) = [w_1(t)\ \ldots\ w_n(t)]^T \in \mathbb{R}^n$ is the disturbance vector. We use the same notations and definitions as in Section II. Given a level γ > 0, consider the following new finite horizon cost functional associated with the neural network (23):
$$J(x(t_0), t_0, t_1) = \int_{t_0}^{t_1} \left[x^T(t)Qx(t) + u^T(t)Ru(t) - \gamma^2 w^T(t)w(t)\right] dt + \sum_{i=1}^{5} V_i(x(t_1)), \quad (24)$$
where Vᵢ(x(t)) (i = 1, 2, 3, 4, 5) is defined in Section II. If a feedback saddle-point solution for the finite horizon optimal differential game with the cost functional (24) exists, we denote the solution as u*(t) and w*(t) (t₀ ≤ t ≤ t₁). We also denote the saddle-point value function of the finite horizon optimal differential game as J*(x(t₀), t₀, t₁). We minimize and maximize the cost functional (24) with respect to u(t) and w(t) (t₀ ≤ t ≤ t₁), respectively. The receding horizon disturbance attenuator is then obtained by replacing t₀ and t₁ with the current time t and the future time t + T, respectively. In other words, the receding horizon disturbance attenuator can be obtained by solving the following minimax problem:
$$\min_{u(t)} \max_{w(t)} J(x(t), t, t+T) \quad (25)$$
subject to (23) at each time t.

In this section, given matrices Q and R, we find a new LMI condition on the terminal weighting matrices P, Q̄, R̄, Z₁, Z₂, Z₃, U₁, U₂, U₃, S, S̄, and Ŝ of the cost functional (24) for the receding horizon disturbance attenuation of the neural network (23) in the following theorem:

Theorem 2: For given matrices Q ≥ 0 and R = Rᵀ > 0, and level γ > 0, assume that there exist symmetric matrices P > 0, Q̄ > 0, R̄ > 0, U₁ > 0, U₃ > 0, Z₁ > 0, Z₃ > 0, S > 0, S̄ > 0, Ŝ > 0, M₁ > 0, and M₂ > 0, and matrices U₂, Z₂, Ẑ, and C such that
$$\begin{bmatrix} \Phi^c & \bar{\Phi}^c \\ \star & \tilde{\Phi}^a \end{bmatrix} < 0, \quad \begin{bmatrix} Z_1 & Z_2 \\ \star & Z_3 \end{bmatrix} > 0, \quad \begin{bmatrix} U_1 & U_2 \\ \star & U_3 \end{bmatrix} > 0, \quad \begin{bmatrix} \hat{S} & \hat{Z} \\ \star & \hat{S} \end{bmatrix} \geq 0, \quad (26)$$

where
$$\Phi^c = \begin{bmatrix}
\Phi^a_{1,1} & \Phi^a_{1,2} & \Phi^a_{1,3} & \Phi^a_{1,4} & \Phi^a_{1,5} & \Phi^a_{1,6} & \Phi^a_{1,7} & \Phi^c_{1,8} \\
\star & \Phi^a_{2,2} & \Phi^a_{2,3} & 0 & \Phi^a_{2,5} & 0 & 0 & 0 \\
\star & \star & \Phi^a_{3,3} & 0 & 0 & \Phi^a_{3,6} & \Phi^a_{3,7} & 0 \\
\star & \star & \star & \Phi^a_{4,4} & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & \Phi^a_{5,5} & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & \Phi^a_{6,6} & \Phi^a_{6,7} & 0 \\
\star & \star & \star & \star & \star & \star & \Phi^a_{7,7} & 0 \\
\star & \star & \star & \star & \star & \star & \star & \Phi^c_{8,8}
\end{bmatrix},$$
$$\begin{aligned}
\Phi^c_{1,8} &= P, \quad \Phi^c_{8,8} = -\gamma^2 I, \\
\bar{\Phi}^c &= \begin{bmatrix} \hat{\chi}_c^T & \tau\tilde{\chi}_c^T & \frac{\tau}{\sqrt{2}}\tilde{\chi}_c^T & \tau\tilde{\chi}_c^T \end{bmatrix}, \\
\hat{\chi}_c &= \begin{bmatrix} C & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \\
\tilde{\chi}_c &= \begin{bmatrix} PA + C & 0 & 0 & PW_1 & PW_2 & 0 & 0 & P \end{bmatrix}, \quad (27)
\end{aligned}$$
and Φ̃ᵃ, Φᵃ₁,₁, Φᵃ₁,₂, Φᵃ₁,₃, Φᵃ₁,₄, Φᵃ₁,₅, Φᵃ₁,₆, Φᵃ₁,₇, Φᵃ₂,₂, Φᵃ₂,₃, Φᵃ₂,₅, Φᵃ₃,₃, Φᵃ₃,₆, Φᵃ₃,₇, Φᵃ₄,₄, Φᵃ₅,₅, Φᵃ₆,₆, Φᵃ₆,₇, and Φᵃ₇,₇ are defined in Theorem 1. The receding horizon disturbance attenuator with the terminal weighting matrices P, Q̄, R̄, Z₁, Z₂, Z₃, U₁, U₂, U₃, S, S̄, and Ŝ then guarantees the following infinite horizon H∞ performance:
$$\frac{\int_0^{\infty}\left[x^T(t)Qx(t) + u^T(t)Ru(t)\right] dt}{\int_0^{\infty} w^T(t)w(t)\,dt} \leq \gamma^2. \quad (28)$$

Proof: Let (û(·), ŵ(·)) and (ū(·), w̄(·)) be the saddle-point solutions for J(x(α), α, β + Δ) and J(x(α), α, β), respectively, where α is an initial time of J(x(α), α, β), β is a final time of J(x(α), α, β), and Δ is a small positive constant. We then have
$$\begin{aligned}
\frac{\partial J^*(x(\alpha), \alpha, \beta)}{\partial \beta}
&= \lim_{\Delta \to 0} \frac{J^*(x(\alpha), \alpha, \beta + \Delta) - J^*(x(\alpha), \alpha, \beta)}{\Delta} \\
&= \lim_{\Delta \to 0} \frac{1}{\Delta}\Bigg\{ \int_{\alpha}^{\beta} \left[\hat{x}^T(t)Q\hat{x}(t) + \hat{u}^T(t)R\hat{u}(t) - \gamma^2\hat{w}^T(t)\hat{w}(t)\right] dt + J^*(\hat{x}(\beta), \beta, \beta + \Delta) \\
&\qquad - \int_{\alpha}^{\beta} \left[\bar{x}^T(t)Q\bar{x}(t) + \bar{u}^T(t)R\bar{u}(t) - \gamma^2\bar{w}^T(t)\bar{w}(t)\right] dt - \sum_{i=1}^{5} V_i(\bar{x}(\beta)) \Bigg\}. \quad (29)
\end{aligned}$$
Replacing û(·) and w̄(·) with ū(·) and ŵ(·), respectively, up to β, and setting û(t) = Kx̄(t) for t ≥ β, where K is a matrix, we have
$$\begin{aligned}
\frac{\partial J^*(x(\alpha), \alpha, \beta)}{\partial \beta}
&\leq \lim_{\Delta \to 0} \frac{1}{\Delta}\Bigg\{ \int_{\alpha}^{\beta} \left[\bar{x}^T(t)Q\bar{x}(t) + \bar{u}^T(t)R\bar{u}(t) - \gamma^2\hat{w}^T(t)\hat{w}(t)\right] dt + J(\bar{x}(\beta), \beta, \beta + \Delta) \\
&\qquad - \int_{\alpha}^{\beta} \left[\bar{x}^T(t)Q\bar{x}(t) + \bar{u}^T(t)R\bar{u}(t) - \gamma^2\hat{w}^T(t)\hat{w}(t)\right] dt - \sum_{i=1}^{5} V_i(\bar{x}(\beta)) \Bigg\} \\
&= \lim_{\Delta \to 0} \frac{1}{\Delta}\int_{\beta}^{\beta+\Delta} \left\{\bar{x}^T(t)\left[Q + K^TRK\right]\bar{x}(t) - \gamma^2 w^T(t)w(t)\right\} dt \\
&\qquad + \lim_{\Delta \to 0} \frac{1}{\Delta}\sum_{i=1}^{5}\left[V_i(\bar{x}(\beta+\Delta)) - V_i(\bar{x}(\beta))\right] \\
&= \bar{x}^T(\beta)\left[Q + K^TRK\right]\bar{x}(\beta) - \gamma^2 w^T(\beta)w(\beta) + \sum_{i=1}^{5} \dot{V}_i(\bar{x}(\beta)).
\end{aligned}$$
The derivative of V₁(x̄(β)) with respect to β satisfies
$$\begin{aligned}
\dot{V}_1(\bar{x}(\beta)) \leq{}& \bar{x}^T(\beta)\left[P(A+K) + (A+K)^TP + \tau\bar{Q} + \bar{R}\right]\bar{x}(\beta) \\
& + 2\bar{x}^T(\beta)PW_1\phi(\bar{x}(\beta)) + 2\bar{x}^T(\beta)PW_2\phi(\bar{x}(\beta-\tau(\beta))) + 2\bar{x}^T(\beta)Pw(\beta) \\
& - \frac{1}{\tau}\left[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right]^T \bar{Q} \left[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\right] - (1-\lambda)\bar{x}^T(\beta-\tau(\beta))\bar{R}\bar{x}(\beta-\tau(\beta)). \quad (30)
\end{aligned}$$
Thus, from (10)-(15) and (30), we have
$$\frac{\partial J^*(x(\alpha), \alpha, \beta)}{\partial \beta} \leq \bar{\xi}^T(\beta)\left[\hat{\Phi}^c + \bar{\Lambda} + \tau\chi_c^TS\chi_c + \frac{\tau^2}{2}\chi_c^T\bar{S}\chi_c + \tau^2\chi_c^T\hat{S}\chi_c\right]\bar{\xi}(\beta),$$
where
$$\hat{\Phi}^c = \begin{bmatrix}
\hat{\Phi}^a_{1,1} & \Phi^a_{1,2} & \Phi^a_{1,3} & \Phi^a_{1,4} & \Phi^a_{1,5} & \Phi^a_{1,6} & \Phi^a_{1,7} & \Phi^c_{1,8} \\
\star & \Phi^a_{2,2} & \Phi^a_{2,3} & 0 & \Phi^a_{2,5} & 0 & 0 & 0 \\
\star & \star & \Phi^a_{3,3} & 0 & 0 & \Phi^a_{3,6} & \Phi^a_{3,7} & 0 \\
\star & \star & \star & \Phi^a_{4,4} & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & \Phi^a_{5,5} & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & \Phi^a_{6,6} & \Phi^a_{6,7} & 0 \\
\star & \star & \star & \star & \star & \star & \Phi^a_{7,7} & 0 \\
\star & \star & \star & \star & \star & \star & \star & \Phi^c_{8,8}
\end{bmatrix},$$
$$\begin{aligned}
\bar{\Lambda} &= \mathrm{diag}\{K^TRK, 0, 0, 0, 0, 0, 0, 0\}, \\
\bar{\xi}(\beta) &= \Big[\bar{x}^T(\beta)\ \ \bar{x}^T(\beta-\tau(\beta))\ \ \bar{x}^T(\beta-\tau)\ \ \phi^T(\bar{x}(\beta))\ \ \phi^T(\bar{x}(\beta-\tau(\beta))) \\
&\qquad \Big[\int_{\beta-\tau}^{\beta}\bar{x}(\sigma)\,d\sigma\Big]^T\ \ \Big[\int_{-\tau}^{0}\int_{\beta+\theta}^{\beta}\bar{x}(\sigma)\,d\sigma\,d\theta\Big]^T\ \ w^T(\beta)\Big]^T, \\
\chi_c &= \begin{bmatrix} A+K & 0 & 0 & W_1 & W_2 & 0 & 0 & I \end{bmatrix}. \quad (31)
\end{aligned}$$
If the following condition is satisfied:
$$\hat{\Phi}^c + \bar{\Lambda} + \tau\chi_c^TS\chi_c + \frac{\tau^2}{2}\chi_c^T\bar{S}\chi_c + \tau^2\chi_c^T\hat{S}\chi_c < 0, \quad (32)$$
then ∂J*(x(α), α, β)/∂β < 0. Using the results in [51], we can then easily show that the H∞ performance (28) is guaranteed. By the arguments in the proof of Theorem 1, the LMI condition [Φᶜ Φ̄ᶜ; ⋆ Φ̃ᵃ] < 0 in (26) implies (32). This completes the proof.

Remark 8: The proposed receding horizon disturbance attenuator can guarantee the asymptotic stability of the neural network (23) without disturbance (i.e., w(t) = 0). This fact can be proven by using the proof of Corollary 1 in [51].

Remark 9: The LMI condition obtained in Theorem 2 depends not only on the delay bound τ and the delay-derivative bound λ, but also on the scalar γ, which represents the H∞ performance bound. As mentioned in Remark 3, this condition also depends on the matrices Q and R. As the diagonal elements of Q and R increase, γ decreases, and λ approaches 1, the feasibility of the LMI condition in Theorem 2 decreases.

Remark 10: The optimal H∞ performance bound of the receding horizon disturbance attenuator can be obtained by solving the following problem: minimize γ² subject to P > 0, Q̄ > 0, R̄ > 0, U₁ > 0, U₃ > 0, Z₁ > 0, Z₃ > 0, S > 0, S̄ > 0, Ŝ > 0, M₁ > 0, M₂ > 0, and the four LMIs in (26); the optimal bound is then obtained by taking the square root of the minimized γ² (a numerical sketch is given after Remark 12 below).

Remark 11: The receding horizon stabilizer and disturbance attenuator proposed in this paper can be widely utilized in nonlinear control applications. For example, delayed neural networks of the form (1) can be introduced to describe unknown nonlinear systems with time-varying delays. The receding horizon stabilizer can then be used to stabilize these types of nonlinear systems with time-varying delays. When a disturbance exists, delayed neural networks of the form (23) are introduced, and the receding horizon disturbance attenuator can then reduce the effect of the disturbance on these nonlinear systems in the H∞ sense. Therefore, the proposed receding horizon stabilizer and disturbance attenuator for neural networks with time-varying delays are significant for several practical control applications.

Remark 12: In real physical secure communication systems using chaotic neural networks with time-varying delays, noise or disturbance always exists. In this case, applying the proposed receding horizon disturbance attenuator becomes of practical importance for synchronization of the master-slave configuration of neural networks with time-varying delays and disturbance. Once a receding horizon disturbance attenuator is established for synchronization of this master-slave configuration, it can be utilized for effective disturbance attenuation in real physical secure communication systems. Furthermore, we can extend the proposed receding horizon disturbance attenuator to the synchronization behavior of time-delayed brain networks, where each node is described by a neural network with time-varying delays. In such a study, some topological and spatial characteristics of delayed brain networks can be considered based on graph theory.
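Remark 10 describes the optimal bound as an SDP with objective γ². When a toolbox exposes only a feasibility check, an equivalent route is bisection on γ², which is valid because enlarging γ² only relaxes the block Φᶜ₈,₈ = −γ²I. The sketch below assumes a user-supplied oracle lmi_feasible(gamma_sq) implementing the four LMIs in (26) for a fixed γ² (e.g., via CVXPY as sketched earlier); that oracle is hypothetical here.

```python
import math

# Bisection on gamma^2 for the optimal H-infinity bound of Remark 10.
# lmi_feasible(gamma_sq) -> bool is a hypothetical user-supplied oracle.
def optimal_hinf_bound(lmi_feasible, lo=0.0, hi=100.0, tol=1e-4):
    if not lmi_feasible(hi):
        raise ValueError("no feasible gamma^2 in [lo, hi]; enlarge hi")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lmi_feasible(mid):
            hi = mid          # feasible: tighten the bound from above
        else:
            lo = mid          # infeasible: the optimum lies above mid
    return math.sqrt(hi)      # the bound gamma, as in Remark 10
```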

Remark 13: Some results on the receding horizon stabilization and disturbance attenuation for neural networks have been reported in [49], [50], [51]. However, these results were restricted to neural networks without time delays. In this paper, we have considered neural networks with time-varying delays and propose, for the first time, a new stabilizer and disturbance attenuator for neural networks with time-varying delays in the receding horizon control sense. For this reason, the results proposed in this paper represent a new contribution to stabilization studies of neural networks with time-varying delays.

Remark 14: This paper requires that the delay τ(t) be a time-varying continuous function whose rate is less than one. This condition on τ(t) can be weakened using the techniques presented in [56] and [57]. This would be an interesting problem for future research.

By using a method similar to that employed in Corollary 1 and Theorem 2, we have the following result on the receding horizon disturbance attenuation for the neural network (23) with the time-invariant delay (τ(t) = τ ≥ 0):

Corollary 3: For given matrices Q ≥ 0 and R = Rᵀ > 0, and level γ > 0, assume that there exist symmetric matrices P > 0, Q̄ > 0, R̄ > 0, U₁ > 0, U₃ > 0, Z₁ > 0, Z₃ > 0, S > 0, S̄ > 0, Ŝ > 0, M₁ > 0, and M₂ > 0, and matrices U₂, Z₂, and C such that
$$\begin{bmatrix} \Phi^d & \bar{\Phi}^d \\ \star & \tilde{\Phi}^a \end{bmatrix} < 0, \quad \begin{bmatrix} Z_1 & Z_2 \\ \star & Z_3 \end{bmatrix} > 0, \quad \begin{bmatrix} U_1 & U_2 \\ \star & U_3 \end{bmatrix} > 0, \quad (33)$$
where
$$\Phi^d = \begin{bmatrix}
\Phi^a_{1,1} & \Phi^b_{1,2} & \Phi^a_{1,4} & \Phi^a_{1,5} & \Phi^a_{1,6} & \Phi^a_{1,7} & \Phi^c_{1,8} \\
\star & \Phi^b_{2,2} & 0 & \Phi^b_{2,5} & \Phi^a_{3,6} & \Phi^a_{3,7} & 0 \\
\star & \star & \Phi^a_{4,4} & 0 & 0 & 0 & 0 \\
\star & \star & \star & \Phi^b_{5,5} & 0 & 0 & 0 \\
\star & \star & \star & \star & \Phi^a_{6,6} & \Phi^a_{6,7} & 0 \\
\star & \star & \star & \star & \star & \Phi^a_{7,7} & 0 \\
\star & \star & \star & \star & \star & \star & \Phi^c_{8,8}
\end{bmatrix},$$
$$\begin{aligned}
\bar{\Phi}^d &= \begin{bmatrix} \hat{\chi}_d^T & \tau\tilde{\chi}_d^T & \frac{\tau}{\sqrt{2}}\tilde{\chi}_d^T & \tau\tilde{\chi}_d^T \end{bmatrix}, \\
\hat{\chi}_d &= \begin{bmatrix} C & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \quad \tilde{\chi}_d = \begin{bmatrix} PA + C & 0 & PW_1 & PW_2 & 0 & 0 & P \end{bmatrix},
\end{aligned}$$
Φ̃ᵃ, Φᵃ₁,₁, Φᵃ₁,₄, Φᵃ₁,₅, Φᵃ₁,₆, Φᵃ₁,₇, Φᵃ₃,₆, Φᵃ₃,₇, Φᵃ₄,₄, Φᵃ₆,₆, Φᵃ₆,₇, and Φᵃ₇,₇ are defined in Theorem 1, Φᵇ₁,₂, Φᵇ₂,₂, Φᵇ₂,₅, and Φᵇ₅,₅ are defined in Corollary 1, and Φᶜ₁,₈ and Φᶜ₈,₈ are defined in Theorem 2. The receding horizon disturbance attenuator with the terminal weighting matrices P, Q̄, R̄, Z₁, Z₂, Z₃, U₁, U₂, U₃, S, S̄, and Ŝ then guarantees the H∞ performance (28) for the neural network (23) with the time-invariant delay τ(t) = τ.

If we consider the neural network (23) without time delay (i.e., τ(t) = 0), then the finite horizon cost functional (24) is given by
$$J(x(t_0), t_0, t_1) = \int_{t_0}^{t_1} \left[x^T(t)Qx(t) + u^T(t)Ru(t) - \gamma^2 w^T(t)w(t)\right] dt + x^T(t_1)Px(t_1), \quad (34)$$

and the following LMI condition on the terminal weighting matrix P of the cost functional (34) for the receding horizon disturbance attenuation is obtained:

Corollary 4: For given matrices Q ≥ 0 and R = Rᵀ > 0, assume that there exist symmetric matrices P > 0 and M₁ > 0, and a matrix C such that
$$\begin{bmatrix} \Theta^b_{1,1} & \Theta^b_{1,2} & \Phi^c_{1,8} & \Theta^b_{1,3} \\ \star & \Theta^b_{2,2} & 0 & 0 \\ \star & \star & \Phi^c_{8,8} & 0 \\ \star & \star & \star & \Theta^b_{3,3} \end{bmatrix} < 0, \quad (35)$$
where Θᵇ₁,₁, Θᵇ₁,₂, Θᵇ₁,₃, Θᵇ₂,₂, and Θᵇ₃,₃ are defined in Corollary 2, and Φᶜ₁,₈ and Φᶜ₈,₈ are defined in Theorem 2. The receding horizon disturbance attenuator with the terminal weighting matrix P then guarantees the H∞ performance (28) for the neural network (23) with τ(t) = 0.

Fig. 1. Phase plot of state variables.

IV. NUMERICAL EXAMPLES

In this section, three examples are provided to illustrate the effectiveness of the approach proposed in the previous sections. The first example demonstrates the validity of the condition proposed in Theorem 1 for the receding horizon stabilization. The second example shows the H∞ performance of the receding horizon disturbance attenuator. The third example illustrates the variation in the H∞ performance with respect to Q and R.

A. Example 1

Consider the neural network (1) with the parameters
$$A = \begin{bmatrix} -3.34 & 0 \\ 0 & -1.49 \end{bmatrix}, \quad W_1 = \begin{bmatrix} 0.34 & 0.68 \\ -1.05 & 0.06 \end{bmatrix}, \quad W_2 = \begin{bmatrix} 1.51 & -0.124 \\ 0.54 & 1.29 \end{bmatrix},$$
τ(t) = 0.25(1 + cos(2t)), and ϕᵢ(s) = tanh(s), i = 1, 2, which imply
$$\tau = \lambda = 0.5, \quad L^- = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad L^+ = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$
A feasible solution to the LMI problem in Theorem 1 with Q = R = 0.1I ∈ ℝ²ˣ² is given by
$$\begin{aligned}
P &= \begin{bmatrix} 159.846 & -18.590 \\ -18.590 & 132.702 \end{bmatrix}, & \bar{Q} &= \begin{bmatrix} 29.725 & -5.188 \\ -5.188 & 28.579 \end{bmatrix}, \\
\bar{R} &= \begin{bmatrix} 41.857 & -7.423 \\ -7.423 & 30.140 \end{bmatrix}, & Z_1 &= \begin{bmatrix} 74.448 & -17.823 \\ -17.823 & 63.395 \end{bmatrix}, \\
Z_2 &= \begin{bmatrix} -41.904 & 7.748 \\ 33.540 & -37.452 \end{bmatrix}, & Z_3 &= \begin{bmatrix} 152.123 & 5.781 \\ 5.781 & 121.004 \end{bmatrix}, \\
U_1 &= \begin{bmatrix} 103.851 & -10.706 \\ -10.706 & 103.059 \end{bmatrix}, & U_2 &= \begin{bmatrix} -53.922 & 6.175 \\ -2.713 & -72.377 \end{bmatrix}, \\
U_3 &= \begin{bmatrix} 160.263 & -6.819 \\ -6.819 & 160.644 \end{bmatrix}, & S &= \begin{bmatrix} 19.448 & -1.692 \\ -1.692 & 17.926 \end{bmatrix}, \\
\bar{S} &= \begin{bmatrix} 9.937 & -0.134 \\ -0.134 & 10.640 \end{bmatrix}, & \hat{S} &= \begin{bmatrix} 95.433 & -14.484 \\ -14.484 & 82.496 \end{bmatrix}, \\
M_1 &= \begin{bmatrix} 226.539 & 2.997 \\ 2.997 & 166.108 \end{bmatrix}, & M_2 &= \begin{bmatrix} 161.091 & -12.468 \\ -12.468 & 101.545 \end{bmatrix}, \\
\hat{Z} &= \begin{bmatrix} -34.127 & -3.828 \\ 3.572 & -23.879 \end{bmatrix}, & C &= \begin{bmatrix} 72.272 & -37.044 \\ 46.149 & -95.399 \end{bmatrix},
\end{aligned}$$
which ensures the asymptotic stability of the delayed neural network (1) with the receding horizon stabilizer, according to Theorem 1. Figure 1 shows the phase plot of the state variables x(t) = [x₁(t) x₂(t)]ᵀ when the initial state condition is given by x(0) = [−0.4 1.2]ᵀ, from which we find that x(t) converges from x(0) = [−0.4 1.2]ᵀ to the origin.
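As a lightweight plausibility check on the reported solution, one can verify numerically that the coupling condition [Ŝ Ẑ; ⋆ Ŝ] ≥ 0 from (6) holds for the values above; the full check of the 7-block LMI is analogous but lengthier. This snippet is ours, not part of the paper, and the check is invariant under transposing Ẑ (the two block orderings are related by a permutation congruence), which is convenient because the PDF layout of the non-symmetric Ẑ is ambiguous.

```python
import numpy as np

# Verify the [S_hat, Z_hat; *, S_hat] >= 0 condition of (6) for Example 1's
# reported solution by checking the smallest eigenvalue of the 4x4 block.
S_hat = np.array([[95.433, -14.484], [-14.484, 82.496]])
Z_hat = np.array([[-34.127, -3.828], [3.572, -23.879]])
block = np.block([[S_hat, Z_hat], [Z_hat.T, S_hat]])
print(np.linalg.eigvalsh(block).min() >= 0)   # True if the condition holds
```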

B. Example 2

Consider the neural network (23) with
$$A = \begin{bmatrix} -2.34 & 0 & 0 \\ 0 & -1.99 & 0 \\ 0 & 0 & -1.3 \end{bmatrix}, \quad (36)$$
$$W_1 = \begin{bmatrix} 1.34 & 0.58 & 0.124 \\ 0.725 & -1.01 & 2.15 \\ 0.84 & 0.263 & -1.29 \end{bmatrix}, \quad (37)$$
$$W_2 = \begin{bmatrix} -0.397 & 1.274 & 0.11 \\ -0.947 & 0.559 & -0.481 \\ 1.78 & 0.92 & -0.62 \end{bmatrix}, \quad (38)$$
$$\tau(t) = 0.3(\sin(t) + 1), \quad \phi_i(s) = s^2/(1+s^2), \quad (39)$$
where i = 1, 2, 3. A straightforward calculation gives
$$\tau = 0.6, \quad \lambda = 0.3, \quad (40)$$
$$L^- = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad L^+ = \begin{bmatrix} 0.65 & 0 & 0 \\ 0 & 0.65 & 0 \\ 0 & 0 & 0.65 \end{bmatrix}. \quad (41)$$
Let Q = R = 0.1I ∈ ℝ³ˣ³. Solving the LMI condition in Theorem 2 with γ = 0.45 gives terminal weighting matrices of the cost functional (24). Applying Theorem 2, the receding horizon disturbance attenuator can be seen to guarantee the H∞ performance. First, we examine the H∞ performance of the receding horizon disturbance attenuator with the cost functional (24) for the delayed neural network (23). Define the following function:
$$H(t) = \frac{\int_0^{t}\left[x^T(\sigma)Qx(\sigma) + u^T(\sigma)Ru(\sigma)\right] d\sigma}{\int_0^{t} w^T(\sigma)w(\sigma)\,d\sigma}, \quad (42)$$
which represents the effect of the disturbance on the state and input variables from 0 to t. The H∞ performance (28) can be equivalently expressed as H(∞) < γ². Assume x(0) = [0 0 0]ᵀ and w(t) = [cos(10t) sin(10t) cos(5t) + sin(10t)]ᵀ. Figure 2 shows H(∞) < γ² = 0.2025, which verifies that the receding horizon disturbance attenuator ensures the H∞ performance under the zero initial state condition.

Fig. 2. The plot of H(t).

Next, we investigate the state responses of the delayed neural network (23) with the receding horizon disturbance attenuator. The initial state condition is given by x(0) = [−0.1 1.5 −4]ᵀ, and the elements of the disturbance w(t) are Gaussian noises with mean 0 and variance 0.1. Figure 3 shows the state responses x(t) = [x₁(t) x₂(t) x₃(t)]ᵀ for the delayed neural network (23) with the receding horizon disturbance attenuator under the cost functional (24). Figure 4 also shows the corresponding phase diagram of x(t). These figures show that the receding horizon disturbance attenuator ensures the reduction of the effect of w(t) on the delayed neural network (23). These results demonstrate the effectiveness of the proposed approach.

Fig. 3. State responses of the delayed neural network (23).

Fig. 4. Phase diagram of state variables.

C. Example 3

In this example, we illustrate the variation of the optimal H∞ performance bound with respect to Q and R. Consider the neural network (23) with the parameters (36)-(41). Let Q = ηI ∈ ℝ³ˣ³ and R = ρI ∈ ℝ³ˣ³, where the parameters η and ρ in the matrices Q and R, respectively, can take different values for the purpose of comparison. To find the relations of the optimal H∞ performance bound γ with η and ρ, we first set R = 0.1I, 0.5I, 0.9I, then change the value of η in 0.1 increments between 0.1 and 0.9, and store the optimal H∞ performance bound γ for each η. Figure 5 shows the resulting data. Conversely, we fix Q = 0.1I, 0.5I, 0.9I, then change the value of ρ in 0.1 increments between 0.1 and 0.9, and finally store the optimal H∞ performance bound γ for each ρ. Figure 6 shows the resulting data, which indicate that the optimal H∞ performance bound γ increases monotonically with the increase of η and ρ.

Next, we check the variation of the effect of the disturbance on the state and input with respect to Q and R. Assume x(0) = [0 0 0]ᵀ and w(t) = [cos(10t) sin(10t) cos(5t) + sin(10t)]ᵀ. Recall the function H(t) in (42). First, we set Q = 0.1I, 0.5I, 0.9I with fixed R = 0.1I and plot H(t) for each Q in Figure 7. Conversely, we set R = 0.1I, 0.5I, 0.9I with fixed Q = 0.1I and plot H(t) for each R in Figure 8. These two figures show that H(t) decreases with the decrease of Q and R, which means that decreasing Q and R decreases the effect of the disturbance on the state and input. Thus, we can obtain an improved robustness property of the receding horizon disturbance attenuator against the disturbance as Q and R decrease.
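A direct way to reproduce a curve like Figure 2 or Figures 7-8 is to accumulate the two integrals in (42) along a simulated closed-loop trajectory. The sketch below assumes precomputed sample arrays xs, us, ws on a uniform time grid; these are hypothetical inputs that must come from the reader's own closed-loop experiment, since the paper does not specify a simulation routine.

```python
import numpy as np

# Running disturbance-attenuation ratio H(t) from (42), computed by
# trapezoidal accumulation. xs, us, ws are assumed (T, n) sample arrays on a
# grid with spacing dt.
def H_curve(xs, us, ws, Q, R, dt):
    num = np.einsum('ti,ij,tj->t', xs, Q, xs) + np.einsum('ti,ij,tj->t', us, R, us)
    den = np.einsum('ti,ti->t', ws, ws)
    cum_num = np.cumsum((num[1:] + num[:-1]) * 0.5 * dt)   # trapezoid rule
    cum_den = np.cumsum((den[1:] + den[:-1]) * 0.5 * dt)
    return cum_num / np.maximum(cum_den, 1e-12)            # guard against 0/0 near t = 0
```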

Fig. 5. Optimal H∞ performance bound γ versus η with fixed R.

Fig. 6. Optimal H∞ performance bound γ versus ρ with fixed Q.

Fig. 7. The plot of H(t) for Q = 0.1I, 0.5I, 0.9I with fixed R.

Fig. 8. The plot of H(t) for R = 0.1I, 0.5I, 0.9I with fixed Q.

V. CONCLUSION

In this paper, the problems of receding horizon stabilization and disturbance attenuation for neural networks with time-varying delay were studied. The single- and double-integral Wirtinger-type inequalities and the lower bounds lemma were used to obtain a new delay-dependent condition for the receding horizon stabilization, ensuring the asymptotic stability of the considered neural networks. Based on this condition, a new LMI condition for the receding horizon disturbance attenuation was proposed to warrant the H∞ performance of the considered neural networks. The time-invariant delay and delay-free cases were also investigated. The terminal weighting matrices of the receding horizon cost functional can be easily obtained by solving the LMI feasibility problem. Via three numerical examples, we demonstrated the effectiveness of the proposed approach. It is expected that the approach proposed in this paper can be extended to many complex neural networks, such as Markovian jumping neural networks, switched neural networks, and Takagi-Sugeno fuzzy neural networks with incomplete information. Thus, this paper opens up a new path for the application of receding horizon stabilization and disturbance attenuation to many complex neural networks with time delays.

ACKNOWLEDGMENTS

This work was supported in part by the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2014R1A1A1006101), in part by the National Natural Science Foundation of China (61174126 and 61222301), in part by the Heilongjiang Outstanding Youth Science Fund (JC201406), in part by the Fok Ying Tung Education Foundation (141059), in part by the Fundamental Research Funds for the Central Universities (HIT.BRETIV.201303), in part by the Australian Research Council (DP140102180, LP140100471), and in part by the 111 Project (B12018).

REFERENCES

[1] J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proc. Nat. Acad. Sci., vol. 81, no. 10, pp. 3088-3092, 1984.
[2] M. M. Gupta, L. Jin, and N. Homma, Static and Dynamic Neural Networks. Wiley-Interscience, 2003.
[3] P. Arena, M. Bucolo, L. Fortuna, and L. Occhipinti, "Cellular neural networks for real-time DNA microarray analysis," IEEE Engineering in Medicine and Biology Magazine, vol. 21, no. 2, pp. 17-25, 2002.
[4] S. Arik, "An analysis of global asymptotic stability of delayed cellular neural networks," IEEE Trans. Neural Networks, vol. 13, no. 5, pp. 1239-1242, 2002.
[5] L. Chua and T. Roska, Cellular Neural Networks and Visual Computing: Foundations and Applications. Cambridge University Press, Cambridge, 2005.
[6] Z. Lin, Y. Xia, P. Shi, and H. Wu, "Robust sliding mode control for uncertain linear discrete systems independent of time-delay," Int. J. Innov. Comp. Inf. Control, vol. 7, no. 2, pp. 869-880, 2011.
[7] Y. He, G. P. Liu, D. Rees, and M. Wu, "Stability analysis for neural networks with time-varying interval delay," IEEE Transactions on Neural Networks, vol. 18, no. 6, pp. 1850-1854, 2007.
[8] L. Hu, H. Gao, and W. Zheng, "Novel stability of cellular neural networks with interval time-varying delay," Neural Networks, vol. 21, no. 10, pp. 1458-1463, 2008.
[9] W. Yu, J. Cao, and G. Chen, "Stability and Hopf bifurcation of a general delayed recurrent neural network," IEEE Transactions on Neural Networks, vol. 19, no. 5, pp. 845-854, 2008.
[10] H. Li, B. Chen, Q. Zhou, and W. Qian, "Robust stability for uncertain delayed fuzzy Hopfield neural networks with Markovian jumping parameters," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 39, no. 1, pp. 94-102, 2009.
[11] J. Lu and D. Ho, "Globally exponential synchronization and synchronizability for general dynamical networks," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 2, pp. 350-361, 2010.
[12] A. Wu and Z. Zeng, "Exponential stabilization of memristive neural networks with time delays," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 12, pp. 1919-1929, 2012.
[13] R. Sakthivel, K. Mathiyalagan, and S. M. Anthoni, "Delay-dependent robust stabilization and H∞ control for neural networks with various activation functions," Physica Scripta, vol. 85, no. 4, p. 045801, 2012.
[14] Z. Wu, P. Shi, H. Su, and J. Chu, "Stochastic synchronization of Markovian jump neural networks with time-varying delay using sampled data," IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 1796-1806, 2013.
[15] K. Mathiyalagan, R. Sakthivel, and S. M. Anthoni, "Robust exponential stability and H∞ control for switched neutral-type neural networks," International Journal of Adaptive Control and Signal Processing, vol. 28, no. 3-5, pp. 429-443, 2014.
[16] W. Zhou, Q. Zhu, P. Shi, H. Su, J. Fang, and L. Zhou, "Adaptive synchronization for neutral-type neural networks with stochastic perturbation and Markovian switching parameters," IEEE Transactions on Cybernetics, vol. 44, no. 12, pp. 2848-2860, 2014.
[17] Z. Wu, P. Shi, H. Su, and J. Chu, "Local synchronization of chaotic neural networks with sampled-data and saturating actuators," IEEE Transactions on Cybernetics, vol. 44, no. 12, pp. 2635-2645, 2014.
[18] Q. Shen, B. Jiang, P. Shi, and C. Lim, "Novel neural networks-based fault tolerant control scheme with fault alarm," IEEE Transactions on Cybernetics, vol. 44, no. 11, pp. 2190-2201, 2014.
[19] J. Lam, S. Xu, D. W. C. Ho, and Y. Zou, "On global asymptotic stability for a class of delayed neural networks," Int. J. Circuit Theory Appl., vol. 40, no. 11, pp. 1165-1174, 2012.
[20] H. Zhang, Z. Liu, and G. Huang, "Novel delay-dependent robust stability analysis for switched neutral-type neural networks with time-varying delays via SC technique," IEEE Trans. Syst., Man, Cybern., Part B, Cybern., vol. 40, no. 6, pp. 1480-1491, 2010.
[21] C. K. Ahn, "An H∞ approach to stability analysis of switched Hopfield neural networks with time-delay," Nonlinear Dynamics, vol. 60, no. 4, pp. 703-711, 2010.
[22] S. Mou, H. Gao, W. Qiang, and K. Chen, "New delay-dependent exponential stability for neural networks with time delay," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 38, no. 2, pp. 571-576, 2008.
[23] H. R. Karimi and H. Gao, "New delay-dependent exponential H∞ synchronization for uncertain neural networks with mixed time-delays," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 1, pp. 173-185, 2010.
[24] Z. Wu, P. Shi, H. Su, and J. Chu, "Delay-dependent stability analysis for switched neural networks with time-varying delay," IEEE Trans. Syst., Man, Cybern., Part B, Cybern., vol. 41, no. 6, pp. 1522-1530, 2011.
[25] Z. Wu, J. Lam, H. Su, and J. Chu, "Stability and dissipativity analysis of static neural networks with time delay," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 2, pp. 199-210, 2012.
[26] A. Arunkumar, R. Sakthivel, K. Mathiyalagan, and S. M. Anthoni, "Robust stability criteria for discrete-time switched neural networks with various activation functions," Applied Mathematics and Computation, vol. 218, no. 22, pp. 10803-10816, 2012.
[27] Z. Wu, P. Shi, H. Su, and J. Chu, "Dissipativity analysis for discrete-time stochastic neural networks with time-varying delays," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 3, pp. 345-355, 2013.
[28] J. Yu, Z. Liu, and D. Xu, "Vector Wirtinger-type inequality and the stability analysis of delayed neural network," Communications in Nonlinear Science and Numerical Simulation, vol. 18, no. 5, pp. 1246-1257, 2013.
[29] Z. Wang, D. W. C. Ho, and X. Liu, "State estimation for delayed neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 279-284, 2005.
[30] C. K. Ahn, "Linear matrix inequality optimization approach to exponential robust filtering for switched Hopfield neural networks," Journal of Optimization Theory and Applications, vol. 154, no. 2, pp. 573-587, 2012.
[31] Z. Wu, P. Shi, H. Su, and J. Chu, "State estimation for discrete-time neural networks with time-varying delay," Int. J. Syst. Sci., vol. 43, no. 4, pp. 647-655, 2012.
[32] C. K. Ahn, "Switched exponential state estimation of neural networks based on passivity theory," Nonlinear Dynamics, vol. 67, no. 1, pp. 573-586, 2012.
[33] C. Li and X. Liao, "Passivity analysis of neural networks with time delay," IEEE Trans. Circuits Syst. II, vol. 52, no. 8, pp. 471-475, 2005.
[34] Q. Song, J. Liang, and Z. Wang, "Passivity analysis of discrete-time stochastic neural networks with time-varying delays," Neurocomputing, vol. 72, no. 7-9, pp. 1782-1788, 2009.
[35] C. K. Ahn, "Some new results on stability of Takagi-Sugeno fuzzy Hopfield neural networks," Fuzzy Sets and Systems, vol. 179, no. 1, pp. 100-111, 2011.
[36] Z. Wu, P. Shi, H. Su, and J. Chu, "Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1566-1575, 2011.
[37] C. K. Ahn, "Passive and exponential filter design for fuzzy neural networks," Information Sciences, vol. 238, no. 20, pp. 126-137, 2013.
[38] W. H. Kwon and A. E. Pearson, "A modified quadratic cost problem and feedback stabilization of a linear system," IEEE Transactions on Automatic Control, vol. 22, pp. 838-842, 1977.
[39] W. H. Kwon and A. E. Pearson, "On feedback stabilization of time-varying discrete linear systems," IEEE Transactions on Automatic Control, vol. 23, pp. 479-481, 1978.
[40] S. S. Keerthi and E. G. Gilbert, "Optimal infinite horizon feedback laws for a general class of constrained discrete-time systems: stability and moving-horizon approximation," Journal of Optimization Theory and Applications, vol. 57, no. 2, pp. 265-293, 1988.
[41] M. V. Kothare, V. Balakrishnan, and M. Morari, "Robust constrained model predictive control using linear matrix inequalities," Automatica, vol. 32, no. 10, pp. 1361-1379, 1996.
[42] H. Chen and F. Allgower, "A quasi-infinite horizon nonlinear model predictive control scheme with guaranteed stability," Automatica, vol. 34, no. 10, pp. 1205-1217, 1998.
[43] W. H. Kwon and K. B. Kim, "On stabilizing receding horizon controls for linear continuous time-invariant systems," IEEE Transactions on Automatic Control, vol. 45, no. 7, pp. 1329-1334, 2000.
[44] C. K. Ahn and M. K. Song, "New results on stability margins of nonlinear discrete-time receding horizon H∞ control," International Journal of Innovative Computing, Information, & Control, vol. 9, no. 4, pp. 1703-1713, 2013.
[45] C. K. Ahn, S. H. Han, and W. H. Kwon, "H∞ FIR filters for linear continuous-time state-space systems," IEEE Signal Processing Letters, vol. 13, no. 9, pp. 557-560, 2006.
[46] C. K. Ahn, S. H. Han, and W. H. Kwon, "H∞ finite memory controls for linear discrete-time state-space models," IEEE Trans. Circuits and Systems II, vol. 54, no. 2, pp. 97-101, 2007.
[47] C. K. Ahn, "Robustness bound for receding horizon finite memory control: Lyapunov-Krasovskii approach," International Journal of Control, vol. 85, no. 7, pp. 942-949, 2012.
[48] C. K. Ahn, "A new solution to induced l∞ finite impulse response filtering problem based on two matrix inequalities," International Journal of Control, vol. 87, no. 2, pp. 404-409, 2014.
[49] C. K. Ahn and M. T. Lim, "Model predictive stabilizer for T-S fuzzy recurrent multilayer neural network models with general terminal weighting matrix," Neural Computing & Applications, vol. 23, no. 1 Supplement, pp. 271-277, 2013.
[50] C. K. Ahn, "Receding horizon robust control for nonlinear systems based on linear differential inclusion of neural networks," Journal of Optimization Theory and Applications, vol. 160, no. 2, pp. 659-678, 2014.
[51] C. K. Ahn, "Receding horizon disturbance attenuation for Takagi-Sugeno fuzzy switched dynamic neural networks," Information Sciences, vol. 280, no. 1, pp. 53-63, 2014.
[52] P. Park, J. Ko, and C. Jeong, "Reciprocally convex approach to stability of systems with time-varying delays," Automatica, vol. 47, no. 1, pp. 235-238, 2011.
[53] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in Systems and Control Theory. SIAM, Philadelphia, PA, 1994.
[54] D. Xu, Z. Liu, J. Yu, and D. Peng, "Wirtinger-type inequality and the stability analysis of delayed Lur'e system," Discrete Dynamics in Nature and Society, vol. 2013, Article ID 793686, 2013.
[55] K. Gu, J. Chen, and V. L. Kharitonov, Stability of Time-Delay Systems. Birkhauser, Boston, 2003.
[56] E. Shustin and E. Fridman, "On delay-derivative-dependent stability of systems with fast-varying delays," Automatica, vol. 43, no. 9, pp. 1649-1655, 2007.
[57] A. Seuret, F. Gouaisbaut, and E. Fridman, "Stability of systems with fast-varying delay using improved Wirtinger's inequality," in Proc. IEEE Conference on Decision and Control, 2013, pp. 946-951.

Choon Ki Ahn (M’06-SM’12) received the B.S. and M.S. degrees in the School of Electrical Engineering from Korea University, Seoul, Korea. He received the Ph.D. degree in the School of Electrical Engineering and Computer Science from Seoul National University, Seoul, Korea, in 2006. He was a Senior Research Engineer, Samsung Electronics, Suwon, Korea; a Professor with the Department of Mechanical and Automotive Engineering, Seoul National University of Science and Technology, Seoul, Korea. He is currently a Professor with the School of Electrical Engineering, Korea University, Seoul, Korea. He was the recipient of the Excellent Research Achievement Award of WKU in 2010. He was awarded the Medal for ‘Top 100 Engineers’ 2010 by IBC, Cambridge, England. In 2012, his EPJE paper was ranked #1 in the TOP 20 Articles in the field of neural networks by BioMedLib. In 2013, his ND paper was selected as one of the ‘5 Key Papers’ from Nonlinear Dynamics, Springer. He is a Senior Member of the IEEE. His current research interests are two-dimensional system theory, finite word-length effects in digital signal processing, time-domain FIR filtering, fuzzy systems, neural networks, and nonlinear dynamics. He has been on the editorial board of international journals, including International Journal of Control, Automation, and Systems, Mathematical Problems in Engineering, The Scientific World Journal, Scientific Research and Essays, and Journal of Institute of Control, Robotics and Systems.


Peng Shi (M'95-SM'98-F'15) received the B.Sc. degree in mathematics and the M.E. degree in system engineering from the Harbin Institute of Technology, Harbin, China; the Ph.D. degree in electrical engineering from the University of Newcastle, N.S.W., Australia; the Ph.D. degree in mathematics from the University of South Australia, S.A., Australia; and the D.Sc. degree from the University of Glamorgan, Pontypridd, U.K. He was a Postdoctoral Researcher and Lecturer with the University of South Australia; a Senior Scientist with the Defence Science and Technology Organisation, Australia; and a Professor with the University of Glamorgan. He is currently a Professor with Victoria University, Melbourne, Vic., Australia, and The University of Adelaide, S.A., Australia. His research interests include system and control theory, computational and intelligent systems, and operational research. Dr. Shi is a Fellow of the Institute of Electrical and Electronics Engineers (USA); a Fellow of the Institution of Engineering and Technology (U.K.); and a Fellow of the Institute of Mathematics and its Applications (U.K.). He has been a member of the Editorial Boards of a number of international journals, including Automatica, IEEE TRANSACTIONS ON AUTOMATIC CONTROL, IEEE TRANSACTIONS ON FUZZY SYSTEMS, IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, PART B: CYBERNETICS, and IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS-I: REGULAR PAPERS.

Ligang Wu (M'10-SM'12) received the B.S. degree in Automation from Harbin University of Science and Technology, China in 2001; the M.E. degree in Navigation Guidance and Control from Harbin Institute of Technology, China in 2003; and the Ph.D. degree in Control Theory and Control Engineering from Harbin Institute of Technology, China in 2006. From January 2006 to April 2007, he was a Research Associate in the Department of Mechanical Engineering, The University of Hong Kong, Hong Kong. From September 2007 to June 2008, he was a Senior Research Associate in the Department of Mathematics, City University of Hong Kong, Hong Kong. From December 2012 to December 2013, he was a Research Associate in the Department of Electrical and Electronic Engineering, Imperial College London, London, UK. In 2008, he joined the Harbin Institute of Technology, China, as an Associate Professor, and was then promoted to Professor in 2012. Dr. Wu currently serves as an Associate Editor for a number of journals, including IEEE TRANSACTIONS ON AUTOMATIC CONTROL, Information Sciences, Signal Processing, and IET Control Theory and Applications. He is also an Associate Editor for the Conference Editorial Board, IEEE Control Systems Society. Dr. Wu has published more than 100 research papers in international refereed journals. He is the author of the monographs Sliding Mode Control of Uncertain Parameter-Switching Hybrid Systems (John Wiley & Sons, 2014) and Fuzzy Control Systems with Time-Delay and Stochastic Perturbation: Analysis and Synthesis (Springer, 2015). His current research interests include switched hybrid systems, computational and intelligent systems, sliding mode control, optimal filtering, and model reduction.