A Contraction Theory Approach to Stochastic Incremental Stability

arXiv:0704.0926v1 [math.OC] 6 Apr 2007

Quang-Cuong Pham∗
LPPA, Collège de France, Paris, France

Nicolas Tabareau
LPPA, Collège de France, Paris, France

Jean-Jacques Slotine
Nonlinear Systems Laboratory, MIT, Cambridge, MA 02139, USA

∗ To whom correspondence should be addressed.

February 5, 2008

Abstract In this article, we investigate the incremental stability properties of Itô stochastic dynamical systems. Specifically, we derive a stochastic version of nonlinear contraction theory that provides a bound on the mean squared distance between any two trajectories of a stochastically contracting system. This bound can be expressed as a function of the noise intensity and the contraction rate of the noiseless system. We illustrate these results in the contexts of stochastic nonlinear observer design and stochastic synchronization.

1 Introduction

Stability analysis is one of the most important and most studied fields in the theory of nonlinear dynamical systems [24, 12]. Standard stability properties are considered with respect to an equilibrium point or to a nominal system trajectory (see e.g. [30, 18]). By contrast, incremental stability is concerned with the behaviour of system trajectories with respect to one another. In essence, a nonlinear system is incrementally stable if initial conditions are globally forgotten, i.e., if all trajectories tend towards a single trajectory, without requiring this trajectory to be known a priori.

Historically, work on deterministic incremental stability can be traced back to the 1950s [21, 13, 4] (see e.g. [23, 17] for a more extensive list and a historical discussion of related references). More recently, and largely independently of these earlier studies, a number of works have put incremental stability on a broader theoretical basis and related it to more traditional stability approaches [10, 31, 22, 2, 1]. Furthermore, it was shown that incremental stability is especially relevant in the study of problems such as state detection [2], observer design [22, 16], oscillator synchronization [28] and nervous system modelling [11].

While the above references are mostly concerned with deterministic stability notions, stability theory has also been extended to stochastic dynamical systems, see for instance [20, 19]. This includes important recent developments in Lyapunov-like approaches [8, 25], as well as applications to standard problems in systems and control [9, 33, 26, 5]. However, stochastic versions of incremental stability have not yet been systematically investigated.

The goal of this paper is to extend some concepts and results in incremental stability to stochastic dynamical systems. More specifically, we derive a stochastic version of contraction analysis in the specialized context of state-independent metrics. We prove in section 2 that the mean squared distance between any two trajectories of a stochastically contracting system is upper-bounded by a constant after exponential transients. We also note that, in contrast with standard stochastic stability, asymptotic "with probability 1" incremental stability properties cannot be obtained, because the noise does not vanish as two trajectories get closer. In section 3, we show that results on combinations of deterministic contracting systems have simple analogues in the stochastic case. These combination properties allow one to build by recursion stochastically contracting systems of arbitrary size. Finally, as illustrations of our results, we investigate in section 4 the effect of measurement noise on contracting observers, and synchronization phenomena in networks of noisy dynamical systems.


2 Results

2.1 Background

2.1.1 Nonlinear contraction theory

Contraction theory [22] provides a set of tools to analyze the incremental exponential stability of nonlinear systems, and has been applied notably to observer design [22, 16], synchronisation phenomena analysis [34, 28] and complex nervous systems modelling [11]. Nonlinear contracting systems enjoy desirable aggregation properties, in that contraction is preserved under many types of system combinations given suitable simple conditions [22].

While we shall derive global properties of nonlinear systems, many of our results can be expressed in terms of eigenvalues of symmetric matrices [15]. Given a square matrix A, the symmetric part of A is denoted by A_s. The smallest and largest eigenvalues of A_s are denoted by λmin(A) and λmax(A). Given these notations, the matrix A is positive definite (denoted A > 0) if λmin(A) > 0, and it is uniformly positive definite if

∃β > 0  ∀x, t   λmin(A(x, t)) ≥ β

The basic theorem of contraction analysis, derived in [22], can be stated as follows.

Theorem 1 (Contraction) Consider, in Rⁿ, the deterministic system

ẋ = f(x, t)    (2.1)

where f is a smooth nonlinear function. Denote the Jacobian matrix of f with respect to its first variable by ∂f/∂x. If there exists a square matrix Θ(x, t) such that M(x, t) = Θ(x, t)ᵀΘ(x, t) is uniformly positive definite and the matrix

F(x, t) = ( (d/dt)Θ(x, t) + Θ(x, t) ∂f/∂x ) Θ⁻¹(x, t)

is uniformly negative definite, then all system trajectories converge exponentially to a single trajectory, with convergence rate |sup_{x,t} λmax(F)| = λ > 0. The system is said to be contracting, F is called its generalized Jacobian, M(x, t) its contraction metric and λ its contraction rate.

2.1.2 Standard stochastic stability

In this section, we present very informally the basic ideas of standard stochastic stability (for a rigorous treatment, the reader is referred to e.g. [20]). This will set the context to understand the forthcoming difficulties and differences associated with stochastic incremental stability. For simplicity, we consider the special case of global exponential stability.

Let x(t) be a stochastic process starting at x₀. Assume that there exists a non-negative function V such that

∀x ∈ Rⁿ   ÃV(x) ≤ −λV(x)    (2.2)

where Ã is the infinitesimal operator of the process x(t) (Ã is the stochastic analogue of the deterministic differentiation operator). By Dynkin's formula ([20], p. 10), one has, for all t ≥ 0,

EV(x(t)) − V(x₀) = E ∫₀ᵗ ÃV(x(s)) ds ≤ −λ E ∫₀ᵗ V(x(s)) ds = −λ ∫₀ᵗ EV(x(s)) ds

Applying Gronwall's lemma (see appendix) to the deterministic real-valued function t ↦ EV(x(t)) yields

∀t ≥ 0   EV(x(t)) ≤ V(x₀)e^{−λt}

Next, remark that V(x(t)) is a non-negative supermartingale ([20], p. 25), which yields, by the supermartingale inequality,

∀A > 0   P( sup_{T≤t<∞} V(x(t)) ≥ A ) ≤ EV(x(T))/A ≤ V(x₀)e^{−λT}/A

Letting T → ∞ then gives the asymptotic "with probability 1" stability property

∀A > 0   lim_{T→∞} P( sup_{T≤t<∞} V(x(t)) ≥ A ) = 0    (2.3)

2.2 The stochastic contraction theorem

2.2.1 Settings

Consider the Itô stochastic differential equation

da = f(a, t)dt + σ(a, t)dW_d    (2.4)

where f is an Rⁿ-valued function, σ is an n×d matrix-valued function and W_d is a standard d-dimensional Wiener process. To ensure the existence and uniqueness of solutions to (2.4), we make the following standard assumptions on f and σ.

Lipschitz condition: There exists a constant K₁ > 0 such that

∀t ≥ 0, a, b ∈ Rⁿ   ‖f(a, t) − f(b, t)‖ + ‖σ(a, t) − σ(b, t)‖ ≤ K₁‖a − b‖

Restriction on growth: There exists a constant K₂ > 0 such that

∀t ≥ 0, a ∈ Rⁿ   ‖f(a, t)‖² + ‖σ(a, t)‖² ≤ K₂(1 + ‖a‖²)

Under these conditions, one can show that equation (2.4) has on [0, ∞[ a unique Rⁿ-valued solution a(t), continuous with probability 1 and satisfying the initial condition a(0) = a₀ ([3], p. 105).

In order to investigate the incremental stability properties of system (2.4), consider now the duplicated system x(t) = (a(t), b(t))ᵀ, which follows the equation

dx = ( f(a, t), f(b, t) )ᵀ dt + diag( σ(a, t), σ(b, t) ) ( dW¹, dW² )ᵀ = f̆(x, t)dt + σ̆(x, t)dW_2d    (2.5)

where W¹ and W² are two independent d-dimensional Wiener processes and W_2d = (W¹, W²)ᵀ. We introduce two hypotheses:

(H1) f(a, t) is contracting in the identity metric, with contraction rate λ, i.e. λmax(∂f/∂a) ≤ −λ uniformly;

(H2) tr( σ(a, t)ᵀσ(a, t) ) is uniformly upper-bounded by a constant C.

In other words, (H1) says that the noise-free system is contracting, while (H2) says that the variance of the noise is upper-bounded by a constant.

Definition 1 A system that verifies (H1) and (H2) is said to be stochastically contracting in the identity metric, with rate λ and bound C.
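To make (H1) and (H2) concrete, here is a minimal numerical sketch (not part of the original development) that samples a candidate system and estimates its contraction rate and noise bound; the drift f(a, t) = −a − a³ + cos(t)·1 and the diffusion σ(a, t) = 0.3 I are purely illustrative choices, and random sampling can only suggest, not prove, the required uniformity.

import numpy as np

def jac_sym_max_eig(f, a, t, eps=1e-6):
    """Largest eigenvalue of the symmetric part of the Jacobian of f at (a, t),
    estimated by central finite differences."""
    n = a.size
    J = np.zeros((n, n))
    for j in range(n):
        da = np.zeros(n)
        da[j] = eps
        J[:, j] = (f(a + da, t) - f(a - da, t)) / (2 * eps)
    return np.linalg.eigvalsh(0.5 * (J + J.T)).max()

# Illustrative system (not from the paper): contracting drift, bounded noise intensity.
f     = lambda a, t: -a - a**3 + np.cos(t) * np.ones_like(a)
sigma = lambda a, t: 0.3 * np.eye(a.size)

rng = np.random.default_rng(0)
samples = [(rng.uniform(-2, 2, size=3), rng.uniform(0, 10)) for _ in range(1000)]

lam = -max(jac_sym_max_eig(f, a, t) for a, t in samples)                 # (H1): rate lambda
C   =  max(np.trace(sigma(a, t).T @ sigma(a, t)) for a, t in samples)    # (H2): bound C

print(f"sampled contraction rate lambda ~ {lam:.3f} (H1 requires lambda > 0)")
print(f"sampled noise bound C ~ {C:.3f}")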

2.2.2 The stochastic contraction theorem

Consider the Lyapunov-like function V(x) = ‖a − b‖² = (a − b)ᵀ(a − b). Using the fact that f is contracting, we now derive an inequality on ÃV(x) similar to equation (2.2) in section 2.1.2. As we shall see, the crucial difference with standard stochastic stability is that ÃV can become positive (i.e. V can be non-decreasing) when ‖a − b‖ is small, i.e. when a and b are close to each other. This fact will prevent us from using the supermartingale inequality to obtain asymptotic "with probability 1" bounds as in equation (2.3)¹. This issue is intrinsic to incremental stability, and stems from the essential fact that the noise does not vanish as two trajectories get closer.

¹ However, if one is interested in finite-time bounds, then the supermartingale inequality is still applicable; see ([20], p. 86) for details.

Lemma 1 Under (H1) and (H2), one has the inequality

ÃV(x) ≤ −2λV(x) + 2C    (2.6)

Proof First, observe that V is twice continuously differentiable, and thus is in the domain of Ã. Next, let us compute ÃV:

ÃV(x) = (∂V/∂x)(x) f̆(x, t) + ½ tr( σ̆(x, t)ᵀ (∂²V/∂x²)(x) σ̆(x, t) )
      = Σᵢ (∂V/∂xᵢ) f̆(x, t)ᵢ + ½ Σᵢ,ⱼ,ₖ (∂²V/∂xᵢ∂xₖ) σ̆(x, t)ᵢⱼ σ̆(x, t)ₖⱼ
      = Σᵢ (∂V/∂aᵢ) f(a, t)ᵢ + Σᵢ (∂V/∂bᵢ) f(b, t)ᵢ
        + ½ Σᵢ,ⱼ,ₖ (∂²V/∂aᵢ∂aₖ) σ(a, t)ᵢⱼ σ(a, t)ₖⱼ + ½ Σᵢ,ⱼ,ₖ (∂²V/∂bᵢ∂bₖ) σ(b, t)ᵢⱼ σ(b, t)ₖⱼ
      = 2(a − b)ᵀ( f(a, t) − f(b, t) ) + tr( σ(a, t)ᵀσ(a, t) ) + tr( σ(b, t)ᵀσ(b, t) )

Fix t > 0 and, as in [7], consider the real-valued function

r(µ) = (a − b)ᵀ( f(µa + (1 − µ)b, t) − f(b, t) )

Since f is C¹, r is C¹ over [0, 1]. By the mean value theorem, there exists µ₀ ∈ ]0, 1[ such that

r′(µ₀) = r(1) − r(0) = (a − b)ᵀ( f(a) − f(b) )

On the other hand, one obtains by differentiating r

r′(µ₀) = (a − b)ᵀ (∂f/∂a)(µ₀a + (1 − µ₀)b) (a − b)

Thus, one has

(a − b)ᵀ( f(a) − f(b) ) = (a − b)ᵀ (∂f/∂a)(µ₀a + (1 − µ₀)b) (a − b) ≤ −λ(a − b)ᵀ(a − b) = −λV(x)    (2.7)

where the inequality is obtained by using (H1). Finally,

ÃV(x) = 2(a − b)ᵀ( f(a) − f(b) ) + tr( σ(a, t)ᵀσ(a, t) ) + tr( σ(b, t)ᵀσ(b, t) ) ≤ −2λV(x) + 2C

where the last inequality is obtained by using (2.7) and (H2). □

We are now in a position to prove our main theorem on stochastic incremental stability.

Theorem 2 (Stochastic contraction) Assume that system (2.4) verifies (H1) and (H2). Let a(t) and b(t) be two trajectories starting respectively at a₀ and b₀. Then

∀t ≥ 0   E( ‖a(t) − b(t)‖² ) ≤ C/λ + [ ‖a₀ − b₀‖² − C/λ ]⁺ e^{−2λt}    (2.8)

where [·]⁺ = max(0, ·).

Proof Let x₀ = (a₀, b₀). By Dynkin's formula,

EV(x(t)) − V(x₀) = E ∫₀ᵗ ÃV(x(s)) ds

Thus one has, for all 0 ≤ u ≤ t < ∞,

EV(x(t)) − EV(x(u)) = E ∫ᵤᵗ ÃV(x(s)) ds ≤ E ∫ᵤᵗ ( −2λV(x(s)) + 2C ) ds = ∫ᵤᵗ ( −2λ EV(x(s)) + 2C ) ds    (2.9)

where inequality (2.9) is obtained by using lemma 1. Denote by g(t) the deterministic quantity EV(x(t)). Clearly, g(t) is a continuous function of t since x(t) is a continuous process (see section 2.2.1). The function g then satisfies the conditions of the Gronwall-type lemma (see lemma 3 in the appendix), and as a consequence

∀t ≥ 0   EV(x(t)) ≤ C/λ + [ V(x₀) − C/λ ]⁺ e^{−2λt}

which is the desired result. □
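As a quick numerical illustration of theorem 2 (a sketch, not taken from the paper), the following Euler-Maruyama simulation integrates two trajectories of a stochastically contracting scalar system driven by independent noises, as in the duplicated system (2.5), and compares the Monte Carlo estimate of E‖a(t) − b(t)‖² with the bound (2.8); the drift −λa − a³ and the constant noise intensity σ are illustrative choices satisfying (H1) and (H2) with rate λ and bound C = σ².

import numpy as np

lam, sigma = 1.0, 0.5            # contraction rate and noise intensity (illustrative)
a0, b0 = 3.0, -2.0               # initial conditions of the two trajectories
dt, T, trials = 1e-3, 5.0, 2000
steps = int(T / dt)

f = lambda x: -lam * x - x**3    # Jacobian -lam - 3x**2 <= -lam, so (H1) holds with rate lam
rng = np.random.default_rng(1)
a = np.full(trials, a0)
b = np.full(trials, b0)
msd = np.empty(steps)            # Monte Carlo estimate of E|a - b|^2

for k in range(steps):
    dW1 = rng.normal(0.0, np.sqrt(dt), trials)   # independent noises for a and b,
    dW2 = rng.normal(0.0, np.sqrt(dt), trials)   # as in the duplicated system (2.5)
    a += f(a) * dt + sigma * dW1
    b += f(b) * dt + sigma * dW2
    msd[k] = np.mean((a - b)**2)

t = dt * np.arange(1, steps + 1)
C = sigma**2
bound = C / lam + np.maximum((a0 - b0)**2 - C / lam, 0.0) * np.exp(-2 * lam * t)
print("bound (2.8) violated at", int(np.sum(msd > bound)), "of", steps, "time steps")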

2.3 Some remarks

2.3.1 Practical meaning of the stochastic contraction theorem

Define D = √(C/λ). Basically, theorem 2 says two things. First, if the initial distance between a and b is smaller than D, then the mean distance between a and b is always smaller than D. Second, if the initial distance is larger than D, the mean distance between a and b becomes smaller than D after exponential transients of rate λ. The following corollary, which is an immediate consequence of theorem 2, makes these statements more precise.

Corollary 1 Assume that system (2.4) verifies (H1) and (H2). Let a(t) and b(t) be two trajectories starting respectively at a₀ and b₀.

(i) If ‖a₀ − b₀‖ ≤ √(C/λ), then for all t ≥ 0 one has

E( ‖a(t) − b(t)‖ ) ≤ √(C/λ)    (2.10)

(ii) If ‖a₀ − b₀‖ > √(C/λ), then let ε > 0. For all t ≥ (1/λ) log( √(‖a₀ − b₀‖² − C/λ) / ε ), one has

E( ‖a(t) − b(t)‖ ) ≤ √(C/λ) + ε    (2.11)

Remark Since ‖a(t) − b(t)‖ is non-negative, inequalities (2.10) and (2.11), together with Markov's inequality, allow one to obtain upper bounds "with some probability" on the distance between a(t) and b(t). For example, equation (2.10) yields

∀A > 0, ∀t ≥ 0   P( ‖a(t) − b(t)‖ ≥ A ) ≤ √(C/λ) / A

Note however that the so-obtained bounds are much weaker than (2.3).
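For concreteness, here is a small numeric illustration of corollary 1 and of the Markov-inequality remark (the values of λ, C, the initial distance and the thresholds are arbitrary):

import numpy as np

lam, C = 2.0, 0.5                    # contraction rate and noise bound (arbitrary values)
d0, eps, A = 4.0, 0.1, 1.5           # initial distance, tolerance, Markov threshold

D = np.sqrt(C / lam)                                           # ball radius sqrt(C/lam)
t_eps = (1 / lam) * np.log(np.sqrt(d0**2 - C / lam) / eps)     # settling time from (2.11)
p_markov = (D + eps) / A                                       # Markov bound, valid for t >= t_eps

print(f"D = {D:.3f}; mean distance <= D + {eps} for all t >= {t_eps:.3f}")
print(f"Markov bound: P(distance >= {A}) <= {p_markov:.3f} for t >= {t_eps:.3f}")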

2.3.2 "Optimality" of the bound obtained in the stochastic contraction theorem

To illustrate consistency with standard results, let us first use theorem 2 in a very simple application. Consider the following linear dynamical system, known as the Ornstein-Uhlenbeck (colored noise) process

da = −λa dt + σ dW    (2.12)

Clearly, the noiseless system is contracting with rate λ and the trace of the noise matrix is upper-bounded by σ². Let a(t) and b(t) be two system trajectories starting respectively at a₀ and b₀. Then, by theorem 2, we have

∀t ≥ 0   E( (a(t) − b(t))² ) ≤ σ²/λ + [ (a₀ − b₀)² − σ²/λ ]⁺ e^{−2λt}

Let us verify this result by solving equation (2.12) directly. The solution of equation (2.12) is ([3], p. 134)

a(t) = a₀e^{−λt} + σ ∫₀ᵗ e^{λ(s−t)} dW(s)    (2.13)

Next, let us compute the mean square distance between two trajectories driven by independent noises:

E( (a(t) − b(t))² ) = (a₀ − b₀)²e^{−2λt} + σ²( E( (∫₀ᵗ e^{λ(s−t)} dW(s))² ) + E( (∫₀ᵗ e^{λ(u−t)} dW(u))² ) )
                    = (a₀ − b₀)²e^{−2λt} + (σ²/λ)(1 − e^{−2λt})
                    ≤ σ²/λ + [ (a₀ − b₀)² − σ²/λ ]⁺ e^{−2λt}

This calculation shows that the bound of theorem 2 is optimal, in the sense that it can be attained.

2.3.3 Noisy and noiseless trajectories

Consider the following duplicated system

dx = ( f(a, t), f(b, t) )ᵀ dt + diag( 0, σ(b, t) ) ( dW¹, dW² )ᵀ = f̆(x, t)dt + σ̆(x, t)dW_2d    (2.14)

This equation is the same as equation (2.5), except that the a-system is not perturbed by white noise. Thus V(x) = ‖a − b‖² represents the distance between a noiseless trajectory and a noisy one. All the calculations are the same as in the previous development, with C replaced by C/2. One can then easily derive the following corollary.

Corollary 2 Assume that system (2.4) verifies (H1) and (H2). Let a(t) be a noiseless trajectory starting at a₀, and b(t) a noisy trajectory starting at b₀. Then

∀t ≥ 0   E( ‖a(t) − b(t)‖² ) ≤ C/(2λ) + [ ‖a₀ − b₀‖² − C/(2λ) ]⁺ e^{−2λt}    (2.15)

Remark This corollary provides a robustness result for contracting systems, in the sense that any contracting system is automatically protected against noise, as quantified by (2.15). This robustness could be related to the exponential nature of contraction stability.

2.4 Generalization of the stochastic contraction theorem

Theorem 2 can be vastly generalized by considering general time-dependent metrics. The case of space-dependent metrics is not considered in this article and will be the subject of future work. Specifically, let us replace (H1) and (H2) by the following hypotheses:

(H1') There exists a uniformly positive definite metric M(t) = Θ(t)ᵀΘ(t), with lower bound β > 0 (i.e. ∀x, t : xᵀM(t)x ≥ β‖x‖²), such that f(a, t) is contracting in that metric, with contraction rate λ, i.e.

λmax( ( (d/dt)Θ(t) + Θ(t) ∂f/∂a ) Θ⁻¹(t) ) ≤ −λ   uniformly

or, equivalently,

M(t) ∂f/∂a + (∂f/∂a)ᵀ M(t) + (d/dt)M(t) ≤ −2λM(t)   uniformly

(H2') tr( σ(a, t)ᵀ M(t) σ(a, t) ) is uniformly upper-bounded by a constant C.

Definition 2 A system that verifies (H1’) and (H2’) is said to be stochastically contracting in the metric M(t), with rate λ and bound C.

Consider now the generalized Lyapunov-like function V₁(x, t) = (a − b)ᵀM(t)(a − b). Lemma 1 can then be generalized as follows.

Lemma 2 Under (H1') and (H2'), one has the inequality

ÃV₁(x, t) ≤ −2λV₁(x, t) + 2C    (2.16)

Proof Let us first compute ÃV₁:

ÃV₁(x, t) = ∂V₁/∂t + (∂V₁/∂x) f̆(x, t) + ½ tr( σ̆(x, t)ᵀ (∂²V₁/∂x²) σ̆(x, t) )
          = (a − b)ᵀ( (d/dt)M(t) )(a − b) + 2(a − b)ᵀM(t)( f(a, t) − f(b, t) )
            + tr( σ(a, t)ᵀM(t)σ(a, t) ) + tr( σ(b, t)ᵀM(t)σ(b, t) )

Fix t > 0 and consider the real-valued function

r(µ) = (a − b)ᵀM(t)( f(µa + (1 − µ)b, t) − f(b, t) )

Since f is C¹, r is C¹ over [0, 1]. By the mean value theorem, there exists µ₀ ∈ ]0, 1[ such that

r′(µ₀) = r(1) − r(0) = (a − b)ᵀM(t)( f(a) − f(b) )

On the other hand, one obtains by differentiating r

r′(µ₀) = (a − b)ᵀM(t) (∂f/∂a)(µ₀a + (1 − µ₀)b) (a − b)

Thus, letting c = µ₀a + (1 − µ₀)b, one has

(a − b)ᵀ( (d/dt)M(t) )(a − b) + 2(a − b)ᵀM(t)( f(a) − f(b) )
  = (a − b)ᵀ( (d/dt)M(t) )(a − b) + 2(a − b)ᵀM(t) (∂f/∂a)(c) (a − b)
  = (a − b)ᵀ( (d/dt)M(t) + M(t)(∂f/∂a)(c) + ( (∂f/∂a)(c) )ᵀM(t) )(a − b)
  ≤ −2λ(a − b)ᵀM(t)(a − b) = −2λV₁(x, t)    (2.17)

where the inequality is obtained by using (H1'). Finally, combining equation (2.17) with (H2') yields the desired result. □

We can now state the generalized stochastic contraction theorem.

Theorem 3 (Generalized stochastic contraction) Assume that system (2.4) verifies (H1') and (H2'). Let a(t) and b(t) be two trajectories starting respectively at a₀ and b₀. Then

∀t ≥ 0   E( (a(t) − b(t))ᵀM(t)(a(t) − b(t)) ) ≤ C/λ + [ D₀ − C/λ ]⁺ e^{−2λt}    (2.18)

where D₀ = (a₀ − b₀)ᵀM(0)(a₀ − b₀). In particular,

∀t ≥ 0   E( ‖a(t) − b(t)‖² ) ≤ (1/β)( C/λ + [ D₀ − C/λ ]⁺ e^{−2λt} )    (2.19)

Proof Following the same reasoning as in the proof of theorem 2, one obtains

∀t ≥ 0   EV₁(x(t)) ≤ C/λ + [ V₁(x₀, 0) − C/λ ]⁺ e^{−2λt}

which is (2.18). Next, observing that

‖a(t) − b(t)‖² ≤ (1/β)(a(t) − b(t))ᵀM(t)(a(t) − b(t)) = (1/β)V₁(x(t), t)

and taking expectations leads to (2.19). □
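Condition (H1') can be checked numerically directly in its matrix-inequality form. The sketch below is an illustration only: the two-dimensional drift and the time-dependent metric M(t) are not from the paper, and the sampled points can only suggest uniformity; it verifies M(t)∂f/∂a + (∂f/∂a)ᵀM(t) + (d/dt)M(t) + 2λM(t) ≤ 0 at randomly sampled (a, t).

import numpy as np

lam = 0.5                                        # candidate contraction rate

def jacobian(a, t):
    # Illustrative drift f(a, t) = (-a1 + 0.5*a2, -0.5*a1 - a2 - a2**3)
    return np.array([[-1.0, 0.5],
                     [-0.5, -1.0 - 3.0 * a[1]**2]])

def M(t):
    # Illustrative uniformly positive definite, time-dependent metric
    return np.array([[2.0 + np.sin(t), 0.3],
                     [0.3,             1.0]])

def Mdot(t, h=1e-6):
    return (M(t + h) - M(t - h)) / (2 * h)

rng = np.random.default_rng(2)
ok = True
for _ in range(1000):
    a, t = rng.uniform(-2, 2, 2), rng.uniform(0, 20)
    J, Mt = jacobian(a, t), M(t)
    lhs = Mt @ J + J.T @ Mt + Mdot(t) + 2 * lam * Mt
    ok &= bool(np.linalg.eigvalsh(lhs).max() <= 1e-9)   # (H1') requires lhs <= 0
print("(H1') holds at all sampled points:", ok)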

3 Combinations of contracting stochastic systems

Stochastic contraction inherits naturally from deterministic contraction [22] its convenient combination properties. Because contraction is a state-space concept, such properties can be expressed in more general forms than input-output analogues such as passivity-based combinations [29]. The following combination properties allow one to build by recursion stochastically contracting systems of arbitrary size.

Parallel combination Consider two stochastic systems of the same dimension

dx₁ = f₁(x₁, t)dt + σ₁(x₁, t)dW
dx₂ = f₂(x₂, t)dt + σ₂(x₂, t)dW

Assume that both systems are stochastically contracting in the same constant metric M, with rates λ₁ and λ₂ and with bounds C₁ and C₂. Consider a uniformly positive bounded superposition α₁(t)x₁ + α₂(t)x₂, where

∃lᵢ, mᵢ > 0, ∀t ≥ 0   lᵢ ≤ αᵢ(t) ≤ mᵢ

Clearly, this superposition is stochastically contracting in the metric M, with rate l₁λ₁ + l₂λ₂ and bound m₁C₁ + m₂C₂.

Negative feedback combination In this and the following paragraphs, we describe combination properties for contracting systems in constant metrics M. The case of time-varying metrics can be easily adapted from this development, but is skipped here for the sake of clarity. Consider two coupled stochastic systems

dx₁ = f₁(x₁, x₂, t)dt + σ₁(x₁, t)dW
dx₂ = f₂(x₁, x₂, t)dt + σ₂(x₂, t)dW

Assume that system i (i = 1, 2) is stochastically contracting with respect to Mᵢ = ΘᵢᵀΘᵢ, with rate λᵢ and bound Cᵢ. Assume furthermore that the two systems are connected by negative feedback [32]. More precisely, the Jacobian matrices of the couplings (where Jᵢ = ∂fᵢ/∂xᵢ and Jᵢⱼ = ∂fᵢ/∂xⱼ) are of the form Θ₁J₁₂Θ₂⁻¹ = −k(Θ₂J₂₁Θ₁⁻¹)ᵀ, with k a positive constant. Hence, the Jacobian matrix of the augmented unperturbed system is given by (rows separated by semicolons)

J = [ J₁ , −kΘ₁⁻¹(Θ₂J₂₁Θ₁⁻¹)ᵀΘ₂ ; J₂₁ , J₂ ]

Consider the coordinate transform Θ = diag(Θ₁, √k Θ₂), associated to the metric M = ΘᵀΘ > 0. After some calculations, one has

(ΘJΘ⁻¹)ₛ = [ (Θ₁J₁Θ₁⁻¹)ₛ , 0 ; 0 , (Θ₂J₂Θ₂⁻¹)ₛ ] ≤ max(−λ₁, −λ₂) I   uniformly    (3.1)

The augmented system is thus stochastically contracting in the metric M, with rate min(λ₁, λ₂) and bound C₁ + kC₂.

Hierarchical combination We first recall a standard result in matrix analysis [15]. Let A be a symmetric matrix in the form

A = [ A₁ , A₂₁ᵀ ; A₂₁ , A₂ ]

Assume that A₁ and A₂ are positive definite. Then A is positive definite if

sing²(A₂₁) < λmin(A₁)λmin(A₂)

where sing(A₂₁) denotes the largest singular value of A₂₁. In this case, the smallest eigenvalue of A satisfies

λmin(A) ≥ (λmin(A₁) + λmin(A₂))/2 − √( ((λmin(A₁) − λmin(A₂))/2)² + sing²(A₂₁) )

Consider now the same set-up as in the previous paragraph, except that the connection is now hierarchical and upper-bounded. More precisely, the Jacobians of the couplings verify J₁₂ = 0 and sing²(Θ₂J₂₁Θ₁⁻¹) ≤ K. Hence, the Jacobian matrix of the augmented unperturbed system is given by

J = [ J₁ , 0 ; J₂₁ , J₂ ]

Consider a coordinate transform Θ_ε = diag(Θ₁, εΘ₂), associated to the metric M_ε = Θ_εᵀΘ_ε > 0. After some calculations, one has

(Θ_εJΘ_ε⁻¹)ₛ = [ (Θ₁J₁Θ₁⁻¹)ₛ , ½ε(Θ₂J₂₁Θ₁⁻¹)ᵀ ; ½ε Θ₂J₂₁Θ₁⁻¹ , (Θ₂J₂Θ₂⁻¹)ₛ ]

Set now ε = √(2λ₁λ₂/K). The augmented system is then stochastically contracting in the metric M_ε, with rate ½( λ₁ + λ₂ − √(λ₁² + λ₂²) ) and bound C₁ + 2C₂λ₁λ₂/K.

Small gains In this paragraph, we require no specific assumption on the form of the couplings. Consider the coordinate transform Θ_k = diag(Θ₁, √k Θ₂), associated to the metric M_k = Θ_kᵀΘ_k > 0. After some calculations, one has

(Θ_kJΘ_k⁻¹)ₛ = [ (Θ₁J₁Θ₁⁻¹)ₛ , B_kᵀ ; B_k , (Θ₂J₂Θ₂⁻¹)ₛ ]

where B_k = ½( √k Θ₂J₂₁Θ₁⁻¹ + (1/√k)(Θ₁J₁₂Θ₂⁻¹)ᵀ ). Following the matrix analysis result stated at the beginning of the previous paragraph, if inf_{k>0} sing²(B_k) < λ₁λ₂, then the augmented system is stochastically contracting in the metric M_k, with rate λ verifying

λ ≥ (λ₁ + λ₂)/2 − √( ((λ₁ − λ₂)/2)² + inf_{k>0} sing²(B_k) )    (3.2)

and bound C₁ + kC₂.
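The small-gain condition and the bound (3.2) can be checked numerically. The sketch below is illustrative (it takes Θ₁ = Θ₂ = I and randomly generated contracting blocks and couplings, none of which are from the paper): it scans a grid of k, picks the value minimizing sing²(B_k), and compares the resulting contraction rate of the combined generalized Jacobian with the right-hand side of (3.2).

import numpy as np

rng = np.random.default_rng(3)

def random_contracting(n, lam):
    # Random Jacobian whose symmetric part has largest eigenvalue exactly -lam.
    A = rng.normal(size=(n, n))
    S = 0.5 * (A + A.T)
    return A - (np.linalg.eigvalsh(S).max() + lam) * np.eye(n)

n1, n2 = 3, 2
J1, J2 = random_contracting(n1, 1.0), random_contracting(n2, 2.0)
lam1 = -np.linalg.eigvalsh(0.5 * (J1 + J1.T)).max()
lam2 = -np.linalg.eigvalsh(0.5 * (J2 + J2.T)).max()
J12 = 0.3 * rng.normal(size=(n1, n2))            # couplings (Theta_1 = Theta_2 = I here)
J21 = 0.3 * rng.normal(size=(n2, n1))

ks = np.logspace(-2, 2, 400)
sing2, rates = [], []
for k in ks:
    Bk = 0.5 * (np.sqrt(k) * J21 + (1 / np.sqrt(k)) * J12.T)
    sing2.append(np.linalg.svd(Bk, compute_uv=False).max() ** 2)
    Jk = np.block([[J1, J12 / np.sqrt(k)],
                   [np.sqrt(k) * J21, J2]])      # Theta_k J Theta_k^{-1}
    rates.append(-np.linalg.eigvalsh(0.5 * (Jk + Jk.T)).max())

i = int(np.argmin(sing2))                        # best k on the grid
bound = 0.5 * (lam1 + lam2) - np.sqrt(0.25 * (lam1 - lam2)**2 + sing2[i])
print(f"inf_k sing^2(B_k) ~ {sing2[i]:.3f}  (small-gain condition: < lam1*lam2 = {lam1 * lam2:.3f})")
print(f"combined rate at that k = {rates[i]:.3f} >= bound (3.2) = {bound:.3f}")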

4 Some simple examples

4.1 Effect of measurement noise on contracting observers

Consider a nonlinear dynamical system ẋ = f(x, t). If a measurement y = y(x) is available, then it may be possible to choose an output injection matrix K(t) such that the dynamics

dx̂/dt = f(x̂, t) + K(t)(y − ŷ)    (4.1)

is contracting, with ŷ = y(x̂). Since the actual state x is a particular solution of (4.1), any solution x̂ of (4.1) will then converge towards x exponentially.

Let us restrict ourselves to the case where y depends linearly on the state, i.e. y = H(t)x, and assume that the measurements are corrupted by additive "white noise", so that the measurement equation becomes y = H(t)(x + Σ(t)ξ(t)), where ξ(t) is a multidimensional "white noise" and Σ(t) is the matrix of noise intensities. The observer equation is now given by the following Itô stochastic differential equation (using the formal rule ξ(t)dt = dW)

dx̂ = ( f(x̂, t) + K(t)(H(t)x − H(t)x̂) )dt + K(t)H(t)Σ(t)dW    (4.2)

By corollary 2, one has, for any solution x̂ of system (4.2),

∀t ≥ 0   E( ‖x̂(t) − x(t)‖² ) ≤ C/(2λ) + [ ‖x̂₀ − x₀‖² − C/(2λ) ]⁺ e^{−2λt}    (4.3)

where

λ = inf_{x,t} | λmax( ∂f(x, t)/∂x − K(t)H(t) ) |
C = sup_{t≥0} tr( Σ(t)ᵀH(t)ᵀK(t)ᵀK(t)H(t)Σ(t) )

Remark Note that the choice of the injection gain K(t) is governed by a trade-off between convergence speed (λ) and noise sensitivity (C/λ) as quantified by (4.3). More generally, the explicit computation of the bound on the expected quadratic estimation error given by (4.3) may open the possibility of measurement selection in a way similar to the linear case. If several possible measurements or sets of measurements can be performed, one may try at each instant (or at each step, in a discrete version) to select the most relevant, i.e., the measurement or set of measurements which will best contribute to improving the state estimate. Similarly to the Kalman filters used in [6] for linear systems, this can be achieved by computing, along with the state estimate itself, the corresponding bounds on the expected quadratic estimation error, and then selecting accordingly the measurement which will minimize it.
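As an illustration of (4.2)-(4.3), here is a sketch that is not from the paper: it uses the scalar drift f(x) = x − x³ with measurement y = x (H = 1), gain K = 4 and noise intensity Σ = σ, all illustrative choices. With them ∂f/∂x − KH = 1 − 3x̂² − 4 ≤ −3, so one can take λ = 3 and C = K²σ², and the Monte Carlo mean squared estimation error can be compared with the asymptotic bound C/(2λ).

import numpy as np

K, sigma, lam = 4.0, 0.2, 3.0            # gain, measurement noise intensity, contraction rate
dt, T, trials = 1e-3, 4.0, 2000
steps = int(T / dt)

f = lambda x: x - x**3                   # illustrative drift (not from the paper)
rng = np.random.default_rng(4)

x    = np.full(trials, 1.5)              # true state (same noiseless trajectory in every trial)
xhat = np.full(trials, -1.0)             # observer state
mse  = np.empty(steps)

for k in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), trials)
    xhat += (f(xhat) + K * (x - xhat)) * dt + K * sigma * dW   # observer SDE (4.2) with H = 1
    x    += f(x) * dt                                          # noiseless true dynamics
    mse[k] = np.mean((xhat - x)**2)

C = (K * sigma)**2
print(f"empirical E|xhat - x|^2 at t = {T}: {mse[-1]:.4f}")
print(f"asymptotic bound C/(2*lambda) from (4.3): {C / (2 * lam):.4f}")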

4.2 Estimation of velocity using composite variables

In this section, we present a very simple example that hopefully suggests the many possibilities that could stem from the combination of our stochastic stability analysis with the composite variables framework [30].

Let x be the position of a mobile having constant acceleration. We would like to compute good approximations of the mobile's velocity v and acceleration a using only measurements of x and without using any filter. To this purpose, we construct the following observer

d/dt (v, a)ᵀ = [ −α , 1 ; −β , 0 ] (v, a)ᵀ + ( (β − α²)x , −βαx )ᵀ = A (v, a)ᵀ + B (x, x)ᵀ    (4.4)

and introduce the composite variables v̂ = v + αx and â = a + βx. By construction, these variables follow the equation

d/dt (v̂, â)ᵀ = A (v̂ − v, â − a)ᵀ + (a, 0)ᵀ    (4.5)

where v and a here denote the true velocity and acceleration of the mobile, and therefore a particular solution of (v̂, â) is clearly (v, a). Choose now β = α². One can then show that system (4.5) is contracting with rate λ_α = rα, where r ≈ 0.43 (−r is the unique real root of the cubic equation x³ + 2x² + 3x + 1 = 0). Hence, (v̂, â) converges exponentially to (v, a) with rate λ_α.

Assume now that the measurements of x are corrupted by additive "white noise", so that x_measured = x + σξ. Equation (4.4) becomes an Itô stochastic differential equation

d (v, a)ᵀ = ( A (v, a)ᵀ + B (x, x)ᵀ ) dt + B diag(σ, σ) dW

In the above equation, the variance of the noise is clearly upper-bounded by α⁶σ². Using again corollary 2, we obtain

∀t ≥ 0   E( ‖v̂(t) − v(t)‖² ) ≤ α⁵σ²/(2r) + [ ‖v̂₀ − v₀‖² − α⁵σ²/(2r) ]⁺ e^{−2λ_α t}

To illustrate this result, we performed a numerical simulation using the Euler-Maruyama algorithm [14] for a mobile described by the equation x(t) = 0.5a₀t² + v₀t. Figure 1 was generated using the parameters α = 2, σ = 0.6, a₀ = 3 and v₀ = 2. Note that the theoretical asymptotic upper bound α⁵σ²/(2r) on E( ‖v̂(t) − v(t)‖² ) is particularly accurate.


Figure 1: Estimation of the velocity of a mobile using noisy measurements of its position. Left plot: actual position (red), actual velocity (blue), estimate of the velocity in one trial (green). Right plot: average squared difference between actual and estimated velocities over 1000 trials (red), theoretical bound (blue).
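The simulation behind figure 1 can be re-created along the following lines. This is only a sketch: it uses the parameters stated above (α = 2, σ = 0.6, a₀ = 3, v₀ = 2, 1000 trials), assumes that the measurement noise enters solely through the B-term of (4.4), which yields the error dynamics coded below, and the time step and other integration details of the original figure are not given in the text, so exact agreement with figure 1 is not expected.

import numpy as np

alpha, sigma = 2.0, 0.6
a0, v0 = 3.0, 2.0                     # true constant acceleration and initial velocity
r = 0.43                              # see the cubic equation above
dt, T, trials = 1e-3, 8.0, 1000
steps = int(T / dt)

rng = np.random.default_rng(5)
vhat = np.zeros(trials)               # composite velocity estimates, one per trial
ahat = np.zeros(trials)               # composite acceleration estimates
v_err2 = np.empty(steps)
full_err2 = np.empty(steps)

noise_gain = alpha**3 * sigma         # with beta = alpha**2, the noise gain is beta*alpha*sigma

for k in range(steps):
    t = k * dt
    v = a0 * t + v0                                           # true velocity at time t
    dW = rng.normal(0.0, np.sqrt(dt), trials)
    # composite-variable dynamics (4.5), with the measurement-noise term of the Ito equation
    dv = (-alpha * (vhat - v) + (ahat - a0) + a0) * dt
    da = (-alpha**2 * (vhat - v)) * dt - noise_gain * dW
    vhat += dv
    ahat += da
    v_err2[k] = np.mean((vhat - (a0 * (t + dt) + v0))**2)     # squared velocity error
    full_err2[k] = v_err2[k] + np.mean((ahat - a0)**2)        # full composite-state error

print(f"empirical E|vhat - v|^2 at t = {T}: {v_err2[-1]:.2f}")
print(f"empirical full composite-state squared error at t = {T}: {full_err2[-1]:.2f}")
print(f"theoretical asymptotic bound alpha^5 sigma^2/(2r): {alpha**5 * sigma**2 / (2 * r):.2f}")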

4.3 Stochastic synchronization

Consider a network of n dynamical elements coupled through diffusive connections

dxᵢ = ( f(xᵢ, t) + Σ_{j≠i} Kᵢⱼ(xⱼ − xᵢ) ) dt + σᵢ(xᵢ, t)dW_dⁱ,   i = 1, …, n    (4.6)

Let

x̆ = (x₁, …, xₙ)ᵀ,   f̆(x̆, t) = ( f(x₁, t), …, f(xₙ, t) )ᵀ,   σ̆(x̆, t) = diag( σ₁(x₁, t), …, σₙ(xₙ, t) )

The global state x̆ then follows the equation

dx̆ = ( f̆(x̆, t) − Lx̆ ) dt + σ̆(x̆, t)dW_nd    (4.7)

where the matrix L is constructed from the coupling gains Kᵢⱼ. In the sequel, we follow the reasoning of [28], which starts by defining an appropriate projection matrix V describing the synchronisation subspace (V represents the state projection on the subspace M⊥, orthogonal to the linear subspace M, invariant under synchronization; see [28] for details). Denote by y̆ the state of the projected system, y̆ = Vx̆. Since the mapping is linear, the Itô differentiation rule simply yields

dy̆ = Vdx̆ = ( Vf̆(x̆, t) − VLx̆ ) dt + Vσ̆(x̆, t)dW_nd
         = ( Vf̆(Vᵀy̆, t) − VLVᵀy̆ ) dt + Vσ̆(Vᵀy̆, t)dW_nd    (4.8)

Assume now that the couplings are strong enough, so that A = V(∂f/∂x)Vᵀ − VLVᵀ is uniformly negative definite. Let λ = |λmax(A)| > 0. System (4.8) then verifies condition (H1) with rate λ. Assume furthermore that each noise intensity σᵢ is upper-bounded by a constant Cᵢ, i.e. sup_{x,t} tr( σᵢ(x, t)ᵀσᵢ(x, t) ) ≤ Cᵢ. Condition (H2) will then be satisfied with the bound C = Σ_{i∈Sync} Cᵢ, where Sync denotes the set of all aspiring synchronized elements.

Next, consider a noiseless trajectory y̆ᵤ(t) of system (4.8). By theorem 3 of [28], we know that y̆ᵤ(t) converges exponentially to zero. Thus, by corollary 2, one can conclude that, after exponential transients of rate λ, E( ‖y̆(t)‖² ) ≤ C/(2λ). In other words, after exponential transients of rate λ, the mean distance from the synchronization manifold is smaller than √(C/(2λ)).

Remark The above development can be easily generalized to the case of space-independent (but possibly time-dependent) metrics, by combining theorem 3 of this paper and corollary 1 of [28].

Example As an illustration of the above development, we provide here a detailed analysis of the synchronization of noisy FitzHugh-Nagumo oscillators




(see [34] for the references). The dynamics of two diffusively-coupled noisy FitzHugh-Nagumo oscillators can be described by

dvᵢ = ( c(vᵢ + wᵢ − vᵢ³/3 + Iᵢ) + k(v₀ − vᵢ) )dt + σdW,   i = 1, 2
dwᵢ = −(1/c)(vᵢ − a + bwᵢ)dt

Let x = (v₁, w₁, v₂, w₂)ᵀ and

V = (1/√2) [ 1 , 0 , −1 , 0 ; 0 , 1 , 0 , −1 ]

The Jacobian matrix of the projected noiseless system is then given by

[ c − c(v₁² + v₂²)/2 − k , c ; −1/c , −b/c ]

Thus, if the coupling strength verifies k > c, then the projected system will be stochastically contracting in the diagonal metric M = diag(1, c), with rate min(k − c, b/c) and bound σ². Hence, the average absolute difference between the two membrane potentials |v₁ − v₂| should be upper-bounded by σ/√( min(1, c) min(k − c, b/c) ) (see figure 2 for a numerical simulation).
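A simulation of the kind shown in figure 2 can be sketched as follows. The numerical values (c = 3, a = 0.7, b = 0.8, k = 5 > c, I = −0.4, σ = 0.5) are illustrative and not necessarily those of the original figure, and the common coupling signal v₀ is taken here to be the mean field (v₁ + v₂)/2, an assumption which reproduces the −k term in the projected Jacobian above.

import numpy as np

c, a, b = 3.0, 0.7, 0.8            # classic FitzHugh-Nagumo parameters (illustrative)
I_ext, k, sigma = -0.4, 5.0, 0.5   # stimulus, coupling strength (k > c), noise intensity
dt, T = 1e-3, 10.0
steps = int(T / dt)

rng = np.random.default_rng(6)
v = np.array([1.0, -1.0])          # membrane potentials of the two oscillators
w = np.array([0.0, 0.5])           # recovery variables
diff = np.empty(steps)

for n in range(steps):
    v0 = v.mean()                  # common coupling signal (assumption: mean field)
    dW = rng.normal(0.0, np.sqrt(dt), 2)
    dv = (c * (v + w - v**3 / 3 + I_ext) + k * (v0 - v)) * dt + sigma * dW
    dw = (-(v - a + b * w) / c) * dt
    v, w = v + dv, w + dw
    diff[n] = abs(v[0] - v[1])

bound = sigma / np.sqrt(min(1, c) * min(k - c, b / c))
print(f"time-averaged |v1 - v2| over the second half of the run: {diff[steps // 2:].mean():.3f}")
print(f"bound from the analysis above: {bound:.3f}")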


Figure 2: Synchronization of two noisy FitzHugh-Nagumo oscillators.

A A variation of Gronwall's lemma

Lemma 3 Let g : [0, ∞[ → R be a continuous function, λ > 0 and C ≥ 0. Assume that

∀u, t, 0 ≤ u ≤ t   g(t) − g(u) ≤ ∫ᵤᵗ ( −λg(s) + C ) ds    (A.1)

Then

∀t ≥ 0   g(t) ≤ C/λ + [ g(0) − C/λ ]⁺ e^{−λt}    (A.2)

where [·]⁺ = max(0, ·).

See [27] for a proof.

References

[1] L. d'Alto, M. Corless. Incremental Quadratic Stability. Available at http://roger.ecn.purdue.edu/~corless/publications/c99-cdc05-dalto/p.pdf

[2] D. Angeli. A Lyapunov Approach to Incremental Stability Properties. IEEE Transactions on Automatic Control, 47, pp. 410-422, 2002.

[3] L. Arnold. Stochastic Differential Equations: Theory and Applications. New York, NY; London; Sydney: Wiley, 1974.

[4] B. Demidovich. Dissipativity of a Nonlinear System of Differential Equations. Vestnik Moscow State University, Ser. Mat. Mekh., Part I-6 (1961) 19-27; Part II-1 (1962) 3-8 (in Russian).

[5] H. Deng, M. Krstić, R. Williams. Stabilization of Stochastic Nonlinear Systems Driven by Noise of Unknown Covariance. IEEE Trans. on Automat. Contr., 46(8), 2001.

[6] E. Dickmanns. Dynamic Vision for Intelligent Vehicles. Course Notes, MIT Dept. of Electrical Engineering and Computer Science, 1998.

[7] K. Elrifai, J.-J. Slotine. Contraction and Incremental Stability. NSL Report, 2006.

[8] P. Florchinger. Lyapunov-like techniques for stochastic stability. SIAM J. Control Optim., 33, pp. 1151-1169, 1995.

[9] P. Florchinger. Feedback stabilization of affine in the control stochastic differential systems by the control Lyapunov function method. SIAM J. Control Optim., 35, pp. 500-511, 1997.

[10] V. Fromion. Some results on the behavior of Lipschitz continuous systems. Proc. European Control Conf., 1997.

[11] B. Girard et al. Where Neuroscience and Dynamic System Theory Meet Autonomous Robotics: a Contracting Basal Ganglia Model for Action Selection. To be published in Neural Networks.

[12] W. Hahn. Stability of Motion. Springer, 1967.

[13] P. Hartmann. Ordinary Differential Equations. Wiley, New York, 1964.

[14] D. Higham. An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations. SIAM Review, 43, pp. 525-546, 2001.

[15] R. Horn, C. Johnson. Matrix Analysis. Cambridge University Press, 1985.

[16] J. Jouffroy, T. Fossen. On the combination of nonlinear contracting observers and UGES controllers for output feedback. Proc. IEEE Conference on Decision and Control, 2004.

[17] J. Jouffroy. Some Ancestors of Contraction Analysis. Proc. IEEE Conference on Decision and Control, 2005.

[18] H. Khalil. Nonlinear Systems, 2nd Ed. Prentice-Hall, 2002.

[19] R. Has'minskii. Stochastic Stability of Differential Equations. Sijthoff & Noordhoff, Rockville, 1980.

[20] H. Kushner. Stochastic Stability and Control. Academic Press, 1967.

[21] D. Lewis. Metric properties of differential equations. American Journal of Mathematics, 71, pp. 294-312, 1949.

[22] W. Lohmiller, J.-J. Slotine. On Contraction Analysis for Nonlinear Systems. Automatica, 34(6):671-682, 1998.

[23] W. Lohmiller, J.-J. Slotine. Contraction Analysis of Nonlinear Distributed Systems. International Journal of Control, 78(9), 2005.

[24] A. Lyapunov. The General Problem of Motion Stability (in Russian), 1892. Translated in French, Ann. Fac. Sci. Toulouse 9, pp. 203-474 (1907). Reprinted in Ann. Math. Study No. 17, Princeton Univ. Press, 1949.

[25] X. Mao. Stability of Stochastic Differential Equations with Respect to Semimartingales. White Plains, NY: Longman, 1991.

[26] Z. Pan, T. Başar. Backstepping controller design for nonlinear stochastic systems under a risk-sensitive cost criterion. SIAM J. Control Optim., 37, pp. 957-995, 1999.

[27] Q.-C. Pham. A variation of Gronwall's lemma. Available at http://arxiv.org/abs/0704.0922, 2007.

[28] Q.-C. Pham, J.-J. Slotine. Stable Concurrent Synchronization in Dynamic System Networks. Neural Networks, 20(1), 2007.

[29] V. Popov. Hyperstability of Control Systems. Springer-Verlag, 1973.

[30] J.-J. Slotine, W. Li. Applied Nonlinear Control. Prentice-Hall, 1991.

[31] E. Sontag, Y. Wang. Output-to-state stability and detectability of nonlinear systems. Syst. Control Lett., 29, pp. 279-290, 1997.

[32] N. Tabareau, J.-J. Slotine. Notes on Contraction Theory. MIT-NSL Report 0503, 2005.

[33] J. Tsinias. The concept of "exponential ISS" for stochastic systems and applications to feedback stabilization. Syst. Control Lett., 36, pp. 221-229, 1999.

[34] W. Wang, J.-J. Slotine. On Partial Contraction Analysis for Coupled Nonlinear Oscillators. Biological Cybernetics, 92(1), 2005.
