DESIGN OF HOMOGENEOUS TIME-VARYING STABILIZING CONTROL LAWS FOR DRIFTLESS CONTROLLABLE SYSTEMS VIA OSCILLATORY APPROXIMATION OF LIE BRACKETS IN CLOSED-LOOP

PASCAL MORIN†, JEAN-BAPTISTE POMET†, AND CLAUDE SAMSON†

Abstract. A constructive method for time-varying stabilization of smooth driftless controllable systems is developed. It provides time-varying homogeneous feedback laws that are continuous, and smooth away from the origin. These feedbacks make the closed-loop system globally exponentially asymptotically stable if the control system is homogeneous with respect to a family of dilations and, using local homogeneous approximation of control systems, locally exponentially asymptotically stable otherwise. The method uses some known algorithms that construct oscillatory control inputs to approximate motion in the direction of iterated Lie brackets, which we adapt to the closed-loop context.

Key words. Nonlinear control, stabilization, time-varying stabilization, controllability, Lie brackets

AMS subject classifications. 93D15, 34C29, 93B52

1. Introduction.

1.1. Related work and contribution. Stabilization by continuous time-varying feedback laws of nonlinear systems that cannot be stabilized by time-invariant continuous feedback laws has been an ongoing subject of research in the past few years. The fact that, for many controllable systems, no continuous stabilizing feedback exists was first pointed out by Sussmann [23]. A simple necessary condition was given by Brockett [1]; since known as Brockett's condition, it allows one to identify a wide class of controllable systems for which no continuous stabilizing feedback exists, including most controllable driftless systems. More recently, Coron gave a stronger necessary condition in [2]. A possible way of stabilizing systems for which these necessary conditions are violated is to use discontinuous (time-invariant) control laws. This has been explored in the literature, but the present work does not go at all in this direction. The possibility of stabilizing nonlinear controllable systems via continuous time-varying feedback control laws was first noticed in the very detailed study of stabilization of one-dimensional systems by Sontag and Sussmann [21]. More recently, smooth stabilizing control laws for some non-holonomic mechanical systems were given by the third author in [18], and this was the starting point of a systematic study of time-varying stabilization. Coron proved in [3] that all controllable driftless systems may be stabilized by continuous (and even smooth) time-varying feedback (the paper [15] by the second author deals with a less general class of controllable driftless systems), and in [4] that most controllable systems (even with drift) can also be stabilized by continuous time-varying feedback.

∗ January 11, 1997, revised November 18, 1997 and July 31, 1998. To appear in SIAM J. on Control & Optimization. A preliminary version of this material was presented at the conference MTNS'96, Saint Louis, U.S.A., June 24-28, 1996.
† INRIA Sophia-Antipolis, B.P. 93, 06902 Sophia Antipolis cedex, France. [email protected], [email protected], [email protected].


From here on, only driftless systems are considered in this paper. After the general existence result given in [3], studies on the subject have focused on methods to construct continuous time-varying stabilizing feedback laws, and on obtaining feedback laws that provide sufficiently fast convergence. As far as constructiveness is concerned, let us, for simplicity, divide the construction methods into two kinds. The first kind applies to rather large classes of controllable driftless systems, like the work of Coron [3] (general controllable driftless systems; the paper is not oriented towards construction of the control, but a method can be extracted from the proofs), by the second author [15] (controllable driftless systems for which the control Lie algebra is generated by a specific set of vector fields), or by M'Closkey and Murray [12] (same conditions as in [15]). These studies all share the following feature: they use the solution of a linear PDE, or the expression of the flow of a vector field, to construct the control law. This solution, or this flow, has to be calculated beforehand, either analytically or numerically, and this introduces, especially when no analytical solution is available, a degree of complication which may not be necessary. The second kind of methods found in the literature provides explicit expressions. Their drawback is that they only apply to specific subclasses of driftless systems, such as models of mobile robots, or systems in the so-called chain form or power form, like the work by the third author [18], by Teel et al. [27], or by Sépulchre et al. [20], among others. On the other hand, a need to improve the speed of convergence came out of the slow convergence associated with the smooth control laws that were first proposed.
This concern motivated several studies, starting with the work by M'Closkey and Murray [11], yielding continuous control laws which are not smooth, or even Lipschitz, everywhere, but are homogeneous with respect to some dilation, and thus exponentially stabilizing, not in the standard sense but w.r.t. some homogeneous norm (this notion was introduced by Kawski in [7]). See for instance further work by the authors [16, 14], or by M'Closkey and Murray [12], who have also proposed recently in [13] a procedure that transforms a given smooth stabilizing control law into a homogeneous one. Except for this last reference, which requires that a smooth stabilizing control law has been designed beforehand, the construction of homogeneous exponentially stabilizing control laws in the literature is restricted to specific subclasses of driftless systems. The design method described in the present paper has the advantage of being totally explicit, in the sense that it only requires ordinary differentiation and linear algebraic operations, while it applies to general controllable systems and provides exponential stability. This method gives homogeneous feedbacks, which ensure global stability if the control vector fields are homogeneous, and local stability otherwise. The fact that it relates controllability with the construction of a stabilizing control law in a more direct way than previous designs also makes it conceptually appealing, all the more so as it may be viewed as converting the open-loop control techniques reported by Liu and Sussmann in [25, 9] into closed-loop ones. However, the generality of the method also has a price. When applied to particular systems for which explicit solutions have long been available, the present method often yields solutions which are significantly more complicated. This comes partly from the complexity of the approximation algorithm proposed in [25, 9], which we use.
This is also a consequence of the modifications that we have made to adapt this algorithm to our feedback control objective.

1.2. Outline of the method. Nonlinear controllability results were first derived for driftless systems; see for instance the work by Lobry [10], where it is shown that


such a system is controllable if and only if any direction in the state space can be obtained as a linear combination of iterated Lie brackets of the control vector fields, at least in the real-analytic case. It was also shown very early on, by Haynes and Hermes [5], that under this same condition, any curve in the state space can be approached by open-loop solutions of the controlled system (note that this property is not shared by all controllable systems, but is rather specific to driftless systems). In these studies the key element is that, in addition to the directions of motion corresponding to the control vector fields, motion along other directions corresponding to iterated Lie brackets is also possible by quickly switching motions along the original control vector fields. Take for example a system with two controls

(1.1)   \dot x = u_1 b_1(x) + u_2 b_2(x)

with state x in IR^5, and assume that at each point x the vectors

(1.2)   b_1(x),\ b_2(x),\ [b_1,b_2](x),\ [b_1,[b_1,b_2]](x),\ [b_2,[b_1,b_2]](x)

are linearly independent, and thus span IR^5. The idea in [5] is the following: on one hand it is clear that any (e.g. differentiable) parameterized curve t \mapsto \gamma(t) is a possible solution of the extended system with five controls:

(1.3)   \dot x = v_1 b_1(x) + v_2 b_2(x) + v_3 [b_1,b_2](x) + v_4 [b_1,[b_1,b_2]](x) + v_5 [b_2,[b_1,b_2]](x)

(simply decompose \dot\gamma(t) on the basis (1.2) to obtain the controls). Then it is proved in [5] that there exists a sequence of (oscillatory) controls u_1(\varepsilon; t; v_1,\dots,v_5) and u_2(\varepsilon; t; v_1,\dots,v_5) such that the system (1.1) converges to the system (1.3) when \varepsilon \to 0, in the sense that the solutions of (1.1) with these controls u_k converge uniformly on finite time intervals to the solutions of (1.3). The proof in [5] does not give a process to build these approximating sequences of oscillatory controls, and although the case of a simple bracket (approximating [b_1,b_2] by switching between b_1 and b_2) is simple and well known, the above case of brackets of order 3 is already not obvious. The more recent work by Liu, and Liu and Sussmann [25, 9], gives an explicit construction of the approximating sequence. The process of building this sequence is amazingly intricate compared to the simplicity of the existence proof in [5]. Of course, the controls u_k are not defined for \varepsilon = 0, and both their frequency and their amplitude tend to infinity when \varepsilon goes to zero.

Being aware of these results, and faced with the problem of proving that any controllable driftless system may be stabilized by means of a periodic feedback, the most natural idea is probably the following, which we illustrate for the above case (1.1) (5 states, 2 controls):
a- Stabilize the extended system (1.3) by a control law v_i(x). This is very easy, and \dot x may even be assigned to be any desired function, for instance -x.
b- Use the approximation results and build the controls u_k(\varepsilon; t; v_1(x),\dots,v_5(x)), according to the process given in [25, 9], so that when \varepsilon tends to zero, the system (1.1) controlled with these controls tends to the extended system (1.3) controlled with the controls v_i(x).
c- Since the limit system is asymptotically stable (for instance \dot x = -x), and asymptotic stability is somehow robust, the constructed control laws are hopefully stabilizing for \varepsilon nonzero but small enough. For instance, one may take \|x\|^2 as a Lyapunov function for the limit system; its time-derivative along the limit system is -2\|x\|^2, and it is tempting to believe that its time-derivative along the original system controlled by u_k(\varepsilon; t; v_1(x),\dots,v_5(x)) is no larger than -\|x\|^2 for \varepsilon small enough.

Unfortunately, these arguments, which would have been somewhat simpler than those in [3], are not rigorous as they stand. The meaning of "tends to" in point b is very imprecise. In [5], and in [25, 9], only uniform convergence of the trajectories on finite-time intervals is considered. This is not adequate for asymptotic stabilization. The Lyapunov-function-based argument in point c does not work because, in general, when \varepsilon tends to zero, the time-derivative of a given function along the system (1.1) in feedback with the controls u_k from point b does not tend to the time-derivative of this function along the limit system (1.3). In addition, the fact that feedback controls are considered instead of open-loop controls complicates the proofs, because the controls depend on the state and therefore may have a very high derivative with respect to time, not only through the high frequencies and amplitudes built in the approximation process but also through their dependence on the state, whose speed is proportional to these high amplitudes. However, we show in the present paper that the above sketch is basically correct provided that homogeneous controls associated with a homogeneous Lyapunov function are used, and that the construction of the approximating sequence is modified to take into account the closed-loop nature of the controls. An argument of the type of point c is possible, based on a notion of approximation that is not in terms of uniform convergence of trajectories, but in terms of the differential operator defined by derivation along the system.

The paper is organized as follows.
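As an aside, the simple-bracket case invoked earlier (approximating [b_1, b_2] by oscillatory inputs of amplitude of order \varepsilon^{-1/2} and frequency of order 1/\varepsilon) is easy to check numerically. The sketch below is ours: it uses the standard nonholonomic integrator rather than the five-dimensional system (1.1), and illustrates only the open-loop mechanism, not the algorithm of [25, 9].

```python
import numpy as np

# Nonholonomic integrator: b1 = (1, 0, -x2), b2 = (0, 1, x1),
# so that [b1, b2] = (0, 0, 2).  With the oscillatory inputs
#   u1 = sqrt(w) cos(wt), u2 = sqrt(w) sin(wt)      (w playing 1/eps)
# the state drifts along the bracket direction x3 at average speed 1,
# while x1 and x2 remain of order w^{-1/2}.

def f(t, x, w):
    u1 = np.sqrt(w) * np.cos(w * t)
    u2 = np.sqrt(w) * np.sin(w * t)
    return np.array([u1, u2, x[0] * u2 - x[1] * u1])

def rk4(x0, w, T, n):
    """Fixed-step Runge-Kutta 4 integration of f on [0, T]."""
    dt, x, t = T / n, np.array(x0, dtype=float), 0.0
    for _ in range(n):
        k1 = f(t, x, w)
        k2 = f(t + dt / 2, x + dt / 2 * k1, w)
        k3 = f(t + dt / 2, x + dt / 2 * k2, w)
        k4 = f(t + dt, x + dt * k3, w)
        x, t = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + dt
    return x

x = rk4([0.0, 0.0, 0.0], w=100 * np.pi, T=1.0, n=20000)
print(x)  # x[2] close to 1 (bracket direction), x[0], x[1] close to 0
```

Here the exact solution is x_1 = \sin(wt)/\sqrt{w}, x_2 = (1-\cos(wt))/\sqrt{w}, x_3 = t - \sin(wt)/w, which makes explicit the O(w^{-1/2}) transverse excursion and the unit average speed along the bracket; this is the behavior that the construction of section 4 reproduces, in closed loop, for brackets of arbitrary order.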
After a brief recall of technical material in section 2, we state in section 3 the control objective, make homogeneity assumptions, and explain how they will yield local results for general controllable systems. The design method is developed in section 4 through four steps: choice of the useful Lie brackets, construction of the stabilizing controls for the extended system (system (1.3) in the above example), construction of the state-dependent amplitudes for the feedback law, and construction of the oscillatory controls by the method exposed in [9]; the material from these steps is then gathered to give the control law, and the stabilization result is stated. We present in this section all that is needed for the construction of the control law, but the proofs of some properties needed at each step, and of the theorem, are given separately in section 7. Section 6 is devoted to a convergence result needed in the proof of the stability theorem; it is a translation in terms of differential operators (instead of trajectories) of the averaging results presented in [25, 9, 26], and also in [8]. An illustrative example is given in section 5.

2. Background on homogeneous vector fields. For any \lambda > 0, the dilation operator \delta_\lambda associated with a weight vector r = (r_1,\dots,r_n), (r_i > 0), is defined on IR^n by

\delta_\lambda(x_1,\dots,x_n) = (\lambda^{r_1} x_1,\dots,\lambda^{r_n} x_n).

A function f \in C^0(IR^n; IR) is said to be homogeneous of degree \tau with respect to the family of dilations (\delta_\lambda) if

\forall \lambda > 0,\quad f(\delta_\lambda(x)) = \lambda^\tau f(x).

A homogeneous norm is any proper continuous positive function that is homogeneous of degree one.
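As a quick numerical illustration (our own sketch; the particular \rho below is the homogeneous norm suggested later in Step 3 of section 4, and the weights are those of the example given there), the defining identity \rho(\delta_\lambda(x)) = \lambda\,\rho(x) can be verified directly:

```python
import numpy as np

r = np.array([1, 3, 1])          # weights of the example in section 4
q = 2 * np.prod(r)               # q = 6

def dilate(lam, x):
    """Dilation delta_lambda: coordinate i is scaled by lam**r[i]."""
    return lam ** r * x

def rho(x):
    """Homogeneous norm (sum_i |x_i|**(q/r_i))**(1/q), of degree one."""
    return np.sum(np.abs(x) ** (q / r)) ** (1.0 / q)

x = np.array([0.7, -1.2, 0.4])
lam = 2.5
print(abs(rho(dilate(lam, x)) - lam * rho(x)) < 1e-9)  # True
```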


A continuous vector field X on IR^n is said to be homogeneous of degree \tau with respect to the family of dilations (\delta_\lambda) if one of the following equivalent properties is satisfied:
1. For any i = 1,\dots,n, its i-th component, i.e. the function x \mapsto X_i(x), is homogeneous of degree r_i + \tau.
2. For any function h homogeneous of degree \sigma > 0 with respect to the same dilation, the function L_X h (its Lie derivative along X) is homogeneous of degree \sigma + \tau.
3. For all positive constants \lambda, the vector field (\delta_\lambda)_* X, conjugate of X by the diffeomorphism \delta_\lambda away from the origin, satisfies ((\delta_\lambda)_* X)(x) = \lambda^{-\tau} X(x) for x \neq 0.
The previous definitions of homogeneity can be extended to time-varying functions and vector fields by considering an extended dilation:

\bar\delta_\lambda(x_1,\dots,x_n,t) = (\lambda^{r_1} x_1,\dots,\lambda^{r_n} x_n, t).
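Property 3 can also be checked numerically. The sketch below is ours; the field b_1 is borrowed from the academic example of section 4, where it is homogeneous of degree \tau = -1 for the weights r = (1, 3, 1):

```python
import numpy as np

r = np.array([1, 3, 1])

def b1(x):
    # b1 = d/dx1 + x3^2 d/dx2, from the academic example of section 4
    return np.array([1.0, x[2] ** 2, 0.0])

def pushforward(b, lam, x):
    """((delta_lam)_* b)(x) = diag(lam**r) . b(delta_lam^{-1}(x))."""
    return lam ** r * b(lam ** (-r) * x)

x = np.array([0.3, -2.0, 1.1])
lam = 3.0
# degree tau = -1 means ((delta_lam)_* b1)(x) = lam**(-tau) b1(x) = lam*b1(x)
print(np.allclose(pushforward(b1, lam, x), lam * b1(x)))  # True
```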

Finally, let f \in C^0(IR^n \times IR; IR^n), with f(x,\cdot) T-periodic, define a homogeneous vector field of degree zero with respect to a family of dilations (\delta_\lambda). Then the two following properties are equivalent (see [7] for the autonomous case):
i) the origin x = 0 of the system \dot x = f(x,t) is locally asymptotically stable;
ii) x = 0 is globally \rho-exponentially asymptotically stable, i.e. for any homogeneous norm \rho, there exist K, \gamma > 0 such that, for any solution x(\cdot) of the system,

\rho(x(t)) \le K \rho(x(0)) e^{-\gamma t}.

In the sequel, when using the expression "exponentially asymptotically stable", we will refer to the \rho-exponential asymptotic stability defined above.
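For illustration (our own sketch, not part of the paper), the field a(x) = -x is homogeneous of degree zero with respect to any dilation, its flow is x(t) = e^{-t} x(0), and the bound of ii) then holds with K = 1 and \gamma = 1/P, where P = \max_i r_i:

```python
import numpy as np

r = np.array([1, 3, 1])          # illustrative weights, P = max r_i = 3
q = 2 * np.prod(r)

def rho(x):
    """Homogeneous norm (sum_i |x_i|**(q/r_i))**(1/q)."""
    return np.sum(np.abs(x) ** (q / r)) ** (1.0 / q)

x0 = np.array([0.9, -4.0, 1.5])
gamma = 1.0 / r.max()
# along the flow x(t) = exp(-t) x0 of xdot = -x, check
# rho(x(t)) <= K rho(x(0)) exp(-gamma t) with K = 1
ok = all(rho(np.exp(-t) * x0) <= rho(x0) * np.exp(-gamma * t) + 1e-12
         for t in np.linspace(0.0, 10.0, 200))
print(ok)  # True
```

The bound follows because each term |e^{-t} x_i|^{q/r_i} = e^{-t q/r_i} |x_i|^{q/r_i} decays at least as fast as e^{-t q/P}.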

3. Problem Statement. Consider a smooth driftless controllable system

(3.1)   \dot x = \sum_{i=1}^m u_i f_i(x).

In general, there does not exist a dilation with respect to which the control vector fields are homogeneous. However, controllability implies that, after some adequate change of coordinates, there exist a dilation and a controllable homogeneous approximation [6, 7] w.r.t. this dilation of the system (3.1) around the origin. Different methods exist to find such a change of coordinates and dilation. For instance, a constructive method (i.e., requiring only algebraic computations and differentiations) is given in [22]. Using this method, one obtains a driftless control system with control vector fields homogeneous of degree -1. Moreover, any homogeneous feedback law that asymptotically stabilizes this system also locally asymptotically stabilizes the original system. The present work constructs a homogeneous feedback that ensures global exponential stabilization for homogeneous systems. Applied to the homogeneous approximation of a general system (3.1), it provides local exponential stabilization of (3.1).


In the sequel, we always consider a system

(3.2)   \dot x = \sum_{i=1}^m u_i b_i(x)

where the b_i's are smooth vector fields and the system of coordinates is such that there exist integers (r_1,\dots,r_n) such that:
1. each vector field b_i is homogeneous of degree -1 with respect to the family of dilations \delta_\lambda with weights (r_1,\dots,r_n);
2. the rank at the origin of the Lie algebra generated by the b_i's is n:

(3.3)   Rank( Lie\{b_1,\dots,b_m\}(0) ) = n.

The integer-valued weights r_1,\dots,r_n are now fixed, and we denote

(3.4)   P = Max\{ r_i ;\ i = 1,\dots,n \}.

Our objective is to design feedback laws u = (u_1,\dots,u_m) \in C^0(IR \times IR^n; IR^m) such that the origin x = 0 of the closed-loop system (3.2) is exponentially asymptotically stable.

Remark 3.1. We only require a full-rank Control Lie Algebra at the origin, but controllability follows, because homogeneity allows one to deduce the same rank condition everywhere.

Remark 3.2. We assume that the degrees are all equal to -1. These are the degrees given by the construction of a homogeneous approximation in [22]. If a system is naturally homogeneous but the degrees are not all equal (if they are equal, a simple scaling makes them all equal to -1), it might be better to use this natural homogeneity than to construct a different homogeneous approximation that will have all the degrees equal to -1. The present method can be adapted to the case when the degrees of homogeneity are not all equal; this requires only a modification of the first step (see Remark 4.3). More details are available from the authors.

4. Controller design. The control design consists of four steps described below.

Step 1 (Selection of Lie brackets). In this step, we select some vector fields \tilde b_j (j = 1,\dots,N), obtained as Lie brackets of the control vector fields b_1,\dots,b_m. The \tilde b_j are chosen recursively as follows. For any p = 1,\dots,P (with P defined by (3.4)),
1. Compute all brackets of length p made from the control vector fields b_i (i = 1,\dots,m).
2. Select, among the vector fields so obtained, a maximal number of vector fields independent¹ over IR. These vector fields are the \tilde b_j (m_{p-1} + 1 \le j \le m_p); we set m_0 = 0, so that all the integers m_p (p = 0,\dots,P) are defined, with N = m_P.
It follows from this construction that to each vector field \tilde b_j we can associate a Lie bracket of some b_i's, i.e.

(4.1)   \tilde b_j = C_j(b_{\beta_{j1}},\dots,b_{\beta_{j\ell(j)}}),

with

¹ Recall that some vector fields X_1,\dots,X_r are said to be linearly independent over IR if and only if, for any (\lambda_1,\dots,\lambda_r) in IR^r, the vector field \lambda_1 X_1 + \dots + \lambda_r X_r is identically zero on IR^n only if \lambda_1 = \dots = \lambda_r = 0.


• C_j a formal bracket, and b_{\beta_{j1}},\dots,b_{\beta_{j\ell(j)}} the elements that are bracketed (listed in the order they appear in the bracket);
• \ell(j) the number of vector fields that are bracketed in (4.1), i.e.

\ell(j) = p, \quad m_{p-1} + 1 \le j \le m_p.

For instance, if we choose a vector field \tilde b_6 = [[b_2,b_1],[b_1,[b_1,b_2]]], then we encode this as (4.1) with \ell(6) = 5, \beta_{62} = \beta_{63} = \beta_{64} = 1, \beta_{61} = \beta_{65} = 2, and the symbol C_6 defined by C_6(z_1,z_2,z_3,z_4,z_5) = [[z_1,z_2],[z_3,[z_4,z_5]]]. This notation is sloppy but avoids using formal Lie brackets and the evaluation operator (see [24]) from a free Lie algebra to vector fields, which would make the exposition uselessly heavy. Of course, the decomposition (4.1) is not unique in general. From now on, we consider that one decomposition has been chosen and that the C_j's and \beta_{jk}'s have been defined accordingly.

Remark 4.1.
1. In Step 1 above, we do not need to compute all brackets of length p. More precisely, let F denote the free Lie algebra generated by some indeterminates s_1,\dots,s_m. Then one can select a basis B of this Lie algebra (for instance a P. Hall basis, as used by Sussmann and Liu in [25, 9, 26]). If B_p denotes the set of elements of B of order p, then it is clearly sufficient to consider Lie brackets of the b_i obtained by evaluating (in the sense of [24]) the elements of B_p at s_i = b_i (i = 1,\dots,m). One usually takes this into account when checking controllability.
2. Since the vector fields b_i (i = 1,\dots,m) are homogeneous of degree -1, each bracket of length p of these vector fields is homogeneous of degree -p. Moreover, the weights of the dilation being integers, any smooth vector field homogeneous of integer degree is in fact polynomial. Using a (finite) basis of the polynomials homogeneous of degree k (k \in \{0,\dots,P-1\}), selecting Lie brackets of a given length only consists in computing a basis of a finite-dimensional vector space.
3. We do not need to consider brackets of order larger than P because they are identically zero: indeed, all components of these vector fields are homogeneous of negative degree and therefore they would tend to infinity at the origin if they were not identically zero.
Example: Let us illustrate this step on the following academic example:

\dot x_1 = u_1, \quad \dot x_2 = x_3^2 (u_1 + u_2), \quad \dot x_3 = u_3,

which is of the form (3.2) with m = 3 and

b_1 = \frac{\partial}{\partial x_1} + x_3^2 \frac{\partial}{\partial x_2}, \quad b_2 = x_3^2 \frac{\partial}{\partial x_2}, \quad b_3 = \frac{\partial}{\partial x_3}.

The control vector fields are homogeneous of degree -1 w.r.t. the dilation with weights r_1 = 1, r_2 = 3, and r_3 = 1. For the brackets of length 1, i.e. the control vector fields, b_1 and b_3 are independent at the origin, while b_2 is zero at the origin but independent from b_1 and b_3 away from x_3 = 0. Hence m_1 = 3, and one can take \tilde b_1 = b_1 = C_1(b_1), \tilde b_2 = b_2 = C_2(b_2), and \tilde b_3 = b_3 = C_3(b_3). At length 2, all the brackets vanish at the origin, but they are not identically zero: [b_2,b_3] = -2 x_3 \frac{\partial}{\partial x_2}, and [b_3,b_1] = -[b_2,b_3]. Since [b_1,b_2] = 0, we have m_2 = 4. We define for instance \tilde b_4 = [b_2,b_3] = C_4(b_2,b_3).


Finally, since [b_3,[b_2,b_3]] = -2 \frac{\partial}{\partial x_2}, m_3 = 5 with, for instance, \tilde b_5 = [b_3,[b_2,b_3]] = C_5(b_3,b_2,b_3). Note that here, due to the origin being a singular point for the distributions spanned by the control vector fields, and by the brackets of order at most 2, N is strictly larger than n.

With this general construction, we have:

Proposition 4.2. For any family (\tilde b_j)_{j=1,\dots,N} defined as above,
a) Let j_1,\dots,j_n be such that Span\{\tilde b_{j_1}(0),\dots,\tilde b_{j_n}(0)\} = IR^n. Then,

\forall x \in IR^n, \quad Span\{\tilde b_{j_1}(x),\dots,\tilde b_{j_n}(x)\} = IR^n.

b) Any vector field b that can be written as a Lie bracket of order p of some b_i's is a linear combination of the \tilde b_j's with \ell(j) = p, i.e.

b = \sum_{j=m_{p-1}+1}^{m_p} \lambda_j \tilde b_j = \sum_{\ell(j)=p} \lambda_j \tilde b_j

for some real numbers \lambda_j \in IR.
c) The vector fields \{\tilde b_j\}_{j=1,\dots,N} are linearly independent over IR.

(Proof in Section 7.1)
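The bracket computations of this example, as well as the linear system of Step 2 below (here with a(x) = -x and our choice (j_1, j_2, j_3) = (1, 3, 5), for which the matrix is invertible everywhere), can be reproduced symbolically; the helper `bracket` is part of this sketch, not of the paper:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = sp.Matrix([x1, x2, x3])

def bracket(f, g):
    """Lie bracket [f, g] = (Dg) f - (Df) g of column vector fields."""
    return g.jacobian(X) * f - f.jacobian(X) * g

# Control vector fields of the academic example (weights r = (1, 3, 1)).
b1 = sp.Matrix([1, x3 ** 2, 0])
b2 = sp.Matrix([0, x3 ** 2, 0])
b3 = sp.Matrix([0, 0, 1])

b4 = bracket(b2, b3)                  # ~b4 = [b2, b3] = -2 x3 d/dx2
b5 = bracket(b3, b4)                  # ~b5 = [b3, [b2, b3]] = -2 d/dx2
assert b4 == sp.Matrix([0, -2 * x3, 0])
assert b5 == sp.Matrix([0, -2, 0])

# Remark 4.1-3: brackets of length > P = 3 vanish identically.
assert bracket(b3, b5) == sp.zeros(3, 1)

# Step 2 with a(x) = -x: solve (~b1 ~b3 ~b5) u = a for the functions u~.
M = sp.Matrix.hstack(b1, b3, b5)
u = sp.simplify(M.inv() * (-X))
print(u.T)  # [-x1, -x3, x2/2 - x1*x3**2/2]: homogeneous of degrees 1, 1, 3
```

The degrees 1, 1, 3 of the three resulting functions agree with Proposition 4.4 below, since \ell(1) = \ell(3) = 1 and \ell(5) = 3.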

Remark 4.3. If the degrees of the vector fields b_i are not all equal, the above construction has to be modified. More precisely, in the recursive construction of the family (\tilde b_j)_{j=1,\dots,N}, we have to consider an induction on the degree of homogeneity, instead of an induction on the length of the Lie brackets (remark that this is just a generalization of the above construction, since for vector fields of the same degree -1, the set of Lie brackets of length p is the same as the set of Lie brackets of degree -p). This means that at each step we have to compute the set of Lie brackets of a certain degree and select, among them, a finite number of vector fields that form a basis of this set.

Step 2 (Stabilization of the extended system). Let a be a smooth vector field, homogeneous of degree zero w.r.t. the family of dilations (\delta_\lambda), and such that the origin x = 0 of the system \dot x = a(x) is asymptotically stable. One may take for instance a(x) = -x. In view of Proposition 4.2-a, the n \times n matrix whose columns are \tilde b_{j_1}(x),\dots,\tilde b_{j_n}(x) is invertible for all x. Define the functions \tilde u_j (j = 1,\dots,N) by:

(4.2)   \begin{pmatrix} \tilde u_{j_1}(x) \\ \vdots \\ \tilde u_{j_n}(x) \end{pmatrix} = \left( \tilde b_{j_1}(x),\dots,\tilde b_{j_n}(x) \right)^{-1} a(x), \qquad \tilde u_j = 0 \quad \forall j \notin \{j_1,\dots,j_n\}.

These functions are obviously such that

(4.3)   a = \sum_{j=1}^{N} \tilde u_j \tilde b_j,

and furthermore:

Proposition 4.4. For any j = 1,\dots,N, the above-constructed function \tilde u_j is in C^\infty(IR^n - \{0\}; IR) \cap C^0(IR^n; IR), and is homogeneous of degree \ell(j).


Proof: Continuity and smoothness away from the origin are inherited from the vector fields \tilde b_j and the vector field a. Each \tilde u_{j_k} is homogeneous of degree \ell(j_k) because the l-th component of the vector field a is homogeneous of degree r_l and the element (k,l) of the matrix (\tilde b_{j_1}(x),\dots,\tilde b_{j_n}(x))^{-1} is homogeneous of degree \ell(j_k) - r_l. This last statement is true because the element (l,k) of the matrix (\tilde b_{j_1}(x),\dots,\tilde b_{j_n}(x)) is homogeneous of degree r_l - \ell(j_k), for the vector field \tilde b_{j_k} is an iterated Lie bracket of \ell(j_k) homogeneous vector fields of degree -1, and hence is homogeneous of degree -\ell(j_k).

Step 3 (Construction of the state-dependent amplitudes). This step consists in finding functions v_j^k \in C^\infty(IR^n - \{0\}; IR) \cap C^0(IR^n; IR) (j = 1,\dots,N;\ k = 1,\dots,\ell(j)), homogeneous of degree one, and such that

(4.4)   \sum_{j=1}^{N} \tilde u_j\, C_j(b_{\beta_{j1}},\dots,b_{\beta_{j\ell(j)}}) = \sum_{j=1}^{N} C_j(b_{\beta_{j1}} v_j^1,\dots,b_{\beta_{j\ell(j)}} v_j^{\ell(j)}).

Recall that the C_j's, defined in Step 1, are the brackets associated with the \tilde b_j's, i.e.

(4.5)   \tilde b_j = C_j(b_{\beta_{j1}},\dots,b_{\beta_{j\ell(j)}}).

The construction of the functions v_j^k is based on the following lemma.

Lemma 4.5. Let C(b_{i_1},\dots,b_{i_p}) (i_k \in \{1,\dots,m\}) be any Lie bracket of some vector fields b_{i_k}, and v_k \in C^\infty(IR^n - \{0\}; IR) \cap C^0(IR^n; IR) (k = 1,\dots,p) some functions homogeneous of degree one. Then,

i)   C(b_{i_1} v_1,\dots,b_{i_p} v_p) = v_1 \cdots v_p\, C(b_{i_1},\dots,b_{i_p}) - \sum_{j=1}^{m_{p-1}} h_j \tilde b_j ;

ii)  for any j = 1,\dots,m_{p-1}, h_j \in C^\infty(IR^n - \{0\}; IR) \cap C^0(IR^n; IR) is homogeneous of degree \ell(j).

The proof of this lemma, left to the reader, follows from Proposition 4.2-b by a direct induction on the length p of the bracket C(b_{i_1} v_1,\dots,b_{i_p} v_p). It is a generalization of the fact that, for two functions v_1 and v_2, and vector fields b_{i_1} and b_{i_2},

[v_1 b_{i_1}, v_2 b_{i_2}] = v_1 v_2 [b_{i_1}, b_{i_2}] - v_2 (L_{b_{i_2}} v_1)\, b_{i_1} + v_1 (L_{b_{i_1}} v_2)\, b_{i_2}.

Note that the functions h_j in the above lemma can be explicitly computed by expressing brackets of order not larger than p-1 as linear combinations of \tilde b_1,\dots,\tilde b_{m_{p-1}}.

Based on Lemma 4.5, the functions v_j^k can be constructed recursively as follows.

Step p = P: For any j \in \{m_{P-1}+1,\dots,m_P\}, we define:

(4.6)   v_j^P = \frac{\tilde u_j}{\rho^{P-1}}, \quad \text{and} \quad v_j^k = \rho \quad (k = 1,\dots,P-1),

with \rho any homogeneous norm in C^\infty(IR^n - \{0\}; IR) \cap C^0(IR^n; IR) (for instance one may take \rho(x) = (\sum_i |x_i|^{q/r_i})^{1/q} with q = 2 \prod_{i=1}^{n} r_i). In view of (4.5), (4.6), and Lemma 4.5, we have

(4.7)   \sum_{j=m_{P-1}+1}^{m_P} C_j(b_{\beta_{j1}} v_j^1,\dots,b_{\beta_{jP}} v_j^P) = \sum_{j=m_{P-1}+1}^{m_P} \tilde u_j \tilde b_j - \sum_{j=1}^{m_{P-1}} h_j^P \tilde b_j,


with h_j^P (j = 1,\dots,m_{P-1}) obtained by expanding the brackets in the left-hand side of (4.7) with respect to the variables v_j^k and their derivatives.

Step 1 \le p < P: Assume that the functions v_j^k (j = m_p+1,\dots,m_P;\ k = 1,\dots,\ell(j)) and the functions h_j^k (k = p+1,\dots,P) have been computed in Steps P to p+1, and satisfy the induction assumption

(4.8)   \sum_{j=m_p+1}^{N} C_j(b_{\beta_{j1}} v_j^1,\dots,b_{\beta_{j\ell(j)}} v_j^{\ell(j)}) = \sum_{j=m_p+1}^{N} \tilde u_j \tilde b_j - \sum_{j=1}^{m_p} h_j^{p+1} \tilde b_j.

We define, for any j \in \{m_{p-1}+1,\dots,m_p\},

(4.9)   v_j^p = \frac{1}{\rho^{p-1}} (\tilde u_j + h_j^{p+1}), \quad \text{and} \quad v_j^k = \rho \quad (k = 1,\dots,p-1).

In view of (4.5), (4.9), and Lemma 4.5, we have

(4.10)   \sum_{j=m_{p-1}+1}^{m_p} C_j(b_{\beta_{j1}} v_j^1,\dots,b_{\beta_{jp}} v_j^p) = \sum_{j=m_{p-1}+1}^{m_p} (\tilde u_j + h_j^{p+1}) \tilde b_j - \sum_{j=1}^{m_{p-1}} (h_j^p - h_j^{p+1}) \tilde b_j,

for an adequate choice of the h_j^p (j = 1,\dots,m_{p-1}), obtained again by expanding the brackets in the left-hand side of (4.10) with respect to the variables v_j^k and their derivatives. In view of (4.8) and (4.10), we have

(4.11)   \sum_{j=m_{p-1}+1}^{N} C_j(b_{\beta_{j1}} v_j^1,\dots,b_{\beta_{j\ell(j)}} v_j^{\ell(j)}) = \sum_{j=m_{p-1}+1}^{N} \tilde u_j \tilde b_j - \sum_{j=1}^{m_{p-1}} h_j^p \tilde b_j,

so that the induction assumption (4.8) on Steps P to p+1 is also true for Steps P to p. The computation of the functions v_j^k and h_j^k ends after Step p = 1 has been performed. Let us remark that in the last step (p = 1), there is no function h_j^p to compute.

With this construction, we have:

Proposition 4.6. Consider the functions v_j^k defined above. Then,
a) Each v_j^k (j = 1,\dots,N;\ k = 1,\dots,\ell(j)) belongs to C^\infty(IR^n - \{0\}; IR) \cap C^0(IR^n; IR) and is homogeneous of degree one.
b) Equation (4.4) is satisfied.

Proof: Point b) is a direct consequence of (4.11) with p = 1. Point a) is an easy consequence of Proposition 4.4, (4.6), (4.9), and Lemma 4.5.

Step 4 (Oscillatory approximation of Lie brackets). The last step of our construction relies on the work of Liu [9] and Sussmann and Liu [25, 26]. More precisely, consider a control system

(4.12)   \dot x = \sum_{\alpha=1}^{A} u_\alpha X_\alpha(x),


with X1 ; : : : ; XA some smooth vector elds on a smooth n-dimensional manifold, and a Lie bracket extended system

x_ =

(4.13)

B X =1

w X (x) (B  A)

where the $A$ first vector fields are the same as in (4.12), and the other vector fields are Lie brackets of $X_1,\dots,X_A$. In [9], an algorithm is given that builds, for any set of integrable functions of time $w_\beta$ ($\beta = 1,\dots,B$), some highly oscillatory functions of time $u^\varepsilon_\alpha$ such that the trajectories of (4.12), with $u = u^\varepsilon$, approximate those of (4.13). We do not describe this algorithm here; we just use the notation

(4.14) $u^\varepsilon_\alpha = F\big(\alpha\,;\ \varepsilon\,;\ (w_\beta)_{1\le\beta\le B}\big)\,,$

where $F$ is a complicated function described algorithmically in [9]. It only depends on which Lie brackets have to be performed to obtain the vector fields $X_{A+1},\dots,X_B$ from the vector fields $X_1,\dots,X_A$. It is of the form

(4.15) $u^\varepsilon_\alpha(t) = \theta_{\alpha,0}(t) + \varepsilon^{-\frac12}\sum_{\omega\in\Omega(2,\alpha)}\theta_{\omega,\alpha}(t)\,e^{i\omega t/\varepsilon} + \sum_{n=3}^{N}\varepsilon^{-\frac{n-1}{n}}\sum_{\omega\in\Omega(n,\alpha)}\eta_\omega(t)\,e^{i\omega t/\varepsilon}\,,$

with $N$ the length of the highest-order bracket $X_\beta$ in (4.13), $\theta_{\alpha,0}$, $\theta_{\omega,\alpha}$, and $\eta_\omega$ some functions, and $\Omega(2,\alpha)$, $\Omega(n,\alpha)$ some finite subsets of $\mathbb{R}$, that are all built precisely in [9]. In particular, the construction of the approximating inputs $u^\varepsilon$ given in [9] implies the following.

Theorem 4.7 ([9]). For any $T$ ($0 < T < +\infty$) and any family $w_\beta$ ($\beta = 1,\dots,B$) of integrable functions on $[0,T]$, the functions $u^\varepsilon_\alpha$ ($\alpha = 1,\dots,A$) given by (4.14), where $F$ symbolizes the algorithm described in [9], are integrable and are such that the trajectories of (4.12)-(4.15) converge to the trajectories of (4.13) in the following sense: for any $p\in\mathbb{R}^n$, if the system (4.13) with $x(0)=p$ has a unique solution $x^1$ defined on $[0,T]$, and if $x^\varepsilon$ is a maximal solution of system (4.12)-(4.15) with $x(0)=p$, then $x^\varepsilon$ is defined on $[0,T]$ for $\varepsilon$ small enough and converges uniformly to $x^1$ on $[0,T]$ as $\varepsilon\to 0$.

Remark 4.8. 1. The functions $u^\varepsilon_\alpha$ in (4.15) are real-valued because each $\Omega(n,\alpha)$ ($n = 2,\dots,N$) is symmetric ($\omega\in\Omega(n,\alpha) \Rightarrow -\omega\in\Omega(n,\alpha)$), and $\eta_{-\omega} = \bar\eta_\omega$ and $\theta_{-\omega,\alpha} = \bar\theta_{\omega,\alpha}$.
2. If the functions $w_\beta$ in (4.13) are constant, the functions $\theta_{\alpha,0}$, $\theta_{\omega,\alpha}$, and $\eta_\omega$ are also constant.

Consider now the following two systems:

(4.16) $\dot x = \sum_{j=1}^{N}\sum_{s=1}^{\ell(j)} u_{j,s}\, b_{\alpha_j^s}\, v_j^s\,,$

(4.17) $\dot x = \sum_{j=1}^{N} C_j\big(b_{\alpha_j^1} v_j^1,\dots,b_{\alpha_j^{\ell(j)}} v_j^{\ell(j)}\big)\,.$
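As a quick illustration of point 1 of Remark 4.8 — a symmetric frequency set with conjugate-paired coefficients always produces a real-valued signal — one can evaluate one oscillatory layer of (4.15) numerically. The frequencies and coefficients below are sample values of our own, not taken from [9]:

```python
import cmath

eps = 0.01
# symmetric frequency set, conjugate-paired coefficients (sample values)
theta = {2.0: 0.3 + 0.4j, -2.0: 0.3 - 0.4j,
         5.0: -1.0 + 0.2j, -5.0: -1.0 - 0.2j}

def u(t):
    # one oscillatory layer of (4.15): sum of theta_w * exp(i*w*t/eps)
    return sum(c * cmath.exp(1j * w * t / eps) for w, c in theta.items())

for t in (0.0, 0.37, 1.41):
    assert abs(u(t).imag) < 1e-12  # pairwise-conjugate terms give a real sum
```

Each frequency $\omega$ is paired with $-\omega$ carrying the conjugate coefficient, so the imaginary parts cancel exactly, up to floating-point error.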


Systems (4.16) and (4.17) are of the form (4.12) and (4.13) respectively, with the vector fields $X_\alpha$ being the $b_{\alpha_j^s} v_j^s$'s (with a double index $(j,s)$), the vector fields $X_\beta$ being these plus the brackets in (4.17), i.e. $C_j(X_{j,1},\dots,X_{j,\ell(j)})$, $1\le j\le N$, and each $w_\beta$ in (4.13) being constant: 0 in front of the $X_\beta$'s that are also $X_\alpha$'s and 1 in front of the added brackets. Note that, since each original vector field from (3.2) may appear many times in the brackets selected in Step 1, we consider here as independent control vector fields in (4.12) some vector fields that are in fact multiples of each other: for instance, if the vector field $b_1$ appears more than once, we have $\alpha_j^s = \alpha_{j'}^{s'} = 1$ for some $(j,s)\ne(j',s')$, and $v_j^s b_1$ and $v_{j'}^{s'} b_1$ are distinct control vector fields $X_\alpha$ in (4.12). Following Liu's algorithm, we construct some functions

$u^\varepsilon_{j,s} = F\big((j,s)\,;\ \varepsilon\,;\ (0,\dots,0,1,\dots,1)\big)\,,$

where $F$ is the notation introduced in (4.14), such that the trajectories of (4.16)-(4.18) — which exist on any time interval because the system is degree-zero homogeneous — converge uniformly on any time interval $[0,T]$ to those of (4.17), as $\varepsilon$ tends to zero. Recall — see (4.15) — that they are of the form

(4.18) $u^\varepsilon_{j,s}(t) = \theta_{j,s,0} + \varepsilon^{-\frac12}\sum_{\omega\in\Omega(2,j,s)}\theta_{\omega,j,s}\,e^{i\omega t/\varepsilon} + \sum_{n=3}^{P}\varepsilon^{-\frac{n-1}{n}}\sum_{\omega\in\Omega(n,j,s)}\eta_\omega\,e^{i\omega t/\varepsilon}\,.$

Remark that the functions $\theta$ and $\eta$ in (4.18) are constant, in view of Remark 4.8 above. We rewrite system (4.16)-(4.18) as

(4.19) $\dot x = \sum_{i=1}^{m}\Big(\sum_{(j,s)\,:\,\alpha_j^s = i} u^\varepsilon_{j,s}(t)\, v_j^s(x)\Big)\, b_i(x)\,.$

Our final control laws are defined by

(4.20) $u_i^\varepsilon(x,t) = \sum_{(j,s)\,:\,\alpha_j^s = i} u^\varepsilon_{j,s}(t)\, v_j^s(x)\,.$

As stated in the following theorem, they ensure asymptotic stability of system (3.2) for sufficiently large frequencies.

Theorem 4.9. Let the controls $u_i^\varepsilon$ be those described above. Then the vector field on the right-hand side of the time-varying closed-loop system

(4.21) $\dot x = \sum_{i=1}^{m} u_i^\varepsilon(x,t)\, b_i(x)$

is homogeneous of degree zero and, for $\varepsilon > 0$ sufficiently small, the origin is exponentially uniformly asymptotically stable. (Proof in Section 7.3)

Remark 4.10. Our construction a priori implies uniform convergence of the trajectories of (4.21) to those of (4.17), the origin of which is asymptotically stable from (4.3)-(4.5). However, this is not enough to infer asymptotic stability of (4.21). In the proofs and in Section 6, we introduce a stronger kind of convergence (DO-convergence), sufficient to infer asymptotic stability of (4.21). We nevertheless quote uniform convergence here (instead of the DO-convergence, which we really need) because we base our construction on [9]. This makes the present construction clearer (to construct the controls, one only needs to follow the algorithm in [9], and the kind of convergence does not matter). Also, using the convergence result from [9] (Theorem 4.7) provides a shortcut in the proof of DO-convergence. This possibly makes the paper less self-contained, but it avoids reproducing some difficult calculations made in [9].

5. An illustrative example. We now illustrate the control design method exposed in Section 4. Let us consider the following system in $\mathbb{R}^4$:

(5.1) $\dot x = b_1 u_1 + b_2 u_2\,,$

with $b_1 = \frac{\partial}{\partial x_1} + x_3\frac{\partial}{\partial x_2} + x_4\frac{\partial}{\partial x_3}$ and $b_2 = \frac{\partial}{\partial x_4}$, which can be used to model the kinematic equations of a car-like mobile robot. One easily verifies that the vector fields $b_1$ and $b_2$ are homogeneous of degree $-1$ with respect to the family of dilations of weight $r = (1,3,2,1)$, and that this system is controllable. We follow on this example the four steps of our control design procedure.

Step 1

Since $[b_1,b_2] = -\frac{\partial}{\partial x_3}$, $[b_1,[b_1,b_2]] = \frac{\partial}{\partial x_2}$ and $[b_2,[b_2,b_1]] = 0$, the family $(\tilde b_j)$ is directly given by

(5.2) $(\tilde b_j) = (\tilde b_1,\tilde b_2,\tilde b_3,\tilde b_4) = \big(b_1,\ b_2,\ [b_1,b_2],\ [b_1,[b_1,b_2]]\big) = \big(C_1(b_1),\ C_2(b_2),\ C_3(b_1,b_2),\ C_4(b_1,b_1,b_2)\big)\,.$

This implies that $\alpha_1^1 = 1$, $\alpha_2^1 = 2$, $\alpha_3^1 = 1$, $\alpha_3^2 = 2$, $\alpha_4^1 = \alpha_4^2 = 1$, $\alpha_4^3 = 2$, and that $m_1 = 2$, $m_2 = 3$, and $m_3 = N = 4$.

Step 2 Let us for instance define the vector field $a$ by $a(x) = -x$ (the origin $x = 0$ of $\dot x = a(x)$ is obviously asymptotically stable). Then the integers $j_k$ are simply defined by $j_k = k$ ($k = 1,\dots,4$). By a direct computation, one obtains the following expression for the functions $\tilde u_j$:

(5.3) $(\tilde u_1,\tilde u_2,\tilde u_3,\tilde u_4)^T(x) = (\tilde b_1,\tilde b_2,\tilde b_3,\tilde b_4)^{-1}(x)\,a(x) = \big(-x_1,\ -x_4,\ -x_1x_4 + x_3,\ x_1x_3 - x_2\big)^T\,.$
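The bracket computations above and the inversion in (5.3) can be verified symbolically. The following sketch is our own check (not part of the paper's construction); it represents vector fields as column vectors and computes Lie brackets via Jacobians:

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
x = sp.Matrix([x1, x2, x3, x4])

# control vector fields of (5.1), written as column vectors
b1 = sp.Matrix([1, x3, x4, 0])   # d/dx1 + x3 d/dx2 + x4 d/dx3
b2 = sp.Matrix([0, 0, 0, 1])     # d/dx4

def lie(f, g):
    """Lie bracket [f, g] = (Dg) f - (Df) g for coordinate vector fields."""
    return g.jacobian(x) * f - f.jacobian(x) * g

b3 = lie(b1, b2)       # expected: -d/dx3
b4 = lie(b1, b3)       # [b1, [b1, b2]], expected: d/dx2

# Step 2: (u~_1, ..., u~_4)^T = (b~_1, ..., b~_4)^{-1} a(x) with a(x) = -x
B = sp.Matrix.hstack(b1, b2, b3, b4)
u_tilde = sp.simplify(B.inv() * (-x))
# u_tilde = (-x1, -x4, -x1*x4 + x3, x1*x3 - x2), matching (5.3)
```

This confirms both the bracket relations used in Step 1 and the closed-form expressions in (5.3).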

Step 3 From Step 1, the brackets $C_k$ are defined by

$C_1(x_1) = x_1\,,\quad C_2(x_2) = x_2\,,\quad C_3(x_1,x_2) = [x_1,x_2]\,,\quad C_4(x_1,x_1,x_2) = [x_1,[x_1,x_2]]\,.$

We now follow the procedure exposed in Section 4.

Step $p = P = 3$: The functions $v_4^1$, $v_4^2$ and $v_4^3$ are given, in view of (4.6), by

(5.4) $v_4^1 = v_4^2 = \rho\,,\qquad v_4^3 = \tilde u_4/\rho^2\,,$

with $\rho \in C^\infty(\mathbb{R}^4\setminus\{0\};\mathbb{R}) \cap C^0(\mathbb{R}^4;\mathbb{R})$ a homogeneous norm (for instance, one may take $\rho(x) = (x_1^{12} + x_2^4 + x_3^6 + x_4^{12})^{1/12}$).
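The degree-one homogeneity of this norm with respect to the dilation of weight $r = (1,3,2,1)$ is easy to test numerically (a standalone sketch; the sample point and scale are arbitrary):

```python
def dilation(lam, x):
    r = (1, 3, 2, 1)  # weights of the dilation used in this example
    return [lam ** ri * xi for ri, xi in zip(r, x)]

def rho(x):
    x1, x2, x3, x4 = x
    return (x1 ** 12 + x2 ** 4 + x3 ** 6 + x4 ** 12) ** (1.0 / 12)

# degree-one homogeneity: rho(delta_lam(x)) == lam * rho(x)
x = [0.3, -1.2, 0.7, 2.0]
lam = 3.7
assert abs(rho(dilation(lam, x)) - lam * rho(x)) < 1e-9
```

The exponents $12, 4, 6, 12$ are exactly $12/r_i$, which is what makes every monomial scale as $\lambda^{12}$.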


We also compute the functions $h_j^P$ involved in (4.7). A tedious but simple calculation gives

(5.5) $h_1^3 = v_4^2 v_4^3\, L_{[b_1,b_2]} v_4^1 + v_4^2\, L_{b_1}v_4^3\, L_{b_2}v_4^1 + v_4^1\, L_{b_1}\!\big(v_4^3 L_{b_2}v_4^2\big) - v_4^3\, L_{b_1}v_4^1\, L_{b_2}v_4^2\,,$
$\phantom{(5.5)}\ h_2^3 = -v_4^1\, L_{b_1}\!\big(v_4^2 L_{b_1} v_4^3\big)\,,$
$\phantom{(5.5)}\ h_3^3 = -v_4^1\, L_{b_1}v_4^2\; v_4^3 - v_4^1 v_4^2\, L_{b_1}v_4^3\,.$

Step $p = 2$: The functions $v_3^1$ and $v_3^2$ are given, in view of (4.9), by

(5.6) $v_3^1 = \rho\,,\qquad v_3^2 = (\tilde u_3 + h_3^3)/\rho\,.$

The functions $h_1^2$ and $h_2^2$ defined by (4.10) can be computed using (5.6):

(5.7) $h_1^2 = h_1^3 + v_3^2\, L_{b_2} v_3^1\,,\qquad h_2^2 = h_2^3 - v_3^1\, L_{b_1} v_3^2\,.$

Step $p = 1$: Finally, the functions $v_1^1$ and $v_2^1$ are defined, from (4.9) again, by

(5.8) $v_1^1 = \tilde u_1 + h_1^2\,,\qquad v_2^1 = \tilde u_2 + h_2^2\,.$

Step 4

First, we need to find functions $u_{j,s}$ ($j = 1,\dots,4$; $s = 1,\dots,\ell(j)$) such that the trajectories of the system

(5.9) $\dot x = \sum_{j=1}^{4}\sum_{s=1}^{\ell(j)} u_{j,s}\, b_{\alpha_j^s}\, v_j^s$

converge uniformly to those of the system

(5.10) $\dot x = \sum_{j=1}^{4} C_j\big(b_{\alpha_j^1} v_j^1,\dots,b_{\alpha_j^{\ell(j)}} v_j^{\ell(j)}\big)\,.$

We remark that, in view of (5.2), (5.4), and (5.6), the vector fields $b_1 v_3^1$, $b_1 v_4^1$, and $b_1 v_4^2$ are in fact identical. As a consequence, there are only five — and not seven, the number of terms in the sum (5.9) — different vector fields in (5.9) or (5.10). Therefore, the system (5.9) can be rewritten

(5.11) $\dot x = \sum_{i=1}^{5} u_i\, X_i\,,$

with $X_1 = b_1 v_1^1$, $X_2 = b_2 v_2^1$, $X_3 = b_1 v_3^1 = b_1 v_4^1 = b_1 v_4^2$, $X_4 = b_2 v_3^2$, and $X_5 = b_2 v_4^3$, and $u_1, u_2, u_3, u_4, u_5$ standing respectively for $u_{1,1}$, $u_{2,1}$, $u_{3,1}+u_{4,1}+u_{4,2}$, $u_{3,2}$ and $u_{4,3}$; the system (5.10) can then be re-written as

(5.12) $\dot x = X_1 + X_2 + [X_3,X_4] + [X_3,[X_3,X_5]]\,.$

We choose some candidate functions $u_i$, for the approximation of trajectories of (5.12) by solutions of (5.11), of the following form:

(5.13) $u_1(t) = \eta_{1,0}\,,\quad u_2(t) = \eta_{2,0}\,,$
$\phantom{(5.13)}\ u_3(t) = \varepsilon^{-\frac12}\,\eta_{1,1}\cos(\omega_{1,1}t/\varepsilon) + \varepsilon^{-\frac23}\big(\eta_{2,1}\cos(\omega_{2,1}t/\varepsilon) + \eta_{2,2}\cos(\omega_{2,2}t/\varepsilon)\big)\,,$
$\phantom{(5.13)}\ u_4(t) = \varepsilon^{-\frac12}\,\eta_{1,2}\sin(\omega_{1,2}t/\varepsilon)\,,\quad u_5(t) = \varepsilon^{-\frac23}\,\eta_{2,3}\cos(\omega_{2,3}t/\varepsilon)\,,$


with the $\omega_{k,j}$ defined for instance by

$\Omega_1 = \{\omega_{1,1},\omega_{1,2}\} = \{\tfrac72,\,-\tfrac72\}\quad\text{and}\quad \Omega_2 = \{\omega_{2,1},\omega_{2,2},\omega_{2,3}\} = \{2,\,3,\,-5\}\,.$

Note in particular that each set $\Omega_k$ is Minimally Cancelling in the sense of [9, 25, 26]. Using [9, Th. 5.1] — see also Section 8 of the same reference, where a very similar example is treated — one can show that the trajectories of system (5.11)-(5.13) converge to those of the system

(5.14) $\dot x = \eta_{1,0} X_1 + \eta_{2,0} X_2 - \frac{\eta_{1,1}\eta_{1,2}}{2\,\omega_{1,1}}\,[X_3,X_4] - \frac{\eta_{2,1}\eta_{2,2}\eta_{2,3}}{4\,\omega_{2,1}\omega_{2,2}}\,[X_3,[X_3,X_5]]\,.$

In order to identify system (5.12) with system (5.14), one can for instance define

$\eta_{1,0} = \eta_{2,0} = \eta_{1,1} = \eta_{2,1} = \eta_{2,2} = 1\,,\quad\text{and}\quad \eta_{1,2} = -2\,\omega_{1,1}\,,\ \ \eta_{2,3} = -4\,\omega_{2,1}\omega_{2,2}\,.$

Expressing the right-hand side of (5.11) as a function of the control vector fields $b_1$ and $b_2$, we finally obtain the expression of our stabilizing feedbacks:

(5.15) $u_1^\varepsilon(x,t) = u_1(t)\,v_1^1(x) + u_3(t)\,v_3^1(x)\,,$
$\phantom{(5.15)}\ u_2^\varepsilon(x,t) = u_2(t)\,v_2^1(x) + u_4(t)\,v_3^2(x) + u_5(t)\,v_4^3(x)\,,$

with the $u_i$'s defined by (5.13), and the $v_j^s$'s defined by (5.4), (5.6), and (5.8). Although the above expression of the control laws appears quite simple, it is in fact pretty involved due to the terms contained in the $v_j^s$'s, and in particular due to the functions $h_j^p$ defined by (5.5) and (5.7). It is a negative aspect of our construction that solving the equation (4.4) in the $v_j^s$'s leads to heavy computations.
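The identification above — both bracket coefficients in (5.14) equal to one — can be double-checked numerically. The names `w` and `eta` below are our own rendering of the paper's frequencies and constants:

```python
# frequencies chosen in Section 5: Omega_1 = {7/2, -7/2}, Omega_2 = {2, 3, -5}
w11, w12 = 3.5, -3.5
w21, w22, w23 = 2.0, 3.0, -5.0

# the identification: remaining constants set to 1, and
eta11 = eta21 = eta22 = 1.0
eta12 = -2.0 * w11
eta23 = -4.0 * w21 * w22

# coefficients of [X3, X4] and [X3, [X3, X5]] in (5.14) must both equal 1
c_34 = -eta11 * eta12 / (2.0 * w11)
c_345 = -eta21 * eta22 * eta23 / (4.0 * w21 * w22)
assert abs(c_34 - 1.0) < 1e-12 and abs(c_345 - 1.0) < 1e-12
```

This is merely arithmetic, but it makes the sign conventions of (5.14) explicit.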

6. Convergence of highly oscillatory vector fields as differential operators. As explained in the introduction (Section 1.2), the convergence results which are implicitly contained in [5], and explicitly in [25, 9] or [8], in terms of uniform convergence of solutions on finite time intervals, are not sufficient here. In this section, we state separately the convergence result that is used to prove Theorem 4.9. The word convergence is maybe a bit far-fetched since there is no notion of limit in the topological sense; the convergence is more of an algebraic nature: we simply decompose the operator as the sum of a non-oscillating term (the limit) and a term which is a differential operator of order higher than 1 whose coefficients are, when $\varepsilon$ goes to zero and $x$ remains in a compact set, $O(\varepsilon^\gamma)$ with $\gamma > 0$. However, this result will prove to be sufficient for our needs. It is also sufficient to recover the uniform convergence stated in [5, 8, 25, 9]. In the sequel, $\mathcal{T}$ denotes any time interval (possibly infinite).

Definition 6.1. Let $F^\varepsilon$ ($\varepsilon\in(0,\varepsilon_0]$, $\varepsilon_0 > 0$) and $F^0$ be vector fields on $\mathbb{R}^{1+n}$, defined by $F^\varepsilon(t,x) = \frac{\partial}{\partial t} + f(\varepsilon,t,x)$ and $F^0(t,x) = \frac{\partial}{\partial t} + f^0(t,x)$, with $f \in C^0\big((0,\varepsilon_0]\times\mathcal{T}\times\mathbb{R}^n;\mathbb{R}^n\big) \cap C^1\big((0,\varepsilon_0]\times\mathcal{T}\times(\mathbb{R}^n\setminus\{0\});\mathbb{R}^n\big)$ and $f^0 \in C^0\big(\mathcal{T}\times\mathbb{R}^n;\mathbb{R}^n\big) \cap C^1\big(\mathcal{T}\times(\mathbb{R}^n\setminus\{0\});\mathbb{R}^n\big)$. We say that $F^\varepsilon$ converges as a differential operator of order one on functions of $t$ and $x$ — in brief, DO-converges — to $F^0$, as $\varepsilon\to 0$, if

(6.1) $F^\varepsilon = F^0 + \varepsilon^{\gamma_1}\Big(F^\varepsilon D_1^\varepsilon - D_1^\varepsilon \frac{\partial}{\partial t}\Big) + \varepsilon^{\gamma_2}\, D_2^\varepsilon\,.$


The above equality is understood as an equality of differential operators; $\gamma_1$ and $\gamma_2$ are strictly positive reals, and $D_1^\varepsilon$ and $D_2^\varepsilon$ are differential operators whose coefficients are continuous, smooth outside the origin, and locally uniformly bounded when $\varepsilon\to 0$, i.e. there exists $\varepsilon_0 > 0$ such that, for all compact subsets $K$ of $\mathbb{R}^n$, each coefficient of these differential operators is bounded for $(\varepsilon,t,x)\in(0,\varepsilon_0]\times\mathcal{T}\times K$.

This kind of convergence carries with it two important properties.

Proposition 6.2. Suppose that a vector field $F^\varepsilon$ DO-converges, as $\varepsilon\to 0$, to a vector field $F^0$ on a time interval $\mathcal{T}$. Then:
1. The trajectories of $\dot x = f(\varepsilon,t,x)$ converge uniformly to those of $\dot x = f^0(t,x)$ on finite time intervals. More precisely, let $[0,T]\subset\mathcal{T}$, and let $x^0$ be the (unique) solution of

(6.2) $\dot x = f^0(t,x)\,,\qquad x(0) = x_0\,.$

Then, for $\varepsilon$ small enough, the unique solution $x^\varepsilon$ of

(6.3) $\dot x = f(\varepsilon,t,x)\,,\qquad x(0) = x_0\,,$

is defined on $[0,T]$, and $x^\varepsilon(t)$ converges to $x^0(t)$ uniformly on $[0,T]$.
2. If $\mathcal{T} = [0,+\infty)$, and if all vector fields in (6.1) are homogeneous of degree zero and $f^0$ is autonomous, then, if the origin of

(6.4) $\dot x = f^0(x)$

is asymptotically stable, the origin of

(6.5) $\dot x = f(\varepsilon,t,x)$

is (exponentially) asymptotically stable too for $\varepsilon > 0$ sufficiently small.
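The kind of convergence asserted in point 1 can be observed numerically on a classical two-input example. This is our own illustration — the vector fields, frequencies, and gains below are not taken from the paper. With $X_1 = \partial/\partial x_1$ and $X_2 = \partial/\partial x_2 + x_1\partial/\partial x_3$, we have $[X_1,X_2] = \partial/\partial x_3$, and the $\varepsilon^{-1/2}$-scaled oscillatory inputs make the trajectory track the flow of the bracket:

```python
import math

def simulate(eps, T=1.0, dt=2e-5):
    # x' = u1 X1 + u2 X2 with X1 = (1,0,0), X2 = (0,1,x1); [X1,X2] = (0,0,1)
    def f(t, x):
        u1 = eps ** -0.5 * math.cos(t / eps)
        u2 = 2.0 * eps ** -0.5 * math.sin(t / eps)
        return [u1, u2, x[0] * u2]
    x = [0.0, 0.0, 0.0]
    n = int(round(T / dt))
    for k in range(n):   # classical RK4 integration
        t = k * dt
        k1 = f(t, x)
        k2 = f(t + dt / 2, [x[i] + dt / 2 * k1[i] for i in range(3)])
        k3 = f(t + dt / 2, [x[i] + dt / 2 * k2[i] for i in range(3)])
        k4 = f(t + dt, [x[i] + dt * k3[i] for i in range(3)])
        x = [x[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(3)]
    return x

x = simulate(eps=0.004)
# x[2] tracks the bracket direction (close to T = 1),
# while x[0], x[1] remain O(eps**0.5)
```

Shrinking `eps` further drives `x[0]` and `x[1]` to zero at the rate $O(\sqrt\varepsilon)$, exactly the uniform-convergence behavior of point 1.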

Proof: We prove 1. First, we rewrite (6.1) as

$F^\varepsilon\big(I - \varepsilon^{\gamma_1} D_1^\varepsilon\big) = F^0 - \varepsilon^{\gamma_1} D_1^\varepsilon \frac{\partial}{\partial t} + \varepsilon^{\gamma_2} D_2^\varepsilon\,.$

This is an equality between differential operators. We apply each side to the coordinate functions $x_i$; $D_1^\varepsilon x_i$ and $D_2^\varepsilon x_i$ are simply the coefficients in front of $\frac{\partial}{\partial x_i}$ in the expressions of the differential operators $D_1^\varepsilon$ and $D_2^\varepsilon$. This implies (coordinate by coordinate) that the differential equation (6.3) can be rewritten

(6.6) $\frac{d}{dt}\big(x - \varepsilon^{\gamma_1}\, d_1(\varepsilon,t,x)\big) = f^0(t,x) + \varepsilon^{\gamma_2}\, d_2(\varepsilon,t,x)\,,$

where $d_i(\varepsilon,t,x)$ ($i\in\{1,2\}$) is the vector whose $j$th component is the coefficient of $\frac{\partial}{\partial x_j}$ in $D_i^\varepsilon$. This implies that the difference $x^\varepsilon(t) - x^0(t)$ satisfies

$\|x^\varepsilon(t) - x^0(t)\| \le \varepsilon^{\gamma_1}\|d_1(\varepsilon,t,x^\varepsilon(t))\| + \varepsilon^{\gamma_1}\|d_1(\varepsilon,0,x_0)\| + \int_0^t \|f^0(\tau,x^\varepsilon(\tau)) - f^0(\tau,x^0(\tau))\|\,d\tau + \varepsilon^{\gamma_2}\int_0^t \|d_2(\varepsilon,\tau,x^\varepsilon(\tau))\|\,d\tau\,.$

The standard Gronwall lemma then yields, for all $\varepsilon\in(0,\varepsilon_0]$ and all $t\in[0,T]$ such that $x^\varepsilon$ remains in the interior of a certain compact neighborhood $K$ of the trajectory $x^0$, the estimate $\|x^\varepsilon(t) - x^0(t)\| \le (2\varepsilon^{\gamma_1} + T\varepsilon^{\gamma_2})\,M\,e^{\lambda t}$, where $\lambda$ is a Lipschitz constant (with respect to $x$) of $f^0$ on $[0,T]\times K$ and $M$ is an upper bound on $(0,\varepsilon_0]\times[0,T]\times K$ for both $\|d_1\|$ and $\|d_2\|$. This proves 1.

Let us prove 2. Since the right-hand side of (6.4) is homogeneous of degree zero, there exists, from [17], a homogeneous and autonomous Lyapunov function $V$, positive definite, whose derivative along (6.4) is given by

(6.7) $\dot V_{(6.4)} = F^0 V = -W$

— here $XV$, for $X$ a vector field, denotes the Lie derivative of $V$ along $X$ — with $W$ homogeneous positive definite, of the same degree as $V$, i.e.

(6.8) $W(x) \ge c\,V(x)\,.$

Let us now compute the derivative of $V$ along system (6.5). From (6.1) and (6.7),

$\dot V_{(6.5)} = F^\varepsilon V = -W + \varepsilon^{\gamma_1} F^\varepsilon D_1^\varepsilon V - \varepsilon^{\gamma_1} D_1^\varepsilon \frac{\partial V}{\partial t} + \varepsilon^{\gamma_2} D_2^\varepsilon V\,,$

which can be rewritten, since $V$ is autonomous, as

(6.9) $F^\varepsilon V^\varepsilon = -W + \varepsilon^{\gamma_2} D_2^\varepsilon V\,,$

with

(6.10) $V^\varepsilon = V - \varepsilon^{\gamma_1} D_1^\varepsilon V\,.$

Since, by assumption, the operators $D_1^\varepsilon$ and $D_2^\varepsilon$ are homogeneous of degree zero and locally uniformly bounded with respect to $\varepsilon > 0$, one has, since $V$ is positive definite,

$|D_1^\varepsilon V| \le k\,V\,,\qquad |D_2^\varepsilon V| \le k\,V\,,$

for all $\varepsilon > 0$. Hence, for $\varepsilon$ sufficiently small, $V^\varepsilon$ is arbitrarily close to $V$ and hence positive definite, and also

(6.11) $\dot V^\varepsilon = F^\varepsilon V^\varepsilon \le -\frac{c}{2}\,V\,.$

Therefore, for $\varepsilon$ small enough, $V^\varepsilon$ is a strict Lyapunov function for system (6.5). This ends the proof of 2 via Lyapunov's first method.

Before stating our convergence result, we recall two definitions introduced in [25, 9].

Definition 6.3 ([25, 9]). Let $\Omega$ be a finite subset of $\mathbb{R}$ and $|\Omega|$ denote the number of elements of $\Omega$. The set $\Omega$ is said to be Minimally Cancelling (in short, MC) if and only if:
i) $\displaystyle\sum_{\omega\in\Omega}\omega = 0\,;$
ii) this is the only zero sum with at most $|\Omega|$ terms taken in $\Omega$ with possible repetitions:

(6.12) $\left.\begin{array}{l} \displaystyle\sum_{\omega\in\Omega} \lambda_\omega\,\omega = 0 \\[2pt] (\lambda_\omega)_{\omega\in\Omega} \in \mathbb{Z}^{|\Omega|} \\[2pt] \displaystyle\sum_{\omega\in\Omega} |\lambda_\omega| \le |\Omega| \end{array}\right\} \Longrightarrow\ (\lambda_\omega)_{\omega\in\Omega} = (0,\dots,0) \ \text{or}\ (1,\dots,1) \ \text{or}\ (-1,\dots,-1)\,.$


For example, a set $\{\omega_1,\omega_2\}$ is MC if and only if $\omega_2 = -\omega_1$ with $\omega_1\ne 0$; a set $\{\omega_1,\omega_2,\omega_3\}$ is MC if and only if $\omega_3 = -\omega_1-\omega_2$ with $\omega_1\ne 0$, $\omega_2\ne 0$, $\omega_1+\omega_2\ne 0$, $\omega_1-\omega_2\ne 0$, $\omega_1+2\omega_2\ne 0$, $2\omega_1+\omega_2\ne 0$, $\omega_1-2\omega_2\ne 0$, $2\omega_1-\omega_2\ne 0$. . .
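For small sets, Definition 6.3 can be checked by brute force over integer coefficient vectors. The sketch below is our own helper (exact rational arithmetic); it confirms the two frequency sets used in the example of Section 5:

```python
from fractions import Fraction
from itertools import product

def is_minimally_cancelling(omega):
    """Brute-force check of Definition 6.3 (exact rational arithmetic)."""
    omega = [Fraction(w) for w in omega]
    n = len(omega)
    if sum(omega) != 0:
        return False  # condition i)
    trivial = {(0,) * n, (1,) * n, (-1,) * n}
    for lam in product(range(-n, n + 1), repeat=n):
        if sum(abs(l) for l in lam) > n:
            continue
        if sum(l * w for l, w in zip(lam, omega)) == 0 and lam not in trivial:
            return False  # condition ii) violated
    return True

assert is_minimally_cancelling([Fraction(7, 2), Fraction(-7, 2)])  # Omega_1 of Section 5
assert is_minimally_cancelling([2, 3, -5])                         # Omega_2 of Section 5
assert not is_minimally_cancelling([1, 2, -3])   # 2*(1) - 1*(2) = 0 with 3 terms
```

Since $\sum|\lambda_\omega| \le |\Omega|$ forces $|\lambda_\omega| \le |\Omega|$ for each entry, the search space is finite and tiny for the set sizes used here.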

Definition 6.4 ([25, 9]). Let $(\Omega_\gamma)_{\gamma\in I}$ be a finite family of finite subsets of $\mathbb{R}$. The family $(\Omega_\gamma)_{\gamma\in I}$ is said to be Independent with respect to $p$ if and only if:

(6.13) $\left.\begin{array}{l} \displaystyle\sum_{\gamma\in I}\sum_{\omega\in\Omega_\gamma} \lambda_\omega\,\omega = 0 \\[2pt] (\lambda_\omega)_{\omega\in\Omega_\gamma,\,\gamma\in I} \in \mathbb{Z}^{\sum_\gamma|\Omega_\gamma|} \\[2pt] \displaystyle\sum_{\gamma\in I}\sum_{\omega\in\Omega_\gamma} |\lambda_\omega| \le p \end{array}\right\} \Longrightarrow\ \sum_{\omega\in\Omega_\gamma} \lambda_\omega\,\omega = 0 \quad \forall\,\gamma\in I\,.$
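Definition 6.4 can likewise be verified by brute force for small families. The helper below is our own sketch (`family` is a list of frequency tuples, `p` the independence order); the first test uses the family of Section 5 scaled by 2 so that all arithmetic stays in integers:

```python
from itertools import product

def is_independent(family, p):
    """Brute-force check of Definition 6.4: every integer combination with
    sum(|lambda|) <= p that cancels globally must cancel set by set."""
    flat = [(g, w) for g, omegas in enumerate(family) for w in omegas]
    n = len(flat)
    for lam in product(range(-p, p + 1), repeat=n):
        if sum(abs(l) for l in lam) > p:
            continue
        if sum(l * w for l, (_, w) in zip(lam, flat)) == 0:
            for g in range(len(family)):
                if sum(l * w for l, (gg, w) in zip(lam, flat) if gg == g) != 0:
                    return False
    return True

# the family of Section 5 scaled by 2: ({7, -7}, {4, 6, -10}), P = 3
assert is_independent([(7, -7), (4, 6, -10)], 3)
# fails: 2*(1) - 1*(2) = 0 mixes the two sets
assert not is_independent([(1, -1), (2, -2)], 3)
```

Scaling every frequency by the same nonzero factor changes none of the cancellation relations, which is why the integer-scaled family is a faithful stand-in for $(\{7/2,-7/2\},\{2,3,-5\})$.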

For example, the sets $(\{\omega_1,\omega_2,\omega_3\},\{\omega_4,\omega_5\})$ are both MC and independent with respect to 2 if and only if $\omega_3 = -\omega_1-\omega_2$ and $\omega_5 = -\omega_4$ with $\omega_1\ne 0$, $\omega_2\ne 0$, $\omega_1+\omega_2\ne 0$, $\omega_1-\omega_2\ne 0$, $\omega_1+2\omega_2\ne 0$, $2\omega_1+\omega_2\ne 0$, $\omega_1-2\omega_2\ne 0$, $2\omega_1-\omega_2\ne 0$, $\omega_4\ne 0$ (this is MC), and $\omega_1+\omega_4\ne 0$, $\omega_1-\omega_4\ne 0$, $\omega_2+\omega_4\ne 0$, $\omega_2-\omega_4\ne 0$, $\omega_1+\omega_2+\omega_4\ne 0$, $\omega_1+\omega_2-\omega_4\ne 0$ (this is independence). We are now ready to state our convergence result.

Theorem 6.5. Let $N$ be a positive integer and consider, for $j = 1,\dots,N$:
$\bullet$ some vector fields $X_j^s \in C^1(\mathbb{R}^n\setminus\{0\};\mathbb{R}^n) \cap C^0(\mathbb{R}^n;\mathbb{R}^n)$ ($s = 1,\dots,\ell(j)$),
$\bullet$ some smooth complex-valued functions of time $\eta_j^s$ ($s = 1,\dots,\ell(j)$) such that, for some $M$,

(6.14) $|\eta_j^s(t)| \le M \quad\text{and}\quad |\dot\eta_j^s(t)| \le M \qquad \forall\, t\in\mathcal{T}\,,$

$\bullet$ some sets $\Omega_j = \{\omega_j^1,\dots,\omega_j^{\ell(j)}\}$ of real numbers such that $\omega_j^s = 0$ if $\ell(j) = 1$, $\Omega_j$ is minimally cancelling (MC) if $\ell(j)\ge 2$, and the family $(\Omega_j)$ ($\ell(j)\ge 2$) is independent with respect to $P = \max_j \ell(j)$.

Then, the vector field

(6.15) $F^\varepsilon = \frac{\partial}{\partial t} + \sum_{j=1}^{N}\sum_{s=1}^{\ell(j)} \eta^s_{j,\varepsilon}\, X_j^s\,,$

with

(6.16) $\eta^s_{j,\varepsilon}(t) = 2\,\varepsilon^{-\frac{\ell(j)-1}{\ell(j)}}\,\Re\!\left(\eta_j^s(t)\,e^{i\omega_j^s t/\varepsilon}\right)\,,$

DO-converges, as $\varepsilon\to 0$, to the vector field

(6.17) $F^0 = \frac{\partial}{\partial t} + \sum_{j=1}^{N}\frac{2}{\ell(j)}\,\Re\!\left(\frac{\eta_j^1\cdots\eta_j^{\ell(j)}}{i^{\ell(j)-1}}\,B_j\right)\quad\text{with}\quad B_j = \sum_{\sigma\in S(\ell(j))}\frac{[X_j^{\sigma(1)},[X_j^{\sigma(2)},[\dots,X_j^{\sigma(\ell(j))}]\dots]]}{\omega_j^{\sigma(1)}\,(\omega_j^{\sigma(1)}+\omega_j^{\sigma(2)})\cdots(\omega_j^{\sigma(1)}+\dots+\omega_j^{\sigma(\ell(j)-1)})}\,.$

Furthermore, if all the vector fields $X_j^s$ are homogeneous of degree zero, then all the differential operators in (6.1) are homogeneous of degree zero too.

Remark 6.6. This result is very much related to the theory of normal forms for time-varying differential equations, as exposed for instance in [19, Chapter 6]. Let


us recall — see that reference for details — that a vector field $\frac{\partial}{\partial t} + \varepsilon f^0(t,x)\frac{\partial}{\partial x}$ is said to be in normal form if and only if $[\frac{\partial}{\partial t}, f^0\frac{\partial}{\partial x}] = 0$, i.e. if $f^0$ does not depend on $t$. For a system

$(S_\varepsilon)\qquad \dot x = f(\varepsilon,t,x)\,,$

finding a normal form means finding a change of coordinates $x \mapsto y = x + \phi(\varepsilon,x)$ that transforms $(S_\varepsilon)$ into

$(S_0)\qquad \dot y = \varepsilon f_0(y)\,.$

In general, deciding whether a normal form exists for a system, and then possibly finding this normal form, is a difficult problem, and there are no systematic tools available. Let us however rephrase Theorem 6.5 in the terms of [19]. By a time-scaling $t\mapsto\varepsilon t$, the system $\dot x = f(\varepsilon,x,t)$, where $f$ is defined by $F^\varepsilon = \frac{\partial}{\partial t} + f\frac{\partial}{\partial x}$, with $F^\varepsilon$ given by (6.15), rewrites as

$(S'_\varepsilon)\qquad \dot x = \varepsilon f_1(t,x) + \varepsilon^{1/2} f_2(t,x) + \cdots + \varepsilon^{1/P} f_P(t,x)\,.$

In the context of normal forms, Theorem 6.5 states that $(S_0)$, with $f^0$ defined by $F^0 = \frac{\partial}{\partial t} + f^0\frac{\partial}{\partial x}$ and $F^0$ given by (6.17), is a normal form for $(S'_\varepsilon)$, up to terms of higher order in $\varepsilon$. (Proof in Section 7)

(In [5, 8, 25, 9], the main ingredient of the proof was iterated integrations by parts. Here we mimic these integrations by parts, but at the level of products of differential operators instead of integrals along the solutions.)

7. Proofs.

7.1. Proof of Proposition 4.2 (Section 4). Point b is strictly a consequence of the construction. Point c follows from the fact that if a linear combination of all the vector fields $\tilde b_j$ with constant real coefficients is identically zero, then homogeneity implies that each linear combination involving only the terms corresponding to brackets of the same length must also be zero; since, by construction, all the brackets $\tilde b_j$ of the same length are linearly independent over $\mathbb{R}$, this implies that all the coefficients are zero.

Let us prove point a. First recall that any Lie bracket of length $p > P$ made with the vector fields $b_i$ is identically zero — see Remark 4.1. From this fact, the controllability assumption (3.3) and the construction itself, there clearly exist integers $j_1,\dots,j_n\in\{1,\dots,N\}$ such that $\{\tilde b_{j_1}(0),\dots,\tilde b_{j_n}(0)\}$ is a basis of $\mathbb{R}^n$. Hence $\{\tilde b_{j_1}(x),\dots,\tilde b_{j_n}(x)\}$ is a basis of $\mathbb{R}^n$ for $x$ in some neighborhood $W$ of the origin. Let us show that this is true for any $x$ in $\mathbb{R}^n$. Let $x$ be outside $W$. There exists $\lambda > 0$ such that $\bar x = \delta_\lambda(x)$ is in $W$, and hence $\{\tilde b_{j_1}(\bar x),\dots,\tilde b_{j_n}(\bar x)\}$ is a basis of $\mathbb{R}^n$. This implies, since $\delta_\lambda$ is a local diffeomorphism from a neighborhood of $x$ to a neighborhood of $\bar x$, that

$\big\{(\delta_\lambda^{-1})_*\tilde b_{j_1}(x),\dots,(\delta_\lambda^{-1})_*\tilde b_{j_n}(x)\big\}$

is also a basis of $\mathbb{R}^n$. Now, from the homogeneity, $(\delta_\lambda^{-1})_*\tilde b_{j_k} = \lambda^{-\ell(j_k)}\,\tilde b_{j_k}$. This proves point a.

7.2. Proof of Theorem 6.5. The closed-loop vector field $F^\varepsilon$ can be re-written as

(7.1) $F^\varepsilon = \frac{\partial}{\partial t} + \sum_{\substack{1\le j\le N\\ \ell(j)=1}} 2\,\Re\eta_j^1\, X_j^1 + \sum_{\substack{1\le j\le N\\ \ell(j)\ge 2}}\sum_{s=1}^{\ell(j)} \varepsilon^{-\frac{\ell(j)-1}{\ell(j)}}\left(\eta_j^s\,e^{i\omega_j^s t/\varepsilon} + \bar\eta_j^s\,e^{-i\omega_j^s t/\varepsilon}\right) X_j^s\,.$

Let us make some conventions and definitions, used only in the present proof. We define the following sets of indices:

(7.2) $J = \big\{\, j\in\{1,\dots,N\}\ ;\ \ell(j)\ge 2 \,\big\} = \{m_1+1,\dots,N\}$
(7.3) $J_l = \big\{\, j\in\{1,\dots,N\}\ ;\ \ell(j) = l \,\big\} = \{m_{l-1}+1,\dots,m_l\}$
(7.4) $K_j = \big\{\, -\ell(j),\ -\ell(j)+1,\ \dots,\ -1,\ 1,\ 2,\ \dots,\ \ell(j) \,\big\}$

and the following sets of pairs of indices:

(7.5) $I = \big\{\, (j,s)\ ;\ j\in J,\ s\in K_j \,\big\} = \bigcup_{j\in J}\{j\}\times K_j$
(7.6) $I_l = \big\{\, (j,s)\in I\ ;\ \ell(j) = l \,\big\} = \bigcup_{j\in J_l}\{j\}\times K_j\,.$

We call $F_1$ the vector field

(7.7) $F_1 = \sum_{\substack{1\le j\le N\\ \ell(j)=1}} 2\,\Re\eta_j^1\, X_j^1\,.$

Clearly, if we define, for $s < 0$, the real numbers $\omega_j^s$, the complex numbers $\eta_j^s$ and the vector fields $X_j^s$ by

(7.8) $\omega_j^{-s} = -\,\omega_j^s\,,\qquad \eta_j^{-s} = \bar\eta_j^s\,,\qquad X_j^{-s} = X_j^s\,,\qquad\text{for } j\in J,\ s\in K_j,\ s > 0\,,$

the vector field $F^\varepsilon$ from (7.1) may be rewritten as

(7.9) $F^\varepsilon = \frac{\partial}{\partial t} + F_1 + \sum_{(j,s)\in I}\varepsilon^{-\frac{\ell(j)-1}{\ell(j)}}\,\eta_j^s\,e^{i\omega_j^s t/\varepsilon}\,X_j^s$

(7.10) $\phantom{F^\varepsilon} = \frac{\partial}{\partial t} + F_1 + \varepsilon^{-\frac12}F_2^\varepsilon + \varepsilon^{-\frac23}F_3^\varepsilon + \cdots + \varepsilon^{-\frac{P-1}{P}}F_P^\varepsilon\,,$

where

(7.11) $F_l^\varepsilon = \sum_{(j,s)\in I_l}\eta_j^s\,e^{i\omega_j^s t/\varepsilon}\,X_j^s\,.$


Note that the interest of (7.10) is that the negative powers of $\varepsilon$ are written apart, and the vector fields $F_j^\varepsilon$ have the boundedness property that their coefficients are continuous functions of $x$ and $t$, smooth outside $x = 0$, indexed by $\varepsilon > 0$, and locally uniformly bounded with respect to $\varepsilon > 0$ (it is not the case of $F^\varepsilon$ itself because of the negative powers of $\varepsilon$). In the remainder of the proof, we shall always write the negative powers of $\varepsilon$ apart, so that the differential operators written as capital letters never contain coefficients that are unbounded when $\varepsilon$ goes to zero.

We now define a certain number of differential operators $F^\varepsilon_{p_1,p_2,\dots,p_d}$ of order $d$, for $d$ between 1 and $P$, and for all $d$-tuples $(p_1,p_2,\dots,p_d)$ of integers such that:

(7.12) $\begin{cases} 1\le p_k\le P \quad\text{for } 1\le k\le d\,,\\ \frac1{p_1}+\cdots+\frac1{p_{d-1}}\le 1\,,\\ (p_1,p_2)\ne(2,2)\,,\\ (p_1,p_2,p_3)\ne(3,3,3)\,,\\ \quad\vdots\\ (p_1,\dots,p_{d-1})\ne(d-1,\dots,d-1)\,. \end{cases}$

We define $F^\varepsilon_{p_1,p_2,\dots,p_d}$ to be equal to:

(7.13) $\displaystyle\sum_{((j_1,s_1),\dots,(j_d,s_d))\,\in\, I^d(p_1,\dots,p_d)} \frac{\eta_{j_1}^{s_1}\eta_{j_2}^{s_2}\cdots\eta_{j_d}^{s_d}\;e^{i(\omega_{j_1}^{s_1}+\cdots+\omega_{j_d}^{s_d})\frac{t}{\varepsilon}}}{i^{(d-1)}\,\omega_{j_1}^{s_1}\,(\omega_{j_1}^{s_1}+\omega_{j_2}^{s_2})\cdots(\omega_{j_1}^{s_1}+\cdots+\omega_{j_{d-1}}^{s_{d-1}})}\;X_{j_d}^{s_d}X_{j_{d-1}}^{s_{d-1}}\cdots X_{j_1}^{s_1}\,,$

where $I^d(p_1,\dots,p_d)$ is the set of $d$-tuples of indices $((j_1,s_1),\dots,(j_d,s_d))$ such that $\ell(j_k) = p_k$, and which are neither a collection of $\frac d2$ pairs of the form $(j,s)$, $(j,-s)$, nor such that, for some (even) $k$, $2\le k\le d$, $((j_1,s_1),\dots,(j_k,s_k))$ would be a collection of $\frac k2$ pairs of the form $(j,s)$, $(j,-s)$. More precisely, $I^d(p_1,\dots,p_d)$ may be defined recursively by $I^1(p) = I_p$ and:

(7.14) $((j_1,s_1),\dots,(j_d,s_d))\in I^d(p_1,\dots,p_d) \iff \begin{cases} (j_k,s_k)\in I_{p_k}\ \text{for all } k\,,\\ ((j_1,s_1),\dots,(j_{d-1},s_{d-1}))\in I^{d-1}(p_1,\dots,p_{d-1})\,,\\ \text{there exists no permutation } \sigma\in S(d)\\ \quad\text{such that } (j_{\sigma(k)},s_{\sigma(k)}) = (j_k,-s_k)\ \text{for all } k\,. \end{cases}$

With the above definition of the sets of indices $I^d(p_1,\dots,p_d)$, the denominators in (7.13) cannot be zero because of the following lemma.

Lemma 7.1. Let $((j_1,s_1),\dots,(j_d,s_d))\in I^d$ — see the definition of $I$ in (7.5) — be such that $\omega_{j_1}^{s_1}+\cdots+\omega_{j_d}^{s_d} = 0$. Then:
$\bullet$ either $(\ell(j_1),\dots,\ell(j_d)) = (d,\dots,d)$ and there exists a permutation $\sigma\in S(d)$ such that the $d$-tuple $((j_1,s_1),\dots,(j_d,s_d))$ is exactly equal to $((j,\sigma(1)),\dots,(j,\sigma(d)))$ or $((j,-\sigma(1)),\dots,(j,-\sigma(d)))$ (with $j_1 = \cdots = j_d = j$),
$\bullet$ or $\frac1{\ell(j_1)}+\cdots+\frac1{\ell(j_d)} > 1$,
$\bullet$ or there exists a permutation $\sigma\in S(d)$ such that $(j_{\sigma(k)},s_{\sigma(k)}) = (j_k,-s_k)$ for all $k$.

Proof of Lemma 7.1: The equality $\omega_{j_1}^{s_1}+\cdots+\omega_{j_d}^{s_d} = 0$ may be rewritten

(7.15) $\displaystyle\sum_{\substack{j\in\{1,\dots,N\}\\ \ell(j)\ge 2}}\ \sum_{s=1}^{\ell(j)}\lambda_j^s\,\omega_j^s = 0\,,$
where the integer sj is equal to the number of times that (j; s) appears in ((j1 ; s1 ); : : : ; (jd ; sd )) minus the number of times (j; ?s) appears. Of course, (7.15) may be rewritten as

XX

j 2J !2 j

with !js = sj . Note that

X !

X

jw j =

j;s

! ! = 0

jsj j  d  P :

Hence, from the assumption that the sequences of frequencies are mutually independent with respect to P , and are all minimally canceling (see (6.13)-(6.12)), each (1j ; : : : ; `j(j) ) is equal to either (0; : : : ; 0) or (1; : : : ; 1) or (?1; : : : ; ?1). If it is different from (0; : : : ; 0) for at least one j , then all the couples (j; 1); : : : ; (j; `(j )), or all the couples (j; ?1); : : : ; (j; ?`(j )), appear in ((j1 ; s1 ); : : : ; (jd ; sd)). If d = `(j ) for this j , i.e. if ((j1 ; s1 ); : : : ; (jd ; sd)) is a re-ordering of ((j; 1); : : : ; (j; `(j ))), or of ((j; ?1); : : : ; (j; ?`(j ))), then we are in the rst case of the lemma ; if d > `(j ), then there is at least another couple (j 0 ; s0 ) in ((j1 ; s1 ); : : : ; (jd ; sd)) and hence the sum 1 1 1 `(j ) +    + `(jd ) can be no less than 1 + `(j 0 ) and hence we are in the second case of the lemma. Let us now examine the case where all the (1j ; : : : ; `j(s) )'s are equal to (0; : : : ; 0). This means that for all j; s, the couple (j; s) and the couple (j; ?s) appears the same numbers of time in ((j1 ; s1 ); : : : ; (jd ; sd)). This allows one to build the permutation having the property required in the third point of the lemma : it is the one that exchanges 1 with the rst k1 such that (jk ; sk ) = (j1 ; ?s1 ), 2 (3 if k1 = 2) with the rst k2 6= k1 such that (jk ; sk ) = (j2 ; ?s2 ), and so on. We shall now prove the following two facts. Fact 1 : For all q, 1  q  P , there exist 1;q and 2;q strictly positive such that: 1

1

2

1

2

q @ +F +X F " = @t (?1)p?1 Fp;" p; : : : ; p 1 | {z }



p times  @

+ " ;q F " D1";q ? D1";q @t + " ;q D2";q ?  X + (?1)q?1 "? 1? p ?? pq Fp" ;:::;pq (p1 ; : : : ; pq ) 2 f2; : : :; P gq ; 1 1 p +    + pq  1 (p1 ; : : : ; pq ) 6= (q; : : : ; q) 1

(7.16)

p=2

2

1 1

1

1

1

Fact 2 : For all p, 1  p  P , there exist 10 ;p and 20 ;p strictly positive such that:

!

(?1)p?1 X < j1    jp B = 2 j p; p j2Jp i(p?1) | p;{z: : : ; p}

F"

p

(7.17)

times

+ " 10 ;p



F " D10";p

D0" @



? 1;p @t + " 0 ;p D20";p 2

Stabilization via oscillatory approximation of brackets

with (7.18)

23

X

[Xj(1) ; [Xj(2) ; [: : : ; Xj(`(j)) ] : : :]] (`(j )?1) ) (1) (2) (1) (1) 2S(`(j )) !j (!j + !j ) : : : (!j + : : : + !j

Bj =

These two facts imply Theorem 6.5. Indeed, for q = P , the last sum in (7.16) is empty since p1 +    + p1P  1 with all the integers pj no larger that P implies 1

(p1 ; : : : ; pP ) = (P; : : : ; P ). Hence for q = P , (7.16) reads

F" =

P @ +F +X (?1)p?1 Fp;" p; : : : ; p 1 @t | {z } p=2 p times  @



+ " ;P F " D1";P ? D1" @t + " ;P D2";P :

(7.19)

1

2

" Substituting in the above the expression of Fp;:::;p given by (7.17), one clearly gets (6.1) with the appropriate dierential operators D1" and D2" and the appropriate positive real numbers 1 and 2 . Proof of fact 1. We prove (7.16) by induction on q, from q = 1 to q = P . For q = 1, the sum on the rst line of (7.16) is empty, one may take D1";1; D1" and " D2;1 to be zero, and (7.16) is simply (7.10). Let us now suppose that (7.16) holds for a certain q  1 and let us prove it for q + 1. This is done through a manipulation on dierential operator that more or less mimics an integration by parts. Since we shall use it elsewhere, let us explain it on a general dierential operator Y before applying it. Consider a dierential operator of order d on functions of t and x that does not contain derivations with respect to t :

(7.20)

Y =

X

multi-indices I of length d

Dene Y [1] and Y [?1] to be (7.21) (7.22)

X

@ jI j : I (t)aI (t; x) @x I

Z t



@ jI j aI (; x)d @x I  multi-indices I of length d Z  X dI (t) t a (; x)d @ jI j Y [1] = I @xI  multi-indices I of length d dt

Y [?1] =

I (t)

Note that these are dened up to a function of x (through the initial time in the integrals) and that Y [1] is zero if the 's are constants. The derivative with respect to t of Y [?1] is Y + Y [1] in the following sense :

@ ; Y [?1] ] = @ Y [?1] ? Y [?1] @ : Y + Y [1] = [ @t @t @t indeed it is obvious that for any smooth function h of x and t, one has @ Y [?1] :h (t; x) ? Y [?1]: @h (t; x) ; (7.24) Y:h (t; x) + Y [1] :h (t; x) = @t @t (7.23)

24

P. Morin, J.-B. Pomet and C. Samson

simply because @t@ commutes with @ jI@j xI . Then we re-write (7.23) in the following way :

@ ; Y [?1] ] ? Y [1] Y = [ @t = F " Y [?1]

(7.25)

?

P X r=1

!

r?1 "? r Fr" Y [?1]

? Y [?1] @t@ ? Y [1] :

In order to prove that if (7.16) holds for q, it also holds for q + 1, we apply the identity (7.25) with Y = Fp" ;:::;pq [ ? Y 1] = " G"p ;:::;pq ; Y [1] = " Hp" ;:::;pq ; for (p1 ; : : : ; pq ) 6= (q; : : : ; q) and p1 +    + p1  1 ; (7.26) 1 q " " where Gp ;:::;pq and Hp ;:::;pq are given by : s     sq ei(!js ++!jsqq )t=" X sq X sq? : : : X s X  jq jq? j j jq (7.27) G"p ;:::;pq = s s s s q s q ((j ;s );:::;(jq ;sq )) 2 I q (p ;:::;pq ) i !j (!j + !j ) : : : (!j +    + !jq )  d  s sq  i(!js ++!jsq )t=" sq sq? q Xjq Xjq? : : : Xjs X dt j    jq e (7.28) Hp" ;:::;pq = sq s s q s s ((j ;s );:::;(jq ;sq ))2I q (p ;:::;pq ) i !j (!j + !j ) : : : (!j +    + !jq ) Note that the denominators are nonzero because, from lemma 7.1, the denition (7.14) of the set of indices I q (p1 ; : : : ; pq ) precisely removes the terms where the denominators would be zero. Then (7.25) with the above expressions for Y , Y [1] and Y [?1] yields : 1

1

1

1

1

1 1

1 1

1

1

1

1 1

1

1 1

1

1

1 1

1

2 2

1 1

1 1

1 1

1 1

1 1

1

1

1

1

1 1

1

2 2

1 1

X @ ? " H" (7.29) Fp" ;:::;pq = ? " r Fr" G"p ;:::;pq + " F " G"p ;:::;pq ? " G"p ;:::;pq @t p ;:::;pq : r=1 From (7.27) and (7.11) we have : Fr" G"p ;:::;pq = sq s X js    jsqq ei(!j ++!jq )t=" Xjsqq Xjsqq : : : Xjs sq s s q s s ( (j ; s ) ; : : : ; (jq ; sq ) ) 2 I q (p ; : : : ; pq ) i !j (!j + !j ) : : : (!j +    + !jq ) (jq ; sq ) 2 Ir The right-hand side of the above equation is equal to Fp" ;:::;pq ;r given by (7.13) because the (q + 1)-tuples ((j1 ; s1 ); : : : ; (jq+1 ; sq+1 )) which are in I q (p1 ; : : : ; pq )  Ir but not in I q+1 (p1 ; : : : ; pq ; r) are compare (7.14) these such that this is possible only if q is odd there exist a permutation  of the set of integers f1; : : : ; q + 1g for which ((j (1) ; s (1) ); (j (2) ; s (2)); : : : ; (j (q+1) ; s (q+1) )) (7.30) = ((j1 ; ?s1 ); (j2 ; ?s2 ); : : : ; (jq+1 ; ?sq+1 )) ; P

1

1

1

1

1

1

1

1 1

1

1

1

+1

+1 1 1

+1 +1

1 1

+1

1 1

2 2

+1

1

+1

+1

1 1

1 1

25

Stabilization via oscillatory approximation of brackets

but these terms sum to zero in the above sum, which is equal to Fr" Gp ;:::;pq because, for ((j1 ; s1 ); (j2 ; s2 ); : : : ; (jq+1 ; sq+1 )) such that there exists a permutation  satisfying (7.30), the term corresponding to ((j1 ; ?s1 ); (j2 ; ?s2); : : : ; (jq+1 ; ?sq+1 )) is opposite to the term corresponding to ((j1 ; s1 ); (j2 ; s2 ); : : : ; (jq+1 ; sq+1 )). Indeed, (7.8) for X and !, not for  and (7.30) imply that the term corresponding to ((j1 ; ?s1 ); (j2 ; ?s2); : : : ; (jq+1 ; ?sq+1 )) is equal to : 1

s (q+1)

s (1)

js    js qq ei(!j ++!j q )t=" Xjsqq Xjsqq : : : Xjs iq (?!js )(?!js ? !js ) : : : (?!js ?    ? !jsqq ) (1)

( +1)

(1)

( +1) 1 1

+1

( +1)

(1)

1 1

1 1

+1

2 2

1 1

which, since q must be odd (if not, there is no such terms anyway), is equal to

?

Qq+1



sq+1

s1

s k i(!j ++!jq )t=" sq Xjq Xjsqq : : : Xjs k=1 j k e iq !js (!js + !js ) : : : (!js +    + !jsqq ) ( )

1

( ) 1 1

+1

+1

1 1

2 2

1 1

+1

1 1

and  gives the change of index in the product allows to say that this is the opposite of the term corresponding to ((j1 ; s1 ); (j2 ; s2 ); : : : ; (jq+1 ; sq+1 )). Hence Fr" G"p ;:::;pq = Fp" ;:::;pq ;r . Substituting this in (7.29) yields (we rename r as pq+1 ) : 1

1

Fp"1 ;:::;pq

= ?

P X pq+1 =1

@ ? " H" " pq Fp" ;:::;pq ;pq + " F " G"p ;:::;pq ? " G"p ;:::;pq @t p ;:::;pq : 1 +1

1

1

1

+1

1

Hence (7.16) yields :

@ +F + (7.31) F " = @t 1 + " 1;q + (?1)q



q X p=2

(?1)p?1 Fp;" p; : : : ; p

| {z } p times @

F " D1";q ? D1" @t + " ;q D2";q

X

"

2



? 1? p11 ?? pq1+1

(p1 ; : : : ; pq ) 2 f2; : : : ; P gq



Fp" ;:::;pq ;pq 1

+1

p +  + p  1 (p ; : : : ; pq ) 6= (qq; : : : ; q) pq 2 f1; : : : ; P g   X @ ? H" + (?1)q?1 " p ++ pq F " G"p ;:::;pq ? G"p ;:::;pq @t p ;:::;pq (p ; : : : ; pq ) 2 f2; : : : ; P gq ; p + + p  1 (p ; : : : ; pq ) 6= (qq; : : : ; q) 1 1

1

1

+1

1 1

1

1 1

1

1

1

1

1

1

This yields (7.16) for $q+1$: the term corresponding to $(p_1,\ldots,p_{q+1}) = (q+1,\ldots,q+1)$ in the sum on the third line is $(-1)^q F^\varepsilon_{q+1,\ldots,q+1}$; it adds to the sum on the first line, and this yields the first line of (7.16) for $q+1$. The other terms in this sum such that $\frac{1}{p_1}+\cdots+\frac{1}{p_{q+1}}\le 1$ yield exactly the third line of (7.16) for $q+1$, and the terms in this sum such that $\frac{1}{p_1}+\cdots+\frac{1}{p_{q+1}} > 1$, as well as the whole last sum, add up with the second line to give the second line (the small terms) of (7.16) for $q+1$. This ends the proof by induction of Fact 1.
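The pairwise cancellation used above can be checked directly in the lowest odd case $q = 1$. The check below is ours, not part of the proof; it only uses (7.8). For a zero-sum pair built from a single index $j$, the terms associated with $((j,s),(j,-s))$ and with its sign-negated tuple $((j,-s),(j,s))$ are

```latex
\frac{\eta_j^{s}\,\eta_j^{-s}}{i\,\omega_j^{s}}\,\bigl(X_j^{s}\bigr)^2
\qquad\text{and}\qquad
\frac{\eta_j^{-s}\,\eta_j^{s}}{i\,\omega_j^{-s}}\,\bigl(X_j^{s}\bigr)^2
\;=\;-\,\frac{\eta_j^{s}\,\eta_j^{-s}}{i\,\omega_j^{s}}\,\bigl(X_j^{s}\bigr)^2,
```

using $X_j^{-s}=X_j^{s}$ and $\omega_j^{-s}=-\omega_j^{s}$; the exponential factor $e^{i(\omega_j^{s}+\omega_j^{-s})t/\varepsilon}=1$ is common to both, and the two terms cancel.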


P. Morin, J.-B. Pomet and C. Samson

Proof of Fact 2. From the definition (7.13) of $F^\varepsilon_{p_1,p_2,\ldots,p_d}$, we have:

$$(7.32)\qquad F^\varepsilon_{\underbrace{\scriptstyle p,\ldots,p}_{p\ \text{times}}} \;=\; \sum_{\substack{((j_1,s_1),\ldots,(j_p,s_p))\,\in\, I^p(p,\ldots,p)\\ \omega_{j_1}^{s_1}+\cdots+\omega_{j_p}^{s_p}\,=\,0}} \frac{\eta_{j_1}^{s_1}\eta_{j_2}^{s_2}\cdots\eta_{j_p}^{s_p}}{i^{\,p-1}\,\omega_{j_1}^{s_1}(\omega_{j_1}^{s_1}+\omega_{j_2}^{s_2})\cdots(\omega_{j_1}^{s_1}+\cdots+\omega_{j_{p-1}}^{s_{p-1}})}\; X_{j_p}^{s_p}\, X_{j_{p-1}}^{s_{p-1}}\cdots X_{j_1}^{s_1}$$
$$+\; \sum_{\substack{((j_1,s_1),\ldots,(j_p,s_p))\,\in\, I^p(p,\ldots,p)\\ \omega_{j_1}^{s_1}+\cdots+\omega_{j_p}^{s_p}\,\neq\,0}} \frac{\eta_{j_1}^{s_1}\cdots\eta_{j_p}^{s_p}\; e^{i(\omega_{j_1}^{s_1}+\cdots+\omega_{j_p}^{s_p})t/\varepsilon}}{i^{\,p-1}\,\omega_{j_1}^{s_1}(\omega_{j_1}^{s_1}+\omega_{j_2}^{s_2})\cdots(\omega_{j_1}^{s_1}+\cdots+\omega_{j_{p-1}}^{s_{p-1}})}\; X_{j_p}^{s_p}\, X_{j_{p-1}}^{s_{p-1}}\cdots X_{j_1}^{s_1}$$

Now, apply (7.20)-(7.21)-(7.22)-(7.25) with $Y$ equal to the second sum, so that $Y^{[-1]} = \varepsilon\, G^\varepsilon_{p,\ldots,p}$ and $Y^{[1]} = \varepsilon\, H^\varepsilon_{p,\ldots,p}$, with

$$(7.33)\qquad G^\varepsilon_{\underbrace{\scriptstyle p,\ldots,p}_{p\ \text{times}}} \;=\; \sum_{\substack{((j_1,s_1),\ldots,(j_p,s_p))\,\in\, I^p(p,\ldots,p)\\ \omega_{j_1}^{s_1}+\cdots+\omega_{j_p}^{s_p}\,\neq\,0}} \frac{\eta_{j_1}^{s_1}\cdots\eta_{j_p}^{s_p}\; e^{i(\omega_{j_1}^{s_1}+\cdots+\omega_{j_p}^{s_p})t/\varepsilon}}{i^{\,p}\,\omega_{j_1}^{s_1}(\omega_{j_1}^{s_1}+\omega_{j_2}^{s_2})\cdots(\omega_{j_1}^{s_1}+\cdots+\omega_{j_p}^{s_p})}\; X_{j_p}^{s_p}\, X_{j_{p-1}}^{s_{p-1}}\cdots X_{j_1}^{s_1}$$

$$(7.34)\qquad H^\varepsilon_{\underbrace{\scriptstyle p,\ldots,p}_{p\ \text{times}}} \;=\; \sum_{\substack{((j_1,s_1),\ldots,(j_p,s_p))\,\in\, I^p(p,\ldots,p)\\ \omega_{j_1}^{s_1}+\cdots+\omega_{j_p}^{s_p}\,\neq\,0}} \frac{\frac{d}{dt}\!\left(\eta_{j_1}^{s_1}\cdots\eta_{j_p}^{s_p}\right) e^{i(\omega_{j_1}^{s_1}+\cdots+\omega_{j_p}^{s_p})t/\varepsilon}}{i^{\,p}\,\omega_{j_1}^{s_1}(\omega_{j_1}^{s_1}+\omega_{j_2}^{s_2})\cdots(\omega_{j_1}^{s_1}+\cdots+\omega_{j_p}^{s_p})}\; X_{j_p}^{s_p}\, X_{j_{p-1}}^{s_{p-1}}\cdots X_{j_1}^{s_1}$$
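Mode by mode, the passage from the oscillatory sum $Y$ to $\varepsilon G^\varepsilon$ and $\varepsilon H^\varepsilon$ is the elementary integration by parts below (our restatement of the mechanism behind (7.20)-(7.25), stated for one frequency): for a nonzero frequency $\lambda$ and a smooth coefficient $a(\cdot)$,

```latex
a(t)\, e^{i\lambda t/\varepsilon}
\;=\;
\frac{d}{dt}\!\left(\frac{\varepsilon\, a(t)}{i\lambda}\, e^{i\lambda t/\varepsilon}\right)
\;-\;
\frac{\varepsilon\, \dot a(t)}{i\lambda}\, e^{i\lambda t/\varepsilon}.
```

This is where the extra factor $i(\omega_{j_1}^{s_1}+\cdots+\omega_{j_p}^{s_p})$ in the denominators of (7.33)-(7.34), and the derivative $\frac{d}{dt}(\eta_{j_1}^{s_1}\cdots\eta_{j_p}^{s_p})$ in (7.34), come from.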

This allows us to rewrite the second sum in (7.32) as

$$-\sum_{r=1}^{P} \varepsilon^{\frac{1}{r}}\, F^\varepsilon_{p,\ldots,p,r} \;-\; \varepsilon\, H^\varepsilon_{p,\ldots,p} \;+\; \varepsilon\left(F^\varepsilon\, G^\varepsilon_{p,\ldots,p} - G^\varepsilon_{p,\ldots,p}\,\frac{\partial}{\partial t}\right)$$

with

$$(7.35)\qquad F^\varepsilon_{\underbrace{\scriptstyle p,\ldots,p}_{p\ \text{times}},\,r} \;=\; \sum_{\substack{((j_1,s_1),\ldots,(j_p,s_p))\,\in\, I^p(p,\ldots,p)\\ \omega_{j_1}^{s_1}+\cdots+\omega_{j_p}^{s_p}\,\neq\,0\\ (j_{p+1},s_{p+1})\,\in\, I_r}} \frac{\eta_{j_1}^{s_1}\cdots\eta_{j_{p+1}}^{s_{p+1}}\; e^{i(\omega_{j_1}^{s_1}+\cdots+\omega_{j_{p+1}}^{s_{p+1}})t/\varepsilon}}{i^{\,p}\,\omega_{j_1}^{s_1}(\omega_{j_1}^{s_1}+\omega_{j_2}^{s_2})\cdots(\omega_{j_1}^{s_1}+\cdots+\omega_{j_p}^{s_p})}\; X_{j_{p+1}}^{s_{p+1}}\, X_{j_p}^{s_p}\cdots X_{j_1}^{s_1}$$

Stabilization via oscillatory approximation of brackets

Let us now consider the first sum in (7.32). From Lemma 7.1 and the fact that, by (7.14), a $p$-tuple that is in $I^p(p,\ldots,p)$ cannot be of the type described in the third item of this lemma, all the $p$-tuples $((j_1,s_1),\ldots,(j_p,s_p))$ in $I^p(p,\ldots,p)$ such that $\omega_{j_1}^{s_1}+\cdots+\omega_{j_p}^{s_p}=0$ are exactly those of the form $((j,\sigma(1)),\ldots,(j,\sigma(p)))$ or $((j,-\sigma(1)),\ldots,(j,-\sigma(p)))$ with $\ell(j)=p$ and $\sigma\in S(p)$. Hence the first sum may be rewritten (recall that $X_j^{-s}=X_j^s$) as

$$\sum_{j\in J_p} 2\,\Re\!\left(\frac{\eta_j^1\cdots\eta_j^p}{i^{\,p-1}}\, C_j\right)$$

with

$$(7.36)\qquad C_j \;=\; \sum_{\sigma\in S(p)} \frac{X_j^{\sigma(p)}\, X_j^{\sigma(p-1)}\cdots X_j^{\sigma(1)}}{\omega_j^{\sigma(1)}(\omega_j^{\sigma(1)}+\omega_j^{\sigma(2)})\cdots(\omega_j^{\sigma(1)}+\cdots+\omega_j^{\sigma(p-1)})}$$

If one replaces $\sigma$ in the above sum by $\sigma\circ\tau$, where $\tau$ is the permutation that sends $(1,2,\ldots,p)$ to $(p,p-1,\ldots,1)$ (a change of indices in the summation), one gets

$$C_j \;=\; \sum_{\sigma\in S(p)} \frac{X_j^{\sigma(1)}\, X_j^{\sigma(2)}\cdots X_j^{\sigma(p)}}{(\omega_j^{\sigma(2)}+\cdots+\omega_j^{\sigma(p)})(\omega_j^{\sigma(3)}+\cdots+\omega_j^{\sigma(p)})\cdots(\omega_j^{\sigma(p-1)}+\omega_j^{\sigma(p)})\,\omega_j^{\sigma(p)}}\;;$$

since $\omega_j^1+\cdots+\omega_j^p=0$, the denominator may be transformed:

$$C_j \;=\; (-1)^{p-1} \sum_{\sigma\in S(p)} \frac{X_j^{\sigma(1)}\, X_j^{\sigma(2)}\cdots X_j^{\sigma(p)}}{\omega_j^{\sigma(1)}(\omega_j^{\sigma(1)}+\omega_j^{\sigma(2)})\cdots(\omega_j^{\sigma(1)}+\cdots+\omega_j^{\sigma(p-1)})}\;.$$

Finally, a combinatorial computation in the free Lie algebra (see [8], or [9], where this identity is also obtained, but in a less computational way) gives

$$\sum_{\sigma\in S(p)} \frac{X_j^{\sigma(1)}\, X_j^{\sigma(2)}\cdots X_j^{\sigma(p)}}{\omega_j^{\sigma(1)}(\omega_j^{\sigma(1)}+\omega_j^{\sigma(2)})\cdots(\omega_j^{\sigma(1)}+\cdots+\omega_j^{\sigma(p-1)})} \;=\; \frac{1}{p} \sum_{\sigma\in S(p)} \frac{[X_j^{\sigma(1)},[X_j^{\sigma(2)},[\,\cdots,X_j^{\sigma(p)}]\cdots]]}{\omega_j^{\sigma(1)}(\omega_j^{\sigma(1)}+\omega_j^{\sigma(2)})\cdots(\omega_j^{\sigma(1)}+\cdots+\omega_j^{\sigma(p-1)})}\;.$$

Hence $C_j = \frac{(-1)^{p-1}}{p}\, B_j$, with $B_j$ given by (7.18). Substituting the above in (7.32) yields

$$(7.37)\qquad F^\varepsilon_{\underbrace{\scriptstyle p,\ldots,p}_{p\ \text{times}}} \;=\; \frac{(-1)^{p-1}}{p} \sum_{j\in J_p} 2\,\Re\!\left(\frac{\eta_j^1\cdots\eta_j^p}{i^{\,p-1}}\, B_j\right) \;-\; \sum_{r=1}^{P} \varepsilon^{\frac{1}{r}}\, F^\varepsilon_{p,\ldots,p,r} \;-\; \varepsilon\, H^\varepsilon_{p,\ldots,p} \;+\; \varepsilon\left(F^\varepsilon\, G^\varepsilon_{p,\ldots,p} - G^\varepsilon_{p,\ldots,p}\,\frac{\partial}{\partial t}\right)$$

This clearly yields (7.17), which ends the proof of Fact 2, and hence the proof of Theorem 6.5.
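As a sanity check (ours, not the paper's), the free Lie algebra identity used above can be verified by hand for $p = 2$, where $S(2)$ has two elements and $\omega_j^1+\omega_j^2=0$, so that $\omega_j^2=-\omega_j^1$:

```latex
\frac{X_j^1 X_j^2}{\omega_j^1}+\frac{X_j^2 X_j^1}{\omega_j^2}
\;=\;
\frac{X_j^1 X_j^2 - X_j^2 X_j^1}{\omega_j^1}
\;=\;
\frac{[X_j^1,X_j^2]}{\omega_j^1}
\;=\;
\frac{1}{2}\left(\frac{[X_j^1,X_j^2]}{\omega_j^1}+\frac{[X_j^2,X_j^1]}{\omega_j^2}\right),
```

which is the stated identity with the factor $\frac{1}{p}=\frac{1}{2}$.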

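The mechanism at work throughout this section (oscillatory inputs whose resonant terms produce a net drift along a Lie bracket direction) can be illustrated numerically on the Heisenberg system $\dot x_1 = u_1$, $\dot x_2 = u_2$, $\dot x_3 = x_1 u_2 - x_2 u_1$, for which the bracket of the two input fields points along $x_3$. The sketch below is ours and is not the feedback construction of the paper; the amplitude `A` and frequency `omega` are illustrative values. With $u_1 = -A\omega\sin\omega t$, $u_2 = A\omega\cos\omega t$, the pair $(x_1,x_2)$ traverses a closed loop of area $\pi A^2$ per period while $x_3$ drifts by exactly $2\pi A^2$.

```python
import math

def vector_field(t, state, A, omega):
    # Heisenberg (nonholonomic integrator): x1' = u1, x2' = u2, x3' = x1*u2 - x2*u1
    x1, x2, x3 = state
    u1 = -A * omega * math.sin(omega * t)
    u2 = A * omega * math.cos(omega * t)
    return (u1, u2, x1 * u2 - x2 * u1)

def rk4_step(t, state, dt, A, omega):
    # One classical Runge-Kutta step for the time-dependent field above.
    def shift(s, k, c):
        return tuple(si + c * ki for si, ki in zip(s, k))
    k1 = vector_field(t, state, A, omega)
    k2 = vector_field(t + dt / 2, shift(state, k1, dt / 2), A, omega)
    k3 = vector_field(t + dt / 2, shift(state, k2, dt / 2), A, omega)
    k4 = vector_field(t + dt, shift(state, k3, dt), A, omega)
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

A, omega = 0.1, 1.0          # illustrative amplitude and frequency
steps = 20000
T = 2 * math.pi / omega      # one period of the inputs
dt = T / steps
state, t = (0.0, 0.0, 0.0), 0.0
for _ in range(steps):
    state = rk4_step(t, state, dt, A, omega)
    t += dt
x1, x2, x3 = state
bracket_drift = 2 * math.pi * A ** 2   # exact net motion along the bracket per period
```

After one period, $(x_1,x_2)$ returns to the origin while $x_3 \approx 2\pi A^2$: net motion in a direction the inputs never actuate directly, produced purely by the oscillation.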

7.3. Proof of Theorem 4.9. Let $F^\varepsilon = \frac{\partial}{\partial t} + f^\varepsilon$, with $f^\varepsilon$ the vector field associated with the right-hand side of (4.21), and $G = \frac{\partial}{\partial t} + g$, with $g$ the vector field associated with the right-hand side of (4.17). First, we show that $F^\varepsilon$ DO-converges (see Definition 6.1) to $G$ as $\varepsilon$ tends to zero. Since (4.21) is the same as (4.16) with $u_{j,s} = u^\varepsilon_{j,s}$ given by (4.18), $F^\varepsilon$ can be expressed in the form (6.15), with all the $X_j^s$'s homogeneous of degree zero, because each $X_j^s$ corresponds to one of the $b_j^s v_j^s$'s and, by Proposition 4.6, all the vector fields $b_j^s v_j^s$ are homogeneous of degree zero. We can apply Theorem 6.5 because the frequency sets $\Omega_n$ $(n = 2,\ldots,N)$ in the construction of Theorem 4.7 are minimally cancelling and linearly independent (see [9, Sec. 5]). It implies that $F^\varepsilon$ DO-converges, as $\varepsilon$ tends to zero, to a vector field $F^0 = \frac{\partial}{\partial t} + f^0$ of the form (6.17), and in the definition (6.1) of DO-convergence all differential operators are homogeneous of degree zero.

We claim that $G = F^0$. Indeed, from Proposition 6.2, the property of DO-convergence implies the uniform convergence of the trajectories on finite time intervals. Therefore, the trajectories of (4.21) converge to those of $\dot x = f^0(t,x)$. But from Theorem 4.7 (recall that (4.21) is the same as (4.16)-(4.18)), they converge to the trajectories of (4.17). This implies that the systems $\dot x = g(t,x)$ and $\dot x = f^0(t,x)$ are the same, because they have the same trajectories. Hence $F^0 = G$.

Finally, since $F^\varepsilon$ DO-converges to $G = \frac{\partial}{\partial t} + g$ with $g$ autonomous, and since all differential operators in the definition of DO-convergence are homogeneous of degree zero, the asymptotic stability of the origin of (4.21), for $\varepsilon > 0$ small enough, will follow from Proposition 6.2 if we can show that the origin of (4.17) is asymptotically stable. This is a direct consequence of (4.3) to (4.5).

REFERENCES

[1] R. W. Brockett, Asymptotic stability and feedback stabilization, in Differential Geometric Control Theory, R. W. Brockett et al., eds., Birkhäuser, 1983.
[2] J.-M. Coron, A necessary condition for feedback stabilization, Syst. & Control Lett., 14 (1990), pp. 227-232.
[3] J.-M. Coron, Global asymptotic stabilization for controllable systems without drift, Math. of Control, Signals & Systems, 5 (1992), pp. 295-312.
[4] J.-M. Coron, Stabilization in finite time of locally controllable systems by means of continuous stabilizing feedback laws, SIAM J. on Control and Optim., 33 (1995), pp. 804-833.
[5] G. W. Haynes and H. Hermes, Nonlinear controllability via Lie theory, SIAM J. on Control, 8 (1970), pp. 450-460.
[6] H. Hermes, Nilpotent and high-order approximations of vector field systems, SIAM Review, 33 (1991), pp. 238-264.
[7] M. Kawski, Homogeneous stabilizing feedback laws, Control Th. and Adv. Technol., 6 (1990), pp. 497-516.
[8] J. Kurzweil and J. Jarnik, Iterated Lie brackets in limit processes in ordinary differential equations, Results in Mathematics, 14 (1988), pp. 125-137.
[9] W. Liu, An approximation algorithm for nonholonomic systems, SIAM J. on Control and Optim., 35 (1997), pp. 1328-1365.
[10] C. Lobry, Contrôlabilité des systèmes non linéaires, SIAM J. on Control, 8 (1970), pp. 573-605.
[11] R. T. M'Closkey and R. M. Murray, Nonholonomic systems and exponential convergence: Some analysis tools, in 32nd IEEE Conf. on Decision & Control, 1993.
[12] R. T. M'Closkey and R. M. Murray, Exponential stabilization of driftless nonlinear control systems via time-varying homogeneous feedback, in 33rd IEEE Conf. on Decision & Control, 1994.
[13] R. T. M'Closkey and R. M. Murray, Exponential stabilization of driftless nonlinear control systems using homogeneous feedback, IEEE Trans. Automat. Control, (to appear).
[14] P. Morin and C. Samson, Applications of backstepping techniques to the time-varying exponential stabilization of chained-form systems, Europ. J. of Control, 3 (1997), pp. 15-36.
[15] J.-B. Pomet, Explicit design of time-varying stabilizing control laws for a class of controllable


systems without drift, Syst. & Control Lett., 18 (1992), pp. 147-158.
[16] J.-B. Pomet and C. Samson, Exponential stabilization of nonholonomic systems in power form, in IFAC Symposium on Robust Control Design, Rio de Janeiro (Brasil), Sept. 1994, pp. 447-452.
[17] L. Rosier, Homogeneous Lyapunov function for homogeneous continuous vector field, Syst. & Control Lett., 19 (1992), pp. 467-473.
[18] C. Samson, Velocity and torque feedback control of a nonholonomic cart, in Int. Workshop in Adaptive and Nonlinear Control: Issues in Robotics, vol. 162 of Proc. in Advanced Robot Control, Springer-Verlag, New York, 1991. Proceedings of a Conf. held in Grenoble, France, 1990.
[19] J. A. Sanders and F. Verhulst, Averaging Methods in Nonlinear Dynamical Systems, vol. 56 of Applied Mathematical Sciences, Springer-Verlag, 1985.
[20] R. Sépulchre, G. Campion, and V. Wertz, Some remarks on periodic feedback stabilization, in 2nd NOLCOS, Bordeaux, France, June 1992, IFAC, pp. 418-423.
[21] E. D. Sontag and H. J. Sussmann, Remarks on continuous feedback, in 19th IEEE Conf. on Decision and Control, 1980.
[22] G. Stefani, On the local controllability of scalar-input control systems, in Theory and Applications of Nonlinear Control Systems, C. I. Byrnes and A. Lindquist, eds., North-Holland, 1986, pp. 167-179 (Proc. of MTNS'84).
[23] H. J. Sussmann, Subanalytic sets and feedback control, J. Diff. Equations, 31 (1979), pp. 31-52.
[24] H. J. Sussmann, A general theorem on local controllability, SIAM J. on Control and Optim., 25 (1987), pp. 158-194.
[25] H. J. Sussmann and W. Liu, Limits of highly oscillatory controls and approximation of general paths by admissible trajectories, in 30th IEEE Conf. on Decision & Control, 1991.
[26] H. J. Sussmann and W. Liu, Lie bracket extensions and averaging: the single bracket case, in Nonholonomic Motion Planning, Z. Li and J. Canny, eds., Kluwer Academic Publishers, Boston, 1993, pp. 107-147.
[27] A. R. Teel, R. M. Murray, and G. Walsh, Nonholonomic control systems: from steering to stabilization with sinusoids, in 31st IEEE Conf. on Decision and Control, 1992.