© 2005 Society for Industrial and Applied Mathematics

SIAM J. CONTROL OPTIM. Vol. 44, No. 2, pp. 646–672

GENERATING SERIES FOR INTERCONNECTED ANALYTIC NONLINEAR SYSTEMS∗

W. STEVEN GRAY† AND YAQIN LI†

Abstract. Given two analytic nonlinear input-output systems represented as Fliess operators, four system interconnections are considered in a unified setting: the parallel connection, product connection, cascade connection, and feedback connection. In each case, the corresponding generating series is produced and conditions for the convergence of the corresponding Fliess operator are given. In the process, an existing notion of a composition product for formal power series has its set of known properties significantly expanded. In addition, the notion of a feedback product for formal power series is shown to be well defined in a broad context, and its basic properties are characterized.

Key words. Chen–Fliess series, formal power series, nonlinear operators, nonlinear systems

AMS subject classifications. 47H30, 93C10

DOI. 10.1137/S036301290343007X

1. Introduction. Let X = {x_0, x_1, ..., x_m} denote an alphabet and X^* the set of all words over X (including the empty word ∅). A formal power series in X is any mapping of the form X^* → R^ℓ, and the set of all such mappings will be denoted by R^ℓ⟨⟨X⟩⟩. For each c ∈ R^ℓ⟨⟨X⟩⟩, one can formally associate an m-input, ℓ-output operator F_c in the following manner. Let p ≥ 1 and a < b be given. For a measurable function u : [a, b] → R^m, define ||u||_p = max{||u_i||_p : 1 ≤ i ≤ m}, where ||u_i||_p is the usual L_p-norm for a measurable real-valued function u_i defined on [a, b]. Let L_p^m[a, b] denote the set of all measurable functions defined on [a, b] having a finite ||·||_p-norm, and let B_p^m(R)[a, b] := {u ∈ L_p^m[a, b] : ||u||_p ≤ R}. With t_0, T ∈ R fixed and T > 0, define recursively for each η ∈ X^* the mapping E_η : L_1^m[t_0, t_0 + T] → C[t_0, t_0 + T] by E_∅ = 1 and

E_{x_i η̄}[u](t, t_0) = \int_{t_0}^{t} u_i(τ) E_{η̄}[u](τ, t_0) dτ,

where x_i ∈ X, η̄ ∈ X^*, and u_0(t) ≡ 1. The input-output operator corresponding to c is then

F_c[u](t) = \sum_{η ∈ X^*} (c, η) E_η[u](t, t_0),

which is referred to as a Fliess operator. All Volterra operators with analytic kernels, for example, are Fliess operators. In the classical literature, where these operators first appeared [7, 9, 10, 26], it is normally assumed that there exist real numbers K, M > 0 such that |(c, η)| ≤ K M^{|η|} |η|! for all η ∈ X^*, where |z| = max{|z_1|, |z_2|, ..., |z_ℓ|} when z ∈ R^ℓ, and |η| denotes the number of letters in η. This growth condition on the coefficients of c ensures that there exist positive real numbers R and T_0 such that, for all piecewise continuous u with ||u||_∞ ≤ R and T ≤ T_0, the series defining

∗Received by the editors June 18, 2003; accepted for publication (in revised form) December 8, 2004; published electronically September 12, 2005. http://www.siam.org/journals/sicon/44-2/43007.html
†Department of Electrical and Computer Engineering, Old Dominion University, Norfolk, VA 23529-0246 ([email protected], [email protected]).


[Block diagrams of F_c and F_d combined as (a) the parallel connection, (b) the product connection, (c) the cascade connection, and (d) the feedback connection.]

Fig. 1.1. Elementary system interconnections.

F_c converges uniformly and absolutely on [t_0, t_0 + T]. Therefore, a power series c is said to be locally convergent when its coefficients satisfy such a growth condition. The set of all locally convergent series in R^ℓ⟨⟨X⟩⟩ will be denoted by R_LC^ℓ⟨⟨X⟩⟩. More recently, it was shown in [13] that local convergence also implies that F_c constitutes a well-defined operator from B_p^m(R)[t_0, t_0 + T] into B_q^ℓ(S)[t_0, t_0 + T] for sufficiently small R, S, T > 0, where the numbers p, q ∈ [1, ∞] are conjugate exponents, i.e., 1/p + 1/q = 1 with (1, ∞) being a conjugate pair by convention.

In many applications, input-output systems are interconnected in a variety of ways. Given two Fliess operators F_c and F_d, where c, d ∈ R_LC^ℓ⟨⟨X⟩⟩, Figure 1.1 shows four elementary interconnections. The product connection is defined componentwise, and in the case of the feedback connection it is assumed that ℓ = m > 0. The general goal of this paper is to describe in a unified manner the generating series for each elementary interconnection and conditions under which it is locally convergent. The clear antecedent to this work is that of Ferfera, who first described the generating series for such connections (implicitly in the case of feedback) and, in particular, introduced the composition product c ◦ d of two formal power series c and d [5, 6]. In each case, however, the local convergence of the new generating series or, equivalently, the convergence of the corresponding Fliess operator, was not explicitly addressed. It is trivial to show that the parallel connection of F_c and F_d always produces a locally convergent generating series when c and d are locally convergent. The same conclusion was later provided in [28] for the product connection via an analysis involving the shuffle product. In this paper, an analogous result is developed for the composition product by producing an explicit expression for one pair of growth constants, K_{c◦d} and M_{c◦d}.
In the process, the set of known properties of the composition product is significantly expanded. (An interesting parallel development has appeared in [3, 11] regarding a composition product for formal power series motivated by the composition of two analytic functions (see, e.g., [18]) rather than two Fliess (integral) operators. Its definition is quite distinct and not clearly related to the composition product described in this paper.)


The feedback connection is a fundamentally more difficult case to analyze. For example, when F_c is a linear operator, the formal solution to the feedback equation

(1)    y = F_c[u + F_d[y]]

is y = F_c[u] + F_c ◦ F_d ◦ F_c[u] + ···. It is not immediately clear that this series converges in any manner and, in particular, that it converges to another Fliess operator, say F_{c@d}, for some c@d ∈ R_LC^m⟨⟨X⟩⟩. When F_c is nonlinear, the problem is further complicated by the fact that operators of the form I + F_d, where I denotes the identity map, never have a Fliess operator representation. In this paper, the problem is circumvented by introducing a simple variation of the composition product so that an appropriate feedback product, c@d, is well defined, and y = F_{c@d}[u] satisfies the feedback equation (1) in the sense that every analytic input u produces an analytic output y with (u, y) satisfying (1). In this case, c@d is referred to as being input-output locally convergent, and explicit expressions are derived for one set of growth constants, K_{c_y} and M_{c_y}, for the series representation of the output function, c_y. It should be stated that Ferfera's primary interest in [5, 6] was rational series and their corresponding bilinear realizations. In a state space setting, the issue of local convergence is rather straightforward. If c and d each have finite Lie rank, in addition to being locally convergent, then the mappings F_c and F_d each have a finite-dimensional analytic state space realization, and therefore so does each interconnected system (see [16, 21] for a basic treatment of nonlinear realization theory). The literature then provides that the corresponding generating series can be computed by successive Lie derivatives and, in particular, it must be locally convergent [26, Lemma 4.2]. (Additional analysis of interconnected state space systems using a chronological product together with Hall–Viennot bases appears in [17].) While the state space formalism is clearly dominant in modern control theory, other system descriptions like Volterra series [10, 16, 21] or input-output differential equations [28, 29, 30] are sometimes useful.
In such settings, the convergence analysis of interconnected systems is a natural application for the main results of this paper. But even in a pure state space setting, as illustrated by Examples 3.2 and 4.11, knowledge of the growth constants for the generating series of a given interconnection permits one to compute a lower bound on any finite escape time. This is particularly useful in physical problems, like the one described in [12], as it provides computable limitations on the applicability of the underlying mathematical models. The paper is organized as follows. In section 2 the composition product is introduced and developed independently of the system interconnection problem. First, its various fundamental properties are presented. Then, in preparation for the feedback analysis, it is shown that the composition product produces a contractive mapping on the set of all formal power series using a familiar ultrametric. In section 3, the three nonrecursive connections, parallel, product, and cascade, are analyzed primarily by applying results from section 2. In section 4 the feedback connection is considered. The main focus is on showing when the feedback product of two formal power series is well defined and in precisely what sense it is locally convergent. 2. The composition product. The composition product of two formal power series over an alphabet X = {x0 , x1 , . . . , xm } is defined recursively in terms of the


shuffle product. The shuffle product of two words η, ξ ∈ X^* is defined recursively by

η ⧢ ξ = (x_j η') ⧢ (x_k ξ') := x_j [η' ⧢ ξ] + x_k [η ⧢ ξ'],

with ∅ ⧢ ∅ = ∅ and ξ ⧢ ∅ = ∅ ⧢ ξ = ξ. It is easily verified that η ⧢ ξ is always a polynomial consisting of words each having length |η| + |ξ|. The definition is extended to any two series c, d ∈ R⟨⟨X⟩⟩ by

(2)    c ⧢ d = \sum_{η, ξ ∈ X^*} (c, η)(d, ξ) \, η ⧢ ξ.
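The word-level recursion above is directly executable. The following minimal sketch (the tuple-of-letter-indices encoding and dict representation of polynomials are assumptions of this illustration, not the paper's notation) computes η ⧢ ξ:

```python
from math import comb

def shuffle(eta, xi):
    """Shuffle product of two words, returned as a dict word -> coefficient.

    Words are tuples of letter indices over X = {x0, ..., xm}; () is the
    empty word. Implements (x_j eta') sh (x_k xi') = x_j [eta' sh xi]
    + x_k [eta sh xi'].
    """
    if not eta:
        return {xi: 1}
    if not xi:
        return {eta: 1}
    out = {}
    for word, coef in shuffle(eta[1:], xi).items():
        w = (eta[0],) + word
        out[w] = out.get(w, 0) + coef
    for word, coef in shuffle(eta, xi[1:]).items():
        w = (xi[0],) + word
        out[w] = out.get(w, 0) + coef
    return out

# x1 x0 sh x1 = x1 x0 x1 + 2 x1 x1 x0
p = shuffle((1, 0), (1,))
print(p, sum(p.values()), comb(3, 1))
```

Every word in η ⧢ ξ has length |η| + |ξ|, and the coefficients sum to the binomial coefficient \binom{|η|+|ξ|}{|η|}, which the final print checks for this example.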

For a fixed ν ∈ X^*, the coefficient (η ⧢ ξ, ν) = 0 if |η| + |ξ| ≠ |ν|. Hence, the infinite sum in (2) is well defined since the family of polynomials {η ⧢ ξ : η, ξ ∈ X^*} is locally finite [2]. In general, the shuffle product is commutative. It is also associative and distributes over addition. Thus, the vector space R⟨⟨X⟩⟩ with the shuffle product forms a commutative R-algebra, the so-called shuffle algebra, with multiplicative identity element ∅. The shuffle product on R^ℓ⟨⟨X⟩⟩ is defined componentwise, i.e., (c ⧢ d, ν)_i = (c_i ⧢ d_i, ν) for i = 1, 2, ..., ℓ.

For any η ∈ X^* and d ∈ R^m⟨⟨X⟩⟩, the composition product is defined recursively as

η ◦ d = η,  if |η|_{x_i} = 0 for all i ≠ 0,
η ◦ d = x_0^{n+1} [d_i ⧢ (η' ◦ d)],  if η = x_0^n x_i η', n ≥ 0, i ≠ 0,

where |η|_{x_i} denotes the number of letters in η equivalent to x_i, and d_i : ξ ↦ (d, ξ)_i is the ith component of the coefficient (d, ξ). Consequently, if

(3)    η = x_0^{n_k} x_{i_k} x_0^{n_{k-1}} x_{i_{k-1}} ··· x_0^{n_1} x_{i_1} x_0^{n_0},

where i_j ≠ 0 for j = 1, ..., k, then it follows that

η ◦ d = x_0^{n_k+1} [d_{i_k} ⧢ x_0^{n_{k-1}+1} [d_{i_{k-1}} ⧢ ··· x_0^{n_1+1} [d_{i_1} ⧢ x_0^{n_0}] ···]].
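The recursion defining η ◦ d can be sketched in code for the single-input case m = 1 over X = {x_0, x_1} (letter 0 standing for x_0, letter 1 for x_1; the dict encoding of series is an assumption of this sketch):

```python
def shuffle(eta, xi):
    """Word shuffle product as a dict word -> coefficient."""
    if not eta:
        return {xi: 1}
    if not xi:
        return {eta: 1}
    out = {}
    for word, coef in shuffle(eta[1:], xi).items():
        w = (eta[0],) + word
        out[w] = out.get(w, 0) + coef
    for word, coef in shuffle(eta, xi[1:]).items():
        w = (xi[0],) + word
        out[w] = out.get(w, 0) + coef
    return out

def sh_poly(p, q):
    """Bilinear extension of the shuffle product to polynomials."""
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            for w, k in shuffle(a, b).items():
                out[w] = out.get(w, 0) + ca * cb * k
    return out

def word_comp(eta, d):
    """eta o d per the recursive definition (m = 1, letters 0 = x0, 1 = x1)."""
    if 1 not in eta:
        return {eta: 1}          # eta contains only x0's
    n = eta.index(1)             # eta = x0^n x1 eta'
    inner = sh_poly(d, word_comp(eta[n + 1:], d))   # d sh (eta' o d)
    return {(0,) * (n + 1) + w: c for w, c in inner.items()}

# x0 x1 x0 composed with d = 2 x1 gives x0^2 [d sh x0]:
print(word_comp((0, 1, 0), {(1,): 2}))
```

For instance, composing any word with the constant series 1 (the dict {(): 1}) replaces every word η by x_0^{|η|}, and word_comp((0, 1, 0), {(1,): 2}) expands to 2 x_0^2 x_1 x_0 + 2 x_0^3 x_1, exactly the factored expression above.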

Alternatively, for any η ∈ X^*, one can uniquely associate a set of right factors {η_0, η_1, ..., η_k} by the iteration

(4)    η_{j+1} = x_0^{n_{j+1}} x_{i_{j+1}} η_j,  η_0 = x_0^{n_0},  i_{j+1} ≠ 0,

so that η = η_k with k = |η| − |η|_{x_0}. In this case, η ◦ d = η_k ◦ d, where

η_{j+1} ◦ d = x_0^{n_{j+1}+1} [d_{i_{j+1}} ⧢ (η_j ◦ d)]

and η_0 ◦ d = x_0^{n_0}. The theorem below ensures that the composition product of two series described subsequently is well defined.

Theorem 2.1. Given a fixed d ∈ R^m⟨⟨X⟩⟩, the family of series {η ◦ d : η ∈ X^*} is locally finite, and therefore summable.

Proof. Given an arbitrary η ∈ X^* expressed in the form (3), it follows directly that

(5)    ord(η ◦ d) = n_0 + k + \sum_{j=1}^{k} n_j + \sum_{j=1}^{k} ord(d_{i_j}) = |η| + \sum_{j=1}^{|η|−|η|_{x_0}} ord(d_{i_j}),
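The right-factor iteration (4) amounts to peeling a word from the right: η_0 is the trailing run of x_0's, and each later factor prepends one block x_0^{n_{j+1}} x_{i_{j+1}}. A small sketch (tuple encoding assumed, as before):

```python
def right_factors(eta):
    """Right factors eta_0, ..., eta_k of a word, per iteration (4).

    eta is a tuple of letter indices (0 = x0). eta_0 is the trailing run
    of x0's; each later factor prepends one block x0^{n_{j+1}} x_{i_{j+1}}.
    """
    j = len(eta)
    while j > 0 and eta[j - 1] == 0:
        j -= 1
    factors = [eta[j:]]              # eta_0 = x0^{n_0}
    while j > 0:
        j -= 1                       # consume the letter x_{i_{j+1}} != x0
        while j > 0 and eta[j - 1] == 0:
            j -= 1                   # and the x0-run in front of it
        factors.append(eta[j:])
    return factors

eta = (0, 0, 1, 0, 2, 0)             # x0^2 x1 x0 x2 x0
print(right_factors(eta))
```

As the text states, the last factor recovers η itself, and the number of steps k equals |η| − |η|_{x_0} (here 6 − 4 = 2).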


where the order of c is defined as

ord(c) = inf{|η| : η ∈ supp(c)}  if c ≠ 0,  and  ord(c) = ∞  if c = 0,

and supp(c) := {η ∈ X^* : (c, η) ≠ 0} denotes the support of c. Hence, for any ξ ∈ X^*,

I_d(ξ) := {η ∈ X^* : (η ◦ d, ξ) ≠ 0} ⊂ {η ∈ X^* : ord(η ◦ d) ≤ |ξ|} = { η ∈ X^* : |η| + \sum_{j=1}^{|η|−|η|_{x_0}} ord(d_{i_j}) ≤ |ξ| }.

Clearly this last set is finite, and thus I_d(ξ) is finite for all ξ ∈ X^*. This fact implies summability.

For any c ∈ R^ℓ⟨⟨X⟩⟩ and d ∈ R^m⟨⟨X⟩⟩, the composition product is defined as

c ◦ d = \sum_{η ∈ X^*} (c, η) \, η ◦ d.

The summation can also be written using the set of all right factors as described by (4). Let X^i be the set of all words in X^* of length i. For each word η ∈ X^i, the jth right factor, η_j, has exactly j letters not equal to x_0. Therefore, given any ν ∈ X^*,

(6)    (c ◦ d, ν) = \sum_{i=0}^{|ν|} \sum_{j=0}^{i} \sum_{η_j ∈ X^i} (c, η_j)(η_j ◦ d, ν).

The third summation is understood to be the sum over the set of all possible jth right factors of words of length i. This set has a familiar combinatoric interpretation. A composition of a positive integer N is an ordered set of positive integers {a_1, a_2, ..., a_K} such that N = a_1 + a_2 + ··· + a_K. (For example, the integer 3 has the compositions 1+1+1, 1+2, 2+1, and 3.) For a given N and K, it is well known that there are C_K(N) = \binom{N−1}{K−1} possible compositions. Now each factor η_j ∈ X^i, when written in the form

η_j = x_0^{n_j} x_{i_j} x_0^{n_{j-1}} x_{i_{j-1}} ··· x_0^{n_1} x_{i_1} x_0^{n_0},

maps to a unique composition of i + 1 with j + 1 elements: i + 1 = (n_0 + 1) + (n_1 + 1) + ··· + (n_j + 1). Thus, there are exactly C_{j+1}(i + 1) m^j = \binom{i}{j} m^j possible factors η_j in X^i, and the total number of terms in the summations of (6) is ((m + 1)^{|ν|+1} − 1)/m ≈ (m + 1)^{|ν|}. As will be seen shortly, this provides a conservative lower bound on the growth rate of the coefficients of c ◦ d.

It is easily verified that the composition product is linear in its first argument but not in its second. A special exception is linear series. A series c ∈ R^ℓ⟨⟨X⟩⟩ is called linear if supp(c) ⊆ {η ∈ X^* : η = x_0^{n_1} x_i x_0^{n_0}, i ∈ {1, 2, ..., m}, n_1, n_0 ≥ 0}. It was shown in [5] that the composition product is associative and distributive from the right over the shuffle product. But in general it is neither commutative, nor does it have an identity element. This lack of an identity element is precisely the reason the identity map I is not realizable as a Fliess operator. Other elementary properties concerning the composition product are summarized below.
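Both counting claims are easy to verify by brute force for a small alphabet (the choices m = 2 and |ν| = 4 below are arbitrary illustration values):

```python
from itertools import product
from math import comb

m, N = 2, 4                     # letters x1, x2 besides x0; |nu| = N

total = 0
for i in range(N + 1):
    for j in range(i + 1):
        # jth right factors in X^i: words of length i with exactly j
        # letters different from x0 (letter index 0)
        count = sum(1 for w in product(range(m + 1), repeat=i)
                    if i - w.count(0) == j)
        assert count == comb(i, j) * m ** j
        total += count

print(total)                    # equals ((m+1)^(N+1) - 1) / m = 121 here
```

Summing \binom{i}{j} m^j over j gives (m+1)^i, and summing the geometric series over i recovers the stated total ((m+1)^{|ν|+1} − 1)/m.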


Lemma 2.2. The following identities hold (1l is a column vector of m ones):
1. 0 ◦ d = 0 for all d ∈ R^m⟨⟨X⟩⟩.
2. c ◦ 0 = c_0 := \sum_{n≥0} (c, x_0^n) x_0^n. (Therefore, c ◦ 0 = 0 if and only if c_0 = 0.)
3. c_0 ◦ d = c_0 for all d ∈ R^m⟨⟨X⟩⟩. (In particular, 1 ◦ d = 1.)
4. c ◦ 1l = c_{1l} := \sum_{η ∈ X^*} (c, η) x_0^{|η|}. (Therefore, c ◦ 1l = c if and only if c_0 = c.)

The set R^m⟨⟨X⟩⟩ forms a metric space under the ultrametric

dist : R^m⟨⟨X⟩⟩ × R^m⟨⟨X⟩⟩ → R^+ ∪ {0},  (c, d) ↦ σ^{ord(c−d)},

where σ ∈ (0, 1) is arbitrary [2]. The following theorem states that the composition product on R^m⟨⟨X⟩⟩ × R^m⟨⟨X⟩⟩ is continuous in its left argument. (Right argument continuity will be addressed later.)

Theorem 2.3. Let {c_i}_{i≥1} be a sequence in R^m⟨⟨X⟩⟩ with lim_{i→∞} c_i = c. Then lim_{i→∞} (c_i ◦ d) = c ◦ d for any d ∈ R^m⟨⟨X⟩⟩.

Proof. Define the sequence of nonnegative integers k_i = ord(c_i − c) for i ≥ 1. Since c is the limit of the sequence {c_i}_{i≥1}, the sequence {k_i}_{i≥1} must have an increasing subsequence {k_{i_j}}. Now observe that dist(c_i ◦ d, c ◦ d) = σ^{ord((c_i − c) ◦ d)} and

ord((c_{i_j} − c) ◦ d) = ord( \sum_{η ∈ supp(c_{i_j} − c)} (c_{i_j} − c, η) \, η ◦ d )
≥ \inf_{η ∈ supp(c_{i_j} − c)} ord(η ◦ d)
≥ \inf_{η ∈ supp(c_{i_j} − c)} |η| + (|η| − |η|_{x_0}) ord(d)
≥ k_{i_j}.

Thus, dist(c_{i_j} ◦ d, c ◦ d) ≤ σ^{k_{i_j}} for all j ≥ 1, and lim_{i→∞} c_i ◦ d = c ◦ d.

The ultrametric space (R^m⟨⟨X⟩⟩, dist) is known to be complete [2]. Given a fixed c ∈ R^m⟨⟨X⟩⟩, consider the mapping R^m⟨⟨X⟩⟩ → R^m⟨⟨X⟩⟩ : d ↦ c ◦ d. The goal is to show that this mapping is always a contraction on R^m⟨⟨X⟩⟩, i.e., that

dist(c ◦ d, c ◦ e) ≤ σ dist(d, e)  ∀ d, e ∈ R^m⟨⟨X⟩⟩,

so that fixed point theorems can be applied in later analysis [14, 22, 23, 24]. Any c ∈ R^m⟨⟨X⟩⟩ can be written unambiguously in the form

(7)    c = c_0 + c_1 + ···,

where c_k ∈ R^m⟨⟨X⟩⟩ has the defining property that η ∈ supp(c_k) only if |η| − |η|_{x_0} = k. Some of the series c_k may be the zero series. When c_0 = 0, c is referred to as being homogeneous. When c_k = 0 for k = 0, 1, ..., l − 1 and c_l ≠ 0, then c is called homogeneous of order l. In this setting consider the following lemma.

Lemma 2.4. For any c_k in (7),

dist(c_k ◦ d, c_k ◦ e) ≤ σ^k dist(d, e)  ∀ d, e ∈ R^m⟨⟨X⟩⟩.


Proof. The proof is by induction for the nontrivial case where c_k ≠ 0. First suppose k = 0. From the definition of the composition product it follows directly that η ◦ d = η for all η ∈ supp(c_0). Therefore,

c_0 ◦ d = \sum_{η ∈ supp(c_0)} (c_0, η) \, η ◦ d = \sum_{η ∈ supp(c_0)} (c_0, η) \, η = c_0,

and dist(c_0 ◦ d, c_0 ◦ e) = dist(c_0, c_0) = 0 ≤ σ^0 dist(d, e). Now fix any k ≥ 0 and assume the claim is true for all c_0, c_1, ..., c_k. In particular, this implies that

(8)    ord(c_k ◦ d − c_k ◦ e) ≥ k + ord(d − e).

For any j ≥ 0, words in supp(c_j) have the form η_j as defined in (4). Observe then that

c_{k+1} ◦ d − c_{k+1} ◦ e
= \sum_{η_{k+1} ∈ X^*} (c_{k+1}, η_{k+1}) \, η_{k+1} ◦ d − (c_{k+1}, η_{k+1}) \, η_{k+1} ◦ e
= \sum_{η_k, η_{k+1} ∈ X^*} (c_{k+1}, η_{k+1}) \left( x_0^{n_{k+1}+1} [d_{i_{k+1}} ⧢ [η_k ◦ d]] − x_0^{n_{k+1}+1} [e_{i_{k+1}} ⧢ [η_k ◦ e]] \right)
= \sum_{η_k, η_{k+1} ∈ X^*} (c_{k+1}, η_{k+1}) \left( x_0^{n_{k+1}+1} [d_{i_{k+1}} ⧢ [η_k ◦ d]] − x_0^{n_{k+1}+1} [d_{i_{k+1}} ⧢ [η_k ◦ e]] + x_0^{n_{k+1}+1} [d_{i_{k+1}} ⧢ [η_k ◦ e]] − x_0^{n_{k+1}+1} [e_{i_{k+1}} ⧢ [η_k ◦ e]] \right)
= \sum_{η_k, η_{k+1} ∈ X^*} (c_{k+1}, η_{k+1}) \left( x_0^{n_{k+1}+1} [d_{i_{k+1}} ⧢ [η_k ◦ d − η_k ◦ e]] + x_0^{n_{k+1}+1} [(d_{i_{k+1}} − e_{i_{k+1}}) ⧢ [η_k ◦ e]] \right),

using the fact that the shuffle product distributes over addition. Next, applying the identity (5) and the inequality (8) with c_k = η_k, it follows that

ord(c_{k+1} ◦ d − c_{k+1} ◦ e) ≥ \min \left\{ \inf_{η_{k+1} ∈ supp(c_{k+1})} n_{k+1} + 1 + ord(d) + k + ord(d − e), \; \inf_{η_{k+1} ∈ supp(c_{k+1})} n_{k+1} + 1 + ord(d − e) + |η_k| + k \, ord(e) \right\}
≥ k + 1 + ord(d − e),

and thus dist(c_{k+1} ◦ d, c_{k+1} ◦ e) ≤ σ^{k+1} dist(d, e). Hence, dist(c_k ◦ d, c_k ◦ e) ≤ σ^k dist(d, e) holds for any k ≥ 0.

Applying the above lemma leads to the following result.


Lemma 2.5. If c ∈ R^m⟨⟨X⟩⟩, then for any series c_0 ∈ R^m⟨⟨X_0⟩⟩,

(9)    dist((c_0 + c) ◦ d, (c_0 + c) ◦ e) = dist(c ◦ d, c ◦ e)  ∀ d, e ∈ R^m⟨⟨X⟩⟩.

(Here X_0 denotes the single letter alphabet {x_0}.) If c is homogeneous of order l ≥ 1, then

(10)    dist(c ◦ d, c ◦ e) ≤ σ^l dist(d, e)  ∀ d, e ∈ R^m⟨⟨X⟩⟩.

Proof. The equality is proved first. Since the ultrametric dist is shift-invariant, observe that

dist((c_0 + c) ◦ d, (c_0 + c) ◦ e) = dist(c_0 ◦ d + c ◦ d, c_0 ◦ e + c ◦ e) = dist(c_0 + c ◦ d, c_0 + c ◦ e) = dist(c ◦ d, c ◦ e).

The inequality is proved next by first selecting any fixed l ≥ 1 and showing inductively that it holds for any partial sum \sum_{i=l}^{l+k} c_i, where k ≥ 0. When k = 0, Lemma 2.4 implies that dist(c_l ◦ d, c_l ◦ e) ≤ σ^l dist(d, e). If the result is true for partial sums up to any fixed k ≥ 0, then using the ultrametric property

dist(d, e) ≤ max{dist(d, f), dist(f, e)}  ∀ d, e, f ∈ R^m⟨⟨X⟩⟩,

it follows that

dist\left( \sum_{i=l}^{l+k+1} c_i ◦ d, \sum_{i=l}^{l+k+1} c_i ◦ e \right)
= dist\left( \sum_{i=l}^{l+k} c_i ◦ d + c_{l+k+1} ◦ d, \sum_{i=l}^{l+k} c_i ◦ e + c_{l+k+1} ◦ e \right)
≤ \max \left\{ dist\left( \sum_{i=l}^{l+k} c_i ◦ d + c_{l+k+1} ◦ d, \sum_{i=l}^{l+k} c_i ◦ d + c_{l+k+1} ◦ e \right), \; dist\left( \sum_{i=l}^{l+k} c_i ◦ d + c_{l+k+1} ◦ e, \sum_{i=l}^{l+k} c_i ◦ e + c_{l+k+1} ◦ e \right) \right\}
= \max \left\{ dist(c_{l+k+1} ◦ d, c_{l+k+1} ◦ e), \; dist\left( \sum_{i=l}^{l+k} c_i ◦ d, \sum_{i=l}^{l+k} c_i ◦ e \right) \right\}
≤ \max \left\{ σ^{l+k+1} dist(d, e), \; σ^l dist(d, e) \right\}
≤ σ^l dist(d, e).

Hence, the result holds for all k ≥ 0. Inequality (10) is proved by noting that c = lim_{k→∞} \sum_{i=l}^{l+k} c_i and using the left argument continuity of the composition product, proved in Theorem 2.3, and the continuity of the ultrametric.

The main result regarding contractive mappings is below.

Theorem 2.6. For any c ∈ R^m⟨⟨X⟩⟩, the mapping d ↦ c ◦ d is a contraction on R^m⟨⟨X⟩⟩.


Proof. Choose any series d, e ∈ R^m⟨⟨X⟩⟩. If c is homogeneous of order l ≥ 1, then the result follows directly from (10). Otherwise, observe that, via (9),

dist(c ◦ d, c ◦ e) = dist\left( \sum_{l=1}^{∞} c_l ◦ d, \sum_{l=1}^{∞} c_l ◦ e \right) ≤ σ dist(d, e).

An immediate result of this theorem is the right argument continuity of the composition product.

Theorem 2.7. Let {d_i}_{i≥1} be a sequence in R^m⟨⟨X⟩⟩ with lim_{i→∞} d_i = d. Then lim_{i→∞} (c ◦ d_i) = c ◦ d for all c ∈ R^m⟨⟨X⟩⟩.

Proof. Trivially,

lim_{i→∞} dist(c ◦ d_i, c ◦ d) ≤ σ lim_{i→∞} dist(d_i, d) = 0.
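The contraction property can be illustrated numerically. In the sketch below (dict encoding and truncation at word length N are assumptions of the illustration, with m = 1 over {x_0, x_1}), the series c = ∅ + x_1 gives c ◦ d = ∅ + x_0 d by the definition of the composition product, so successive iterates of d ↦ c ◦ d should agree to one more order per step, i.e., ord(d_{n+1} − d_n) = n, until the truncated fixed point \sum_{n} x_0^n is reached:

```python
def shuffle(eta, xi):
    """Word shuffle product as a dict word -> coefficient."""
    if not eta:
        return {xi: 1}
    if not xi:
        return {eta: 1}
    out = {}
    for word, coef in shuffle(eta[1:], xi).items():
        w = (eta[0],) + word
        out[w] = out.get(w, 0) + coef
    for word, coef in shuffle(eta, xi[1:]).items():
        w = (xi[0],) + word
        out[w] = out.get(w, 0) + coef
    return out

def sh_poly(p, q):
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            for w, k in shuffle(a, b).items():
                out[w] = out.get(w, 0) + ca * cb * k
    return out

def word_comp(eta, d):
    """eta o d for m = 1 (letters 0 = x0, 1 = x1)."""
    if 1 not in eta:
        return {eta: 1}
    n = eta.index(1)
    inner = sh_poly(d, word_comp(eta[n + 1:], d))
    return {(0,) * (n + 1) + w: c for w, c in inner.items()}

def comp(c, d, N):
    """c o d truncated to words of length <= N."""
    out = {}
    for eta, coef in c.items():
        for w, k in word_comp(eta, d).items():
            if len(w) <= N:
                out[w] = out.get(w, 0) + coef * k
    return {w: v for w, v in out.items() if v != 0}

def order(p):
    """ord(p); infinity for the zero series."""
    return min((len(w) for w in p), default=float('inf'))

N = 6
c = {(): 1, (1,): 1}             # c = emptyword + x1, so c o d = 1 + x0 d
d, orders = {}, []
for _ in range(8):
    d_next = comp(c, d, N)
    diff = {w: d_next.get(w, 0) - d.get(w, 0) for w in set(d) | set(d_next)}
    orders.append(order({w: v for w, v in diff.items() if v != 0}))
    d = d_next
print(orders)
```

The printed orders increase by one per iteration, so dist(d_{n+1}, d_n) = σ^n shrinks geometrically, as Theorem 2.6 predicts; after N + 1 steps the truncated iterates coincide.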

The final property considered in this section is local convergence. If all the summands in the defining expression (6) are unity, i.e., c and d have no coefficient growth whatsoever, then the earlier combinatoric analysis shows that (c ◦ d, ν) grows at least at the rate (m + 1)^{|ν|}. Of course, in general, much faster growth rates are possible when c and d are simply locally convergent. The analysis begins by considering the local convergence of the shuffle product. It provides a point of reference and some important tools. The following theorem was proved in [28].

Theorem 2.8. Suppose c, d ∈ R_LC^ℓ⟨⟨X⟩⟩ with growth constants K_c, M_c and K_d, M_d, respectively. Then c ⧢ d ∈ R_LC^ℓ⟨⟨X⟩⟩ with

(11)    |(c ⧢ d, ν)| ≤ K_c K_d M^{|ν|} (|ν| + 1)!  ∀ ν ∈ X^*,

where M = max{M_c, M_d}. Noting that n + 1 ≤ 2^n for all n ≥ 0, (11) can be written more conventionally as

|(c ⧢ d, ν)| ≤ K_c K_d (2M)^{|ν|} |ν|!  ∀ ν ∈ X^*.

The specific goal here is to show that c ◦ d is also locally convergent when the series c and d are locally convergent, and to produce an inequality analogous to (11). The following properties of the shuffle product are essential.

Lemma 2.9 (see [28]). For c, d ∈ R⟨⟨X⟩⟩ and any ν ∈ X^*,

1. (c ⧢ d, ν) = \sum_{ξ, ξ̄ ∈ X^*} (c, ξ)(d, ξ̄)(ξ ⧢ ξ̄, ν) = \sum_{i=0}^{|ν|} \sum_{ξ ∈ X^i, ξ̄ ∈ X^{|ν|−i}} (c, ξ)(d, ξ̄)(ξ ⧢ ξ̄, ν);

2. \sum_{ξ ∈ X^i, ξ̄ ∈ X^{|ν|−i}} (ξ ⧢ ξ̄, ν) = \binom{|ν|}{i},  0 ≤ i ≤ |ν|.

Now given any η ∈ X^*, the set of right factors {η_0, η_1, ..., η_k} defined by (4) produces a corresponding family of real-valued functions:

S_{η_0}(n) = \frac{1}{|η_0|!},  n ≥ 0,
S_{η_1}(n) = \frac{1}{(n)_{n_1+1}} S_{η_0}(n),  1 ≤ |η_1| ≤ n,
S_{η_j}(n) = \frac{1}{(n)_{n_j+1}} \sum_{i=0}^{n−|η_j|} S_{η_{j−1}}(n − (n_j + 1) − i),  j ≤ |η_j| ≤ n,  2 ≤ j ≤ k,
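This recursion is straightforward to evaluate once a right-factor chain is fixed. In the sketch below, a chain is encoded by its x_0-run lengths [n_0, n_1, ..., n_k] (an encoding introduced here for illustration only), so that |η_j| = j + n_0 + ··· + n_j; the test also checks the bound established in Lemma 2.11 below for one sample chain:

```python
from math import factorial

def falling(n, i):
    """Falling factorial (n)_i = n! / (n - i)!."""
    return factorial(n) // factorial(n - i)

def S(ns, j, n):
    """S_{eta_j}(n) for the chain of right factors encoded by the x0-run
    lengths ns = [n_0, n_1, ..., n_k], so |eta_j| = j + n_0 + ... + n_j."""
    length = j + sum(ns[:j + 1])
    assert n >= length, "S_{eta_j}(n) is only defined for n >= |eta_j|"
    if j == 0:
        return 1.0 / factorial(ns[0])
    if j == 1:
        return S(ns, 0, n) / falling(n, ns[1] + 1)
    return sum(S(ns, j - 1, n - (ns[j] + 1) - i)
               for i in range(n - length + 1)) / falling(n, ns[j] + 1)

ns = [1, 0, 2]          # eta_2 = x0^2 x_{i_2} x_{i_1} x0, so |eta_2| = 5
print(S(ns, 2, 6))      # (S_{eta_1}(3) + S_{eta_1}(2)) / (6)_3 = 5/720
```

Note that the inner sum only ever calls S_{η_{j−1}} at arguments no smaller than |η_{j−1}|, which is why the recursion is well defined.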


where (n)_i = n!/(n − i)! denotes the falling factorial. The next two lemmas form the core of the local convergence proof for the composition product.

Lemma 2.10. Suppose c ∈ R_LC⟨⟨X⟩⟩ and d ∈ R_LC^m⟨⟨X⟩⟩ with growth constants K_c, M_c and K_d, M_d, respectively. Then

(12)    |(c ◦ d, ν)| ≤ K_c ψ_{|ν|}(K_d) M^{|ν|} |ν|!  ∀ ν ∈ X^*,

where M = max{M_c, M_d}, and {ψ_n(K_d)}_{n≥0} is the set of degree n polynomials in K_d,

ψ_n(K_d) = \sum_{i=0}^{n} \sum_{j=0}^{i} \sum_{η_j ∈ X^i} K_d^j S_{η_j}(n) |η_j|!,  n ≥ 0.

Proof. The proof has two main steps. It is first shown that for any integer l > 0 and any η ∈ X^* with |η| ≤ l and right factors {η_0, η_1, ..., η_k} as defined in (4),

(13)    |(η_j ◦ d, ν)| ≤ K_d^j M_d^{−|η_j|} M_d^{|ν|} |ν|! S_{η_j}(|ν|)

for all 0 ≤ j ≤ k and |η_j| ≤ |ν| ≤ l. (Note that when |ν| < |η_j|, the coefficient (η_j ◦ d, ν) = 0, and S_{η_j}(|ν|) is simply not defined.) This is shown by induction on j. The case j = 0 < l is trivial. When j = 1 ≤ l, the left-shift operator x_0^{−(n_1+1)} := (x_0^{n_1+1})^{−1} is employed, where, in general, for any ξ, ν ∈ X^*,

ξ^{−1}(ν) = ν'  if ν = ξν',  and  ξ^{−1}(ν) = 0  otherwise.

Observe the following for any ν with |η_1| ≤ |ν| ≤ l and containing the left factor x_0^{n_1+1} (otherwise the claim is trivial), writing ν' = x_0^{−(n_1+1)}(ν):

|(η_1 ◦ d, ν)| = |(x_0^{n_1+1}(d_{i_1} ⧢ x_0^{n_0}), ν)|
= |(d_{i_1} ⧢ x_0^{n_0}, ν')|
= \left| \sum_{ξ ∈ X^{|ν'|−n_0}} (d_{i_1}, ξ)(ξ ⧢ x_0^{n_0}, ν') \right|
≤ \sum_{ξ ∈ X^{|ν'|−n_0}} (K_d M_d^{|ξ|} |ξ|!) (ξ ⧢ x_0^{n_0}, ν')   (since 0 ≤ |ξ| < l)
≤ K_d M_d^{|ν'|−n_0} \binom{|ν'|}{n_0} (|ν'| − n_0)!
= K_d M_d^{−|η_1|} M_d^{|ν|} |ν|! S_{η_1}(|ν|).

Now assume that the result holds up to some fixed j, where 1 ≤ j ≤ k − 1. Then, in a similar fashion, for |η_{j+1}| ≤ |ν| ≤ l and ν' = x_0^{−(n_{j+1}+1)}(ν),

|(η_{j+1} ◦ d, ν)| = \left| \left( d_{i_{j+1}} ⧢ (η_j ◦ d), x_0^{−(n_{j+1}+1)}(ν) \right) \right|
= \left| \sum_{i=0}^{|ν'|} \sum_{ξ ∈ X^i, ξ̄ ∈ X^{|ν'|−i}} (d_{i_{j+1}}, ξ)(η_j ◦ d, ξ̄)(ξ ⧢ ξ̄, ν') \right|.


Since (η_j ◦ d, ξ̄) = 0 for |ξ̄| < |η_j|, it follows that, by using the coefficient bounds for d (because 0 ≤ |ξ| ≤ l − (j + 1)), the inductive bound (13) (since |η_j| ≤ |ξ̄| < l − (n_{j+1} + 1)), and Lemma 2.9,

|(η_{j+1} ◦ d, ν)| ≤ \sum_{i=0}^{|ν'|−|η_j|} \sum_{ξ ∈ X^i, ξ̄ ∈ X^{|ν'|−i}} (K_d M_d^{|ξ|} |ξ|!) · \left( K_d^j M_d^{−|η_j|} M_d^{|ξ̄|} |ξ̄|! S_{η_j}(|ξ̄|) \right) (ξ ⧢ ξ̄, ν')
= K_d^{j+1} M_d^{−|η_j|} M_d^{|ν'|} \sum_{i=0}^{|ν'|−|η_j|} \binom{|ν'|}{i} i! (|ν'| − i)! S_{η_j}(|ν'| − i)
= K_d^{j+1} M_d^{−|η_{j+1}|} M_d^{|ν|} |ν|! \frac{1}{(|ν|)_{n_{j+1}+1}} \sum_{i=0}^{|ν'|−|η_j|} S_{η_j}(|ν| − (n_{j+1} + 1) − i)
= K_d^{j+1} M_d^{−|η_{j+1}|} M_d^{|ν|} |ν|! S_{η_{j+1}}(|ν|).

Hence, the claim is true for all 0 ≤ j ≤ k. In the second step of the proof, the claimed upper bound on (c ◦ d, ν) is produced in terms of the polynomials ψ_n(K_d). Since η ∈ I_d(ν) only if |η| ≤ |ν|, using the inequality (13), it follows that

|(c ◦ d, ν)| = \left| \sum_{i=0}^{|ν|} \sum_{j=0}^{i} \sum_{η_j ∈ X^i} (c, η_j)(η_j ◦ d, ν) \right|
≤ \sum_{i=0}^{|ν|} \sum_{j=0}^{i} \sum_{η_j ∈ X^i} (K_c M^{|η_j|} |η_j|!) · \left( K_d^j M^{−|η_j|} M^{|ν|} |ν|! S_{η_j}(|ν|) \right)
= K_c ψ_{|ν|}(K_d) M^{|ν|} |ν|!.

Lemma 2.11. For each right factor η_j, as defined in (4), of a given word η ∈ X^*, the following bounds apply:

0 < S_{η_j}(n) ≤ \frac{(α + 1)^{n−|η_j|+j}}{α^j |η_j|!}

for any α > 0 and all n ≥ |η_j|.

Proof. The proof is again by induction. The j = 0 case is trivial. When j = 1, observe that

S_{η_1}(n) = \frac{1}{(n)_{n_1+1} |η_0|!} ≤ \frac{1}{(|η_1|)_{n_1+1} |η_0|!} = \frac{1}{|η_1|!} ≤ \left( \frac{α + 1}{α} \right) \frac{(α + 1)^{n−|η_1|}}{|η_1|!},  n ≥ |η_1|.

Now suppose the lemma is true up to some fixed j ≥ 1. Then

S_{η_{j+1}}(n) = \frac{1}{(n)_{n_{j+1}+1}} \sum_{i=0}^{n−|η_{j+1}|} S_{η_j}(n − (n_{j+1} + 1) − i)

≤ \frac{1}{(n)_{n_{j+1}+1}} \sum_{i=0}^{n−|η_{j+1}|} \frac{(α + 1)^{(n−(n_{j+1}+1)−i)−|η_j|+j}}{α^j |η_j|!}
≤ \frac{(α + 1)^j}{α^j |η_{j+1}|!} \sum_{i=0}^{n−|η_{j+1}|} (α + 1)^{n−|η_{j+1}|−i},  n ≥ |η_{j+1}|,
≤ \frac{(α + 1)^{n−|η_{j+1}|+j+1}}{α^{j+1} |η_{j+1}|!}.

So the result holds for all j ≥ 0.

The main local convergence theorem for the composition product follows.

Theorem 2.12. Suppose c ∈ R_LC⟨⟨X⟩⟩ and d ∈ R_LC^m⟨⟨X⟩⟩ with growth constants K_c, M_c and K_d, M_d, respectively. Then c ◦ d ∈ R_LC⟨⟨X⟩⟩ with

|(c ◦ d, ν)| ≤ K_c ((φ(mK_d) + 1)M)^{|ν|} (|ν| + 1)!  ∀ ν ∈ X^*,

where φ(x) := x/2 + \sqrt{x^2/4 + x} and M = max{M_c, M_d}.

Proof. In light of Lemma 2.10, the goal is to show that ψ_n(K_d) ≤ (φ(mK_d) + 1)^n (n + 1) for all n ≥ 0. Observe that applying Lemma 2.11 gives, for any α > 0,

ψ_n(K_d) ≤ \sum_{i=0}^{n} \sum_{j=0}^{i} \sum_{η_j ∈ X^i} K_d^j \frac{(α + 1)^{n−i+j}}{α^j}
= (α + 1)^n \sum_{i=0}^{n} \sum_{j=0}^{i} \binom{i}{j} \left( \frac{mK_d}{α} \right)^j \left( \frac{1}{α + 1} \right)^{i−j}
= (α + 1)^n \sum_{i=0}^{n} β^i,

where β := mK_d/α + 1/(α + 1). Setting β = 1 corresponds to letting α = φ(mK_d), and the theorem is proved. (Note that φ(1) = φ_g := (1 + \sqrt{5})/2, the golden ratio, and φ(mK_d) ≈ mK_d when mK_d ≫ 1.)

Example 2.13. In some cases, the coefficient bounds given in Theorem 2.12 are conservative; i.e., smaller growth constants might be produced by exploiting particular features of the series under consideration. For example, given linear series c = \sum_{n≥0} (c, x_0^n x_1) x_0^n x_1 and d = \sum_{n≥0} (d, x_0^n x_1) x_0^n x_1 in R_LC⟨⟨X⟩⟩ with X = {x_0, x_1}, it can be shown directly, by writing the composition product as a convolution sum and using the fact that \sum_{k=0}^{n−1} \binom{n}{k}^{−1} < 3 for any n ≥ 0, that

|(c ◦ d, ν)| < K_c K_d M^{|ν|} |ν|!  ∀ ν ∈ X^*.

3. The nonrecursive connections. In this section the generating series are produced for the three nonrecursive interconnections, and their local convergence is characterized.

Theorem 3.1. If c, d ∈ R_LC^ℓ⟨⟨X⟩⟩, then each nonrecursive interconnected input-output system shown in Figure 1.1(a)–(c) has a Fliess operator representation generated by a locally convergent series as indicated:
1. F_c + F_d = F_{c+d};
2. F_c · F_d = F_{c ⧢ d};
3. F_c ◦ F_d = F_{c◦d}, where ℓ = m.


Proof. 1. Observe that

F_c[u](t) + F_d[u](t) = \sum_{η ∈ X^*} [(c, η) + (d, η)] E_η[u](t, t_0) = F_{c+d}[u](t).

Since c and d are locally convergent, define M = max{M_c, M_d}. Then it follows that

|(c + d, η)| = |(c, η) + (d, η)| ≤ (K_c + K_d) M^{|η|} |η|!  ∀ η ∈ X^*,

or c + d is locally convergent.

2. In light of the componentwise definition of the product interconnection and the shuffle product, it can be assumed without loss of generality that ℓ = 1. Therefore,

F_c[u](t) F_d[u](t) = \sum_{η ∈ X^*} (c, η) E_η[u](t, t_0) \sum_{ξ ∈ X^*} (d, ξ) E_ξ[u](t, t_0)
= \sum_{η, ξ ∈ X^*} (c, η)(d, ξ) E_{η ⧢ ξ}[u](t, t_0)
= F_{c ⧢ d}[u](t).

Local convergence of c ⧢ d is provided by Theorem 2.8.

3. It is first shown by induction that F_η ◦ F_d = F_{η◦d} for any η ∈ X^* and d ∈ R_LC^m⟨⟨X⟩⟩. Choose any η ∈ X^*, and let {η_0, η_1, ..., η_k} be the corresponding set of right factors defined in (4). Clearly,

(F_{η_0} ◦ F_d[u])(t) = E_{η_0}[u](t, t_0) = F_{η_0}[u](t) = F_{η_0 ◦ d}[u](t).

Now assume that (F_{η_j} ◦ F_d[u])(t) = F_{η_j ◦ d}[u](t) holds up to some fixed factor η_j. Then

(F_{η_{j+1}} ◦ F_d[u])(t) = E_{x_0^{n_{j+1}} x_{i_{j+1}} η_j}[F_d[u]](t, t_0)
= \int_{t_0}^{t} ··· \int_{t_0}^{τ_2} F_{d_{i_{j+1}}}[u](τ_1) E_{η_j}[F_d[u]](τ_1, t_0) \, dτ_1 ··· dτ_{n_{j+1}+1}   (n_{j+1} + 1 integrals)
= \int_{t_0}^{t} ··· \int_{t_0}^{τ_2} F_{d_{i_{j+1}} ⧢ (η_j ◦ d)}[u](τ_1) \, dτ_1 ··· dτ_{n_{j+1}+1}
= F_{x_0^{n_{j+1}+1}[d_{i_{j+1}} ⧢ (η_j ◦ d)]}[u](t)
= F_{η_{j+1} ◦ d}[u](t).

Thus, the claim holds for η = η_{j+1} and, by induction, for η = η_k. Finally,

(F_c ◦ F_d[u])(t) = \sum_{η ∈ X^*} (c, η) E_η[F_d[u]](t, t_0) = \sum_{η ∈ X^*} (c, η) F_{η◦d}[u](t)
= \sum_{η ∈ X^*} (c, η) \left[ \sum_{ν ∈ X^*} (η ◦ d, ν) E_ν[u](t, t_0) \right]
= \sum_{ν ∈ X^*} \left[ \sum_{η ∈ X^*} (c, η)(η ◦ d, ν) \right] E_ν[u](t, t_0)
= \sum_{ν ∈ X^*} (c ◦ d, ν) E_ν[u](t, t_0)
= F_{c◦d}[u](t).

Local convergence of c ◦ d was proved in Theorem 2.12.
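Part 2 of the theorem can be sanity-checked numerically for the constant input u ≡ 1 on [0, T], for which every iterated integral collapses to E_η[u](t, 0) = t^{|η|}/|η|!, so F_c[u](t) becomes an ordinary Taylor series in t. The polynomials c and d below are arbitrary illustration values, and the dict encoding is an assumption of this sketch:

```python
from math import factorial

def shuffle(eta, xi):
    """Word shuffle product as a dict word -> coefficient."""
    if not eta:
        return {xi: 1}
    if not xi:
        return {eta: 1}
    out = {}
    for word, coef in shuffle(eta[1:], xi).items():
        w = (eta[0],) + word
        out[w] = out.get(w, 0) + coef
    for word, coef in shuffle(eta, xi[1:]).items():
        w = (xi[0],) + word
        out[w] = out.get(w, 0) + coef
    return out

def sh_series(c, d):
    """Shuffle product of two polynomial series."""
    out = {}
    for a, ca in c.items():
        for b, cb in d.items():
            for w, k in shuffle(a, b).items():
                out[w] = out.get(w, 0) + ca * cb * k
    return out

def F(c, deg):
    """Taylor coefficients in t of F_c[u](t) for u == 1, where
    E_eta[u](t, 0) = t^{|eta|} / |eta|!."""
    coeffs = [0.0] * (deg + 1)
    for w, v in c.items():
        coeffs[len(w)] += v / factorial(len(w))
    return coeffs

c = {(): 2, (1,): 3}             # F_c[1](t) = 2 + 3 t
d = {(0,): 1, (1, 1): 1}         # F_d[1](t) = t + t^2/2

lhs = [sum(F(c, 3)[i] * F(d, 3)[n - i] for i in range(n + 1))
       for n in range(4)]        # Cauchy product of the two Taylor series
rhs = F(sh_series(c, d), 3)      # Taylor series of F_{c sh d}
print(lhs, rhs)
```

Both sides come out to the same polynomial 2t + 4t^2 + 1.5t^3, as F_c · F_d = F_{c ⧢ d} requires; since c and d are polynomials here, the comparison is exact rather than truncated.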

Example 3.2. Let X = {x_0, x_1}, c = \sum_{k≥0} K_c M_c^k k! \, x_1^k, and d = \sum_{k≥0} K_d M_d^k k! \, x_1^k, where K_c, M_c > 0 and K_d, M_d > 0 are arbitrary growth constants. It is easily verified that the state space systems

ż_c = M_c z_c^2 u_c,  z_c(0) = 1,  y_c = K_c z_c,
ż_d = M_d z_d^2 u_d,  z_d(0) = 1,  y_d = K_d z_d,

realize the operators F_c : u_c ↦ y_c and F_d : u_d ↦ y_d, respectively, for sufficiently small inputs and intervals of time. Letting z = [z_c z_d]^T, it follows directly that F_{c◦d} is realized by

(14)    ż = f(z) + g(z)u,  z(0) = [1 1]^T,
(15)    y = h(z),

where

f(z) = [K_d M_c z_c^2 z_d ; 0],  g(z) = [0 ; M_d z_d^2],  h(z) = K_c z_c.

The first few coefficients of c, d, and c ◦ d are given in Table 3.1 along with the upper bounds on the coefficients of c ◦ d predicted by Theorem 2.12. Since these upper bounds hold for any series c and d with the given growth constants, they can be

Table 3.1
Some coefficients (c, ν), (d, ν), (c ◦ d, ν) and upper bounds for (c ◦ d, ν) in Example 3.2.

ν         | (c, ν)        | (d, ν)        | (c ◦ d, ν)               | Upper bound for (c ◦ d, ν)
∅         | K_c           | K_d           | K_c                      | K_c
x_0       | 0             | 0             | K_c (K_d M_c)            | K_c ((φ(K_d)+1)M) 2!
x_1       | K_c M_c       | K_d M_d       | 0                        | K_c ((φ(K_d)+1)M) 2!
x_0^2     | 0             | 0             | K_c (K_d M_c)^2 2!       | K_c ((φ(K_d)+1)M)^2 3!
x_0 x_1   | 0             | 0             | K_c (K_d M_c) M_d        | K_c ((φ(K_d)+1)M)^2 3!
x_1 x_0   | 0             | 0             | 0                        | K_c ((φ(K_d)+1)M)^2 3!
x_1^2     | K_c M_c^2 2!  | K_d M_d^2 2!  | 0                        | K_c ((φ(K_d)+1)M)^2 3!
x_0^3     | 0             | 0             | K_c (K_d M_c)^3 3!       | K_c ((φ(K_d)+1)M)^3 4!
x_0^2 x_1 | 0             | 0             | K_c (K_d M_c)^2 M_d 2^2  | K_c ((φ(K_d)+1)M)^3 4!
x_0 x_1 x_0 | 0           | 0             | K_c (K_d M_c)^2 M_d 2    | K_c ((φ(K_d)+1)M)^3 4!
x_0 x_1^2 | 0             | 0             | K_c (K_d M_c) M_d^2 2    | K_c ((φ(K_d)+1)M)^3 4!
x_1 x_0^2 | 0             | 0             | 0                        | K_c ((φ(K_d)+1)M)^3 4!
x_1 x_0 x_1 | 0           | 0             | 0                        | K_c ((φ(K_d)+1)M)^3 4!
x_1^2 x_0 | 0             | 0             | 0                        | K_c ((φ(K_d)+1)M)^3 4!
x_1^3     | K_c M_c^3 3!  | K_d M_d^3 3!  | 0                        | K_c ((φ(K_d)+1)M)^3 4!

Table 3.2
T_max and t_esc for specific examples of c ◦ d with ū = 1.

Case | K_c | M_c | K_d | M_d | M_{c◦d} | T_max   | t_esc  | t_esc/T_max
1    | 4   | 2   | 2   | 2   | 7.46    | 0.03349 | 0.1967 | 5.873
2    | 2   | 4   | 2   | 2   | 14.93   | 0.01675 | 0.1105 | 6.598
3    | 2   | 2   | 4   | 2   | 11.66   | 0.02145 | 0.1105 | 5.152
4    | 2   | 2   | 2   | 4   | 14.93   | 0.01675 | 0.1580 | 9.435
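The entries of Table 3.2 can be reproduced directly. Here M_{c◦d} and T_max follow the formulas in the text below the table; the closed form for t_esc is a short derivation from (14)–(15) with u = 1 (an addition of this sketch, not taken from the paper): z_d(t) = 1/(1 − M_d t), and integrating ż_c = K_d M_c z_c^2 z_d gives 1/z_c(t) = 1 + (K_d M_c / M_d) ln(1 − M_d t), which reaches zero at the escape time below.

```python
from math import exp, sqrt

def phi(x):
    """phi(x) = x/2 + sqrt(x^2/4 + x), from Theorem 2.12."""
    return x / 2 + sqrt(x * x / 4 + x)

def row(Mc, Kd, Md):
    M_cd = (phi(Kd) + 1) * max(Mc, Md)   # growth constant for c o d, m = 1
    Tmax = 1 / (4 * M_cd)                # (16) with m = 1 and |u| = 1
    # closed-form finite escape time of (14)-(15) with u = 1 (see lead-in)
    tesc = (1 - exp(-Md / (Kd * Mc))) / Md
    return M_cd, Tmax, tesc

# (Mc, Kd, Md) for Cases 1-4 of Table 3.2; Kc does not enter either quantity
for params in [(2, 2, 2), (4, 2, 2), (2, 4, 2), (2, 2, 4)]:
    print(row(*params))
```

Note that K_c appears in neither T_max nor t_esc, which is consistent with Cases 1 and 2 of the table differing only through M_c.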

[Figure 3.1 plots the output F_{c◦d}[u] as a function of t on [0, 0.2] for Cases 1–4 of Table 3.2.]

Fig. 3.1. The output of F_{c◦d}[u] when u(t) = ū = 1 for Cases 1–4 of Table 3.2.

conservative in specific cases. In [13] it is shown that given any series c ∈ R_LC⟨⟨X⟩⟩, where X = {x_0, x_1, ..., x_m} and |(c, ν)| ≤ K_c M_c^{|ν|} |ν|! for all ν ∈ X^*, if

max{||u||_1, T} ≤ \frac{1}{(m + 1)^2 M_c},

then F_c[u] converges absolutely and uniformly on [0, T]. The result still holds if one has the slightly more generous growth condition |(c, ν)| ≤ K_c M_c^{|ν|} (|ν| + 1)!. For a constant input u(t) = ū, where |ū| ≥ 1, define

(16)    T_max = \frac{1}{(m + 1)^2 M_c |ū|}.

Then it follows from Theorem 2.12 that when m = 1, F_{c◦d}[ū] will always be well defined on at least the interval [0, T_max), where

T_max = \frac{1}{4 M_{c◦d} |ū|}

and M_{c◦d} = (φ(K_d) + 1) max{M_c, M_d}. Four specific cases are described in Table 3.2. Here each T_max is compared against the finite escape time, t_esc, of the state space system (14)–(15) with u(t) = ū = 1, which is determined numerically (see Figure 3.1). In each case, the value of T_max < t_esc, but, as expected, T_max is conservative since the coefficient upper bounds for c ◦ d are conservative.

Example 3.3. The composition product provides an alternative interpretation of the symbolic calculus of Fliess [8, 10, 19]. Specifically, consider an input-output system represented by F_c with c ∈ R_LC^ℓ⟨⟨X⟩⟩. Any input u which is analytic at t = t_0 can be represented near t_0 by a series c_u ∈ R_LC^m⟨⟨X_0⟩⟩, i.e., u = F_{c_u}[v] for


some locally convergent series c_u = Σ_{k≥0} (c_u, x_0^k) x_0^k and all v ∈ B_p^m(R)[t_0, t_0+T]. In effect, c_u is the formal Laplace–Borel transform of the input u. (See [20] for more analysis of this example using the formal Laplace–Borel transform.) The analyticity of y = F_c[u] follows from [28, Lemma 2.3.8], and therefore the formal Laplace–Borel transform of y, namely c_y, can be related to c and c_u via

    F_{c_y}[v] = y = F_c[F_{c_u}[v]] = F_{c ∘ c_u}[v].

From [28, Corollary 2.2.4], it follows directly that c_y = c ∘ c_u.
This last example motivates the following definition.
Definition 3.4. A series c ∈ R⟨⟨X⟩⟩ is input-output locally convergent if for every c_u ∈ R^m_LC⟨⟨X_0⟩⟩ it follows that c ∘ c_u ∈ R_LC⟨⟨X_0⟩⟩.
It is immediate that every locally convergent series is input-output locally convergent, but at present the converse is known to hold only in certain special cases.
Lemma 3.5. Let c ∈ R⟨⟨X⟩⟩ be an input-output locally convergent series with nonnegative coefficients. Then c is locally convergent.
Proof. Set c_u = 𝟙 and let K, M be the growth constants for the series c ∘ 𝟙. Then from Lemma 2.2, property 4,

    |(c ∘ 𝟙, x_0^n)| = max_i Σ_{η ∈ X^n} (c_i, η) ≤ K M^n n!  ∀ n ≥ 0.

Thus, |(c, η)| = max_i (c_i, η) ≤ K M^n n! for all n ≥ 0.
Lemma 3.6. Let c ∈ R⟨⟨X⟩⟩ be an input-output locally convergent linear series of the form c = Σ_{j≥0} (c, x_0^j x_{i_j}) x_0^j x_{i_j}, where i_j ∈ {1, 2, ..., m} for all j ≥ 0. Then c is locally convergent.
Proof. Again set c_u = 𝟙 and let K, M be the growth constants for the series c ∘ 𝟙. Then

    |(c ∘ 𝟙, x_0^n)| = max_i |(c_i, x_0^{n-1} x_{i_n})| ≤ K M^n n!  ∀ n ≥ 0,

and the conclusion follows.
4. The feedback connection. Given any c, d ∈ R^m_LC⟨⟨X⟩⟩, the general goal of this section is to determine when there exists a y which satisfies the feedback equation (1) and, in particular, when there exists a generating series e so that y = F_e[u] for all admissible inputs u. In the latter case, the feedback equation becomes equivalent to

(17)    F_e[u] = F_c[u + F_{d∘e}[u]],

and the feedback product of c and d is defined by c@d = e. It is assumed throughout that m > 0; otherwise the feedback connection is degenerate. An initial obstacle in this analysis is that F_e is required to be the composition of two operators, F_c and I + F_{d∘e}, where the second operator is never realizable by a Fliess operator due to the direct feed term I. This does not prevent the composition from being a Fliess operator, but to compensate for the presence of this term a modified composition product is needed. Specifically, for any η ∈ X* and d ∈ R^m⟨⟨X⟩⟩, define the modified composition product by

    η ∘̃ d = η                                                      if |η|_{x_i} = 0 for all i ≠ 0,
    η ∘̃ d = x_0^n x_i (η' ∘̃ d) + x_0^{n+1} [d_i ⧢ (η' ∘̃ d)]        if η = x_0^n x_i η', n ≥ 0, i ≠ 0.


For c ∈ R⟨⟨X⟩⟩ and d ∈ R^m⟨⟨X⟩⟩, the definition is extended as

    c ∘̃ d = Σ_{η ∈ X*} (c, η) η ∘̃ d.

It can be verified in a manner completely analogous to the original composition product that the modified composition product is always well defined (summable), continuous in both arguments, and locally convergent when both c and d are. In particular, the following theorems are central to the analysis in this section.
Theorem 4.1. For any c ∈ R_LC⟨⟨X⟩⟩ and d ∈ R^m_LC⟨⟨X⟩⟩, it follows that F_{c ∘̃ d}[u] = F_c[u + F_d[u]] for all admissible u.
Proof. The result is verified simply by inserting the direct feed term into the proof of Theorem 3.1, part 3.
Theorem 4.2. For any c ∈ R^m⟨⟨X⟩⟩, the mapping d ↦ c ∘̃ d is a contraction on R^m⟨⟨X⟩⟩.
Proof. This is also a minor variation of previous results concerning the composition product, in particular, Lemma 2.4, Lemma 2.5, and Theorem 2.6. The contraction coefficient, σ, is unaffected by the required modifications.
The first main result of this section is given next.
Theorem 4.3. Let c, d be fixed series in R^m⟨⟨X⟩⟩. Then the following propositions hold:
1. The mapping

(18)    S : R^m⟨⟨X⟩⟩ → R^m⟨⟨X⟩⟩ : e_i ↦ e_{i+1} = c ∘̃ (d ∘ e_i)

has a unique fixed point in R^m⟨⟨X⟩⟩, c@d = lim_{i→∞} e_i, which is independent of e_0.
2. If c, d, and c@d are locally convergent, then F_{c@d} satisfies the feedback equation (17).
Proof. 1. The mapping S is a contraction since, by Theorems 2.6 and 4.2,

    dist(S(e_i), S(e_j)) ≤ σ dist(d ∘ e_i, d ∘ e_j) ≤ σ^2 dist(e_i, e_j).

Therefore, the mapping S has a unique fixed point, c@d, that is independent of e_0, i.e.,

(19)    c@d = c ∘̃ (d ∘ (c@d)).

2. From the stated assumptions concerning c, d, and c@d, it follows that

    F_{c@d}[u] = F_{c ∘̃ (d ∘ (c@d))}[u] = F_c[u + F_d[F_{c@d}[u]]]

for any admissible u.
The obvious question is whether c@d is always locally convergent, or at least input-output locally convergent, when both c and d are locally convergent. The local convergence of c and d guarantees that the feedback system in Figure 1.1(d) is at least well-posed in the sense described in [1, 27] since F_c and F_d are well-defined causal analytic operators. That is, there exist sufficiently small R, S, T > 0 such that


for any u ∈ B_p^m(R)[t_0, t_0+T], there exists a y ∈ B_q^m(S)[t_0, t_0+T] which satisfies the feedback equation (1). But whether y = F_{c@d}[u] on some ball of input functions of nonzero radius over a nonzero interval of time is not immediate. The following example shows that R^m_LC⟨⟨X⟩⟩ is not a closed subset of R^m⟨⟨X⟩⟩ in the ultrametric topology.
Example 4.4. Let X = {x_0, x_1} and consider the following sequence of polynomials in R^m_LC⟨⟨X⟩⟩:

    e_i = x_1 + 2^2 2! x_1^2 + 3^3 3! x_1^3 + ··· + i^i i! x_1^i,  i ≥ 1.

Clearly, e = lim_{i→∞} e_i is not locally convergent.
A central issue is whether such an example can be produced by repeated compositions of a locally convergent series. It will first be shown that the answer to this question is no. Then the more general case described by (18) is examined. This leads to the main conclusion that the feedback product of two locally convergent series is always input-output locally convergent.
Observe first that if e = c ∘ e, then e must have the form e = Σ_{n≥0} (e, x_0^n) x_0^n. Furthermore, since e appears on both sides of the expression e = c ∘ e, it is possible by repeated substitution to express each coefficient (e, x_0^n) in terms of the coefficients {(c, ν) : |ν| ≤ n}. For example, if X = {x_0, x_1}, the first few coefficients of e are

    (e, ∅)     = (c, ∅),
    (e, x_0)   = (c, x_0) + (c, ∅)(c, x_1),
    (e, x_0^2) = (c, x_0^2) + (c, x_0)(c, x_1) + (c, ∅)(c, x_1)^2 + (c, ∅)(c, x_0 x_1) + (c, ∅)(c, x_1 x_0)
                 + (c, ∅)^2 (c, x_1^2),
    (e, x_0^3) = (c, x_0^2)(c, x_1) + (c, x_0)(c, x_1)^2 + (c, ∅)(c, x_1)^3 + (c, ∅)(c, x_1)(c, x_0 x_1)
                 + (c, ∅)(c, x_1)(c, x_1 x_0) + (c, ∅)^2 (c, x_1)(c, x_1^2) + (c, x_0)(c, x_0 x_1)
                 + (c, ∅)(c, x_1)(c, x_0 x_1) + 2(c, x_0)(c, x_1 x_0) + 2(c, ∅)(c, x_1)(c, x_1 x_0)
                 + 3(c, ∅)(c, x_0)(c, x_1^2) + 3(c, ∅)^2 (c, x_1)(c, x_1^2) + (c, x_0^3) + (c, ∅)(c, x_0^2 x_1)
                 + (c, ∅)(c, x_0 x_1 x_0) + (c, ∅)^2 (c, x_0 x_1^2) + (c, ∅)(c, x_1 x_0^2) + (c, ∅)^2 (c, x_1 x_0 x_1)
                 + (c, ∅)^2 (c, x_1^2 x_0) + (c, ∅)^3 (c, x_1^3),
    ⋮

If c is locally convergent with growth constants K_c, M_c, then

    |(e, ∅)|     ≤ K_c,
    |(e, x_0)|   ≤ K_c (K_c + 1) M_c,
    |(e, x_0^2)| ≤ K_c ( (3/2) K_c^2 + (5/2) K_c + 1 ) M_c^2 2!,
    |(e, x_0^3)| ≤ K_c ( (5/2) K_c^3 + (35/6) K_c^2 + (13/3) K_c + 1 ) M_c^3 3!,
    ⋮

This suggests that a variation of inequality (12) is possible, namely, that

    |(e, x_0^n)| ≤ K_c ψ̃_n(K_c) M_c^n n!  ∀ n ≥ 0,
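Each of the bounds above can be checked mechanically by replacing every coefficient |(c, ν)| in the corresponding expansion of (e, x_0^n) with its worst case K_c M_c^{|ν|} |ν|! and summing term by term. The sketch below (illustrative only, not part of the paper) does this in exact arithmetic for n = 2 and n = 3.

```python
# Worst-case check: substitute |(c, nu)| <= Kc * Mc^|nu| * |nu|! into the
# expansions of (e, x0^2) and (e, x0^3) and compare with the stated bounds.
from fractions import Fraction as F

for Kc in (F(1), F(2), F(7, 2)):
    for Mc in (F(1), F(3), F(1, 2)):
        b0, b1 = Kc, Kc * Mc                       # worst case, |nu| = 0, 1
        b2, b3 = 2 * Kc * Mc**2, 6 * Kc * Mc**3    # worst case, |nu| = 2, 3

        # the six terms of (e, x0^2), in the order listed above
        e2 = b2 + b1 * b1 + b0 * b1**2 + b0 * b2 + b0 * b2 + b0**2 * b2
        assert e2 == Kc * (F(3, 2) * Kc**2 + F(5, 2) * Kc + 1) * Mc**2 * 2

        # the twenty terms of (e, x0^3), in the order listed above
        e3 = (b2 * b1 + b1**3 + b0 * b1**3 + b0 * b1 * b2 + b0 * b1 * b2
              + b0**2 * b1 * b2 + b1 * b2 + b0 * b1 * b2 + 2 * b1 * b2
              + 2 * b0 * b1 * b2 + 3 * b0 * b1 * b2 + 3 * b0**2 * b1 * b2
              + b3 + b0 * b3 + b0 * b3 + b0**2 * b3 + b0 * b3
              + b0**2 * b3 + b0**2 * b3 + b0**3 * b3)
        assert e3 == Kc * (F(5, 2) * Kc**3 + F(35, 6) * Kc**2
                           + F(13, 3) * Kc + 1) * Mc**3 * 6

print("bounds verified")
```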

Table 4.1
The first few polynomials S̃_{η_j}(K_c, n) and ψ̃_n(K_c) when m = 1.

n = 0:
  η = ∅:           S̃_∅(K_c, 0) = 1
  ψ̃_0(K_c) = 1

n = 1:
  η = x_0:         S̃_{x_0}(K_c, 1) = 1
  η = x_1:         S̃_∅(K_c, 1) = 1, S̃_{x_1}(K_c, 1) = 1
  ψ̃_1(K_c) = K_c + 2

n = 2:
  η = x_0^2:       S̃_{x_0^2}(K_c, 2) = 1/2
  η = x_0 x_1:     S̃_∅(K_c, 2) = 1, S̃_{x_0 x_1}(K_c, 2) = 1/2
  η = x_1 x_0:     S̃_{x_0}(K_c, 2) = 1, S̃_{x_1 x_0}(K_c, 2) = 1/2
  η = x_1^2:       S̃_∅(K_c, 2) = 1, S̃_{x_1}(K_c, 2) = (1/2)K_c + 1, S̃_{x_1^2}(K_c, 2) = 1/2
  ψ̃_2(K_c) = (3/2)K_c^2 + 3K_c + 3

n = 3:
  η = x_0^3:       S̃_{x_0^3}(K_c, 3) = 1/6
  η = x_0^2 x_1:   S̃_∅(K_c, 3) = 1, S̃_{x_0^2 x_1}(K_c, 3) = 1/6
  η = x_0 x_1 x_0: S̃_{x_0}(K_c, 3) = 1, S̃_{x_0 x_1 x_0}(K_c, 3) = 1/6
  η = x_0 x_1^2:   S̃_∅(K_c, 3) = 1, S̃_{x_1}(K_c, 3) = (1/2)K_c^2 + K_c + 1, S̃_{x_0 x_1^2}(K_c, 3) = 1/6
  η = x_1 x_0^2:   S̃_{x_0^2}(K_c, 3) = 1/2, S̃_{x_1 x_0^2}(K_c, 3) = 1/6
  η = x_1 x_0 x_1: S̃_∅(K_c, 3) = 1, S̃_{x_0 x_1}(K_c, 3) = (1/6)K_c + 1/3, S̃_{x_1 x_0 x_1}(K_c, 3) = 1/6
  η = x_1^2 x_0:   S̃_{x_0}(K_c, 3) = 1, S̃_{x_1 x_0}(K_c, 3) = (1/3)K_c + 2/3, S̃_{x_1^2 x_0}(K_c, 3) = 1/6
  η = x_1^3:       S̃_∅(K_c, 3) = 1, S̃_{x_1}(K_c, 3) = (1/2)K_c^2 + K_c + 1, S̃_{x_1^2}(K_c, 3) = (1/2)K_c + 1, S̃_{x_1^3}(K_c, 3) = 1/6
  ψ̃_3(K_c) = (5/2)K_c^3 + 7K_c^2 + 6K_c + 4

where each ψ̃_n(K_c) is a polynomial in K_c of degree n. The next lemma establishes the claim using a family of polynomials of the form

    ψ̃_n(K_c) = Σ_{i=0}^{n} Σ_{j=0}^{i} Σ_{η_j ∈ X^i} K_c^j S̃_{η_j}(K_c, n) |η_j|!,  n ≥ 0.

Given a fixed n, every word η_j in the innermost summation satisfies j ≤ |η_j| ≤ n and has a corresponding set of right factors {η_0, η_1, ..., η_j}, where η_k = x_0^{n_k} x_{i_k} η_{k-1} for k ≥ 1, and (n)_k = n(n−1)···(n−k+1). When j > 0, each polynomial S̃_{η_j}(K_c, n) is computed iteratively using its right factors and the previously computed polynomials {ψ̃_0(K_c), ψ̃_1(K_c), ..., ψ̃_{n-1}(K_c)}:

    S̃_{η_0}(K_c, n) = 1/|η_0|!,  0 ≤ |η_0| ≤ n,
    S̃_{η_1}(K_c, n) = (1/(n)_{n_1+1}) ψ̃_{n-|η_1|}(K_c) S̃_{η_0}(K_c, n),  1 ≤ |η_1| ≤ n,
    S̃_{η_2}(K_c, n) = (1/(n)_{n_2+1}) Σ_{i=0}^{n-|η_2|} ψ̃_i(K_c) S̃_{η_1}(K_c, n-(n_2+1)-i),  2 ≤ |η_2| ≤ n,
    ⋮
    S̃_{η_j}(K_c, n) = (1/(n)_{n_j+1}) Σ_{i=0}^{n-|η_j|} ψ̃_i(K_c) S̃_{η_{j-1}}(K_c, n-(n_j+1)-i),  2 ≤ j ≤ |η_j| ≤ n.

See Table 4.1 for the case where m = 1.
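These recurrences are straightforward to implement. The sketch below is not from the paper: it assumes m = 1, encodes words over {x_0, x_1} as strings of '0'/'1', takes the right-factor decomposition η = x_0^k x_1 η' as reconstructed above, and represents polynomials in K_c as dictionaries of exact fractions. It recomputes ψ̃_0, ..., ψ̃_3 and matches the values in Table 4.1.

```python
# Compute psi~_n(Kc) from the S~ recurrences (m = 1) and check Table 4.1.
from fractions import Fraction
from itertools import product
from math import factorial

def falling(n, k):
    """Falling factorial (n)_k = n(n-1)...(n-k+1)."""
    r = 1
    for i in range(k):
        r *= n - i
    return r

def chain(w):
    """Right-factor chain [eta_0, ..., eta_j], peeling eta = x0^k x1 eta'."""
    if '1' not in w:
        return [w]
    return chain(w[w.index('1') + 1:]) + [w]

def padd(p, q):
    r = dict(p)
    for k, v in q.items():
        r[k] = r.get(k, Fraction(0)) + v
    return r

def pmul(p, q):
    r = {}
    for a, u in p.items():
        for b, v in q.items():
            r[a + b] = r.get(a + b, Fraction(0)) + u * v
    return r

def S(w, n):
    """S~_w(Kc, n) as a polynomial {power: coefficient} in Kc."""
    ch = chain(w)
    j = len(ch) - 1
    if j == 0:                                   # all-x0 word: 1/|eta_0|!
        return {0: Fraction(1, factorial(len(w)))}
    nj = w.index('1')                            # x0-prefix length of eta_j
    if j == 1:
        p = pmul(psi(n - len(w)), S(ch[0], n))
    else:
        p = {}
        for i in range(n - len(w) + 1):
            p = padd(p, pmul(psi(i), S(ch[j - 1], n - (nj + 1) - i)))
    c = Fraction(1, falling(n, nj + 1))
    return {k: c * v for k, v in p.items()}

def psi(n):
    """psi~_n(Kc) = sum over words of length <= n of Kc^j S~ |eta_j|!."""
    total = {}
    for i in range(n + 1):
        for t in product('01', repeat=i):
            w = ''.join(t)
            j = w.count('1')                     # power of Kc
            for k, v in S(w, n).items():
                total[k + j] = total.get(k + j, Fraction(0)) + v * factorial(i)
    return {k: v for k, v in total.items() if v}

assert psi(0) == {0: 1}
assert psi(1) == {1: 1, 0: 2}
assert psi(2) == {2: Fraction(3, 2), 1: 3, 0: 3}
assert psi(3) == {3: Fraction(5, 2), 2: 7, 1: 6, 0: 4}
print("Table 4.1 values reproduced")
```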

Lemma 4.5. Let c ∈ R^m_LC⟨⟨X⟩⟩ with growth constants K_c, M_c, and e ∈ R^m⟨⟨X⟩⟩ such that e = c ∘ e. Then

(20)    |(e, x_0^n)| ≤ K_c ψ̃_n(K_c) M_c^n n!  ∀ n ≥ 0.

Proof. The proof has some elements in common with that of Lemma 2.10, except here it is not assumed a priori that e is locally convergent. The basic approach employs nested inductions. The outer induction is on n. It is clear from the discussion above that the claim holds when n = 0 and n = 1 for m = 1. A similar calculation can be done for arbitrary m ≥ 1. Now suppose (20) holds up to some fixed n − 1 ≥ 1. Given any η_j, where j ≤ |η_j| ≤ n, it will first be shown by induction on j (the inner induction) that

(21)    |(η_j ∘ e, x_0^n)| ≤ K_c^j M_c^{-|η_j|} M_c^n n! S̃_{η_j}(K_c, n),  0 ≤ j ≤ n.

The j = 0 case is trivial. Suppose j = 1. Then 0 ≤ n − |η_1| ≤ n − 1 and

    |(η_1 ∘ e, x_0^n)| = |( x_0^{n_1+1} (e_{i_1} ⧢ x_0^{n_0}), x_0^n )|
                       = |( e_{i_1} ⧢ x_0^{n_0}, x_0^{n-(n_1+1)} )|
                       = |( e_{i_1}, x_0^{n-|η_1|} )| \binom{n-(n_1+1)}{n-|η_1|}
                       ≤ K_c ψ̃_{n-|η_1|}(K_c) M_c^{n-|η_1|} (n-|η_1|)! \binom{n-(n_1+1)}{n-|η_1|}
                       = K_c M_c^{-|η_1|} M_c^n n! S̃_{η_1}(K_c, n).

Now assume that inequality (21) holds up to some fixed j, where 1 ≤ j ≤ n − 1. Then 0 ≤ n − |η_{j+1}| ≤ n − (j+1) and

    |(η_{j+1} ∘ e, x_0^n)| = |( e_{i_{j+1}} ⧢ (η_j ∘ e), x_0^{n-(n_{j+1}+1)} )|
                           = | Σ_{i=0}^{n-(n_{j+1}+1)} (e_{i_{j+1}}, x_0^i)(η_j ∘ e, x_0^{n-(n_{j+1}+1)-i}) \binom{n-(n_{j+1}+1)}{n-(n_{j+1}+1)-i} |.

Since (η_j ∘ e, x_0^{n-(n_{j+1}+1)-i}) = 0 when n − (n_{j+1}+1) − i < |η_j| or, equivalently, i > n − |η_{j+1}|, it follows, using the coefficient bound (20) for e (valid because 0 ≤ i ≤ n − 1) and the bound (21) for η_j ∘ e, that

    |(η_{j+1} ∘ e, x_0^n)| ≤ Σ_{i=0}^{n-|η_{j+1}|} [K_c ψ̃_i(K_c) M_c^i i!] [K_c^j M_c^{-|η_j|} M_c^{n-(n_{j+1}+1)-i} (n-(n_{j+1}+1)-i)! S̃_{η_j}(K_c, n-(n_{j+1}+1)-i)] \binom{n-(n_{j+1}+1)}{n-(n_{j+1}+1)-i}
                           = K_c^{j+1} M_c^{-|η_{j+1}|} M_c^n n! (1/(n)_{n_{j+1}+1}) Σ_{i=0}^{n-|η_{j+1}|} ψ̃_i(K_c) S̃_{η_j}(K_c, n-(n_{j+1}+1)-i)
                           = K_c^{j+1} M_c^{-|η_{j+1}|} M_c^n n! S̃_{η_{j+1}}(K_c, n).

Hence, the claim is true for all 0 ≤ j ≤ n. To complete the outer induction with respect to n, observe that

    |(e, x_0^n)| = |(c ∘ e, x_0^n)| = | Σ_{i=0}^{n} Σ_{j=0}^{i} Σ_{η_j ∈ X^i} (c, η_j)(η_j ∘ e, x_0^n) |
                 ≤ Σ_{i=0}^{n} Σ_{j=0}^{i} Σ_{η_j ∈ X^i} [K_c M_c^{|η_j|} |η_j|!] [K_c^j M_c^{-|η_j|} M_c^n n! S̃_{η_j}(K_c, n)]
                 = K_c ψ̃_n(K_c) M_c^n n!.

Therefore, inequality (20) holds for all n ≥ 0.
The next lemma provides an upper bound on the growth of the sequence ψ̃_n(K_c), n ≥ 0, when K_c is fixed.
Lemma 4.6. For any K_c ≥ 1, it follows that

ψ˜n (Kc ) ≤ φg (mKc (2 + φg ) + 1)n sn ∀n ≥ 0,

where s0 = 1/φg , and sn , n ≥ 1, is an integer sequence equivalent to the binomial transform of the sequence of Catalan numbers, Cn , n ≥ 1 (specifically, sequence A007317 in [25]). Proof. The proof has two main parts. First, it is shown by a nested induction that, for any > 0, there exists a sequence of positive real numbers, ξn ( ), such that (23)

ψ˜n (Kc ) ≤ (mKc (2 + ) + 1)n ξn ( ), n ≥ 0, Kc ≥ 1.

Then inequality (22) is produced for n ≥ 1 by setting = φg and showing that ξn (φg ) = φg sn when n ≥ 1. (n = 0 is a trivial special case.) Let > 0 and define two sequences of positive real numbers, ξn ( ) and Γn ( ), via the recurrence equations (24) (25)

ξn+1 ( ) = ξn ( ) + Γn+1 ( ), n ≥ 0, ξ0 = 1, Γ1 = 1/ , % & n  1 ξn ( ) + Γn+1 ( ) = ξi ( )Γn−i+1 ( ) , n ≥ 1. i=1

By definition, Γ_0 = 1. In light of Table 4.1, inequality (23) clearly holds when n = 0 and n = 1 for m = 1 and K_c ≥ 1. (It is easily verified to also hold when m ≥ 1.) Now suppose the inequality holds up to some fixed n − 1 ≥ 1. Given any word η_j, where j ≤ |η_j| ≤ n, an inner induction with respect to j will now show that

(26)    S̃_{η_j}(K_c, n) ≤ (m K_c (2 + ε) + 1)^{n-|η_j|} (2 + ε)^j Γ_{n-|η_j|}(ε) / |η_j|!,  0 ≤ j ≤ |η_j|

(cf. the proof of Lemma 2.11, where some of the computational details are similar). The j = 0 case is trivial. Suppose j = 1. Since n − |η_1| < n, it follows that

    S̃_{η_1}(K_c, n) = (1/((n)_{n_1+1} |η_0|!)) ψ̃_{n-|η_1|}(K_c)
                     ≤ (m K_c (2 + ε) + 1)^{n-|η_1|} ξ_{n-|η_1|}(ε) / |η_1|!
                     ≤ (m K_c (2 + ε) + 1)^{n-|η_1|} (2 + ε) Γ_{n-|η_1|}(ε) / |η_1|!,  n ≥ |η_1|.


This last inequality employs, for any j ≥ 0, the general properties that ξ_{n-|η_j|}(ε) = Γ_{n-|η_j|}(ε) when n = |η_j| and

(27)    Σ_{i=0}^{n-|η_j|} ξ_i(ε) Γ_{n-|η_j|-i}(ε) = (2 + ε) Γ_{n-|η_j|}(ε)

when n > |η_j|. Now suppose that inequality (26) holds up to some fixed j ≥ 1. Then

    S̃_{η_{j+1}}(K_c, n) = (1/(n)_{n_{j+1}+1}) Σ_{i=0}^{n-|η_{j+1}|} ψ̃_i(K_c) S̃_{η_j}(K_c, n-(n_{j+1}+1)-i)
                        ≤ (1/|η_{j+1}|!) Σ_{i=0}^{n-|η_{j+1}|} [(m K_c (2+ε)+1)^i ξ_i(ε)] [(m K_c (2+ε)+1)^{n-|η_{j+1}|-i} (2+ε)^j Γ_{n-|η_{j+1}|-i}(ε)]
                        = ((m K_c (2+ε)+1)^{n-|η_{j+1}|} (2+ε)^j / |η_{j+1}|!) Σ_{i=0}^{n-|η_{j+1}|} ξ_i(ε) Γ_{n-|η_{j+1}|-i}(ε)
                        = (m K_c (2+ε)+1)^{n-|η_{j+1}|} (2+ε)^{j+1} Γ_{n-|η_{j+1}|}(ε) / |η_{j+1}|!,  |η_j| < |η_{j+1}| ≤ n,

where again identity (27) was used to derive the final equality above. Hence, inequality (26) holds for all 0 ≤ j ≤ |η_j|. To complete the outer induction with respect to n, observe that

    ψ̃_{n+1}(K_c) = Σ_{i=0}^{n+1} Σ_{j=0}^{i} Σ_{η_j ∈ X^i} K_c^j S̃_{η_j}(K_c, n+1) |η_j|!
                 ≤ Σ_{i=0}^{n+1} Σ_{j=0}^{i} \binom{i}{j} (m K_c (2+ε))^j (m K_c (2+ε)+1)^{n+1-i} Γ_{n+1-i}(ε)
                 = (m K_c (2+ε)+1)^{n+1} Σ_{i=0}^{n+1} Γ_{n+1-i}(ε)
                 = (m K_c (2+ε)+1)^{n+1} ξ_{n+1}(ε).

Thus, inequality (23) must hold for all n ≥ 0.
Now consider setting ε = φ_g in the system of equations (24)–(25). Eliminating by substitution the sequence Γ_n(φ_g) gives the recurrence relation

    ξ_{n+1}(φ_g) = φ_g + (1/φ_g) Σ_{i=1}^{n} ξ_i(φ_g) ξ_{n-i+1}(φ_g),  n ≥ 1,  ξ_1(φ_g) = φ_g,

or, equivalently,

    ξ_{n+1}(φ_g)/φ_g = 1 + Σ_{i=1}^{n} (ξ_i(φ_g)/φ_g)(ξ_{n-i+1}(φ_g)/φ_g),  n ≥ 1,  ξ_1(φ_g)/φ_g = 1.

It is known that s_n satisfies the recurrence equation

(28)    s_{n+1} = 1 + Σ_{i=1}^{n} s_i s_{n-i+1},  n ≥ 1,  s_1 = 1
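The recurrence (28), the identification of s_n with the binomial transform of the Catalan numbers claimed in Lemma 4.6, and the second recurrence and the inequality s_{n+1} < 5 s_n used in the proof of Theorem 4.7 are all easy to confirm numerically; a small sketch (illustrative, not from the paper):

```python
# Build s_n from (28) and cross-check it against the binomial transform of
# the Catalan numbers (OEIS A007317) and the linear recurrence
# (n+2) s_{n+2} = (6n+4) s_{n+1} - 5n s_n.
from math import comb

def s_sequence(N):
    """Return [_, s_1, ..., s_N] with s_{n+1} = 1 + sum_{i=1}^n s_i s_{n-i+1}."""
    s = [0, 1]                  # 1-indexed; s[1] = 1
    for n in range(1, N):
        s.append(1 + sum(s[i] * s[n - i + 1] for i in range(1, n + 1)))
    return s

s = s_sequence(12)
print(s[1:8])                   # [1, 2, 5, 15, 51, 188, 731]

catalan = [comb(2 * k, k) // (k + 1) for k in range(12)]

for n in range(1, 12):
    # s_n equals the binomial transform of the Catalan numbers
    assert s[n] == sum(comb(n - 1, k) * catalan[k] for k in range(n))

for n in range(1, 10):
    assert (n + 2) * s[n + 2] == (6 * n + 4) * s[n + 1] - 5 * n * s[n]
    assert s[n + 1] < 5 * s[n]  # the growth estimate used for Theorem 4.7
```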


(see [25] and the references therein). Hence, the conclusion that ξ_n(φ_g) = φ_g s_n, n ≥ 1, is immediate.
The recurrence equation (28) can be derived from the well-known recurrence relation for the Catalan numbers, C_{n+1} = Σ_{i=0}^{n} C_i C_{n-i} with C_0 = 1 [4], which in turn is equivalent to Segner's recurrence formula, given in the year 1758 as a solution to Euler's polygon division problem [31]. It is also worth noting that the sequence t_n := Γ_n(φ_g)/φ_g, n ≥ 1, the increments of s_n, is sequence A002212 in [25]. The positive integer sequences C_n, s_n, and t_n each have a variety of combinatoric interpretations in graph theory and the theory of formal languages. Of particular interest to system theorists, for example, is the fact that C_n is equivalent to the number of ways to binary bracket the letters in a word of length n + 1 [31, 32]. The asymptotic behavior of s_n,

    s_n ∼ (5 · 5^n) / (8 √(π n^3))

(see [15, sequence 124]), motivates the following central result concerning local convergence.
Theorem 4.7. If c ∈ R^m_LC⟨⟨X⟩⟩ with growth constants K_c, M_c, and e = c ∘ e, then e ∈ R^m_LC⟨⟨X_0⟩⟩. Specifically, for any K_c ≥ 1,

(29)    |(e, x_0^n)| ≤ K_c ((m K_c (2 + φ_g) + 1) 5 M_c)^n n!  ∀ n ≥ 0.

Proof. The result is trivial when n = 0. When n ≥ 1, it is first necessary to show by induction that s_{n+1} < 5 s_n. The claim is clearly true when n = 1 or n = 2. Suppose it is known to hold up to some fixed integer n + 1 ≥ 2. The sequence s_n is known to satisfy another recurrence equation [15, 25]:

    (n + 2) s_{n+2} = (6n + 4) s_{n+1} − 5n s_n.

Therefore,

    s_{n+2} < [(6n + 4) s_{n+1} − n s_{n+1}]/(n + 2) < 5 s_{n+1},

which proves the claim for all n ≥ 1. Next, substituting the upper bound φ_g s_n ≤ 5^n, n ≥ 0, into (22) gives

(30)    ψ̃_n(K_c) ≤ ((m K_c (2 + φ_g) + 1) 5)^n  ∀ n ≥ 0.

The theorem is finally proved by simply applying Lemma 4.5.
In most cases the upper bound in (29) is quite conservative because the upper bound (30) is conservative. Figure 4.1 shows ψ̃_n(K_c) (generated symbolically via MAPLE) and upper bound (30) versus n for various values of K_c.
The final step of the analysis is to use Theorem 4.7 to prove the input-output local convergence of the feedback product.
Theorem 4.8. If c, d ∈ R^m_LC⟨⟨X⟩⟩, then c@d is input-output locally convergent. Specifically, when K_c ≥ 1,

    |((c@d) ∘ b, x_0^n)| ≤ K_c ([m K_c (2 + φ_g) + 1][φ(m(K_b + K_d)) + 1] 10 M)^n n!

for any b ∈ R^m_LC⟨⟨X_0⟩⟩, where M = max{M_b, M_c, M_d}.
Proof. Select any series b ∈ R^m_LC⟨⟨X_0⟩⟩. It follows from (19) that

    (c@d) ∘ b = (c ∘̃ (d ∘ (c@d))) ∘ b = c ∘ (b + d) ∘ ((c@d) ∘ b).

[Figure 4.1: three panels (K_c = 1, K_c = 5, K_c = 10) plotting log_10(ψ̃_n(K_c)) and the logarithm of the bound (30) for n = 1, ..., 15.]

Fig. 4.1. A plot of log_10(ψ̃_n(K_c)) (solid lines) and the logarithm (base 10) of the upper bound in (30) (dashed lines) versus n for various values of K_c.

Since b, c, and d are all locally convergent, so is the series c ∘ (b + d). Now apply Theorem 4.7, replacing c with c ∘ (b + d) and e with (c@d) ∘ b. This implies that (c@d) ∘ b is always locally convergent, and therefore c@d must be input-output locally convergent. To produce the given growth condition for the output series, note that

    K_{c∘(b+d)} = K_c,    M_{c∘(b+d)} = 2(φ(m(K_b + K_d)) + 1) M,

using Theorem 2.12 and the fact that n + 1 ≤ 2^n for all n ≥ 0. Substituting these growth constants for K_c and M_c, respectively, in Theorem 4.7 produces the desired result.
Example 4.9. Suppose c and d are linear series in R^m_LC⟨⟨X⟩⟩. Then c@d = lim_{i→∞} e_i, where

    e_{i+1} = c ∘̃ (d ∘ e_i) = c + (c ∘ d) ∘ e_i.

Setting e_0 = c gives

    c@d = c + Σ_{k=1}^{∞} (c ∘ d)^{∘k} ∘ c,

where c^{∘k} denotes k copies of c composed k − 1 times. Since (c, ∅) = 0, it is easily verified that ((c ∘ d)^{∘k}, ν) = 0 for all k > |ν|. Hence,

    (c@d, ν) = (c, ν) + Σ_{k=1}^{|ν|−1} ((c ∘ d)^{∘k} ∘ c, ν).

Example 4.10. For any c, d ∈ R_LC⟨⟨X⟩⟩, a self-excited feedback loop can be described by

    F_{c@d}[0] = F_{(c@d)∘0}[u] = F_{(c@d)_0}[u]

(cf. Lemma 2.2, property 2). In this case (c@d)_0 = lim_{i→∞} e_i, where e_{i+1} = (c ∘ d) ∘ e_i. Using the m = 0 version of (16)

Table 4.2
Some coefficients (c, ν), (d, ν), and (c@d, ν) in Example 4.11.

ν           | (c, ν)        | (d, ν)        | (c@d, ν)
∅           | K_c           | K_d           | K_c
x_0         | 0             | 0             | K_c K_d M_c
x_1         | K_c M_c       | K_d M_d       | K_c M_c
x_0^2       | 0             | 0             | K_c ((K_d M_c)^2 2! + K_c K_d M_c M_d)
x_0 x_1     | 0             | 0             | K_c K_d M_c^2 2!
x_1 x_0     | 0             | 0             | K_c K_d M_c^2 2!
x_1^2       | K_c M_c^2 2!  | K_d M_d^2 2!  | K_c M_c^2 2!
x_0^3       | 0             | 0             | K_c ((K_d M_c)^3 3! + K_c (K_d M_c)^2 M_d 7 + K_c^2 K_d M_c M_d^2 2!)
x_0^2 x_1   | 0             | 0             | K_c ((K_d M_c)^2 M_c 3! + K_c K_d M_c^2 M_d 3)
x_0 x_1 x_0 | 0             | 0             | K_c ((K_d M_c)^2 M_c 3! + K_c K_d M_c^2 M_d 2!)
x_0 x_1^2   | 0             | 0             | K_c K_d M_c^3 3!
x_1 x_0^2   | 0             | 0             | K_c ((K_d M_c)^2 M_c 3! + K_c K_d M_c^2 M_d 2!)
x_1 x_0 x_1 | 0             | 0             | K_c K_d M_c^3 3!
x_1^2 x_0   | 0             | 0             | K_c K_d M_c^3 3!
x_1^3       | K_c M_c^3 3!  | K_d M_d^3 3!  | K_c M_c^3 3!

(since the closed-loop system has, in effect, no external input) and Theorem 4.7, F_{(c@d)_0}[u] will converge at least on the interval [0, T_max), where

    T_max = 1/M_{(c@d)_0} = 1/((K_{c∘d}(2 + φ_g) + 1) 5 M_{c∘d}).

Of course, if the series (c@d)_0 can be computed explicitly, a potentially better estimate T′_max = 1/M_{(c@d)_0} is possible. For example, when c ∘ d = 1 + x_1, it is easily verified that (c@d)_0 = Σ_{k≥0} x_0^k, so that F_{c@d}[0](t) = e^t for t ≥ 0. In this case, both T_max = 0.04331 and T′_max = 1 are very conservative. On the other hand, when c ∘ d = 1 + 2x_1 + 2x_1^2, it follows that (c@d)_0 = Σ_{k≥0} (k+1)! x_0^k and F_{c@d}[0](t) = 1/(1 − t)^2 for 0 ≤ t < 1. Here T_max = 0.02428 is less conservative and T′_max = 1 is exact.
Example 4.11. Reconsider the state space systems in Example 3.2. The operator F_{c@d}[u] then has the analytic state space realization

    f(z) = [ K_d M_c z_c^2 z_d ;  K_c M_d z_c z_d^2 ],   g(z) = [ M_c z_c^2 ;  0 ],   h(z) = K_c z_c

near z(0) = [1 1]^T. The first few coefficients of c@d are given in Table 4.2. Since c@d is a nonnegative series in this case, local convergence and input-output local convergence are equivalent as a consequence of Lemma 3.5. Setting u(t) = ū = 1 is equivalent to letting b = 1 in Theorem 4.8. Therefore, using again the m = 0 version of (16) and the growth condition from Theorem 4.8, a lower bound on the finite escape time for this system is

    T_max = 1/M_{(c@d)∘1} = 1/([K_c(2 + φ_g) + 1][φ(1 + K_d) + 1] 10 M).

Four specific cases of T_max are given in Table 4.3 and compared against the numerically determined escape times. The conservativeness in these estimates is a consequence of accumulated conservativeness in various intermediate upper bounds, for example inequality (30), as compared to the cascade connection in Example 3.2.
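The two numerical values of T_max quoted in Example 4.10 can be reproduced by taking φ_g to be the golden ratio (1 + √5)/2, which is consistent with ξ_1(φ_g) = 1 + 1/φ_g = φ_g in the proof of Lemma 4.6; this identification is an assumption of the sketch below, not a statement made in this section.

```python
# Reproduce T_max from Example 4.10 with phi_g taken as the golden ratio.
from math import sqrt

phi_g = (1 + sqrt(5)) / 2   # assumed: xi_1(phi_g) = 1 + 1/phi_g = phi_g

def t_max(K_cd, M_cd, m=1):
    """T_max = 1 / ((m * K_cd * (2 + phi_g) + 1) * 5 * M_cd), per Theorem 4.7."""
    return 1.0 / ((m * K_cd * (2 + phi_g) + 1) * 5 * M_cd)

# c o d = 1 + x1:            growth constants K = 1, M = 1 -> 0.04331
# c o d = 1 + 2x1 + 2x1^2:   growth constants K = 2, M = 1 -> 0.02428
print(t_max(1, 1), t_max(2, 1))
```

The printed values agree with the 0.04331 and 0.02428 quoted in the example to the stated precision.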

Table 4.3
T_max and t_esc for specific examples of (c@d) ∘ 1.

Case | K_c | M_c | K_d | M_d | M_{(c@d)∘1} | T_max      | t_esc   | t_esc/T_max
1    | 4   | 2   | 2   | 2   | 1483        | 0.6745e−03 | 0.07556 | 112.0
2    | 2   | 4   | 2   | 2   | 1579        | 0.6335e−03 | 0.06606 | 104.3
3    | 2   | 2   | 4   | 2   | 1129        | 0.8857e−03 | 0.07387 | 83.4
4    | 2   | 2   | 2   | 4   | 1579        | 0.6335e−03 | 0.07556 | 119.3
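The escape times t_esc in Table 4.3 can be approximated by integrating the closed-loop state equations of Example 4.11 with ū = 1 until the state blows up. The sketch below does this for Case 1 using the realization as reconstructed in Example 4.11, with fixed-step RK4; the step size and blow-up threshold are ad hoc illustrative choices.

```python
# Numerically detect the finite escape time of the closed loop in
# Example 4.11: dz_c/dt = Mc*zc^2*(u + Kd*zd), dz_d/dt = Kc*Md*zc*zd^2,
# z(0) = [1, 1], u = 1 (dynamics as reconstructed from the example).
def rhs(z, Kc, Mc, Kd, Md, u=1.0):
    zc, zd = z
    return (Mc * zc**2 * (u + Kd * zd), Kc * Md * zc * zd**2)

def escape_time(Kc, Mc, Kd, Md, dt=1e-6, t_end=0.2, blow=1e9):
    z, t = (1.0, 1.0), 0.0
    while t < t_end:
        k1 = rhs(z, Kc, Mc, Kd, Md)
        k2 = rhs((z[0] + dt*k1[0]/2, z[1] + dt*k1[1]/2), Kc, Mc, Kd, Md)
        k3 = rhs((z[0] + dt*k2[0]/2, z[1] + dt*k2[1]/2), Kc, Mc, Kd, Md)
        k4 = rhs((z[0] + dt*k3[0], z[1] + dt*k3[1]), Kc, Mc, Kd, Md)
        z = (z[0] + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
             z[1] + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)
        t += dt
        if abs(z[0]) > blow or abs(z[1]) > blow:
            return t
    return None

t_esc = escape_time(4, 2, 2, 2)   # Table 4.3 reports t_esc = 0.07556 for Case 1
print(t_esc)
```

A blow-up is detected well inside [0, 0.2], far beyond the guaranteed lower bound T_max = 0.6745e−03 of Table 4.3, illustrating how conservative the bound is.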

REFERENCES

[1] M. Araki and M. Saeki, A quantitative condition for the well-posedness of interconnected dynamical systems, IEEE Trans. Automat. Control, 28 (1983), pp. 569–577.
[2] J. Berstel and C. Reutenauer, Les Séries Rationnelles et Leurs Langages, Springer-Verlag, Paris, 1984.
[3] J. Chaumat and A.-M. Chollet, On composite formal power series, Trans. Amer. Math. Soc., 353 (2001), pp. 1691–1703.
[4] B. Cloitre, private communication, 2004.
[5] A. Ferfera, Combinatoire du Monoïde Libre Appliquée à la Composition et aux Variations de Certaines Fonctionnelles Issues de la Théorie des Systèmes, Doctoral dissertation, University of Bordeaux I, 1979.
[6] A. Ferfera, Combinatoire du monoïde libre et composition de certains systèmes non linéaires, Astérisque, 75/76 (1980), pp. 87–93.
[7] M. Fliess, Fonctionnelles causales non linéaires et indéterminées non commutatives, Bull. Soc. Math. France, 109 (1981), pp. 3–40.
[8] M. Fliess, Développements fonctionnels et calcul symbolique non commutatif, in Outils et Modèles Mathématiques pour l'Automatique, l'Analyse de Systèmes et le Traitement du Signal, Vol. 1, I. D. Landau, ed., Centre National de la Recherche Scientifique, Paris, 1981, pp. 359–377.
[9] M. Fliess, Réalisation locale des systèmes non linéaires, algèbres de Lie filtrées transitives et séries génératrices non commutatives, Invent. Math., 71 (1983), pp. 521–537.
[10] M. Fliess, M. Lamnabhi, and F. Lamnabhi-Lagarrigue, An algebraic approach to nonlinear functional expansions, IEEE Trans. Circuits Systems, 30 (1983), pp. 554–570.
[11] X.-X. Gan and D. Knox, On composition of formal power series, Int. J. Math. Math. Sci., 30 (2002), pp. 761–770.
[12] W. S. Gray and B. Nabet, Volterra series analysis and synthesis of a neural network for velocity estimation, IEEE Trans. Systems Man Cybernet. Part B, 29 (1999), pp. 190–197.
[13] W. S. Gray and Y. Wang, Fliess operators on L_p spaces: Convergence and continuity, Systems Control Lett., 46 (2002), pp. 67–74.
[14] U. Heckmanns, Aspects of Ultrametric Spaces, Queen's Papers in Pure and Applied Math. 109, Queen's University, Kingston, ON, Canada, 1998.
[15] INRIA Algorithms Project, Encyclopedia of Combinatorial Structures, http://algo.inria.fr/encyclopedia/formulaire.html.
[16] A. Isidori, Nonlinear Control Systems, 3rd ed., Springer-Verlag, London, 1995.
[17] M. Kawski, Calculus of nonlinear interconnections with applications, in Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, 2000, pp. 1661–1666.
[18] K. Knopp, Infinite Sequences and Series, Dover Publications, New York, 1956.
[19] M. Lamnabhi, A new symbolic calculus for the response of nonlinear systems, Systems Control Lett., 2 (1982), pp. 154–162.
[20] Y. Li and W. S. Gray, The formal Laplace–Borel transform, Fliess operators and the composition product, in Proceedings of the 36th IEEE Southeastern Symposium on System Theory, Atlanta, GA, 2004, pp. 333–337.
[21] H. Nijmeijer and A. J. van der Schaft, Nonlinear Dynamical Control Systems, Springer-Verlag, New York, 1990.
[22] S. Priess-Crampe and P. Ribenboim, Fixed points, combs, and generalized power series, Abh. Math. Sem. Univ. Hamburg, 63 (1993), pp. 227–244.
[23] S. Priess-Crampe and P. Ribenboim, Fixed point and attractor theorems for ultrametric spaces, Forum Math., 12 (2000), pp. 53–64.
[24] E. Schörner, Ultrametric fixed point theorems and applications, in Valuation Theory and Its Applications, Vol. II, F.-V. Kuhlmann, S. Kuhlmann, and M. Marshall, eds., AMS, Providence, RI, 2003, pp. 353–359.
[25] N. J. A. Sloane, The On-Line Encyclopedia of Integer Sequences, http://www.research.att.com/~njas/sequences.
[26] H. J. Sussmann, Lie brackets and local controllability: A sufficient condition for scalar-input systems, SIAM J. Control Optim., 21 (1983), pp. 686–713.
[27] M. Vidyasagar, On the well-posedness of large-scale interconnected systems, IEEE Trans. Automat. Control, 25 (1980), pp. 413–421.
[28] Y. Wang, Algebraic Differential Equations and Nonlinear Control Systems, Doctoral dissertation, Rutgers University, New Brunswick, NJ, 1990.
[29] Y. Wang and E. D. Sontag, Generating series and nonlinear systems: Analytic aspects, local realizability, and I/O representations, Forum Math., 4 (1992), pp. 299–322.
[30] Y. Wang and E. D. Sontag, Algebraic differential equations and rational control systems, SIAM J. Control Optim., 30 (1992), pp. 1126–1149.
[31] E. W. Weisstein et al., Catalan Number, MathWorld—A Wolfram Web Resource, http://mathworld.wolfram.com/CatalanNumber.html.
[32] H. S. Wilf, Generatingfunctionology, 2nd ed., Academic Press, San Diego, CA, 1994.