
SIAM J. SCI. COMPUT. Vol. 26, No. 2, pp. 720–735
© 2004 Society for Industrial and Applied Mathematics

ADAPTIVE GENERALIZED POLYNOMIAL CHAOS FOR NONLINEAR RANDOM OSCILLATORS∗

D. LUCOR† AND G. E. KARNIADAKIS†

Abstract. The solution of nonlinear random oscillators subject to stochastic forcing is investigated numerically. In particular, solutions to the random Duffing oscillator with random Gaussian and non-Gaussian excitations are obtained by means of the generalized polynomial chaos (GPC). Adaptive procedures are proposed to lower the increased computational cost of the GPC approach in large-dimensional spaces. Adaptive schemes combined with the use of an enriched representation of the system improve the accuracy of the GPC approach by reordering the random modes according to their magnification by the system.

Key words. uncertainty, Duffing oscillator, polynomial chaos, stochastic modeling

AMS subject classifications. 65C20, 60H35, 65C30

DOI. 10.1137/S1064827503427984

1. Introduction. Fully nonlinear oscillators subject to mild or extreme noisy forcing are of great interest across many engineering disciplines (e.g., ocean structures [1]). Many mechanical systems involving flow-structure interaction can be modeled by the Duffing oscillator equation; see [2, 3]. In the present work, we determine the response of nonlinear single-degree-of-freedom mechanical systems subject to random excitations (Gaussian or non-Gaussian). We are particularly interested in the second-moment characteristics of the response of stochastic Duffing oscillators.

The method we adopt in this work is an extension of the classical polynomial chaos approach [4]. This representation is an infinite sum of multidimensional orthogonal polynomials of standard random variables with deterministic coefficients. In practice, only a finite number of terms in the expansion can be retained, as the sum has to be truncated. Consequently, the multidimensional random space has a finite number of dimensions n, and the highest order of the orthogonal polynomials is finite, denoted here by p. The Hermite-chaos expansion, which is the basis of the classical polynomial chaos, is effective in solving stochastic differential equations with Gaussian inputs as well as certain types of non-Gaussian inputs [5, 6, 7]. Its theoretical justification is the Cameron–Martin theorem [8]. However, it has been found that for general non-Gaussian random inputs the optimal exponential convergence rate is not achieved, and in some cases the convergence rate is severely degraded; see [9, 10].

Another issue with the polynomial chaos decomposition is the fast growth of the dimensionality of the problem with respect to the number of random dimensions and the highest order of the retained polynomial; see Table 1.1. This issue becomes critical if one deals with a very noisy input (white noise), a strongly nonlinear problem, or both. Indeed, an accurate representation of a noisy input requires a large number of random dimensions, while strongly nonlinear dynamics can only be captured accurately with a high polynomial order.

∗Received by the editors May 15, 2003; accepted for publication (in revised form) March 22, 2004; published electronically December 22, 2004. This work was supported by ONR and NSF, and computations were performed at the DoD HPCM centers.
http://www.siam.org/journals/sisc/26-2/42798.html
†Division of Applied Mathematics, Brown University, Providence, RI 02912.


Table 1.1
Number of unknown deterministic coefficients in the polynomial chaos representation as a function of the number of random dimensions n and the highest polynomial order p.

            p = 3     p = 5      p = 7        p = 9
  n = 2        10        21         36           55
  n = 4        35       126        330          715
  n = 8       165     1,287      6,435       24,310
  n = 16      969    20,349    245,157    2,042,975
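The count in Table 1.1 is the binomial coefficient (n + p choose n), i.e., the number of multivariate polynomials of total degree at most p in n variables. A minimal sketch that reproduces the table (Python, standard library only; the helper name is illustrative):

```python
from math import comb

def num_pc_terms(n: int, p: int) -> int:
    """Number of unknown deterministic coefficients (P + 1) for an
    n-dimensional polynomial chaos expansion of total order up to p."""
    return comb(n + p, n)

# Reproduce Table 1.1
for n in (2, 4, 8, 16):
    print(n, [num_pc_terms(n, p) for p in (3, 5, 7, 9)])
# n = 16, p = 9 already requires 2,042,975 coefficients
```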

In this paper, we consider the random response of a Duffing oscillator subject to nonstationary additive noise, where the forcing is represented by a deterministic time-dependent periodic function multiplied by a random variable with different distributions. We also study the case of the random response of a Duffing oscillator subject to a stationary additive noise represented by a random process with different distributions. The objective is twofold: First, we investigate what type of stochastic solutions we obtain in comparison with the well-studied deterministic Duffing oscillator. Second, we obtain the stochastic solutions at reduced cost using adaptive procedures pioneered by Li and Ghanem in [11].

The paper is organized as follows. In section 2 we give a brief overview of the generalized polynomial chaos, and subsequently we apply it to the stochastic Duffing oscillator. In section 3 we first consider the case of a periodic excitation with random amplitude and then focus on the adaptive procedure and present an extension to Ghanem's original proposal. We conclude in section 4 with a brief summary.

2. Generalized polynomial chaos.

2.1. The Wiener–Askey representation. The Wiener–Askey polynomial chaos or generalized polynomial chaos (GPC) expansion is an extension of the original polynomial chaos. It is well suited to represent more general (Gaussian and non-Gaussian) random inputs. The expansion basis in this case consists of polynomial functionals from the Askey family [9, 12]. Since each type of polynomial from the Askey scheme forms a complete basis in the Hilbert space, each corresponding Wiener–Askey expansion converges to an L2 functional in the L2 sense in the appropriate Hilbert functional space; this is a generalized result of the Cameron–Martin theorem; see [8, 13]. A general second-order random process X(θ) is represented by

(2.1)   X(\theta) = a_0 I_0 + \sum_{i_1=1}^{\infty} c_{i_1} I_1(\zeta_{i_1}(\theta))
        + \sum_{i_1=1}^{\infty}\sum_{i_2=1}^{i_1} c_{i_1 i_2} I_2(\zeta_{i_1}(\theta),\zeta_{i_2}(\theta))
        + \sum_{i_1=1}^{\infty}\sum_{i_2=1}^{i_1}\sum_{i_3=1}^{i_2} c_{i_1 i_2 i_3} I_3(\zeta_{i_1}(\theta),\zeta_{i_2}(\theta),\zeta_{i_3}(\theta)) + \cdots,

where I_n(\zeta_{i_1},\ldots,\zeta_{i_n}) denotes the GPC of order n in terms of the random vector \zeta = (\zeta_{i_1},\ldots,\zeta_{i_n}).


For example, one possible choice for I_n are the Hermite polynomials, which correspond to the original Wiener–Hermite polynomial chaos H_n. In the GPC expansion, the polynomials I_n are not restricted to Hermite polynomials but can be any of the orthogonal polynomials from the Askey scheme. For example, the expression of the Jacobi polynomials P_n^{(\alpha,\beta)} is given by

I_n(\zeta_{i_1},\ldots,\zeta_{i_n}) \equiv P_n^{(\alpha,\beta)}(\zeta_{i_1},\ldots,\zeta_{i_n})
= \frac{(1-\zeta)^{-\alpha}(1+\zeta)^{-\beta}}{2^n\, n!\,(-1)^{-n}}\,
  \frac{\partial^n}{\partial\zeta_{i_1}\cdots\partial\zeta_{i_n}}
  \left[(1-\zeta)^{n+\alpha}(1+\zeta)^{n+\beta}\right],

where \zeta denotes the vector consisting of the n Beta random variables (\zeta_{i_1},\ldots,\zeta_{i_n}). For notational convenience, we rewrite (2.1) as

(2.2)   X(\theta) = \sum_{j=0}^{\infty} \hat{c}_j\, \Phi_j(\zeta),

where there is a one-to-one correspondence between the functions I_n(\zeta_{i_1},\ldots,\zeta_{i_n}) and \Phi_j(\zeta). The orthogonality relation of the GPC takes the form

(2.3)   \langle \Phi_i \Phi_j \rangle = \langle \Phi_i^2 \rangle\, \delta_{ij},

where \delta_{ij} is the Kronecker delta and \langle\cdot,\cdot\rangle denotes the ensemble average, which is the inner product in the Hilbert space of the variables \zeta. We also have

(2.4)   \langle f(\zeta)\, g(\zeta) \rangle = \int f(\zeta)\, g(\zeta)\, W(\zeta)\, d\zeta

or

(2.5)   \langle f(\zeta)\, g(\zeta) \rangle = \sum_{\zeta} f(\zeta)\, g(\zeta)\, W(\zeta)

in the discrete case. Here W(\zeta) is the weighting function corresponding to the GPC basis \{\Phi_i\}. Most of the orthogonal polynomials from the Askey scheme have weighting functions that take the form of the probability function of certain types of random distributions. We then choose the type of independent variables \zeta in the polynomials \{\Phi_i(\zeta)\} according to the type of random distribution, as shown in Table 2.1. Legendre polynomials, which are a special case of the Jacobi polynomials with parameters \alpha = \beta = 0, correspond to an important distribution, the Uniform distribution.

2.2. Duffing oscillator and its GPC representation. We consider the Duffing oscillator subject to external forcing, i.e.,

(2.6)   \ddot{x}(t,\theta) + c\,\dot{x}(t,\theta) + k\left(x(t,\theta) + \epsilon\, x^3(t,\theta)\right) = f(t,\theta).

This equation has been normalized with respect to the mass, so the forcing f(t) has units of acceleration. The damping factor c and spring factor k are defined as follows:

(2.7)   c = 2\zeta\omega_0 \quad\text{and}\quad k = \omega_0^2,


Table 2.1
Correspondence between the type of GPC and the type of random inputs (N ≥ 0 is a finite integer).

               Random inputs        GPC                 Support
Continuous     Gaussian             Hermite-chaos       (−∞, ∞)
               Gamma                Laguerre-chaos      [0, ∞)
               Beta                 Jacobi-chaos        [a, b]
               Uniform              Legendre-chaos      [a, b]
Discrete       Poisson              Charlier-chaos      {0, 1, 2, ...}
               Binomial             Krawtchouk-chaos    {0, 1, ..., N}
               Negative binomial    Meixner-chaos       {0, 1, 2, ...}
               Hypergeometric       Hahn-chaos          {0, 1, ..., N}
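The correspondence in Table 2.1 pairs each input distribution with the polynomial family whose weighting function matches its density, so the orthogonality relation (2.3) can be checked directly by Gauss quadrature. A minimal one-dimensional sketch (Python with NumPy; the quadrature degree is an illustrative choice):

```python
import numpy as np
from numpy.polynomial import hermite_e as H, legendre as L

deg = 8                                   # quadrature points (exact up to degree 2*deg - 1)

# Hermite-chaos (Gaussian input): weight exp(-x^2/2), normalized by sqrt(2*pi)
xh, wh = H.hermegauss(deg)
E_gauss = lambda g: (wh @ g(xh)) / np.sqrt(2.0 * np.pi)

# Legendre-chaos (Uniform input on [-1, 1]): density 1/2
xl, wl = L.leggauss(deg)
E_unif = lambda g: 0.5 * (wl @ g(xl))

He = lambda i: (lambda x: H.hermeval(x, np.eye(deg)[i]))   # probabilists' Hermite He_i
P  = lambda i: (lambda x: L.legval(x, np.eye(deg)[i]))     # Legendre P_i

print(E_gauss(lambda x: He(2)(x) * He(3)(x)))   # ~0      (orthogonality, (2.3))
print(E_gauss(lambda x: He(3)(x) ** 2))         # 3! = 6  (<Phi_3^2> for Hermite-chaos)
print(E_unif(lambda x: P(2)(x) * P(3)(x)))      # ~0
print(E_unif(lambda x: P(3)(x) ** 2))           # 1/7     (<Phi_3^2> for Legendre-chaos)
# The same quadratures give the triple and quintuple products used later in (2.14), e.g.:
print(E_unif(lambda x: P(1)(x) * P(1)(x) * P(2)(x)))   # e_112 = 2/15
```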

where \zeta and \omega_0 are, respectively, the damping ratio and the natural frequency of the system. This system becomes stochastic if the external forcing or the input parameters or both are random quantities. These random quantities may evolve in time (random processes) or not (random variables). Nonconservative restoring forces typically correspond to hysteretic materials whose structural properties change in time when subjected to cyclic stresses. A popular restoring force model used in random vibration analysis consists of the superposition of a linear force \alpha x(t) and a hysteretic force (1-\alpha)Q(t) (see [14]), so that we have

(2.8)   \ddot{x}(t) + c\,\dot{x}(t) + k\left(\alpha x(t) + (1-\alpha)Q(t)\right) = f(t),

(2.9)   \dot{Q}(t) = \alpha\,\dot{x}(t) - \beta\,\dot{x}(t)\,|Q(t)|^{n} - \rho\,|\dot{x}(t)|\,Q(t)\,|Q(t)|^{n-1}.

The coefficients \alpha, \beta, \rho, and n control the shape of the hysteretic loop; some of these coefficients may vary in time. Here, we focus on the case of the Duffing oscillator, while other cases can be deduced from this one. Let us consider the stochastic differential equation (2.6), where the damping factor c and the spring constant k are random processes with unknown correlation functions and the external forcing is a random process with a given correlation function. We decompose the random process representing the forcing term in its truncated Karhunen–Loève expansion up to the nth random dimension; see [15]. We have

(2.10)   f(t,\theta) = \bar{f}(t) + \sigma_f \sum_{i=1}^{n} \sqrt{\lambda_i}\, \phi_i(t)\, \xi_i(\theta) = \sum_{i=0}^{n} f_i(t)\, \xi_i.

Assuming that the correlation functions for the coefficients c and k are not known, we can decompose the random input parameters in their GPC expansions [5, 6, 7] as follows:

(2.11)   c(t,\theta) = \sum_{j=0}^{P} c_j(t)\, \Phi_j(\xi(\theta)) \quad\text{and}\quad k(t,\theta) = \sum_{j=0}^{P} k_j(t)\, \Phi_j(\xi(\theta)).
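In computations, the multidimensional basis functions Φ_j(ξ) appearing in these expansions are built as products of one-dimensional polynomials indexed by multi-indices of total degree at most p. A minimal sketch for the Hermite case (Python with NumPy; the graded ordering of the multi-indices is an assumed convention, not one specified in the paper):

```python
from itertools import product
import numpy as np
from numpy.polynomial import hermite_e as H

n, p = 2, 3   # two Gaussian dimensions, total order up to 3 -> 10 terms (Table 1.1)

# One multi-index alpha per basis function Phi_j, listed in graded order
alphas = sorted((a for a in product(range(p + 1), repeat=n) if sum(a) <= p),
                key=lambda a: (sum(a), a))

def Phi(j, xi):
    """Phi_j(xi) as a product of probabilists' Hermite polynomials He_{alpha_k}(xi_k)."""
    return np.prod([H.hermeval(xi[k], np.eye(p + 1)[a]) for k, a in enumerate(alphas[j])])

print(len(alphas))                      # 10, matching n = 2, p = 3 in Table 1.1
print(Phi(4, np.array([0.5, -1.0])))    # evaluate one basis function at a sample point
```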

Finally, the solution of the problem is sought in the form of its truncated GPC expansion

(2.12)   x(t,\theta) = \sum_{i=0}^{P} x_i(t)\, \Phi_i(\xi(\theta)),


where n is the number of random dimensions and p is the highest polynomial order of the expansion. By substituting all expansions in the governing equation (see (2.6)), we obtain

(2.13)   \sum_{i=0}^{P} \ddot{x}_i(t)\Phi_i
        + \left(\sum_{j=0}^{P} c_j(t)\Phi_j\right)\left(\sum_{i=0}^{P} \dot{x}_i(t)\Phi_i\right)
        + \left(\sum_{j=0}^{P} k_j(t)\Phi_j\right)\left[\sum_{i=0}^{P} x_i(t)\Phi_i
        + \epsilon \left(\sum_{i=0}^{P} x_i(t)\Phi_i\right)\left(\sum_{k=0}^{P} x_k(t)\Phi_k\right)\left(\sum_{l=0}^{P} x_l(t)\Phi_l\right)\right]
        = \sum_{i=0}^{n} f_i(t)\,\xi_i.

We project the above equation onto the random space spanned by the orthogonal polynomial basis \Phi_m; i.e., we take the inner product with each basis function and then use the orthogonality relation. We obtain a set of coupled deterministic nonlinear differential equations

(2.14)   \ddot{x}_m(t) + \frac{1}{\langle\Phi_m^2\rangle}\sum_{i=0}^{P}\sum_{j=0}^{P} c_j(t)\,\dot{x}_i(t)\, e_{ijm}
        = -\frac{1}{\langle\Phi_m^2\rangle}\sum_{i=0}^{P}\sum_{j=0}^{P} k_j(t)\, x_i(t)\, e_{ijm}
        - \frac{\epsilon}{\langle\Phi_m^2\rangle}\sum_{i=0}^{P}\sum_{j=0}^{P}\sum_{k=0}^{P}\sum_{l=0}^{P} k_j\, x_i(t)\, x_k(t)\, x_l(t)\, e_{ijklm}
        + f_m(t),

where m = 0, 1, 2, \ldots, P, e_{ijm} = \langle\Phi_i\Phi_j\Phi_m\rangle, and e_{ijklm} = \langle\Phi_i\Phi_j\Phi_k\Phi_l\Phi_m\rangle; here \langle\cdot,\cdot\rangle denotes an ensemble average. These coefficients, as well as \langle\Phi_m^2\rangle, can be determined analytically or numerically using multidimensional numerical quadratures. This system consists of (P + 1) nonlinear deterministic equations, each corresponding to one random mode. Standard solvers can be employed to obtain the numerical solutions.

3. Duffing oscillator.

3.1. Periodic excitation with random amplitude. We consider a viscously damped nonlinear Duffing oscillator subject to random external forcing:

(3.1)   \ddot{x}(t,\theta) + c\,\dot{x}(t,\theta) + k\left(x(t,\theta) + \epsilon\, x^3(t,\theta)\right) = f(t,\theta), \qquad x(0,\theta) = x_0, \quad \dot{x}(0,\theta) = \dot{x}_0, \quad t \in [0,T].

In this case, the random forcing is treated as a nonstationary random variable and has the form

(3.2)   f(t,\theta) = \bar{f}(t) + \sigma_f(t)\,\xi(\theta) + \gamma_f(t)\,\xi^2(\theta) + \delta_f(t)\,\xi^3(\theta),

where \xi is a random variable of known distribution and the coefficients are given by

\bar{f} = \bar{A}\left(\alpha + \epsilon\bar{A}^2\beta\right), \quad
\sigma_f = \sigma_A\left(\alpha + 3\epsilon\bar{A}^2\beta\right), \quad
\gamma_f = 3\epsilon\bar{A}\sigma_A^2\beta, \quad
\delta_f = \epsilon\sigma_A^3\beta,

\alpha = (k - \omega^2)\cos(\omega t) - c\,\omega\sin(\omega t), \qquad \beta = k\cos^3(\omega t).


An analytical solution can be obtained for this forcing, of the form

(3.3)   x(t) = \left(\bar{A} + \sigma_A \xi\right)\cos(\omega t + \phi),

with \phi = 0 and \bar{A}, \sigma_A, and \omega being some fixed constants. The random variable \xi can have different distributions. In this section, we focus on a Gaussian distribution (Case I) and a Uniform distribution (Case II).

Case I. If \xi is a Gaussian random variable, the forcing can be represented exactly by the GPC basis using Hermite-chaos and takes the form

(3.4)   f(t,\theta) = \bar{f}(t) + \gamma_f(t) + \left(\sigma_f(t) + 3\delta_f(t)\right)\xi + \gamma_f(t)\left(\xi^2 - 1\right) + \delta_f(t)\left(\xi^3 - 3\xi\right).

Case II. If \xi is a random variable with a Uniform distribution (a particular case of the Beta distribution), the forcing can be represented exactly by the GPC basis using Legendre-chaos (a particular case of the Jacobi polynomials) and takes the form

(3.5)   f(t,\theta) = \bar{f}(t) + \frac{\gamma_f(t)}{3} + \left(\sigma_f(t) + \frac{3}{5}\delta_f(t)\right)\xi + \frac{2}{3}\gamma_f(t)\,\frac{1}{2}\left(3\xi^2 - 1\right) + \frac{2}{5}\delta_f(t)\,\frac{1}{2}\left(5\xi^3 - 3\xi\right).

We decompose the random forcing and the sought solution in their GPC expansions. After substituting in the equation and projecting onto the random space, we obtain a set of coupled equations similar to (2.14). This nonlinear system is simplified if we write it in state form. We obtain the following discrete system, which consists of a set of simultaneous nonlinear first-order differential equations:

\dot{X}_m^1(t) = X_m^2(t),

\dot{X}_m^2(t) + c\sum_{i=0}^{P} X_i^2(t) = -k\sum_{i=0}^{P} X_i^1(t) - \frac{k\epsilon}{\langle\Phi_m^2\rangle}\sum_{i=0}^{P}\sum_{k=0}^{P}\sum_{l=0}^{P} X_i^1(t)\, X_k^1(t)\, X_l^1(t)\, e_{iklm} + f_m(t),

where e_{iklm} = \langle\Phi_i\Phi_k\Phi_l\Phi_m\rangle. These coefficients, as well as \langle\Phi_m^2\rangle, can be determined analytically or numerically very efficiently using multidimensional Gauss–Legendre quadratures. Obviously, when \epsilon \neq 0, we need at least a third-order GPC expansion to represent the forcing exactly. Because of the form of the solution, we expect the energy injected into the system through the forcing to concentrate mainly in the mean and the first mode of the solution; the energy present in the other random modes should be zero. Since the resulting ODEs are deterministic, we use standard explicit schemes (forward Euler and Runge–Kutta of second and fourth order) to check the convergence rate of the solution in time. The following results are obtained using the standard fourth-order Runge–Kutta scheme. The structural parameters in the system (see (3.1) and (3.3)) and the initial conditions are set to

c = 0.05, \quad k = 1.05, \quad (\bar{A}, \sigma_A) = (0.6, 0.06), \quad \omega = 1.05, \quad \phi = 0;
x_0(t=0) = \bar{A}, \quad x_1(t=0) = \sigma_A, \quad \dot{x}_0(t=0) = \dot{x}_1(t=0) = 0, \quad x_{i>1}(t=0) = \dot{x}_{i>1}(t=0) = 0.
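To illustrate the structure of the projected system, the following sketch (Python with NumPy/SciPy) assembles the Legendre-chaos tensor e_{iklm} by Gauss–Legendre quadrature and integrates the resulting deterministic ODEs. It is a structural sketch only: it uses the Galerkin-projected form with deterministic c and k (diagonal damping and stiffness terms) and a simplified placeholder forcing rather than the exact expansion (3.5); the function names and the forcing coefficients are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre as L
from scipy.integrate import solve_ivp

p = 3                                  # highest Legendre-chaos order (one random dimension, P = p)
P = p
c, k, eps = 0.05, 1.05, 1.0            # damping, stiffness, and an illustrative nonlinearity

xq, wq = L.leggauss(2 * p + 1)         # quadrature exact for polynomial degree up to 4p
Phi = np.array([L.legval(xq, np.eye(P + 1)[i]) for i in range(P + 1)])   # Phi[i, q] = P_i(x_q)
norm2 = 1.0 / (2.0 * np.arange(P + 1) + 1.0)                             # <Phi_i^2> for U(-1, 1)
e = 0.5 * np.einsum('iq,kq,lq,mq,q->iklm', Phi, Phi, Phi, Phi, wq)       # e_iklm

def f_modes(t):
    """Placeholder forcing coefficients f_m(t); the exact expansion (3.5) would go here."""
    f = np.zeros(P + 1)
    f[0] = 0.6 * np.cos(1.05 * t)
    f[1] = 0.06 * np.cos(1.05 * t)
    return f

def rhs(t, z):
    x, v = z[:P + 1], z[P + 1:]
    cubic = np.einsum('iklm,i,k,l->m', e, x, x, x) / norm2   # Galerkin projection of x^3
    return np.concatenate([v, f_modes(t) - c * v - k * (x + eps * cubic)])

z0 = np.zeros(2 * (P + 1))
z0[0], z0[1] = 0.6, 0.06                                     # x_0(0) = A_bar, x_1(0) = sigma_A
sol = solve_ivp(rhs, (0.0, 150.0), z0, max_step=0.01)        # explicit Runge-Kutta (RK45)
variance = np.sum(sol.y[1:P + 1] ** 2 * norm2[1:, None], axis=0)   # variance of the response
```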

Fig. 3.1. Time evolution of the random modes of the solution for Case I (Gaussian) using a GPC expansion of six terms (p = 5); ε = 1.0. Top: mean x_0 and first mode x_1. Bottom: higher modes x_2, x_3, x_4, x_5 (scale 10^{-12}).

Figure 3.1 shows the time evolution of the random modes of the solution for Case I (Gaussian) with ε = 1.0. A fifth-order polynomial expansion is used to solve the problem. The top plot shows the mean and the first mode of the solution. We notice that they have the proper amplitude and frequency that we imposed by assuming the form of the solution. The lower plot represents the higher modes, which should be identically zero. They are very small and completely controlled by the temporal discretization error. In this case, for fixed ∆t, an increase of the polynomial order p does not improve the solution error. In fact, we obtain the same results with a cubic-order polynomial, which we know is enough to represent the forcing term exactly. We notice in the lower plot that there exists a transient state with a burst of energy in the high modes, which interact in a nonlinear manner. At longer times the amplitude of the high modes remains bounded and the system is stable. Similar observations and conclusions can be made for the case of the Uniform input (Case II) with the same values of the parameters.

Different values of the nonlinear parameter ε were investigated for fixed values of the other parameters. The magnitude and duration of the observed transient of the high modes depends on the value of ε (and σ_A). As ε increases, the transient state takes place earlier in time and with increased magnitude. Next, we choose ε = 5 with the same set of parameters and an input with Uniform distribution (Case II). We perform a long-time integration for different values of the polynomial order (from p = 3 to p = 11). Figure 3.2 shows results for p = 3: the time evolution of the four random modes (x_0 (mean), x_1, x_2, and x_3). In this case, we notice that both the mean and the first mode eventually deviate from the expected solution. Higher modes also deviate toward another solution, and their magnitude becomes nonnegligible. The temporal location of the onset of the bifurcation varies as a function of the temporal error introduced by the scheme. However, the bifurcation always exists, even if the temporal error introduced is only slightly above machine precision. Moreover, we observe very similar asymptotic behavior for higher

Fig. 3.2. Time evolution of the random modes of the solution (x_0, x_1, x_2, x_3) for Case II (Uniform) using a GPC expansion of four terms (p = 3); ε = 5.

values of p, even though the transient states are somewhat different. The critical value of ε for Case II is around ε ≈ 4.8; no bifurcation of the solution is obtained for ε below this threshold value. The critical value of ε for Case I is around ε ≈ 3.7. Slightly above this value, a long-term instability develops that brings the initially regular (expected) solution to a chaotic state. For both distributions, for a fixed value of ε, a change in the standard deviation of the input noise can change the regularity of the solution and bring it to another state. For instance, for ε = 5 in Case II, the transition of the solution to another state takes place if σ_A/Ā > 4%. Because of the way the forcing term is defined, increasing values of the nonlinear parameter can be seen as increasing forcing magnitudes in the equivalent normalized form of the Duffing equation [16]. Moreover, multiple frequencies are introduced in the forcing for ε within some critical range; for small values of ε, the forcing is very close to a single-frequency harmonic signal. In our case, the multifrequency forcing brings the oscillator's mean value to two limit cycles of different stability, which coexist for certain values of the control parameters; see plot (c-1) in Figure 3.3. For a limited parameter range, two stable closed orbits coexist. This kind of jump phenomenon is observed for the deterministic Duffing oscillator when the forcing frequency is changed slightly [16]. We verified that once the oscillator jumps to the new solution, it does not switch back to the original one. Concerning the first mode x_1, a flip-like bifurcation occurs [16]: the initial limit cycle loses its stability, while another closed orbit appears whose period is half the period of the original cycle; see plot (c-2) in Figure 3.3.

One fundamental question is whether the bifurcation is intrinsic to the deterministic system or whether it is in fact triggered by the uncertainty of the random input. Deterministic computations for this case are done using the extreme values of the random input for the deterministic forcing. This probes the response of the deterministic oscillator subject to deterministic forcing whose amplitude is evaluated at the boundary of the support of the probability density (here a Uniform distribution). This is equivalent to setting the parameters (Ā, σ_A) = (0.6 ± 0.06, 0.0). While one

Fig. 3.3. Phase projections of deterministic solutions (plots (a) and (b)) and stochastic (Uniform distribution) solutions: mean (plot (c-1)) and first mode (plot (c-2)).

case gives a single limit-cycle solution (see plot (a) in Figure 3.3), the other case ((Ā, σ_A) = (0.6 + 0.06, 0.0); see plot (b) in Figure 3.3) exhibits two limit cycles with different amplitudes but the same frequency. So it seems that in this case the bifurcation is intrinsic to the system.

In summary, this section shows complex and varied dynamics for the stochastic Duffing oscillator. A straightforward implementation of GPC is possible since, for the problem considered, the relatively simple forcing does not require a very high order in the GPC expansion. However, in the general case of arbitrary stochastic forcing, the computational complexity increases tremendously, as shown in Table 1.1. To this end, we need adaptive procedures to lower the computational complexity for stochastic nonlinear oscillators.

3.2. Solutions via adaptive GPC. We consider a nonlinear Duffing oscillator subject to a random process excitation f(t,\theta) applied over a time interval. The equation governing the motion is given by

(3.6)   \ddot{x}(t,\theta) + 2\zeta\omega_0\,\dot{x}(t,\theta) + \omega_0^2\left(x(t,\theta) + \mu\, x^3(t,\theta)\right) = f(t,\theta), \qquad x(0) = \dot{x}(0) = 0.

We assume that the input process f(t,\theta) is a weakly stationary random process with zero mean and correlation function R_{ff}(t_1,t_2) given by

(3.7)   R_{ff}(t_1,t_2) = \sigma_f^2\, e^{-\frac{|t_1 - t_2|}{A}}, \qquad A > 0,

where A is the correlation length and \sigma_f denotes the standard deviation of the process. If we normalize the equation using the nondimensional time \tau = \omega_0 t and the nondimensional displacement y = x/\sigma_x, where \sigma_x represents the standard deviation of the linear system (\mu = 0) with a stationary excitation of infinite duration (T \to \infty), we have

(3.8)   \ddot{y}(t,\theta) + 2\zeta\,\dot{y}(t,\theta) + \left(y(t,\theta) + \epsilon\, y^3(t,\theta)\right) = \frac{f(t,\theta)}{\sigma_x \omega_0^2}, \qquad y(0) = \dot{y}(0) = 0,


where \epsilon = \mu\sigma_x^2. Using the above nondimensional time, the autocorrelation function takes the form

(3.9)   R_{ff}(\Delta\tau) = \sigma_f^2\, e^{-\frac{|\tau_1 - \tau_2|}{A\omega_0}}, \qquad A > 0,

where \sigma_x is given by

(3.10)   \sigma_x = \sqrt{\dfrac{\sigma_f^2\left(2\zeta\omega_0 + \frac{1}{A}\right)}{2\zeta\omega_0^3\left(\omega_0^2 + \frac{1}{A^2} + \frac{2\zeta\omega_0}{A}\right)}}\,.

We also use (2.10) to represent the stochastic forcing, i.e.,

(3.11)   f(t,\theta) = \sum_{i=0}^{M} f_i(t)\,\xi_i(\theta) = \bar{f} + \sigma_f\sum_{i=1}^{M}\sqrt{\lambda_i}\,\phi_i(t)\,\xi_i(\theta),

and we represent the solution y(t,\theta) of the problem by its GPC expansion

(3.12)   y(t,\theta) = \sum_{i=0}^{P} y_i(t)\,\Phi_i(\xi(\theta)).
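The modal functions √λ_i φ_i(t) in (3.11) are the Karhunen–Loève eigenpairs of the exponential covariance (3.7). A minimal discrete sketch of this decomposition (Python with NumPy; the uniform time grid, the number of retained terms, and the parameter values are illustrative choices):

```python
import numpy as np

sigma_f, A, T, n_t, M = 1.0, 1.0, 30.0, 512, 20   # process std, correlation length, horizon, grid, KL terms
t = np.linspace(0.0, T, n_t)
dt = t[1] - t[0]

# Covariance matrix of the exponential kernel (3.7) on the grid
C = sigma_f**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / A)

# Discrete Karhunen-Loeve: eigenpairs of C*dt approximate those of the integral operator
lam_all, v_all = np.linalg.eigh(C * dt)
idx = np.argsort(lam_all)[::-1][:M]                     # keep the M largest eigenvalues
lam, phi = lam_all[idx], v_all[:, idx] / np.sqrt(dt)    # phi_i(t) normalized in L2([0, T])

# One realization of the truncated expansion (3.11), Gaussian case (Case I)
xi = np.random.standard_normal(M)
f_sample = phi @ (np.sqrt(lam) * xi)
print("energy fraction captured by M terms:", lam.sum() / lam_all.sum())
```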

The number (P + 1) of terms required in the expansion grows very rapidly as the number (M + 1) of terms in the expansion of the input process f(t,\theta) increases; see Table 1.1. However, some of the terms in the expansion for y(t,\theta) do not contribute significantly to its value. An adaptive procedure, first introduced by Li and Ghanem [11], can be designed in order to keep only the terms with the greatest contribution to the solution. The expansion for the excitation (3.11) is decomposed into two summations:

(3.13)   f(t,\theta) = \bar{f} + \sigma_f\left[\sum_{i=1}^{K} f_i(t)\,\xi_i(\theta) + \sum_{i=K+1}^{M} f_i(t)\,\xi_i(\theta)\right].

The first summation contains the terms whose higher-order (nonlinear) contributions to the solution y(t,\theta) will be kept at a given step of the iterative process. The second summation contains the terms whose higher-order (nonlinear) contributions will be neglected in the computation. Correspondingly, the expansion of the solution becomes

(3.14)   y(t,\theta) = \bar{y} + \sum_{i=1}^{K} y_i(t)\,\xi_i(\theta) + \sum_{i=K+1}^{M} y_i(t)\,\xi_i(\theta) + \sum_{j=M+1}^{N} y_j(t)\,\Psi_j\!\left(\xi_i(\theta)\,\big|_{i=1}^{K}\right), \qquad N < P.

The first two summations represent the linear contributions. The third summation represents higher-order terms, i.e., at least quadratic polynomials in the random variables \{\xi_i\}_{i=1}^{K}. Another way to understand the method is to consider that we enrich the space of random variables by adding L = (M - K) linear terms to the standard GPC expansion (see (2.12)). With the expansions for y(t,\theta) and f(t,\theta), we solve the system for the random modes y_i(t) over the time domain. Once the current computation is completed, we then evaluate the L2 norm of each function y_i(t) over the

time interval. The K linear components y_i(t), among i ≤ M, with the largest norms are sorted and reordered and then used to produce the higher-order components in the next iteration. The iterative process is repeated; convergence is reached, and the iteration stops, when the ordering of the K largest contributors to the solution does not change.

Fig. 3.4. Case I (Gaussian): Comparison of the second-order moment response obtained by adaptive GPC (AGPC) and Monte Carlo simulation (MC) (500,000 events). ω_0 = 1.0; ζ = 0.1; A = 1.0; ε = 1.0. Curves: I: GPC (K=20, L=0, p=1), ε=0; II: MC (M=30; 500,000 events); III: GPC (K=20, L=0, p=1); IV: GPC (K=10, L=0, p=3); V: AGPC (K=10, L=10, p=3).

We present numerical results for both Case I (Gaussian) and Case II (Uniform) for different values of the nonlinear parameter ε and different combinations of K, L, and polynomial order p. The values of the structural parameters are

(3.15)   \omega_0 = 1.0, \qquad (\bar{f}, \sigma_f) = (0.0, 1.0), \qquad A = 1.0.
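The reordering iteration described above can be sketched schematically as follows (Python with NumPy). The solver interface solve_modes and its return layout are assumptions made for illustration; the convergence test follows the criterion stated in the text (the ordering of the K largest contributors no longer changes).

```python
import numpy as np

def adaptive_gpc(solve_modes, M, K, t, max_iter=10):
    """Schematic of the Li-Ghanem reordering iteration of section 3.2.

    solve_modes(order) is a hypothetical solver returning an array y of shape
    (M, len(t)): the trajectories of the M linear random modes, with the
    higher-order (nonlinear) terms built only on the first K entries of `order`.
    """
    dt = t[1] - t[0]
    order = np.arange(1, M + 1)                      # current ordering of the linear dimensions
    for _ in range(max_iter):
        y = solve_modes(order)
        norms = np.sqrt((y ** 2).sum(axis=1) * dt)   # L2 norm of each y_i(t) over the time interval
        new_order = order[np.argsort(norms)[::-1]]   # largest contributors first
        if np.array_equal(new_order[:K], order[:K]): # K leading contributors unchanged: converged
            return new_order, y
        order = new_order
    return order, y
```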

The time domain extends over 30 nondimensional units (T = 30). Values of the damping coefficient ζ will be specified for the different cases, as we will see that it plays a key role in the efficiency of the adaptive method. Because the mean forcing f̄ is zero, the asymptotic value of the mean of the solution tends to zero, and only the random modes associated with polynomials of odd order are excited, due to the form of the nonlinearity. Therefore, we compare the second-order moment responses obtained by GPC with and without the adaptive method and also by Monte Carlo simulation. The variance of the solution includes the square of all random modes (except mode zero), so we expect a truncated representation of the solution (without reordering) to always underpredict the exact variance of the solution.

Figure 3.4 shows results for Case I (Gaussian) with ε = 1.0. The damping coefficient ζ = 0.1 is quite large, so the solution converges quickly within the imposed time domain. Because the problem has been normalized, the asymptotic value of the variance of the solution for the linear case (ε = 0.0) has to be one. The linear case is run first to estimate how many terms are needed to capture the scale associated with the correlation length A of f. We found that 20 terms (K = 20) are enough for the variance of the solution to reach its asymptotic value. Monte Carlo simulation


Fig. 3.5. Case II: Comparison of the second-order moment response obtained by adaptive GPC and Monte Carlo simulation (500,000 events). ω_0 = 1.0; ζ = 0.1; A = 1.0; ε = 1.0. Curves: I: GPC (K=20, L=0, p=1), ε=0; II: MC (M=30; 500,000 events); III: GPC (K=20, L=0, p=1); IV: GPC (K=10, L=0, p=3); V: AGPC (K=10, L=10, p=3).

(MC) with 500,000 realizations for the nonlinear case (ε = 1.0) was performed with 30 random dimensions to keep a safety margin. We notice that cubic-order (p = 3) GPC with only 10 random dimensions (K = 10) is far from converging to the Monte Carlo simulation. Linear chaos with 20 random dimensions (K = 20) still underestimates the Monte Carlo simulation. The adaptive GPC of cubic order with the addition of 10 more random dimensions (L = 10) shows a very clear improvement over standard GPC, and it also improves the phasing of the solution, but it still underestimates the value of the variance. In this case, cubic polynomials are not enough to capture the strongly nonlinear behavior of the oscillator. It is worth mentioning that the use of the incomplete, adaptive third-order GPC (p = 3, K = 10, L = 10) versus the complete standard GPC expansion (p = 3, K = 20, L = 0) significantly lowers the number of unknown random coefficients, from 1,771 to 296.

Figure 3.5 shows very similar results for Case II. The structural parameters, correlation length, and nonlinear parameter are set to the same values, and only the type of distribution of the input is changed, to the Uniform distribution. Here again, the nonlinearity is too strong for a cubic polynomial order, even with reordering of the modes.

Figures 3.6 and 3.7 show results for Cases I and II with ε = 0.1, smaller than in the previous cases. The damping coefficient ζ = 0.02 is kept low. Consequently, the solution does not converge to its asymptotic value within the imposed time domain. However, low damping implies sharper peaks in the energy spectrum of the oscillator; therefore, a finite number of random dimensions is more likely to capture most of the energy in the system. Reordering in this case also helps by sorting out the most significant random modes corresponding to the resonant frequencies and keeping the associated nonlinear components. Figure 3.6 shows the time evolution of the variance of the solution for Case I. We see that the adaptive GPC with reordering is very close to the Monte Carlo simulation. In Figure 3.7, we present the same type of result for Case II differently, by


Fig. 3.6. Comparison of the second-order moment response obtained by adaptive GPC and Monte Carlo simulation (1,000,000 events). ω_0 = 1.0; ζ = 0.02; A = 1.0; ε = 0.1 (Case I: Gaussian). Curves: I: GPC (K=20, L=0, p=1), ε=0.0; II: MC (K=40), ε=0.1; III: GPC (K=20, L=0, p=1), ε=0.1; IV: AGPC (K=10, L=10, p=3), ε=0.1, no reordering; V: AGPC (K=10, L=10, p=3), ε=0.1, with reordering.

Fig. 3.7. Comparison of the second-order moment response obtained by adaptive GPC and Monte Carlo simulation (1,000,000 events). ω_0 = 1.0; ζ = 0.02; A = 1.0; ε = 0.1 (Case II: Uniform). Vertical axis: pointwise error. Curves: III: GPC (K=10, L=0, p=1); IV: AGPC (K=10, L=10, p=3), no reordering; V: AGPC (K=10, L=10, p=3), with reordering.

showing the pointwise error of the adaptive GPC solution against the Monte Carlo simulation. Figure 3.8 shows the energy distribution among the random modes for Case I before and after reordering. Region I in the figure represents the linear terms or random modes associated with the linear polynomials. Similarly, region II represents the quadratic terms (which are zero as explained previously). Finally, the cubic terms are all grouped in region III. The linear terms distribution before reordering clearly

illustrates the concentration of energy in the system around the peak of resonance. We also show the final reordering after the iterative process has converged. We notice that the most energetic frequencies have been placed first and that the corresponding cubic terms have increased by as much as four orders of magnitude, and by about two orders of magnitude on average.

Fig. 3.8. L2 norm ||y_i(t)|| of the random adaptive GPC modes with no reordering and with the last reordering. ω_0 = 1.0; ζ = 0.02; A = 1.0; ε = 0.1. Relates to curve V: adaptive GPC (K = 10, M = 10, p = 3). Region I: linear terms; region II: quadratic terms; region III: cubic terms.

3.3. A new adaptive approach to GPC. The concept of a truncated representation of the solution in the framework of the adaptive GPC method can be extended further. This time, the solution is again expanded as in (3.14), but with the distinction that not all of the nonlinear terms of the third summation, based on the first K random dimensions, are kept in the decomposition. In our case, we observe that the modal energy is always large for the nonlinear terms corresponding to cross products between random dimensions (see Figure 3.8). Accordingly, we keep only the coefficients corresponding to the nonlinear polynomials of the form

(3.16)   \Psi_j\!\left(\xi_i(\theta)\,\big|_{i=1}^{K}\right) = \mathcal{C}_l^{\,\xi_1(\theta)\cdots\xi_i(\theta)\cdots\xi_K(\theta)}, \qquad l = 1, 2, \ldots, K,

where the operator \mathcal{C} represents the product of the combinations of the K possible linear polynomials \xi_i(\theta) taken l at a time. The case of Figure 3.4 is repeated using the aforementioned method with adaptive seventh-order GPC (p = 7, K = 7, L = 13), which represents a total of 141 random modes instead of 888,030 for a standard complete GPC expansion (p = 7, K = 20, L = 0). Results are shown in Figures 3.9 and 3.10. In this case, the adaptive GPC solution does not approach the Monte Carlo solution uniformly over the entire time domain, but it is locally very accurate. The error is very small in some places; this is an example of local nonuniform convergence of the method. A finite number of modes might be enough to capture the behavior of the oscillator at some instants of time but insufficient at others.

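The mode counts quoted above can be checked directly: the complete expansion has C(27, 7) coefficients, while the retained set consists of the constant, the M = 20 linear modes, and, for the K = 7 leading dimensions, only the cross products of order 2 through K. A minimal sketch (Python, standard library only; the enumeration of retained terms is our reading of (3.16)):

```python
from itertools import combinations
from math import comb

K, L, M, p = 7, 13, 20, 7          # values used for Figures 3.9 and 3.10

# Cross products xi_{i1} * ... * xi_{il} over the K leading dimensions, l = 2..K  (cf. (3.16))
cross_products = [c for l in range(2, K + 1) for c in combinations(range(1, K + 1), l)]

print(1 + M + len(cross_products))   # 141 retained random modes
print(comb(M + p, p))                # 888,030 modes for the complete standard expansion
```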
Fig. 3.9. Comparison of the second-order moment response obtained by adaptive GPC and Monte Carlo simulation (1,000,000 events). ω_0 = 1.0; ζ = 0.02; A = 1.0; ε = 0.1 (Case I: Gaussian). Curves: I: MC (K=40), ε=0.1; II: AGPC (K=7, L=13, p=7), ε=0.1, no reordering; III: AGPC (K=7, L=13, p=7), ε=0.1, with reordering.

Fig. 3.10. Absolute value of the second-order moment pointwise error obtained by adaptive GPC against Monte Carlo simulation (1,000,000 events). ω_0 = 1.0; ζ = 0.02; A = 1.0; ε = 0.1 (Case II: Uniform). Curves: II: AGPC (K=7, L=13, p=7), ε=0.1, no reordering; III: AGPC (K=7, L=13, p=7), ε=0.1, with reordering.

4. Summary. High-order polynomial chaos solutions are prohibitively expensive for strongly nonlinear systems when the number of dimensions of the stochastic input is large. Progress can be made, however, by careful adaptive procedures and selectively incorporating the nonlinear expansion terms. In this paper, we demonstrated such a procedure, proposed previously by Li and Ghanem [11], in the context of the stochastic Duffing oscillator. The adaptive scheme improves the accuracy of this method by reordering the random modes according to their magnification by the system. An extension of the originally proposed adaptive procedure was presented


that uses primarily contributions corresponding to cross products between random dimensions.

REFERENCES

[1] M. F. Shlesinger and T. Swean, Stochastically Excited Nonlinear Ocean Structures, World Scientific, River Edge, NJ, 1998.
[2] M. Ding, E. Ott, and C. Grebogi, Controlling chaos in a temporally irregular environment, Phys. D, 74 (1994), pp. 386–394.
[3] P. Sekar and S. Narayanan, Periodic and chaotic motions of a square prism in cross-flow, J. Sound Vibration, 170 (1994), pp. 1–24.
[4] N. Wiener, The homogeneous chaos, Amer. J. Math., 60 (1938), pp. 897–936.
[5] R. G. Ghanem and P. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, New York, 1991.
[6] R. G. Ghanem, Stochastic finite elements for heterogeneous media with multiple random non-Gaussian properties, ASCE J. Engrg. Mech., 125 (1999), pp. 26–40.
[7] R. G. Ghanem, Ingredients for a general purpose stochastic finite element formulation, Comput. Methods Appl. Mech. Engrg., 168 (1999), pp. 19–34.
[8] R. H. Cameron and W. T. Martin, The orthogonal development of nonlinear functionals in series of Fourier–Hermite functionals, Ann. of Math. (2), 48 (1947), pp. 385–392.
[9] D. Xiu and G. E. Karniadakis, The Wiener–Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput., 24 (2002), pp. 619–644.
[10] R. V. Field and M. Grigoriu, A new perspective on polynomial chaos, in Computational Stochastic Mechanics, CSM4, P. D. Spanos and G. Deodatis, eds., Millpress, Rotterdam, The Netherlands, 2003, pp. 199–205.
[11] R. Li and R. Ghanem, Adaptive polynomial chaos expansions applied to statistics of extremes in nonlinear random vibration, Prob. Engrg. Mech., 13 (1998), pp. 125–136.
[12] W. Schoutens, Stochastic Processes and Orthogonal Polynomials, Springer-Verlag, New York, 2000.
[13] H. Ogura, Orthogonal functionals of the Poisson process, IEEE Trans. Inform. Theory, 18 (1972), pp. 473–481.
[14] T. T. Soong and M. Grigoriu, Random Vibration of Mechanical and Structural Systems, Prentice–Hall, Englewood Cliffs, NJ, 1993.
[15] M. Loève, Probability Theory, 4th ed., Springer-Verlag, New York, 1977.
[16] J. M. T. Thompson and H. B. Stewart, Nonlinear Dynamics and Chaos, John Wiley and Sons, Chichester, UK, 1986.