Fast and Reliable Methods for Determining the Evolution of Uncertain Parameters in Differential Equations

Donald Estep

David Neckels

Department of Mathematics, Colorado State University

Research supported by Department of Energy: DE-FG02-04ER25620; Sandia Corporation: PO299784; National Aeronautics and Space Administration: NNG04GH63G; National Science Foundation: DMS-0107832, DGE-0221595003, MSPA-CSE-0434354


Introduction


Description of the Problem

F is a nonlinear operator:

    F : space of data and parameters −→ space of outputs

Assume that the data and/or the parameters are unknown within a given range and/or subject to random or unknown variation, for example as a result of experimental or modeling error.

Problem: determine the effect of the uncertainty or variation in the input on the output of the operator.

Description of the Problem

Typically, not all possible configurations of the input are equally plausible. We assign probabilities to different input values and consider the input to be a random vector associated with some probability distribution. The output of the model is then a random vector associated with a new distribution.

Description of the Problem

Example: Consider the model q(λ) = tanh(2λ)/tanh(2) with several distributions of λ on [−1, 1].

[Figure: five input densities for λ on [−1, 1] and the corresponding output densities for q(λ).]
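To make the mechanics concrete, here is a minimal Python sketch (assuming numpy and scipy) that pushes a few illustrative input distributions for λ, stand-ins for the five cases in the figure, through q(λ) = tanh(2λ)/tanh(2) and estimates the output densities with a kernel density estimator:

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    q = lambda lam: np.tanh(2.0 * lam) / np.tanh(2.0)

    # three illustrative input distributions on [-1, 1] (stand-ins for the
    # five cases in the figure, which are not recoverable from this excerpt)
    samples = {
        "uniform":        rng.uniform(-1.0, 1.0, 50_000),
        "clipped normal": np.clip(rng.normal(0.0, 0.4, 50_000), -1.0, 1.0),
        "shifted beta":   2.0 * rng.beta(2.0, 5.0, 50_000) - 1.0,
    }

    # the output density of q(lambda) is estimated from the pushed-forward samples
    for name, lam in samples.items():
        density = gaussian_kde(q(lam))
        print(f"{name:15s} output density at q = 0: {density(0.0)[0]:.3f}")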

The Monte-Carlo Method

The Monte-Carlo method is the standard technique for solving this problem. The model is solved for a sample of the input space selected at random according to its distribution, and the computed model solutions form a pointwise approximation to the distribution of the output. The Monte-Carlo method has robust convergence properties and is easy to implement on scalar and parallel computers. However, it can require many simulations to produce an accurate estimate of the desired information.

The Monte-Carlo Method

Example: The Monte-Carlo approximation of the average value of a function g is

    (1/n) Σ_{i=1}^n g(λ_i) ≈ (1/Vol(Λ)) ∫_Λ g(λ) dλ,

where {λ_1, …, λ_n} are chosen independently at random from Λ.

Law of the Iterated Logarithm: with probability 1, for any C > 1, the errors associated to a sequence of sample points are bounded above and below by

    ±C σ √(2 log log n) / √n    (σ = standard deviation of g)

for n large.

The Monte-Carlo Method

The errors of 10 sequences of Monte-Carlo approximations of the average value of λ1² + λ2² over [−1, 1] × [−1, 1], plotted with the error bound:

[Figure: errors in [−0.05, 0.05] versus n up to 5 × 10⁴, together with the ± envelope.]
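A short Python sketch of this experiment; the constant C = 1.1 is an illustrative choice (any C > 1 works), a single error sequence is shown, and σ is estimated from the samples:

    import numpy as np

    rng = np.random.default_rng(1)
    g = lambda lam: lam[:, 0]**2 + lam[:, 1]**2
    exact = 2.0 / 3.0                      # average of g over [-1, 1] x [-1, 1]

    n = 50_000
    vals = g(rng.uniform(-1.0, 1.0, size=(n, 2)))
    errors = np.cumsum(vals) / np.arange(1, n + 1) - exact

    # law of the iterated logarithm envelope with C = 1.1;
    # sigma is estimated from the samples
    sigma = vals.std()
    ns = np.arange(3, n + 1)               # log log n requires n >= 3
    envelope = 1.1 * sigma * np.sqrt(2.0 * np.log(np.log(ns))) / np.sqrt(ns)
    print("final error:", errors[-1], " final envelope:", envelope[-1])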

The Monte-Carlo Method

When the operator F is expensive to evaluate, the Monte-Carlo method can be impractical. In general, it is difficult to determine how to sample the input space in order to accurately represent the output distribution.

A New Approach

The new approach applies when the operator F depends smoothly on the inputs; the focus is on nonlinear differential equations. We adapt techniques from a posteriori error analysis for finite element methods. The approach can be used either to create a higher order method or to produce error estimates for a given representation, and we can also design adaptive sampling methods based on the estimate. The higher order and adaptive methods are generally orders of magnitude faster than Monte-Carlo methods.

Outline of the Talk

• Review some notions from probability
• Introduce the new approach in a simple finite-dimensional example
• Extend the approach to ordinary differential equations
• Describe the higher order method
• Describe some error estimates
• Describe an adaptive sampling method
• Present applications

Some Notions from Probability

The inputs are random vectors, i.e., R^d-valued measurable functions, on a probability space (Ω, B, P), where Ω is a set of outcomes, B is a σ-field of sets, and P is a measure with P(Ω) = 1. A random vector X defines the distribution measure µ_X on the Borel subsets of R^d via µ_X(A) = P(X ∈ A). The distribution measure of a set is the probability that X takes values in the set.

Some Notions from Probability

If µ_X is absolutely continuous with respect to the Lebesgue measure µ_L, there is a function ρ : R^d → R, called the probability density of X, such that µ_X(A) = ∫_A ρ dµ_L.

[Figure: a density ρ; the area under ρ between a and b is the probability that a < X ≤ b.]

λ1, λ2 > 0 are the parameters. Consider the solution in the first quadrant. λ1 is uniformly distributed on µ1 ± 0.1, independently of λ2, which is normally distributed N(µ2, 0.1), for a fixed (µ1, µ2).

Choose the dual data ψ = (0, 1)⊤ to give the x2 value of the solution.

The Higher Order Parameter Sampling Method

We first consider (µ1, µ2) = (0.5, 1). We compute Monte-Carlo approximations using 1000 and 10000 points drawn from the distribution of (λ1, λ2). For HOPS, we calculate y at (µ1, µ2) = (0.5, 1), solve for the generalized Green's vector, and approximate

    x2 = ⟨x, ψ⟩ ≈ y2 − ⟨Dλ f(y; µ)(λ − µ), φ⟩.
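The example system itself is not reproduced in this excerpt, so the Python sketch below substitutes a hypothetical two-equation system f(x; λ) = 0 purely to illustrate the mechanics: one nonlinear solve at µ, one adjoint solve Dx f⊤φ = ψ, then a cheap linear evaluation per parameter sample:

    import numpy as np
    from scipy.optimize import fsolve

    # hypothetical stand-in for the example system f(x; lambda) = 0
    # (the actual system is not shown in this excerpt)
    def f(x, lam):
        x1, x2 = x
        l1, l2 = lam
        return np.array([x1**2 + x2**2 - l2**2, x2 - l1 - x1**2])

    def Dx_f(x, lam):                      # Jacobian with respect to x
        x1, x2 = x
        return np.array([[2.0 * x1, 2.0 * x2], [-2.0 * x1, 1.0]])

    def Dlam_f(x, lam):                    # Jacobian with respect to lambda
        return np.array([[0.0, -2.0 * lam[1]], [-1.0, 0.0]])

    rng = np.random.default_rng(2)
    mu = np.array([0.5, 1.0])

    # one nonlinear solve at the reference parameter mu
    y = fsolve(lambda x: f(x, mu), np.array([0.5, 0.5]))

    # one linear adjoint solve: Dx_f(y; mu)^T phi = psi with psi = (0, 1)^T
    phi = np.linalg.solve(Dx_f(y, mu).T, np.array([0.0, 1.0]))

    # cheap linear prediction x2(lambda) ~ y2 - <Dlam_f(y; mu)(lambda - mu), phi>
    lam = np.column_stack([rng.uniform(mu[0] - 0.1, mu[0] + 0.1, 10_000),
                           rng.normal(mu[1], 0.1, 10_000)])
    x2 = y[1] - (lam - mu) @ Dlam_f(y, mu).T @ phi
    print("mean of x2 over the parameter distribution:", x2.mean())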

The Higher Order Parameter Sampling Method

[Figure: densities of x2 over [0.45, 0.70] for Monte-Carlo (10000 points), HOPS (1 point), and Monte-Carlo (1000 points).]

The Higher Order Parameter Sampling Method

Computational savings: HOPS requires only one solution of the nonlinear system, one solution of the linear adjoint problem, and then a vector dot product for each parameter value. The Monte-Carlo approach requires 10000 solutions of the nonlinear system.

The Higher Order Parameter Sampling Method

In general, linearization at one point is insufficient to describe the response of the system to variations throughout the parameter set.

Example: The solution of the previous example is very sensitive to variations in the parameter near (µ1, µ2) = (0.89, 1).

The Higher Order Parameter Sampling Method

[Figure: densities of x2 over [0, 0.4] for Monte-Carlo (10000 points), HOPS (1 point), and Monte-Carlo (1000 points).]

The Higher Order Parameter Sampling Method

The generalized Green's vector indicates this increased sensitivity:

[Figure: norm of the generalized Green's vector versus λ1 over [0.84, 1].]

The Higher Order Parameter Sampling Method

We compute HOPS approximations at a number of points in the parameter set and form a piecewise linear approximation. The partition consists of sub-intervals centered at the sample points.

Example: In the previous example, we compute a HOPS approximation using five points.

The Higher Order Parameter Sampling Method

[Figure: densities of x2 over [0, 0.40] for Monte-Carlo (10000 points), HOPS (5 points), and Monte-Carlo (1000 points).]

Ordinary Differential Equations


Ordinary Differential Equations

We consider the initial value problem

    ẋ(t; λ) = f(x(t; λ); λ1),   t > 0,
    x(0; λ) = λ0,

with x ∈ R^n and f : R^{n+d} → R^n. The parameter λ = (λ1, λ0)⊤ ∈ R^{d+n}:

• λ1 ∈ R^d represents parameters in the model f
• λ0 ∈ R^n represents the initial conditions

Ordinary Differential Equations

We consider λ = λ(ω) as a random vector on a probability space (Ω, B, P). The solution x(t; λ) ∈ R^n is a function of this random vector, and under the standard Lipschitz assumption on f, x is also a random vector. The set of trajectories indexed by ω is a collection of random vectors x(t; ω) = x(t; λ(ω)) indexed by t; this is a stochastic process whose increments are characterized deterministically.

Quantity of Interest

The quantity of interest is

    q(ω) = q(λ(ω)) = ∫_0^T ⟨x(s; λ(ω)), ψ(s)⟩ ds,

where ⟨·, ·⟩ denotes the standard inner product in R^n and ψ corresponds to the linear functional for the quantity of interest.

Examples:
• ψ(s) = δ(s − t)(0, …, 1, 0, …)⊤
• ψ(s) = (1, …, 1)⊤/T
• ψ(s) = (0, …, 1, 0, …)⊤/T

Quantity of Interest

The quantity of interest is

    q(ω) = q(λ(ω)) = ∫_0^T ⟨x(s; λ(ω)), ψ(s)⟩ ds.

The goal is to approximate the cumulative distribution function F_q(s) = P(q(ω) ≤ s) of q accurately and efficiently. From F_q, we can recover any desired statistic, e.g., means and moments. The plots of distributions in this talk are computed using a kernel density estimator on a data set constructed with the distribution of q̃, where q̃ ≈ q.

Generalized Green's Function

The generalized Green's function solves the adjoint problem

    −φ̇(t) − A⊤(t)φ(t) = ψ(t),   T ≥ t ≥ 0,
    φ(T) = 0,

where A(t) ≡ Dx f(y(t); µ1) and y(t) denotes the deterministic solution with the reference parameter value µ = (µ1, µ0)⊤.

Representation of the Output

A variational argument gives:

Theorem.

    q(λ) = ∫_0^T ⟨x, ψ⟩ ds ≈ ∫_0^T ⟨y, ψ⟩ ds + ⟨λ0 − µ0, φ(0)⟩ + ∫_0^T ⟨Dλ1 f(y; µ1)(λ1 − µ1), φ⟩ ds.

The last term on the right describes the effect of variations in the model parameters; the second to last term describes the effect of variations in the initial conditions.
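A numerical sketch of this representation for a scalar problem; the logistic right-hand side, horizon, and reference values below are hypothetical stand-ins, and ψ ≡ 1/T (the time average of x) is one of the example functionals above:

    import numpy as np
    from scipy.integrate import solve_ivp

    # hypothetical scalar model (a stand-in, not an example from the talk)
    def f(x, l1):      return l1 * x * (1.0 - x)
    def Dx_f(x, l1):   return l1 * (1.0 - 2.0 * x)
    def Dl1_f(x, l1):  return x * (1.0 - x)

    T, mu1, mu0 = 2.0, 1.0, 0.1            # horizon, reference parameter, reference IC
    ts = np.linspace(0.0, T, 400)

    # forward solve y(t) at the reference value mu = (mu1, mu0)
    y = solve_ivp(lambda t, x: f(x, mu1), (0.0, T), [mu0],
                  t_eval=ts, rtol=1e-8, atol=1e-10).y[0]

    # adjoint solve -phi' - A(t) phi = psi, phi(T) = 0, A(t) = Dx_f(y(t); mu1),
    # integrated backward in time; psi = 1/T gives the time average of x
    A = lambda t: Dx_f(np.interp(t, ts, y), mu1)
    phi = solve_ivp(lambda t, p: -A(t) * p - 1.0 / T, (T, 0.0), [0.0],
                    t_eval=ts[::-1], rtol=1e-8, atol=1e-10).y[0][::-1]

    q_mu   = np.trapz(y / T, ts)                      # q(mu)
    dq_dl0 = phi[0]                                   # initial-condition term
    dq_dl1 = np.trapz(Dl1_f(y, mu1) * phi, ts)        # model-parameter term

    # one-point HOPS value at a perturbed lambda = (l1, l0)
    l1, l0 = 1.1, 0.12
    q_hops = q_mu + dq_dl0 * (l0 - mu0) + dq_dl1 * (l1 - mu1)
    print(q_mu, q_hops)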

The One Point HOPS Approximation

The one point HOPS approximation is

    q(λ) ≈ q(µ) + ∇q(µ)(λ − µ).

Theorem. If there is a constant K such that

    |f(x; λ1) − f(y; µ1)| ≤ K(|x − y| + |λ1 − µ1|)

for all x, y ∈ H_s and µ, λ ∈ H_p, where the solutions remain in the set H_s for all parameters in H_p and t ∈ [0, T], then

    ∇q(µ)[·] = ⟨[·]0, φ(0)⟩ + ∫_0^T ⟨Dλ1 f(y; µ1)[·]1, φ⟩ ds.

The One Point HOPS Approximation

Example:

    ẋ(t; λ) = λx(t; λ),   t > 0,
    x(0) = x0,

where λ(ω) is a random variable with mean µ. With data ψ(s) = δ(s − t),

    q(λ) = x(t; λ) ≈ y(t; µ) + (λ(ω) − µ) ∫_0^t e^{µ(t−s)} y(s; µ) ds.

We can compute exact formulas for the true and approximate densities.
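For this example both densities can be written down in closed form; the Python sketch below evaluates them with the values used in the figure that follows (y0 = 10, λ uniform on [−3/2, −1/2]), at the single time t = 1:

    import numpy as np

    x0, t, mu = 10.0, 1.0, -1.0            # y0 = 10, time t3 = 1, mu = E[lambda]

    # true density: q = x0 exp(lambda t) with lambda uniform on [-3/2, -1/2];
    # change of variables gives density 1/(t q) on [x0 e^{-3t/2}, x0 e^{-t/2}]
    def exact_density(q):
        lo, hi = x0 * np.exp(-1.5 * t), x0 * np.exp(-0.5 * t)
        return np.where((q > lo) & (q < hi), 1.0 / (t * q), 0.0)

    # HOPS density: q ~ x0 e^{mu t}(1 + (lambda - mu) t) is linear in lambda,
    # so the approximate output is uniform on the image interval
    def hops_density(q):
        center = x0 * np.exp(mu * t)
        half = 0.5 * center * t            # image of lambda - mu = +/- 1/2
        inside = (q > center - half) & (q < center + half)
        return np.where(inside, 1.0 / (2.0 * half), 0.0)

    qs = np.linspace(1.0, 8.0, 8)
    print(np.column_stack([qs, exact_density(qs), hops_density(qs)]))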

The One Point HOPS Approximation

[Figure: true and approximate densities of the solution values, with y0 = 10, λ(ω) uniformly distributed on [−3/2, −1/2], and ψ(s) = δ(s − t_i) for t1 = .04, t2 = .3, t3 = 1, t4 = 4.]

The Lipschitz Assumption

The Lipschitz condition is the natural extension of the standard assumption for local existence and uniqueness. The solution set H_s generally needs to be compact for general f. The restrictions on the parameter set H_p are more complicated to determine and highly problem dependent: in general, H_p must at least be compact, but a given differential equation is likely a valid model only for a certain range of parameters. In practical Monte-Carlo computations, there is an implicit assumption that the operator being investigated dies off rapidly for large values of the random variable.

The Lipschitz Assumption

Example:

    x′ = x²,   0 ≤ t ≤ T,
    x(0) = x0,

with x0 uncertain. The solution is

    x(t) = 1 / (x0^{−1} − t).

This blows up at t = 1/x0 provided x0 > 0. If x0 is not bounded from above, there is no solution on any time interval. Using a normal distribution for the parameter is physically inappropriate.

The Multi-point HOPS Approximation

We combine HOPS approximations computed at multiple reference values to obtain globally accurate approximations. First, the piecewise linear HOPS approximation: we choose a sample {µi}_{i=1}^N and a partition of generalized rectangles {Ri}_{i=1}^N with µi ∈ Ri for all i, and set

    q(λ) ≈ Σ_{i=1}^N χ_{Ri}(λ) (q(µi) + ∇q(µi)(λ − µi)),

where χ_{Ri} is 1 for λ ∈ Ri and 0 otherwise.

The Multi-point HOPS Approximation

We can also use a partition of unity to form an approximation. We let {Λi}_{i=1}^N be a finite open cover of the parameter set. A corresponding partition of unity is a set of continuous, piecewise smooth functions {θi}_{i=1}^N such that θi(x) = 0 for x ∉ Λi and Σ_{i=1}^N θi(x) = 1 for all x. We use the partition of unity suggested by Renka:

    q(λ) ≈ Σ_{i=1}^N (q(µi) + ⟨∇q(µi), λ − µi⟩) θi(λ).
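Renka's partition of unity is not reproduced here; the Python sketch below uses a simple normalized inverse-distance (Shepard-type) weighting as a stand-in, together with a hypothetical q, to show the structure of the multi-point evaluation:

    import numpy as np

    # Shepard-type stand-in for Renka's partition of unity: normalized
    # inverse-distance weights (a sketch, not Renka's construction)
    def pou_weights(lam, centers, eps=1e-12):
        d2 = np.sum((lam[:, None, :] - centers[None, :, :])**2, axis=2)
        w = 1.0 / (d2 + eps)
        return w / w.sum(axis=1, keepdims=True)

    # evaluate sum_i (q(mu_i) + <grad q(mu_i), lambda - mu_i>) theta_i(lambda)
    def multipoint_hops(lam, centers, q_vals, grads):
        theta = pou_weights(lam, centers)
        diff = lam[:, None, :] - centers[None, :, :]
        local = q_vals[None, :] + np.einsum("ck,nck->nc", grads, diff)
        return np.sum(local * theta, axis=1)

    # toy usage with a hypothetical q(lambda) = sin(lambda1) + lambda2^2
    centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    q_vals = np.sin(centers[:, 0]) + centers[:, 1]**2
    grads = np.column_stack([np.cos(centers[:, 0]), 2.0 * centers[:, 1]])

    lam = np.random.default_rng(3).uniform(0.0, 1.0, size=(5, 2))
    print(multipoint_hops(lam, centers, q_vals, grads))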

Reliably Accurate Parameter Sampling


Reliably Accurate Parameter Sampling

Another important scientific goal is to compute information whose accuracy can be quantified. We write the HOPS approximation as

    q(λ) − q(µ) ≈ ∇q(µ)(λ − µ),

so ∇q(µ)(λ − µ) approximates the error in using the sample value q(µ) instead of the true value q(λ). We use this information to compute several a posteriori error estimates. We call these Reliably Accurate Parameter Sampling methods, or RAPS.

Error Estimates for the Monte-Carlo Method

Producing an error estimate for a given Monte-Carlo computation is an interesting problem. Recall that the error does not have to decrease monotonically as the number of sample points increases; in other words, increasing the number of sample points does not automatically decrease the error.

Ways to Measure the Error

With q̃(λ) ≈ q(λ), we can use:

• The expected value of the error in a statistical function,

    E[S(q̃(ω)) − S(q(ω))],

  e.g., S(s) = s gives the expected value and S(s) = (s − E(q))² gives the variance

• The L1 error,

    E[|q̃(ω) − q(ω)|],

  which controls convergence in probability and in distribution and the error in the cumulative distribution function

• Pointwise accuracy of the computed information, e.g., plots of densities

A Piecewise Constant Interpretation

We interpret a given sample as a piecewise constant approximation

    q̃(λ) = Σ_{i=1}^N q(µi) χ_{Ri}(λ),

with a partition {Ri} as before. The estimate for the expected value is

    E^pc = Σ_{i=1}^N ∫_{Ri} ⟨∇q(µi), z − µi⟩ dµ_λ(z),

the estimate for the expected value of a statistical function is

    E^pc = Σ_{i=1}^N ∫_{Ri} S′(q(µi)) ⟨∇q(µi), z − µi⟩ dµ_λ(z),

and the estimate for the L1 error is

    E^pc = Σ_{i=1}^N ∫_{Ri} |⟨∇q(µi), z − µi⟩| dµ_λ(z).
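A Python sketch of evaluating the L1 estimate by Monte-Carlo quadrature over each rectangle; the partition, gradients, input density, and the quantity q below are all illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(4)

    # E^pc = sum_i int_{R_i} |<grad q(mu_i), z - mu_i>| dmu_lambda(z),
    # each integral estimated by Monte-Carlo quadrature on its rectangle
    def l1_error_estimate(rects, mus, grads, density, n_quad=2000):
        total = 0.0
        for (lo, hi), mu_i, g_i in zip(rects, mus, grads):
            z = rng.uniform(lo, hi, size=(n_quad, len(lo)))
            integrand = np.abs((z - mu_i) @ g_i) * density(z)
            total += np.prod(hi - lo) * integrand.mean()
        return total

    # toy setup: lambda uniform on [0, 1]^2, hypothetical q(lambda) = lambda1^2
    # + lambda2, and a 2 x 2 partition with sample points at the cell centers
    density = lambda z: np.ones(len(z))
    rects, mus, grads = [], [], []
    for i in range(2):
        for j in range(2):
            lo = np.array([i / 2.0, j / 2.0])
            hi = lo + 0.5
            mu_i = (lo + hi) / 2.0
            rects.append((lo, hi))
            mus.append(mu_i)
            grads.append(np.array([2.0 * mu_i[0], 1.0]))   # grad q at mu_i
    print("estimated L1 error:", l1_error_estimate(rects, mus, grads, density))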

A Piecewise Constant Interpretation

Example:

    ẋ(t; λ) = λx(t; λ),   t > 0,
    x(0) = x0,

with x0 = 1 and λ uniformly distributed in [−.5, .5].

[Figure: the true L1 error and the estimate versus N, for N from 10 to 70.]

[Figure: the second moment error and the estimate versus N, for N from 10 to 50.]

A Partition of Unity Approach

We write the approximation using a partition of unity chosen as before,

    q(λ) ≈ Σ_{i=1}^N q(µi) θi(λ).

We obtain estimates of the form

    E^pou = Σ_{i=1}^N ∫_{Λi} ⟨∇q(µi), z − µi⟩ θi(z) dµ_λ(z).

Note that this defines an error function. We obtain similar numerical results.

Fast Adaptive Parameter Sampling


Fast Adaptive Parameter Sampling

An accurate error estimate gives the possibility of optimizing the sampling process in terms of achieving a desired accuracy. We use the error estimates for a piecewise constant approximation to guide the sampling process using an adaptive mesh refinement strategy. We call such methods Fast Adaptive Parameter Sampling methods, or FAPS.

Fast Adaptive Parameter Sampling

The goal of the adaptive strategy is to achieve the desired accuracy using the fewest sample points. The refinement process works iteratively:

• For a given sample, we estimate the local contributions to the error using the estimate
• We add new sample points in regions that contribute the most in order to decrease the error

Choosing a Refinement Criterion

We need to quantify the contributions to the error from the individual elements in the current partition. The standard optimization framework for adaptivity requires an estimate in a norm to guide refinement. When the chosen estimate is too large, we add sample points in order to reduce E[|q̃(ω) − q(ω)|]. The piecewise constant and partition of unity interpretations lead to different sampling algorithms.

Sampling for the Piecewise Constant Interpretation

For a current sample {µi}_{i=1}^N, associated partition {Ri}, and estimate

    E^pc = Σ_{i=1}^N ∫_{Ri} |⟨∇q(µi), z − µi⟩| dµ_λ(z),

we refine the rectangles with the largest element indicators

    E_i^pc = ∫_{Ri} |∇q(µi)(z − µi)| dµ_λ(z),

one dimension at a time.

Sampling for the Piecewise Constant Interpretation

From the element indicators

    E_i^pc = ∫_{Ri} |∇q(µi)(z − µi)| dµ_λ(z),

we define the d = p + n directional element indicators

    E_{i,k}^pc = ∫_{Ri} |∂_{λk} q(µi)(z^k − µi^k)| dµ_λ(z),

with z = (z^1, …, z^d)⊤ and µi = (µi^1, …, µi^d)⊤. When Ri is marked for refinement, we divide it into thirds along the dimension with the largest E_{i,k}^pc.
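A Python sketch of one refinement pass; the gradient callable, the quadrature rule, and the use of the summed directional indicators as the element indicator are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(5)

    # E^pc_{i,k} ~ Vol(R_i) * mean |d_k q(mu_i) (z^k - mu_i^k)| over R_i
    # (uniform input density assumed for this sketch)
    def directional_indicators(rect, mu_i, grad_q, n_quad=1000):
        lo, hi = rect
        z = rng.uniform(lo, hi, size=(n_quad, len(lo)))
        return np.prod(hi - lo) * np.mean(np.abs(grad_q(mu_i) * (z - mu_i)), axis=0)

    # split a rectangle into thirds along dimension k
    def refine(rect, k):
        lo, hi = rect
        pieces = []
        for m in range(3):
            new_lo, new_hi = lo.copy(), hi.copy()
            new_lo[k] = lo[k] + m * (hi[k] - lo[k]) / 3.0
            new_hi[k] = lo[k] + (m + 1) * (hi[k] - lo[k]) / 3.0
            pieces.append((new_lo, new_hi))
        return pieces

    # one refinement pass with a hypothetical q(lambda) = tanh(5 lambda1) + lambda2
    grad_q = lambda mu: np.array([5.0 / np.cosh(5.0 * mu[0])**2, 1.0])
    rects = [(np.array([-1.0, -1.0]), np.array([1.0, 1.0]))]
    ind = [directional_indicators(r, (r[0] + r[1]) / 2.0, grad_q) for r in rects]
    worst = int(np.argmax([e.sum() for e in ind]))   # summed indicators as proxy for E^pc_i
    k = int(np.argmax(ind[worst]))                   # direction with the largest E^pc_{i,k}
    rects = rects[:worst] + refine(rects[worst], k) + rects[worst + 1:]
    print(len(rects), "rectangles after one refinement")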

The Sampling Strategy

[Figure: a rectangle Ri with sample point µi and the directional indicators E_{i,1}^pc and E_{i,2}^pc along the two dimensions.]

Stopping and Starting

Given an error tolerance TOL, we repeat the iteration until E ≤ TOL is achieved. The choice of the initial partition is an important issue:

• Using a relatively coarse initial partition increases the possibility of achieving optimality
• Using a relatively coarse initial partition increases the chances that the initial error estimate is too inaccurate

Uniform initial partitions suffer from the curse of dimensionality. Should we use hybrid methods?

Sampling for a Partition of Unity Interpretation

The partition of unity interpretation yields an approximate error function. We use this function to create a density function that indicates regions with insufficient representation, and we use a Monte-Carlo method to choose new sample points according to the computed density.

Application: Model of a Harvested Fish Population


Schaefer Model

Assuming the rate of harvesting of the fish population x is the effort times the number of fish, the Schaefer model is

    ẋ(t; λ1) = (R/K)(K − x(t; λ1)) x(t; λ1) − Hx(t; λ1),   t > 0,
    x(0; λ1) = x0,

where λ1 = (R, K, H)⊤. We take data ψ(s) = δ(s − 3) and x0 = 1.

Case 1: H = 0 and (R, K) are independent and N(0.1, 0.5), N(4.5, 0.1) (truncated), respectively. The norm of the Green's function φ(s) is largest near R = 0, where small changes in R cause large changes in the accumulated growth.
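A Python sketch of the forward model and the quantity of interest q = x(3); reading N(m, s) as (mean, standard deviation) and omitting the truncation used in the talk are assumptions made for brevity:

    import numpy as np
    from scipy.integrate import solve_ivp

    # forward Schaefer model and the quantity of interest q = x(3)
    def schaefer(t, x, R, K, H):
        return (R / K) * (K - x) * x - H * x

    def q(R, K, H=0.0, x0=1.0, T=3.0):
        sol = solve_ivp(schaefer, (0.0, T), [x0], args=(R, K, H), rtol=1e-8)
        return sol.y[0, -1]

    # Case 1 sampling; plain normals stand in for the truncated distributions
    rng = np.random.default_rng(6)
    Rs = rng.normal(0.1, 0.5, 20)
    Ks = rng.normal(4.5, 0.1, 20)
    print([round(q(R, K), 3) for R, K in zip(Rs, Ks)])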

Schaefer Model: Norm of Green's Function for H = 0

[Figure: surface plot of the norm of the Green's function over R ∈ [−2, 2] and K ∈ [4, 5].]

Schaefer Model: HOPS Results for H = 0

[Figure: densities of x at t = 3 over [−1, 6] for Monte Carlo (30000 points), HOPS (256 points), and Monte Carlo (1000 points).]

Schaefer Model: FAPS Results for H = 0

[Figure: densities of x at t = 3 over [−1, 6] for Monte Carlo (30000 points), FAPS (596 points), and Monte Carlo (1000 points).]

Schaefer Model: FAPS Meshes for H = 0

[Figure: initial mesh, final mesh, and a detail of the final mesh in the (R, K) plane.]

Schaefer Model

Case 2: H ≠ 0. The system has two fixed points, x̄1 = 0 and x̄2 = K(1 − H/R). x̄1 is stable and x̄2 unstable for R < H, and the opposite holds when R > H. As t → ∞, the distribution of the solutions should split as they converge to different fixed points. We take x0 = 1, T = 20, and (R, K, H) independent, N(0.8, 0.5), N(4.5, 0.01), N(0.5, 0.2) (truncated), respectively.

Schaefer Model: FAPS Results for H ≠ 0

[Figure: densities of x at t = 20 over [0, 8] for Monte-Carlo (30000 points) and FAPS (451 points), together with the asymptotic (large time) distribution.]

Schaefer Model: Gain in Efficiency

We compare convergence rates for cumulative distribution functions. For a quantity of interest q and an approximation q̃, we use the Kolmogorov-Smirnov (K-S) test statistic

    max_{x ∈ R} |F_q(x) − F_q̃(x)|.

We use a 70,000 point Monte-Carlo computation to generate a reference value for q. For the Monte-Carlo computations, we compute an empirical cumulative distribution function

    F_n^MC(x) = #{i : q(λi) ≤ x} / n,

where {λi} are sample points and {q(λi)} are computed values.
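A Python sketch of the comparison; evaluating the empirical CDFs over the pooled sample points approximates the supremum, and the normal samples below stand in for actual computed values of q:

    import numpy as np

    # F_n(x) = #{i : q_i <= x} / n as a callable
    def empirical_cdf(samples):
        s = np.sort(samples)
        return lambda x: np.searchsorted(s, x, side="right") / len(s)

    # max_x |F_a(x) - F_b(x)|, evaluated over the pooled sample points
    def ks_statistic(samples_a, samples_b):
        Fa, Fb = empirical_cdf(samples_a), empirical_cdf(samples_b)
        grid = np.concatenate([samples_a, samples_b])
        return float(np.max(np.abs(Fa(grid) - Fb(grid))))

    # usage sketch: a small run against a large reference run
    rng = np.random.default_rng(7)
    reference = rng.normal(size=70_000)
    small_run = rng.normal(size=500)
    print("K-S statistic:", ks_statistic(reference, small_run))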

Schaefer Model: Gain in Efficiency using HOPS

[Figure: K-S statistic versus number of nonlinear solves (up to 10000) for Monte-Carlo and HOPS results.]

Schaefer Model: Gain in Efficiency using FAPS

[Figure: K-S statistic versus number of nonlinear solves (up to 10000) for Monte-Carlo and FAPS results.]

Schaefer Model: Partition of Unity FAPS

We consider again H = 0, x0 = 1, and T = 3. We perform adaptive Monte-Carlo sampling according to the error density given by a partition of unity.

Schaefer Model: Partition of Unity FAPS

Plots of the true error and the approximation E^pou_loc(λ) computed using 20 sample points:

[Figure: the true error and the approximate error over the (R, K) parameter rectangle.]

Schaefer Model: Partition of Unity FAPS

Plots of the random walk produced by a Markov Chain Monte Carlo method and ten new sample points chosen from this set:

[Figure: the MCMC random walk in the (R, K) plane and the ten selected new sample points.]

Schaefer Model: Partition of Unity FAPS

[Figure: densities of x at t = 3 over [−1, 6] for FAPS (80 points) and Monte-Carlo (30000 points).]

Application: Chaotic Lorenz Model


Lorenz Model

The Lorenz equations are

    ẋ1 = σ(x2 − x1),
    ẋ2 = rx1 − x2 − x1x3,
    ẋ3 = x1x2 − bx3,

where σ, r, b are fixed at standard values. We fix x2,0 = 0, x3,0 = 24 and take x1,0 uniformly distributed in the strip [−2, 2]. The quantity of interest is the x1 coordinate of the solution for a sequence of times, so q(λ) = q(x1,0) = x1(t; x1,0), using TOL = .01.
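A Python sketch of the setup, assuming the classical values σ = 10, r = 28, b = 8/3 (the slide does not list them); the tight tolerances matter because the dynamics are chaotic:

    import numpy as np
    from scipy.integrate import solve_ivp

    sigma, r, b = 10.0, 28.0, 8.0 / 3.0    # assumed "standard values"

    def lorenz(t, x):
        x1, x2, x3 = x
        return [sigma * (x2 - x1), r * x1 - x2 - x1 * x3, x1 * x2 - b * x3]

    # quantity of interest: x1 at a sequence of times for initial value x1_0
    def q(x1_0, times):
        sol = solve_ivp(lorenz, (0.0, max(times)), [x1_0, 0.0, 24.0],
                        t_eval=times, rtol=1e-10, atol=1e-12)
        return sol.y[0]

    rng = np.random.default_rng(8)
    times = [0.1, 1.0, 2.0, 3.0, 5.0]
    for x1_0 in rng.uniform(-2.0, 2.0, 3):
        print(round(x1_0, 3), np.round(q(x1_0, times), 3))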

Lorenz Model

All trajectories approach an attractor, but solutions that start close together eventually diverge. There is a manifold originating out of the x3-axis that separates nearby solutions that subsequently move to neighborhoods of different hyperbolic fixed points. Solutions corresponding to a contiguous set of initial values will be split by this manifold into two separate masses, which will then be split further, and so on.

Lorenz Model: The “Razor”

[Figure: a typical solution near the attractor with the two hyperbolic fixed points, and a view looking down the x3 axis.]

Lorenz Model: Solutions

[Figure: cross sections of the set of solutions in the x1-x3 plane at t = .1, 1, 2, 3, 5.]

Lorenz Model: Density Results

[Figure: densities versus x1 at t = .1, 1, 2, 3, 5, comparing Monte-Carlo (40000 points) with FAPS (64, 134, 94, 240, and 336 points, respectively).]

Lorenz Model

Every time the solutions pass the separatrix, the functional q(x1,0) develops a steep gradient. The FAPS algorithm places a large number of samples at the location of these gradients. Since the number of vertical regions doubles each time the mass returns to the manifold, eventually it becomes impossible to compute the functional accurately.

Lorenz Model: FAPS Mesh Selection

[Figure: q(x1,0) versus x1,0 ∈ [−2, 2] at a sequence of times, with the corresponding FAPS mesh density below each panel.]

Application: An SIR Model of Disease


SIR Model

We consider an SIR model of disease that includes birth/death processes and the possibility that the offspring of the resistant class may inherit the resistance. Assuming a logistic model for birth/death processes,

    ẋ1 = rn(1 − N/k)(x1 + x2 + (1 − pR)x3) − dn x1 − rI x1 x2,
    ẋ2 = rI x1 x2 − (dn + dI)x2 − aR x2,
    ẋ3 = pR rn(1 − N/k)x3 − dn x3 + aR x2,
    x1(0) = x1,0,  x2(0) = x2,0,  x3(0) = x3,0,

for 0 ≤ t ≤ T, where x1 is the population susceptible to the disease, x2 the infected population, x3 the resistant class, and N = x1 + x2 + x3.
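A Python sketch of the model at the mean parameter values from the table that follows; the initial populations below are illustrative assumptions, and the quantity of interest q(λ) = ∫_0^T x2(s; λ) ds is evaluated by quadrature on a dense output:

    import numpy as np
    from scipy.integrate import solve_ivp

    # mean parameter values from the table
    p = dict(aR=0.2, rn=1.0, k=100.0, pR=0.1, dn=0.2, rI=0.2, dI=1.0)

    def sir(t, x, aR, rn, k, pR, dn, rI, dI):
        x1, x2, x3 = x
        growth = rn * (1.0 - (x1 + x2 + x3) / k)
        return [growth * (x1 + x2 + (1.0 - pR) * x3) - dn * x1 - rI * x1 * x2,
                rI * x1 * x2 - (dn + dI) * x2 - aR * x2,
                pR * growth * x3 - dn * x3 + aR * x2]

    # illustrative initial condition: a small infected population
    T = 35.0
    sol = solve_ivp(sir, (0.0, T), [20.0, 0.1, 0.0], args=tuple(p.values()),
                    dense_output=True, rtol=1e-8)

    # q(lambda) = int_0^T x2(s; lambda) ds by quadrature on a dense output
    ts = np.linspace(0.0, T, 2000)
    print("q =", np.trapz(sol.sol(ts)[1], ts))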

SIR Model Parameter Values

    Description                            Name   Mean   Perturbation
    Recovery rate                          aR     0.2    ±0.1
    Natural growth rate                    rn     1      ±0.2
    Carrying capacity                      k      100    ±5
    Probability of inheriting resistance   pR     0.1    ±0.01
    Natural death rate                     dn     0.2    ±0.1
    Contraction rate                       rI     0.2    ±0.1
    Death rate from disease                dI     1      ±0.2

The parameters are uniformly distributed in the ranges given.

SIR Model Behavior

The introduction of a small number of infected individuals leads to an epidemic in the form of a transient spike of infections.

[Figure: x1 = susceptible population, x2 = infected population, x3 = resistant population versus time over [0, 35].]

We measure the impact of the infection using

    q(λ) = ∫_0^T x2(s; λ) ds.

SIR Model HOPS Results

[Figure: densities of q(λ) over [0, 25] for Monte-Carlo (70000 points) and HOPS (256 points).]

SIR Model FAPS Results

[Figure: densities of q(λ) over [0, 25] for Monte-Carlo (70000 points) and FAPS with 1527, 2037, and 11763 points.]

SIR Model Cumulative Distribution Functions

[Figure: cumulative distribution functions of q(λ) over [0, 25], comparing Monte-Carlo (70000 points) with HOPS (256 points) on the left and with FAPS (1527, 2037, and 11763 points) on the right.]