Numer Algor (2011) 57:425–439 DOI 10.1007/s11075-010-9437-2 ORIGINAL PAPER

Multiplicative Adams Bashforth–Moulton methods Emine Misirli · Yusuf Gurefe

Received: 14 May 2009 / Accepted: 9 November 2010 / Published online: 23 November 2010 © Springer Science+Business Media, LLC 2010

Abstract The multiplicative version of the Adams Bashforth–Moulton algorithms for the numerical solution of multiplicative differential equations is proposed. Truncation error estimation for these numerical algorithms is discussed. A specific problem is solved by methods defined in the multiplicative sense. The stability properties of these methods are analyzed using the standard test equation.

Keywords Multiplicative calculus · Backward division operator · Truncation error estimation · Stability analysis · Adams methods

1 Introduction

Differential calculus is used in many problems in which mathematical modelling is required. The mathematical modelling of most phenomena in science and engineering is based on an evolutionary description, so it is natural to model such phenomena through differential equations. Some of these problems may involve difficult approaches when the classical concepts are used for the mathematical formulation. For example, Riza et al. [5] show that a problem involving a growth rate can be expressed more effectively using multiplicative calculus, although it can also be expressed using classical concepts.

E. Misirli · Y. Gurefe (B) Department of Mathematics, Ege University, Izmir, Turkey e-mail: [email protected] E. Misirli e-mail: [email protected]


This would otherwise necessitate much more effort. Michael Grossman and Robert Katz defined some alternative calculi in [1, 3, 4] and showed that each of them can be used effectively in the mathematical treatment of certain problems. These calculi, namely multiplicative (or exponential, in Grossman and Katz's terminology) calculus and bigeometric calculus, are examined in several papers [2, 5]. It is evident that many further studies are needed to extend the use of multiplicative calculus in science and engineering. Some recent studies on this topic [1, 2, 5], as well as work on Volterra's multiplicative differential equations, have produced encouraging results. As these multiplicative differential equations come into use, their solutions become very important. From this point of view, we developed multiplicative Adams Bashforth–Moulton algorithms for the numerical approximation of the solutions of these multiplicative differential equations. For the numerical solution of multiplicative initial value problems and boundary value problems, multiplicative Runge–Kutta algorithms, which are one-step methods, were given in [2], and a multiplicative finite difference algorithm was given in [5]. The well-known classical Adams Bashforth–Moulton algorithms, which are multistep methods, can also be adapted to solve multiplicative initial value problems. These methods are based on the exponential Newton backward division formula.

This paper is organized as follows. In Sections 2 and 3, some basic definitions and results on multiplicative calculus are given, multiplicative Adams Bashforth–Moulton methods are developed to obtain numerical solutions of the initial value problem in the multiplicative case, and the error estimation of these methods is discussed. The stability properties are analyzed in Section 4, the numerical results are presented in Section 5, and Section 6 gives the conclusions.

2 Basic definitions and results on multiplicative calculus

2.1 Multiplicative differentiation

The classical derivative of the function f at x is defined as the limit

$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}. \qquad (1)$$

Replacing the difference f(x+h) − f(x) by the ratio f(x+h)/f(x), and the division by h by raising to the reciprocal power 1/h, we obtain the multiplicative derivative of f at x as the limit

$$f^*(x) = \lim_{h \to 0} \left( \frac{f(x+h)}{f(x)} \right)^{1/h}. \qquad (2)$$
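The limit in (2) can be checked numerically. The sketch below (Python, ours, not from the paper) approximates f*(x) with a small but finite h and compares it against the closed form exp((ln f)'(x)) for the illustrative choice f(x) = exp(x²), whose multiplicative derivative is exp(2x):

```python
import math

def mul_derivative(f, x, h=1e-6):
    """Finite-h approximation of f*(x) = lim_{h->0} (f(x+h)/f(x))^(1/h)."""
    return (f(x + h) / f(x)) ** (1.0 / h)

# Example: f(x) = exp(x^2), so (ln f)'(x) = 2x and f*(x) = exp(2x).
f = lambda x: math.exp(x ** 2)
x = 1.5
approx = mul_derivative(f, x)
exact = math.exp(2 * x)
print(approx, exact)  # the two values agree to roughly six digits
```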


In [1] it is shown that if f is a positive function on an open subset A of the real line R and its derivative f'(x) exists, then its multiplicative derivative f*(x) also exists, and they are related by

$$f^*(x) = \exp\!\left( \bigl(\ln f(x)\bigr)' \right). \qquad (3)$$

Moreover, if the nth derivative f^{(n)}(x) exists, then the nth multiplicative derivative f^{*(n)}(x) also exists and

$$f^{*(n)}(x) = \exp\!\left( \bigl(\ln f(x)\bigr)^{(n)} \right), \quad n = 0, 1, \ldots. \qquad (4)$$

Some properties of multiplicative derivatives and the existence and uniqueness theorem for solutions of multiplicative differential equations are investigated in [1].

2.2 Multiplicative integration

A Riemann integral in the multiplicative form is also defined in [1] for positive bounded functions, together with its relation to the ordinary Riemann integral:

$$\int_a^b f(x)^{dx} = \exp\!\left( \int_a^b \ln f(x)\, dx \right). \qquad (5)$$

This integral has the properties

$$\int_a^b \bigl(f(x)^p\bigr)^{dx} = \left( \int_a^b f(x)^{dx} \right)^{p}, \quad p \in \mathbb{R}, \qquad (6)$$

$$\int_a^b \bigl(f(x)\, g(x)\bigr)^{dx} = \int_a^b f(x)^{dx} \cdot \int_a^b g(x)^{dx}, \qquad (7)$$

$$\int_a^b \left( \frac{f(x)}{g(x)} \right)^{dx} = \frac{\int_a^b f(x)^{dx}}{\int_a^b g(x)^{dx}}, \qquad (8)$$

where f and g are integrable on [a, b] in the multiplicative sense.

Theorem 1 (Bashirov et al. [1]) Let f : [a, b] → R be differentiable in the multiplicative sense. If f* has a multiplicative integral on [a, b], then

$$\int_a^b f^*(x)^{dx} = \frac{f(b)}{f(a)}. \qquad (9)$$

2.3 Multiplicative differential equation

The equation

$$y^*(x) = f\bigl(x, y(x)\bigr), \qquad (10)$$


containing the multiplicative derivative of y, is called a multiplicative differential equation. This equation makes sense if f is a positive function defined on some subset G of R × R⁺. For more details, we refer the reader to [1].

Certain dynamical systems cannot be described with the usual differential calculus. For example, when fractals are employed to model processes and effects occurring in nature, the models contain a variable fractal dimension. The additive derivative of the dimension function does not exist, since it is not possible to define the differential quotient [2]. In this case, the multiplicative calculus introduced by Volterra [11], called Volterra type multiplicative calculus, can be applied. A Volterra type multiplicative differential equation is defined as follows:

$$y^{\pi}(x) = f\bigl(x, y(x)\bigr). \qquad (11)$$

Several relevant studies have been carried out using this multiplicative differential calculus. Aniszewska and Rybaczuk [12] derive a stability theory of Lyapunov type for systems of autonomous multiplicative differential equations; for the multiplicative Lorenz system described with multiplicative derivatives, the largest Lyapunov exponent was obtained. The paper [13] is devoted to a fractal model of fatigue defect growth. The paper [14] presents other arguments supporting multiplicative calculus: according to experience from field theory and quantum physics, such derivatives give good results (see [14]), as they do in problems involving multiple scales of length. Multiplicative differential calculus thus becomes very important.

2.4 Exponential Newton backward division formula

Numerous approximating and interpolating methods in numerical analysis can be built using polynomials, rational functions, trigonometric functions, or exponential functions. In this section, using exponential functions, we construct the exponential backward division formula, analogous to the backward difference formula in the polynomial case, for the numerical solution of the multiplicative differential equation. Here multiplication, division, and raising to a power in multiplicative calculus take the roles of addition, subtraction, and multiplication in ordinary calculus. Consequently, the forward difference, backward difference, and divided difference formulas of the polynomial case are respectively replaced by forward division, backward division, and power division formulas in the exponential case. For more details, we refer the reader to [8].

We first define the backward division operator ∇* by

$$\nabla^{*} f_i = \frac{f_i}{f_{i-1}}, \quad i = 0, 1, \ldots, \qquad (12)$$


where f_i = f(x_i). The following relations hold:

$$\nabla^{*(2)} f_i = \nabla^{*}\bigl(\nabla^{*} f_i\bigr) = \frac{f_i\, f_{i-2}}{f_{i-1}^{2}},$$

$$\nabla^{*(3)} f_i = \nabla^{*}\bigl(\nabla^{*(2)} f_i\bigr) = \frac{f_i\, f_{i-2}^{3}}{f_{i-1}^{3}\, f_{i-3}},$$

$$\vdots$$

$$\nabla^{*(n)} f_i = \nabla^{*}\bigl(\nabla^{*(n-1)} f_i\bigr) = \frac{f_i^{C(n,0)}\, f_{i-2}^{C(n,2)} \cdots}{f_{i-1}^{C(n,1)}\, f_{i-3}^{C(n,3)} \cdots}.$$

Thus the generalized exponential backward division formula is defined as

$$\nabla^{*(k)} f_i = \prod_{s=0}^{k} f_{i-s}^{(-1)^{s} C(k,s)}, \quad i = 1, 2, \ldots, \qquad (13)$$

where C(k, s) = k!/((k − s)!\, s!) for k, s = 0, 1, 2, .... Reformulating the backward difference interpolation polynomial for ln f_i, we obtain the multiplicative version of that polynomial:

$$E_n(r) = f_i \bigl(\nabla^{*} f_i\bigr)^{C(r,1)} \bigl(\nabla^{*(2)} f_i\bigr)^{C(r+1,2)} \cdots \bigl(\nabla^{*(n)} f_i\bigr)^{C(r+n-1,n)}.$$

Hence we have

$$E_n(r) = \prod_{k=0}^{n} \bigl(\nabla^{*(k)} f_i\bigr)^{C(r+k-1,\,k)}. \qquad (14)$$

Substituting (13) into (14), we obtain the generalized exponential backward division interpolation formula

$$E_n(r) = \prod_{k=0}^{n} \left( \prod_{s=0}^{k} f_{i-s}^{(-1)^{s} C(k,s)} \right)^{C(r+k-1,\,k)}, \qquad (15)$$

where r = (x − x_i)/h and x_{i+1} = x_i + h.
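As an illustration (ours, not from the paper), the product formula (13) can be implemented directly and checked against the recursive definition (12) on sample data; the helper name `backward_division` is ours:

```python
from math import comb

def backward_division(f_vals, i, k):
    """k-th backward division of f at index i: prod_{s=0..k} f_{i-s}^((-1)^s C(k,s))."""
    out = 1.0
    for s in range(k + 1):
        out *= f_vals[i - s] ** ((-1) ** s * comb(k, s))
    return out

# Sanity check against the recursion: nabla* f_i = f_i / f_{i-1},
# nabla*(2) f_i = nabla*(nabla* f_i) = f_i f_{i-2} / f_{i-1}^2.
f = [2.0, 3.0, 5.0, 7.0, 11.0]
first = [f[i] / f[i - 1] for i in range(1, 5)]  # nabla* f_i for i = 1..4
second = first[3] / first[2]                    # nabla*(2) f_4 = f4*f2/f3^2
print(second, backward_division(f, 4, 2))       # both equal 55/49
```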


3 Adams Bashforth–Moulton methods

Multiplicative Runge–Kutta methods are called one-step methods because they use only the information from one previous point to compute the next point; that is, only the initial point (x₀, y₀) is used to compute (x₁, y₁), and in general y_i is needed to compute y_{i+1}. After several points have been found, it becomes feasible to use several prior points in the calculation. As an illustration, we develop the Adams Bashforth four-step method, which requires y_{i−3}, y_{i−2}, y_{i−1} and y_i in the calculation of y_{i+1}.

3.1 Multiplicative Adams Bashforth algorithms

Following [6, 7], we can now construct the multiplicative Adams Bashforth methods for the ordinary multiplicative differential equation of the form

$$y^*(x) = f\bigl(x, y(x)\bigr), \quad y(x_0) = y_0, \qquad (16)$$

which is called an initial value problem in the multiplicative sense. Integrating both sides of equation (16) over the interval [x_i, x_{i+1}], we obtain

$$\int_{x_i}^{x_{i+1}} \bigl(y^*(x)\bigr)^{dx} = \int_{x_i}^{x_{i+1}} f(x, y)^{dx}. \qquad (17)$$

On the other hand, making the approximation f(x, y) ≈ E_n(r), we find the formula

$$y_{i+1} = y_i \left( \int_{0}^{1} E_n(r)^{dr} \right)^{h}, \qquad (18)$$

where r = 1 for x = x_{i+1}, r = 0 for x = x_i, and dx = h dr. Using the exponential approximation E_n(r) for f(x, y), based on the points f_i = f(x_i, y_i), f_{i−1} = f(x_{i−1}, y_{i−1}), f_{i−2} = f(x_{i−2}, y_{i−2}), f_{i−3} = f(x_{i−3}, y_{i−3}), and integrating (18) in the multiplicative sense, we obtain the multiplicative Adams Bashforth algorithms, also called the multiplicative predictor formulas:

i) If n = 0, then

$$y_{i+1}^{p} = y_i\, f_i^{h}. \qquad (19)$$

This can also be called the multiplicative explicit Euler formula.

ii) If n = 1, then

$$y_{i+1}^{p} = y_i \left( f_i^{3}\, f_{i-1}^{-1} \right)^{h/2}. \qquad (20)$$

iii) If n = 2, then

$$y_{i+1}^{p} = y_i \left( f_i^{23}\, f_{i-1}^{-16}\, f_{i-2}^{5} \right)^{h/12}. \qquad (21)$$

iv) If n = 3, then

$$y_{i+1}^{p} = y_i \left( f_i^{55}\, f_{i-1}^{-59}\, f_{i-2}^{37}\, f_{i-3}^{-9} \right)^{h/24}. \qquad (22)$$
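As an illustration (a Python sketch of ours, not the authors' code), the two-step predictor (20), started with one multiplicative Euler step (19), can be written as follows. On the constant right-hand side f(x, y) = e, for which y*(x) = e gives y(x) = y₀ e^{x−x₀}, the method reproduces the exact solution up to rounding:

```python
import math

def mab2(f, x0, y0, h, n):
    """Multiplicative Adams Bashforth 2-step method, formula (20),
       started with one multiplicative Euler step, formula (19)."""
    xs = [x0, x0 + h]
    ys = [y0, y0 * f(x0, y0) ** h]
    for i in range(1, n):
        fi = f(xs[i], ys[i])
        fim1 = f(xs[i - 1], ys[i - 1])
        ys.append(ys[i] * (fi ** 3 / fim1) ** (h / 2))  # formula (20)
        xs.append(xs[i] + h)
    return xs, ys

# y* = e has exact solution y(x) = e^x for y(0) = 1.
xs, ys = mab2(lambda x, y: math.e, 0.0, 1.0, 0.1, 50)
print(ys[-1], math.exp(xs[-1]))  # the two values agree closely
```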

3.2 Error estimation for multiplicative Adams Bashforth algorithms

We can use (15) for E_n(r) and E_{n+1}(r) in order to estimate the errors of the multiplicative multistep methods. Since the ratio between E_{n+1}(r) and E_n(r) gives the truncation error of the method based on E_n(r), we consider

$$y_{i+1}^{p} = y_i \left( \int_{0}^{1} E_{n+1}(r)^{dr} \right)^{h} \qquad (23)$$

and

$$y_{i+1}^{p} = y_i \left( \int_{0}^{1} E_n(r)^{dr} \right)^{h}. \qquad (24)$$

Then a relation can be obtained by dividing (23) by (24):

$$\int_{0}^{1} \left( \bigl(\nabla^{*(n+1)} f_i\bigr)^{C(r+n,\,n+1)} \right)^{dr} \cong 1. \qquad (25)$$

Therefore,

$$\operatorname{Error}\bigl(E_n(r)\bigr) = \bigl( f^{*(n+1)}(\mu_i) \bigr)^{\,h^{n+1} \int_{0}^{1} C(r+n,\,n+1)\, dr}, \qquad (26)$$

where ∇^{*(n+1)} f_i = ( f^{*(n+1)}(μ_i) )^{h^{n+1}} for x_{i−n} ≤ μ_i ≤ x_{i+1}. Recalling the relation (4) and that f(x) must be a positive function, it is natural to make the assumption f(x) = exp(y(x)) to simplify the system, as examined in detail in [5]. Hence, using

$$f^{*(n)}(x) = \exp\bigl( y^{(n)}(x) \bigr), \qquad (27)$$

we derive the generalized local truncation error formula

$$\operatorname{Error}\bigl(E_n(r)\bigr) = \exp\!\left( y^{(n+1)}(\mu_i)\, h^{n+1} \int_{0}^{1} C(r+n,\,n+1)\, dr \right). \qquad (28)$$

The errors can then be computed from (28) as:

$$E(E_0) = \exp\!\left( \tfrac{1}{2}\, h\, y'(\mu_i) \right) \quad \text{for } x_i \le \mu_i \le x_{i+1}, \qquad (29)$$

$$E(E_1) = \exp\!\left( \tfrac{5}{12}\, h^2\, y''(\mu_i) \right) \quad \text{for } x_{i-1} \le \mu_i \le x_{i+1}, \qquad (30)$$


$$E(E_2) = \exp\!\left( \tfrac{3}{8}\, h^3\, y'''(\mu_i) \right) \quad \text{for } x_{i-2} \le \mu_i \le x_{i+1}, \qquad (31)$$

$$E(E_3) = \exp\!\left( \tfrac{251}{720}\, h^4\, y^{(4)}(\mu_i) \right) \quad \text{for } x_{i-3} \le \mu_i \le x_{i+1}. \qquad (32)$$

3.3 Multiplicative Adams Moulton algorithms

A new method in the multiplicative sense can now be developed for solving the initial value problem (16). To this end, expanding f_{i+1} as in (13) and taking f(x, y(x)) ≈ E_n(r) for r = (x − x_{i+1})/h, we reconstruct (17) and obtain

$$y_{i+1} = y_i \left( \int_{-1}^{0} E_n(r)^{dr} \right)^{h}, \qquad (33)$$

where r = 0 for x = x_{i+1}, r = −1 for x = x_i, and dx = h dr. Using the exponential approximation E_n(r) for f(x, y), based on the points f_{i−2} = f(x_{i−2}, y_{i−2}), f_{i−1} = f(x_{i−1}, y_{i−1}), f_i = f(x_i, y_i), f_{i+1} = f(x_{i+1}, y^p_{i+1}), and integrating (33) in the multiplicative sense, we derive the multiplicative Adams Moulton algorithms, also called the multiplicative corrector formulas:

i) If n = 0, then

$$y_{i+1}^{c} = y_i\, f_{i+1}^{h}. \qquad (34)$$

This can also be called the multiplicative implicit Euler formula.

ii) If n = 1, then

$$y_{i+1}^{c} = y_i \left( f_{i+1}\, f_i \right)^{h/2}. \qquad (35)$$

iii) If n = 2, then

$$y_{i+1}^{c} = y_i \left( f_{i+1}^{5}\, f_i^{8}\, f_{i-1}^{-1} \right)^{h/12}. \qquad (36)$$

iv) If n = 3, then

$$y_{i+1}^{c} = y_i \left( f_{i+1}^{9}\, f_i^{19}\, f_{i-1}^{-5}\, f_{i-2} \right)^{h/24}. \qquad (37)$$
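Pairing the four-step predictor (22) with the corrector (37) gives a multiplicative predictor-corrector step. A hedged Python sketch (ours; the helper name `pece4_step` is an assumption, not from the paper):

```python
import math

def pece4_step(f, xs, ys, h):
    """One multiplicative predictor-corrector step: MAB-4 predictor (22)
       followed by the MAM-4 corrector (37). Needs the last four points."""
    fi, f1, f2, f3 = (f(xs[-k], ys[-k]) for k in (1, 2, 3, 4))
    yp = ys[-1] * (fi**55 / f1**59 * f2**37 / f3**9) ** (h / 24)  # predict (22)
    fp = f(xs[-1] + h, yp)                                        # evaluate
    return ys[-1] * (fp**9 * fi**19 / f1**5 * f2) ** (h / 24)     # correct (37)

# On y*(x) = e (so y = e^x) one step advances the exact solution exactly,
# since the exponents in (22) and (37) each sum to 24.
xs = [0.0, 0.1, 0.2, 0.3]
ys = [math.exp(x) for x in xs]
print(pece4_step(lambda x, y: math.e, xs, ys, 0.1), math.exp(0.4))
```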

3.4 Error estimation for multiplicative Adams Moulton algorithms

In a similar way to the error estimation for the multiplicative Adams Bashforth methods in Section 3.2, the generalized error formula of the multiplicative Adams Moulton methods for E_n can be determined by dividing the equation

$$y_{i+1}^{c} = y_i \left( \int_{-1}^{0} E_{n+1}(r)^{dr} \right)^{h}$$

by the equation

$$y_{i+1}^{c} = y_i \left( \int_{-1}^{0} E_n(r)^{dr} \right)^{h},$$

which gives

$$\operatorname{Error}\bigl(E_n(r)\bigr) = \bigl( f^{*(n+1)}(\mu_i) \bigr)^{\,h^{n+1} \int_{-1}^{0} C(r+n,\,n+1)\, dr}, \qquad (38)$$

where ∇^{*(n+1)} f_i = ( f^{*(n+1)}(μ_i) )^{h^{n+1}} for x_{i−n+1} ≤ μ_i ≤ x_{i+1}. Hence, under the assumption (27), the generalized error formula for E_n simplifies to

$$\operatorname{Error}\bigl(E_n(r)\bigr) = \exp\!\left( h^{n+1}\, y^{(n+1)}(\mu_i) \int_{-1}^{0} C(r+n,\,n+1)\, dr \right). \qquad (39)$$

Using (39), we calculate:

$$E(E_0) = \exp\!\left( -\tfrac{1}{2}\, h\, y'(\mu_i) \right) \quad \text{for } x_i \le \mu_i \le x_{i+1}, \qquad (40)$$

$$E(E_1) = \exp\!\left( -\tfrac{1}{12}\, h^2\, y''(\mu_i) \right) \quad \text{for } x_i \le \mu_i \le x_{i+1}, \qquad (41)$$

$$E(E_2) = \exp\!\left( -\tfrac{1}{24}\, h^3\, y'''(\mu_i) \right) \quad \text{for } x_{i-1} \le \mu_i \le x_{i+1}, \qquad (42)$$

$$E(E_3) = \exp\!\left( -\tfrac{19}{720}\, h^4\, y^{(4)}(\mu_i) \right) \quad \text{for } x_{i-2} \le \mu_i \le x_{i+1}. \qquad (43)$$

4 Stability analysis

The behaviour of numerical methods on stiff problems can be analyzed by applying them to the standard test problem

$$y'(x) = \lambda y(x), \quad y(x_0) = y_0, \qquad (44)$$

where λ is a complex parameter. In many applications the solution of (44) vanishes asymptotically as x → ∞, and the same stable behaviour is expected of numerical solutions; for this reason many authors study the above linear test problem [15–18]. Because multiplicative calculus is defined as an alternative to the classical calculus, alternative solutions to classical problems can be obtained from the multiplicative versions of these problems. We therefore converted the classical test problem into a multiplicative test problem in order to investigate the behaviour of the presented methods on the classical linear equation and determine their stability domain. The multiplicative Adams algorithms can be written as

$$y_{n+1}^{p} = y_n \prod_{i=0}^{k} f\bigl(x_{n-i}, y_{n-i}\bigr)^{\beta_i h}, \qquad (45)$$

$$y_{n+1}^{c} = y_n\, f\bigl(x_{n+1}, y_{n+1}^{p}\bigr)^{\beta_0 h} \prod_{i=1}^{k} f\bigl(x_{n-i+1}, y_{n-i+1}\bigr)^{\beta_i h}, \qquad (46)$$

where

$$\sum_{i=0}^{k} \beta_i = 1. \qquad (47)$$

The multiplicative version of the above standard test problem is proposed in the form

$$y^*(x) = \exp(\lambda), \quad y(x_0) = y_0. \qquad (48)$$

The analytic solution of this problem can be expressed as

$$y(x) = \exp\bigl( \lambda (x - x_0) \bigr)\, y_0.$$

This solution approaches zero as x → ∞ when Re(λ) < 0. If the numerical method also exhibits this behaviour, the method is said to be A-stable [9]. Applying (45) and (46) to (48), we obtain

$$\frac{y_{n+1}}{y_n} = \exp(z) = R(z), \qquad (49)$$

where z = λh. Here R(z) is called the stability function of the methods, and the stability domain (or region) of these methods can be described as

$$S^{*} = \{ z \in \mathbb{C} : |\exp(z)| < 1 \}. \qquad (50)$$

Hence, using

$$0 < \exp\bigl( -|\lambda|\, h \bigr) < 1, \qquad (51)$$

we obtain 0 < h < ∞. By this criterion the developed methods are seen to be unconditionally stable. On the other hand, by (50) and |exp(z)| = exp(Re(z)), we deduce Re(z) < 0. Since the region of absolute stability contains the whole left half plane, all of the proposed methods are A-stable. Although classical explicit multistep methods can never be A-stable, and implicit multistep methods can only be A-stable if their order is at most 2, both the explicit and the implicit methods in the multiplicative case are A-stable. A method is moreover L-stable if it is A-stable and R(z) → 0 as |z| → ∞ [10]. According to this definition, the multiplicative Adams Bashforth–Moulton methods are L-stable, since they are A-stable and |exp(z)| → 0 as |z| → ∞.
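The unconditional stability can be observed numerically. In the sketch below (Python, ours), the multiplicative explicit Euler formula (19) is applied to the test problem (48); for λ < 0 the iterates decay for any step size h, in contrast to the classical explicit Euler method, which for real negative λ requires h < 2/|λ|:

```python
import math

def solve_test(lam, h, n, y0=1.0):
    """Multiplicative explicit Euler (19) on the test problem (48),
       y*(x) = exp(lam): each step multiplies y by exp(lam)**h = exp(lam*h)."""
    y = y0
    for _ in range(n):
        y *= math.exp(lam * h)
    return y

# With lam < 0 the iterates decay for ANY step size, even a huge h = 10.
print(solve_test(-5.0, 10.0, 4))  # a tiny positive number, tending to 0
```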


Table 1 The comparison of numerical results for MR-K methods and MAB-M methods for h = 0.1

| x | y(x) | Method | Result | R. error (%) |
|---|------|--------|--------|--------------|
| 425.9956668083 | 419.9412376339 | MR-K2 | 415.5685869679 | 1.0412500 |
| | | MR-K3 | 420.5358872408 | 0.1416030 |
| | | MR-K4 | 420.2994798832 | 0.0853077 |
| 426.0 | 419.9455606537 | MAB-2 | 419.8627174937 | 0.0197271 |
| | | MAB-3 | 419.8008829906 | 0.0344515 |
| | | MAB-4 | 419.8239686466 | 0.0289542 |
| | | MAM-2 | 420.1948651000 | 0.0593658 |
| | | MAM-3 | 419.7499747000 | 0.0465741 |
| | | MAM-4 | 419.8375502000 | 0.0257201 |
| 2153.1824394181 | 2145.5077371871 | MR-K2 | 2111.8918755939 | 1.5668000 |
| | | MR-K3 | 2149.3920729325 | 0.1810450 |
| | | MR-K4 | 2147.2543523302 | 0.0814080 |
| 2153.2 | 2145.5252896133 | MAB-2 | 2145.4424305860 | 0.003861 |
| | | MAB-3 | 2145.3806119562 | 0.006743 |
| | | MAB-4 | 2145.4036976062 | 0.005667 |
| | | MAM-2 | 2146.19323 | 0.031131 |
| | | MAM-3 | 2145.280161 | 0.011425 |
| | | MAM-4 | 2145.428735 | 0.004500 |
| 10883.1966582076 | 10873.9016829198 | MR-K2 | 10647.0966921181 | 2.08577 |
| | | MR-K3 | 10897.8773521744 | 0.220488 |
| | | MR-K4 | 10882.3906119410 | 0.078067 |
| 10883.2 | 10873.9050244051 | MAB-2 | 10873.8221622635 | 0.0007620 |
| | | MAB-3 | 10873.7603467482 | 0.0013305 |
| | | MAB-4 | 10873.7834323984 | 0.0011182 |
| | | MAM-2 | 10875.51242 | 0.0147821 |
| | | MAM-3 | 10873.56281 | 0.0031471 |
| | | MAM-4 | 10873.82945 | 0.0006950 |

Table 2 The comparison of numerical results for MR-K methods and MAB-M methods for h = 0.01

| x | y(x) | Method | Result | R. error (%) |
|---|------|--------|--------|--------------|
| 431.5900388582 | 425.5225627046 | MR-K2 | 425.4766063982 | 0.0108 |
| | | MR-K3 | 425.5268708335 | 0.00101243 |
| | | MR-K4 | 425.5266219083 | 0.000953934 |
| 431.59 | 425.5225239364 | MAB-2 | 425.5214298 | 0.000257138 |
| | | MAB-3 | 425.5195694 | 0.000694333 |
| | | MAB-4 | 425.519788 | 0.000642953 |
| | | MAM-2 | 425.5262424 | 0.000873867 |
| | | MAM-3 | 425.5193203 | 0.00075287 |
| | | MAM-4 | 425.5198139 | 0.000636873 |
| 2185.0196106504 | 2177.3302305678 | MR-K2 | 2176.9765505746 | 0.0162437 |
| | | MR-K3 | 2177.3530108055 | 0.00104625 |
| | | MR-K4 | 2177.3508349009 | 0.000946312 |
| 2185.02 | 2177.3306197391 | MAB-2 | 2177.329525 | 5.02576e-05 |
| | | MAB-3 | 2177.327666 | 0.000135679 |
| | | MAB-4 | 2177.327884 | 0.000125667 |
| | | MAM-2 | 2177.340408 | 0.000449552 |
| | | MAM-3 | 2177.327173 | 0.000158292 |
| | | MAM-4 | 2177.327932 | 0.000123421 |
| 10844.1750165992 | 10834.8836332492 | MR-K2 | 10832.5451941033 | 0.0215825 |
| | | MR-K3 | 10835.0011320594 | 0.00108445 |
| | | MR-K4 | 10834.9859239692 | 0.000944087 |
| 10844.18 | 10834.8886161904 | MAB-2 | 10834.88753 | 1.00645e-05 |
| | | MAB-3 | 10834.88567 | 2.72313e-05 |
| | | MAB-4 | 10834.88597 | 2.44625e-05 |
| | | MAM-2 | 10834.91179 | 0.000213892 |
| | | MAM-3 | 10834.8847 | 3.61233e-05 |
| | | MAM-4 | 10834.88587 | 2.53037e-05 |


For the multiplicative version of the standard test equation described with the multiplicative derivative, the stability properties are thus easily analyzed.

5 Application

Let us consider the following simple Volterra-type differential equation,

$$y^{\pi}(x) = \exp\!\left( \frac{x - 1}{y} \right), \qquad (52)$$

which was solved to test the multiplicative Runge–Kutta methods in [2]. An analytic solution of this equation is

$$y(x) = x - \ln(x). \qquad (53)$$

Using the relation

$$y^{\pi}(x) = \bigl( y^*(x) \bigr)^{x} \qquad (54)$$

between the Volterra type derivative and the multiplicative derivative, as given in [5], we can write the following multiplicative differential equation:

$$y^*(x) = \exp\!\left( \frac{x - 1}{x y} \right), \qquad (55)$$

with the initial condition y(0.1) = 2.402585093.

[Fig. 1 The trajectory of (53) and points of (55)–(56) calculated with MAB-2, provided for different time ranges.]


This multiplicative initial value problem is solved using the MAB-2 (second-order multiplicative Adams Bashforth), MAB-3 (third-order), MAB-4 (fourth-order), MAM-2 (second-order multiplicative Adams Moulton), MAM-3 (third-order) and MAM-4 (fourth-order) methods. The results obtained with these algorithms are presented in Tables 1 and 2, where the solutions are compared with the analytic solution and with the solutions of the multiplicative Runge–Kutta algorithms. The relative error values in Tables 1 and 2 are calculated for h = 0.1 and h = 0.01, respectively. The errors and results of the test computations for MAB-2 are also shown in Figs. 1, 2 and 3. The relative error values are computed by the formula

$$\text{Relative Error} = \left| \frac{y_{\text{app.}}(x_i) - y_{\text{exact}}(x_i)}{y_{\text{exact}}(x_i)} \right|. \qquad (56)$$

For the numerical solution of equation (55), Matlab codes of the developed algorithms were written, and the simulations were performed using Matlab version R2008b.
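The authors' Matlab codes are not reproduced here, but the experiment can be sketched in Python (ours, with an Euler starting step; the paper does not specify its starting procedure). The MAB-2 method (20) is applied to (55) with y(0.1) = 2.402585093 and h = 0.1, and the result is compared with the analytic solution (53):

```python
import math

def f(x, y):
    # right-hand side of (55): y*(x) = exp((x - 1)/(x y))
    return math.exp((x - 1.0) / (x * y))

def mab2(x0, y0, h, x_end):
    """MAB-2, formula (20), started with one multiplicative Euler step (19)."""
    xs = [x0, x0 + h]
    ys = [y0, y0 * f(x0, y0) ** h]
    while xs[-1] < x_end - 1e-12:
        fi, fim1 = f(xs[-1], ys[-1]), f(xs[-2], ys[-2])
        ys.append(ys[-1] * (fi ** 3 / fim1) ** (h / 2))
        xs.append(xs[-1] + h)
    return xs[-1], ys[-1]

x, y = mab2(0.1, 2.402585093, 0.1, 10.0)
exact = x - math.log(x)  # analytic solution (53)
print(y, exact, abs(y - exact) / exact)  # small relative error
```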

[Fig. 2 The trajectory of (53) and points of (55)–(56) calculated with MAB-2, provided for different time ranges.]

[Fig. 3 The trajectory of (53) and points of (55)–(56) calculated with MAB-2, provided for different time ranges.]

6 Conclusions

In this study, we presented multiplicative Adams Bashforth–Moulton methods for solving first order multiplicative differential equations. These methods were tested on the multiplicative initial value problem solved in [2]. Comparing the approximate numerical solutions with those of the multiplicative Runge–Kutta methods, we observe that the presented algorithms give more accurate results. This can be clearly seen from the relative error analysis, since the maximum relative errors of the proposed algorithms are smaller than those of the multiplicative Runge–Kutta algorithms. Our algorithms for first order multiplicative differential equations are efficient, effective, and unconditionally stable, and the numerical results exhibit their high accuracy. Consequently, many problems in engineering and the sciences might be solved by these methods.

References

1. Bashirov, A.E., Misirli Kurpinar, E., Ozyapici, A.: Multiplicative calculus and its applications. J. Math. Anal. Appl. 337, 36–48 (2008)
2. Aniszewska, D.: Multiplicative Runge–Kutta methods. Nonlinear Dyn. 50, 265–272 (2007)
3. Grossman, M.: Bigeometric Calculus, a System with a Scale-free Derivative. Archimedes Foundation, Rockport (1983)
4. Grossman, M., Katz, R.: Non-Newtonian Calculus. Lee Press, Pigeon Cove, Massachusetts (1972)
5. Riza, M., Ozyapici, A., Misirli, E.: Multiplicative finite difference methods. Q. Appl. Math. 67, 745–754 (2009)
6. Suli, E., Mayers, D.F.: An Introduction to Numerical Analysis. Cambridge University Press, Cambridge (2003)
7. Butcher, J.C.: Numerical Methods for Ordinary Differential Equations. Wiley, Chichester (2003)
8. Misirli, E., Ozyapici, A.: Exponential approximations on multiplicative calculus. Proc. Jangjeon Math. Soc. 12, 227–236 (2009)
9. Dahlquist, G.G.: A special stability problem for linear multistep methods. BIT 3, 27–43 (1963)
10. Ehle, B.L.: On Padé approximations to the exponential function and A-stable methods for the numerical solution of initial value problems. Report 2010, University of Waterloo (1969)
11. Volterra, V., Hostinsky, B.: Opérations Infinitésimales Linéaires. Herman, Paris (1938)
12. Aniszewska, D., Rybaczuk, M.: Lyapunov type stability and Lyapunov exponent for exemplary multiplicative dynamical systems. Nonlinear Dyn. 54, 345–354 (2008)
13. Rybaczuk, M., Stoppel, P.: The fractal growth of fatigue defects in materials. Int. J. Fract. 103, 71–94 (2000)
14. Nottale, L.: Scale relativity and fractal space-time: applications to quantum physics, cosmology and chaotic systems. Chaos Solitons Fract. 7, 877–938 (1996)
15. D'Ambrosio, R., Ferro, M., Jackiewicz, Z., Paternoster, B.: Two-step almost collocation methods for ordinary differential equations. Numer. Algorithms 54, 169–193 (2010)
16. Khaliq, A.Q.M., Twizell, E.H.: Stability regions for one-step multiderivative methods in PECE mode with application to stiff systems. Int. J. Comput. Math. 17, 323–338 (1985)
17. Prothero, A., Robinson, A.: On the stability and accuracy of one-step methods for solving stiff systems of ordinary differential equations. Math. Comput. 28, 145–162 (1974)
18. Lambert, J.D.: Computational Methods in Ordinary Differential Equations. Wiley, Chichester (1973)