Applied Mathematics and Computation 265 (2015) 1011–1018

Convergence radius of Halley's method for multiple roots under center-Hölder continuous condition

Suzhen Liu a, Yongzhong Song b,c, Xiaojian Zhou a,b,∗

a School of Science, Nantong University, Nantong 226007, PR China
b Jiangsu Key Laboratory for Numerical Simulation of Large Scale Complex Systems, Nanjing 210046, PR China
c Institute of Mathematics, Nanjing Normal University, Nanjing 210046, PR China

Keywords: Nonlinear equation; Multiple roots; Convergence radius; Halley's method; Center-Hölder condition; Taylor's expansion

Abstract. Recently, a new treatment based on Taylor's expansion has been presented to estimate the convergence radius of iterative methods for multiple roots. It has been successfully applied to enlarge the convergence radius of the modified Newton's method and Osada's method for multiple roots. This paper re-investigates the convergence radius of Halley's method under the condition that the derivative f^{(m+1)} of the function f satisfies the center-Hölder continuous condition. We show that our result can be obtained under a much weaker condition and has a wider range of application than that given by Bi et al. (2011) [21]. © 2015 Elsevier Inc. All rights reserved.

1. Introduction

Solving nonlinear equations is a common and important problem in science and engineering. In this study, we consider iterative methods for finding a root x* of multiplicity m of a nonlinear equation f(x) = 0 on an open interval D ⊆ R, i.e., f^{(i)}(x*) = 0 for i = 0, 1, ..., m − 1 and f^{(m)}(x*) ≠ 0. It is known that the modified Newton's method for multiple roots, given by Schröder [1], is quadratically convergent and defined by

x_{n+1} = x_n − m f(x_n)/f'(x_n).  (1)
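As a minimal illustration (ours, not part of the paper's presentation), iteration (1) can be sketched as follows; the test function f(x) = (x − 1)² e^x, with a double root at x* = 1, is a hypothetical example chosen for the demonstration:

```python
from math import exp

def modified_newton(f, df, x0, m, tol=1e-12, max_iter=50):
    """Schroeder's modified Newton iteration (1): x_{n+1} = x_n - m f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:          # landed exactly on the root; f'(x*) = 0 for m >= 2
            break
        step = m * fx / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = (x - 1)^2 e^x has a root of multiplicity m = 2 at x* = 1.
root = modified_newton(lambda x: (x - 1)**2 * exp(x),
                       lambda x: ((x - 1)**2 + 2*(x - 1)) * exp(x),
                       x0=2.0, m=2)
```

Although f'(x*) = 0 at a multiple root, the ratio m f/f' stays well behaved near x*, and the factor m restores the quadratic convergence that plain Newton loses there.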

The main goal and motivation in constructing iterative methods for solving nonlinear equations is to obtain higher-order methods with minimal computational cost. The Kung–Traub conjecture [2] suggests that an effective way to improve the order of convergence of an iteration is to use more information about the function f. For example, using the multistep iterative approach, many higher-order iterative methods have been presented; see [3–11] and references therein. Another efficient way is to use a single-step iteration, but in this case the second or higher-order derivatives of f are needed. For example, in [12], Traub presented the following cubically convergent iterative method

x_{n+1} = x_n − (m(3 − m)/2) · f(x_n)/f'(x_n) − (m²/2) · f(x_n)² f''(x_n)/f'(x_n)³,

which can be viewed as an extension of Chebyshev's method.
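As a further illustration (ours), Traub's iteration above can be sketched like this; the test function f(x) = cos(x) − 1 with double root x* = 0 anticipates Example 1 below:

```python
from math import cos, sin

def traub(f, df, d2f, x0, m, tol=1e-12, max_iter=50):
    """Traub's cubically convergent single-step iteration:
    x_{n+1} = x_n - m(3-m)/2 * f/f' - m^2/2 * f^2 f'' / f'^3."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0:
            break
        step = m * (3 - m) / 2 * fx / dfx + m**2 / 2 * fx**2 * d2f(x) / dfx**3
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = cos(x) - 1 has a double root (m = 2) at x* = 0.
root = traub(lambda x: cos(x) - 1, lambda x: -sin(x), lambda x: -cos(x),
             x0=1.0, m=2)
```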

Corresponding author. Tel.: +86 18921608416. E-mail addresses: [email protected] (S. Liu), [email protected] (Y. Song), [email protected], [email protected] (X. Zhou).

http://dx.doi.org/10.1016/j.amc.2015.05.147 0096-3003/© 2015 Elsevier Inc. All rights reserved.


Another famous third-order method for multiple roots is Halley's method, presented by Hansen and Patrick [13]:

x_{n+1} = x_n − f(x_n) / [ ((m+1)/(2m)) f'(x_n) − f(x_n)f''(x_n)/(2f'(x_n)) ].  (2)
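A short sketch of iteration (2) (ours, for illustration only); the test function f(x) = x²(x² − 1), which reappears in Example 4 below, has a double root at x* = 0:

```python
def halley_multiple(f, df, d2f, x0, m, tol=1e-12, max_iter=50):
    """Hansen-Patrick / Halley iteration (2) for a root of multiplicity m."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0:
            break
        denom = (m + 1) / (2 * m) * dfx - fx * d2f(x) / (2 * dfx)
        step = fx / denom
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = x^2 (x^2 - 1): double root (m = 2) at x* = 0.  The start x0 = 0.25
# lies inside the convergence ball computed in Example 4 (r* ~ 0.2869).
root = halley_multiple(lambda x: x**4 - x**2,
                       lambda x: 4*x**3 - 2*x,
                       lambda x: 12*x**2 - 2,
                       x0=0.25, m=2)
```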

Also using the second derivative, Osada [14] gives another cubically convergent iterative method for multiple roots:

x_{n+1} = x_n − (1/2) m(m+1) f(x_n)/f'(x_n) + (1/2)(m−1)² f'(x_n)/f''(x_n).  (3)
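Iteration (3) can be sketched in the same way (again our own illustration); the hypothetical test function f(x) = (x − 1)³ has a triple root, so m = 3:

```python
def osada(f, df, d2f, x0, m, tol=1e-12, max_iter=50):
    """Osada's iteration (3):
    x_{n+1} = x_n - m(m+1)/2 * f/f' + (m-1)^2/2 * f'/f''."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:          # exactly on the root; f', f'' also vanish there
            break
        step = m * (m + 1) / 2 * fx / df(x) - (m - 1)**2 / 2 * df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = (x - 1)^3: triple root (m = 3) at x* = 1.
root = osada(lambda x: (x - 1)**3,
             lambda x: 3 * (x - 1)**2,
             lambda x: 6 * (x - 1),
             x0=1.7, m=3)
```

For this particular cubic the step is exact: both correction terms are multiples of (x − 1), and their combination removes the error in one iteration.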

Although more and more iterative methods for multiple roots have been presented, these results only show that if the initial guess x0 is sufficiently close to the root x* of the function involved, the sequence {x_n} generated by the method is well defined and converges to x*. But how close to the root x* should the initial guess x0 be? Such local results give no information on the radius of the convergence ball of the corresponding method. For iterative methods for simple roots, many results on the radius of the convergence ball have been obtained; however, there are few such results for iterative methods for multiple roots. Recently, Ren and Argyros first gave an estimate of the convergence radius of the modified Newton's method (1) in [15], assuming that the function f satisfies the Hölder continuous condition

|f^{(m)}(x*)^{−1}(f^{(m)}(x) − f^{(m)}(y))| ≤ K|x − y|^p, ∀x, y ∈ D, K > 0,  (4)

and the center-Hölder continuous condition

|f^{(m)}(x*)^{−1}(f^{(m)}(x) − f^{(m)}(x*))| ≤ K0|x − x*|^p, ∀x ∈ D, K0 > 0,  (5)

where 0 < p ≤ 1 and K, K0 are positive constants.

Remark 1. It should be pointed out that K0 ≤ K and the ratio K/K0 can be arbitrarily large [16]. What is more, the constant K > 0 may not exist (see Example 3 below or Example 4.3 in [15]). So the center-Hölder continuous condition (5) is much weaker than the Hölder continuous condition (4).

Their work is a milestone: it not only extends the results in [17–19], but also shows an approach to the local convergence analysis of iterative methods for multiple roots. Their key idea can be summarized as

1. Higher-order derivative
2. Higher-order divided difference
3. Multiple integral
4. Integral inequality

This approach has been successfully applied to obtain the convergence radius of Osada's method [20] and Halley's method [21]. Under the Hölder continuous condition

|f^{(m)}(x*)^{−1}(f^{(m+1)}(x) − f^{(m+1)}(y))| ≤ K|x − y|^p, ∀x, y ∈ D ⊆ R, K > 0,

and the boundedness condition

|f^{(m)}(x*)^{−1} f^{(m+1)}(x)| ≤ M, for some M > 0,

Bi et al. give the estimate of the convergence radius of Halley's method (2) as the minimum positive zero r* of the function

h(r) = 2m·m!·MK r^{p+2} + (m−1) ∏_{i=1}^{m+1}(m+p+2−i) K²r² − 2m(m+1)!·M r^{p+1} − m(2m+1) ∏_{i=1}^{m+1}(m+p+2−i) Kr + m²(m+1) ∏_{i=1}^{m+1}(m+p+2−i).  (6)
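As a numerical aside (ours, not in the original), the minimal positive zero of (6) can be located by a coarse sign scan followed by bisection. With the data of Example 1 below (m = 2, p = 1, K = M = 1), (6) reduces to 8r³ − 240r + 288, whose smallest positive zero is r* ≈ 1.26795:

```python
from math import factorial, prod

def h6(r, m, p, K, M):
    """h(r) from (6) (Bi et al.); its minimal positive zero is the radius."""
    P = prod(m + p + 2 - i for i in range(1, m + 2))  # prod_{i=1}^{m+1}(m+p+2-i)
    return (2 * m * factorial(m) * M * K * r**(p + 2)
            + (m - 1) * P * K**2 * r**2
            - 2 * m * factorial(m + 1) * M * r**(p + 1)
            - m * (2 * m + 1) * P * K * r
            + m**2 * (m + 1) * P)

def min_positive_zero(g, hi=10.0, steps=100000):
    """Scan (0, hi] for the first sign change of g, then bisect it."""
    a, ga = 1e-12, g(1e-12)
    for k in range(1, steps + 1):
        b = hi * k / steps
        gb = g(b)
        if ga * gb <= 0:
            for _ in range(100):
                mid = (a + b) / 2
                if g(a) * g(mid) <= 0:
                    b = mid
                else:
                    a = mid
            return (a + b) / 2
        a, ga = b, gb
    raise ValueError("no sign change found")

# Data of Example 1 (m = 2, p = 1, K = M = 1): r* ~ 1.26795.
r6 = min_positive_zero(lambda r: h6(r, 2, 1, 1.0, 1.0))
```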

Different from the approach given by Ren and Argyros in [15], another treatment, based on Taylor expansion, has recently been presented by Zhou et al. [22], which can be outlined as

1. Higher-order derivative
2. Taylor expansion with integral form remainder
3. Integral inequality

Obviously, this approach is simpler than that of Ren and Argyros. Above all, Zhou et al. have shown that even under the weaker condition (the center-Hölder continuous condition (5)), better results can be obtained for the modified Newton's method (1) and Osada's method (3). So, in this work, we reconsider the local convergence of Halley's method (2) by Taylor expansion under the condition that f^{(m+1)} satisfies the center-Hölder continuous condition

|f^{(m)}(x*)^{−1}(f^{(m+1)}(x) − f^{(m+1)}(x*))| ≤ K0|x − x*|^p, ∀x ∈ D, K0 > 0.  (7)


2. Main result

First, we introduce the following Taylor expansion with integral form remainder, which is also used in [23].

Lemma 1 ([24]). Suppose that f(x) is (n+1)-times differentiable in the ball U(a, r), r > 0, and f^{(n+1)}(x) is integrable from a to any x ∈ U(a, r). Then

f(x) = f(a) + f'(a)(x−a) + (1/2!)(x−a)² f''(a) + (1/3!)(x−a)³ f'''(a) + ··· + (1/n!)(x−a)^n f^{(n)}(a) + (1/n!) ∫_a^x (f^{(n+1)}(t) − f^{(n+1)}(a))(x−t)^n dt.

Lemma 2 ([23]). Let x* be a root of function f with multiplicity m. Then we have

f(x_n) = (e_n^m/m!) f^{(m)}(x*) + (1/m!) g(x_n),  (8)

f'(x_n) = (e_n^{m−1}/(m−1)!) f^{(m)}(x*) + (1/(m−1)!) g1(x_n),  (9)

f''(x_n) = (e_n^{m−2}/(m−2)!) f^{(m)}(x*) + (1/(m−2)!) g2(x_n),  (10)

where

g(x_n) = ∫_{x*}^{x_n} [f^{(m+1)}(t) − f^{(m+1)}(x*)](x_n − t)^m dt,

g1(x_n) = ∫_{x*}^{x_n} [f^{(m+1)}(t) − f^{(m+1)}(x*)](x_n − t)^{m−1} dt,

g2(x_n) = ∫_{x*}^{x_n} [f^{(m+1)}(t) − f^{(m+1)}(x*)](x_n − t)^{m−2} dt.

Let e_n = x_n − x*. We have the following local convergence result for Halley's method (2).

Theorem 3. Let D ⊆ R be an open, convex and non-empty set, f : D → R be in C^m(D), and x* be a root of f(x) with multiplicity m (m ≥ 2). Let r* be the unique positive zero of the function

h(r) = 1 − [(4 + 2m² + 5p + p² + 2m(2+p))(m−1)! K0 / (2 ∏_{i=1}^{m+1}(m+p+2−i))] r^{p+1}
− [(2m² + (p+1)(2m+1))((m−1)!)² K0² / (2 ∏_{i=1}^{m+1}(m+p+2−i) ∏_{i=1}^{m}(m+p+1−i))] r^{2p+2}
− [(2m+p+2)(m−1)! K0 / (2 ∏_{i=1}^{m}(m+p+2−i))] r^{p+1}
− [((m−1)!)²(2m+1) K0² / (2 ∏_{i=1}^{m}(m+p+1−i) ∏_{i=1}^{m}(m+p+2−i))] r^{2p+2},  (11)

and let

R = ( ∏_{i=1}^{m+1}(m+p+2−i) / (m! K0) )^{1/(p+1)}.

Assume the center-Hölder continuous condition (7) is satisfied. Then, for any initial point x0 ∈ U(x*, r*), the sequence {x_n} (n ≥ 0) generated by Halley's method (2) is well defined and converges with order p + 2 to x*, which is the unique solution in U(x*, R), a ball bigger than U(x*, r*). Moreover, the following error bound holds for all n ≥ 0:

|e_{n+1}| ≤ |e_n|^{p+2} / r*^{p+1}.  (12)
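For concrete use of Theorem 3, r* can be computed numerically. The sketch below is our own; it hard-codes the two aggregate coefficients of (11) (the r^{p+1} and r^{2p+2} terms collected together) and exploits that h decreases from h(0) = 1, so doubling an endpoint until h ≤ 0 brackets the unique zero for bisection:

```python
from math import factorial, prod

def r_star(m, p, K0):
    """Unique positive zero of h(r) in (11), located by bisection."""
    P1 = prod(m + p + 2 - i for i in range(1, m + 2))   # prod_{i=1}^{m+1}(m+p+2-i)
    P2 = prod(m + p + 1 - i for i in range(1, m + 1))   # prod_{i=1}^{m}(m+p+1-i)
    P3 = prod(m + p + 2 - i for i in range(1, m + 1))   # prod_{i=1}^{m}(m+p+2-i)
    g = factorial(m - 1)
    # coefficients of r^(p+1) and r^(2p+2) in 1 - h(r)
    c1 = ((4 + 2*m*m + 5*p + p*p + 2*m*(2 + p)) * g * K0 / (2 * P1)
          + (2*m + p + 2) * g * K0 / (2 * P3))
    c2 = ((2*m*m + (p + 1)*(2*m + 1)) * g*g * K0*K0 / (2 * P1 * P2)
          + g*g * (2*m + 1) * K0*K0 / (2 * P2 * P3))
    h = lambda r: 1 - c1 * r**(p + 1) - c2 * r**(2*p + 2)
    lo, hi = 0.0, 1.0
    while h(hi) > 0:        # h is decreasing: double until h(hi) <= 0
        hi *= 2
    for _ in range(200):
        mid = (lo + hi) / 2
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With the data of the numerical section, r_star(2, 1, 1.0) reproduces Example 1's radius (about 0.993723) and r_star(2, 1, 12.0) reproduces Example 4's (about 0.286863).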

Proof. Since h(0) = 1 > 0 and

h(R) = −(m³ + (p+1)² + m(p+1)(2p+5) + m²(4p+5))/m² < 0,

h(r) has a positive root r* with 0 < r* < R. Noting that h(r) can be viewed as a quadratic function of r^{p+1}, r* is unique.


On the other hand, h(r*) = 0 leads to

[(2m² + (p+1)(2m+1))((m−1)!)² K0² / (2 ∏_{i=1}^{m+1}(m+p+2−i) ∏_{i=1}^{m}(m+p+1−i))] r*^{2p+2} + [(4 + 2m² + 5p + p² + 2m(2+p))(m−1)! K0 / (2 ∏_{i=1}^{m+1}(m+p+2−i))] r*^{p+1}
= 1 − [(2m+p+2)(m−1)! K0 / (2 ∏_{i=1}^{m}(m+p+2−i))] r*^{p+1} − [((m−1)!)²(2m+1) K0² / (2 ∏_{i=1}^{m}(m+p+1−i) ∏_{i=1}^{m}(m+p+2−i))] r*^{2p+2}
< 1.  (13)

Next, we shall show the main results by induction. For n = 0, iteration (2) becomes

x_1 = x_0 − f(x_0) / [ ((m+1)/(2m)) f'(x_0) − f(x_0)f''(x_0)/(2f'(x_0)) ].  (14)

From (8)–(10), we have

f(x_0) = (e_0^m/m!) f^{(m)}(x*) + (1/m!) g(x_0),  (15)

f'(x_0) = (e_0^{m−1}/(m−1)!) f^{(m)}(x*) + (1/(m−1)!) g1(x_0),  (16)

f''(x_0) = (e_0^{m−2}/(m−2)!) f^{(m)}(x*) + (1/(m−2)!) g2(x_0).  (17)

Substituting (15)–(17) into (14), and noting that e_1 = x_1 − x* and e_0 = x_0 − x*, we get

e_1 = (δ1 e_0^{1−m} f^{(m)}(x*) + δ2 e_0^{3−2m}) / ((f^{(m)}(x*))² + δ4 e_0^{1−m} f^{(m)}(x*) + δ3 e_0^{2−2m}),  (18)

where

δ1 = (1/2)(2m e_0 g1(x_0) − (m−1)e_0² g2(x_0) − (m+1)g(x_0)),
δ2 = (1/2)((m+1)g1(x_0)² − (m−1)g(x_0)g2(x_0) − 2e_0^{−1} g(x_0)g1(x_0)),
δ3 = (1/2)((m+1)g1(x_0)² − (m−1)g(x_0)g2(x_0)),
δ4 = (1/2)((m+1)g1(x_0) − (m−1)(e_0 g2(x_0) + e_0^{−1} g(x_0))).

From the definitions of g, g1, g2, and with the aid of the Mathematica software, we have

|f^{(m)}(x*)^{−1} δ4| = (1/2)|f^{(m)}(x*)^{−1}((m+1)g1(x_0) − (m−1)(e_0 g2(x_0) + e_0^{−1} g(x_0)))|
= (1/2)|f^{(m)}(x*)^{−1} ∫_{x*}^{x_0} [f^{(m+1)}(t) − f^{(m+1)}(x*)] ((m+1)(x_0−t)^{m−1} − (m−1)(e_0(x_0−t)^{m−2} + e_0^{−1}(x_0−t)^m)) dt|
≤ ((m−1)/2) K0 ∫_{x*}^{x_0} |t−x*|^p |e_0(x_0−t)^{m−2} + e_0^{−1}(x_0−t)^m − (x_0−t)^{m−1}| dt + ((m+3)/2) K0 ∫_{x*}^{x_0} |t−x*|^p (x_0−t)^{m−1} dt
= [(4 + 2m² + 5p + p² + 2m(2+p))(m−1)! / (2 ∏_{i=1}^{m+1}(m+p+2−i))] K0 |e_0|^{p+m},

and

|(f^{(m)}(x*))^{−2} δ3| = |((m+1)/2) f^{(m)}(x*)^{−1} g1(x_0) × f^{(m)}(x*)^{−1} g1(x_0) + ((1−m)/2) f^{(m)}(x*)^{−1} g(x_0) × f^{(m)}(x*)^{−1} g2(x_0)|
≤ ((m+1)/2) K0 ∫_{x*}^{x_0} |t−x*|^p (x_0−t)^{m−1} dt × K0 ∫_{x*}^{x_0} |t−x*|^p (x_0−t)^{m−1} dt + ((m−1)/2) K0 ∫_{x*}^{x_0} |t−x*|^p |(x_0−t)^m| dt × K0 ∫_{x*}^{x_0} |t−x*|^p (x_0−t)^{m−2} dt
= ((m+1)/2) [((m−1)!)² / (∏_{i=1}^{m}(m+p+1−i))²] K0² |e_0|^{2p+2m} + ((m−1)/2) [m!(m−2)! / (∏_{i=1}^{m+1}(m+p+2−i) ∏_{i=1}^{m−1}(m+p−i))] K0² |e_0|^{2p+2m}
= [(2m² + (p+1)(2m+1))((m−1)!)² K0² / (2 ∏_{i=1}^{m+1}(m+p+2−i) ∏_{i=1}^{m}(m+p+1−i))] |e_0|^{2p+2m}.

Thus we can obtain

|1 − (f^{(m)}(x*)²)^{−1}[(f^{(m)}(x*))² + δ4 e_0^{1−m} f^{(m)}(x*) + δ3 e_0^{2−2m}]|
= |f^{(m)}(x*)^{−1} δ4 e_0^{1−m} + (f^{(m)}(x*))^{−2} δ3 e_0^{2−2m}|
≤ |f^{(m)}(x*)^{−1} δ4 e_0^{1−m}| + |(f^{(m)}(x*))^{−2} δ3 e_0^{2−2m}|
= [(4+2m²+5p+p²+2m(2+p))(m−1)! K0 / (2 ∏_{i=1}^{m+1}(m+p+2−i))] |e_0|^{p+1} + [(2m²+(p+1)(2m+1))((m−1)!)² K0² / (2 ∏_{i=1}^{m+1}(m+p+2−i) ∏_{i=1}^{m}(m+p+1−i))] |e_0|^{2p+2}
≤ [(4+2m²+5p+p²+2m(2+p))(m−1)! K0 / (2 ∏_{i=1}^{m+1}(m+p+2−i))] r*^{p+1} + [(2m²+(p+1)(2m+1))((m−1)!)² K0² / (2 ∏_{i=1}^{m+1}(m+p+2−i) ∏_{i=1}^{m}(m+p+1−i))] r*^{2p+2}
< 1 (by (13)).

By the Banach lemma, we get

|(f^{(m)}(x*))² [(f^{(m)}(x*))² + δ4 e_0^{1−m} f^{(m)}(x*) + δ3 e_0^{2−2m}]^{−1}|
≤ 1 / ( 1 − [(4+2m²+5p+p²+2m(2+p))(m−1)! K0 / (2 ∏_{i=1}^{m+1}(m+p+2−i))] |e_0|^{p+1} − [(2m²+(p+1)(2m+1))((m−1)!)² K0² / (2 ∏_{i=1}^{m+1}(m+p+2−i) ∏_{i=1}^{m}(m+p+1−i))] |e_0|^{2p+2} ).  (19)

On the other hand, we have

|(f^{(m)}(x*))^{−1} δ1| = (1/2)|(f^{(m)}(x*))^{−1}(2m e_0 g1(x_0) − (m−1)e_0² g2(x_0) − (m+1)g(x_0))|
≤ (1/2) K0 ∫_{x*}^{x_0} |t−x*|^p [(m+1)|e_0(x_0−t)^{m−1} − (x_0−t)^m| + (m−1)e_0 |(x_0−t)^{m−1} − e_0(x_0−t)^{m−2}|] dt
= [(2m+p+2)(m−1)! K0 / (2 ∏_{i=1}^{m}(m+p+2−i))] |e_0|^{m+p+1},

and

|(f^{(m)}(x*)²)^{−1} δ2| = (1/2)|(f^{(m)}(x*)²)^{−1}((m+1)g1(x_0)² − (m−1)g(x_0)g2(x_0) − 2e_0^{−1} g(x_0)g1(x_0))|
≤ ((m+1)/2)|(f^{(m)}(x*))^{−1} g1(x_0)| × |(f^{(m)}(x*))^{−1}(g1(x_0) − e_0^{−1} g(x_0))| + ((m−1)/2)|(f^{(m)}(x*))^{−1} g(x_0)| × |(f^{(m)}(x*))^{−1}(e_0^{−1} g1(x_0) − g2(x_0))|
≤ ((m+1)/2) K0 ∫_{x*}^{x_0} |t−x*|^p (x_0−t)^{m−1} dt × K0 ∫_{x*}^{x_0} |t−x*|^p |(x_0−t)^{m−1} − e_0^{−1}(x_0−t)^m| dt + ((m−1)/2) K0 ∫_{x*}^{x_0} |t−x*|^p |(x_0−t)^m| dt × K0 ∫_{x*}^{x_0} |t−x*|^p |e_0^{−1}(x_0−t)^{m−1} − (x_0−t)^{m−2}| dt
= [((m−1)!)²(2m+1) K0² / (2 ∏_{i=1}^{m}(m+p+1−i) ∏_{i=1}^{m}(m+p+2−i))] |e_0|^{2m+2p}.

So we have

|(f^{(m)}(x*)²)^{−1}[δ1 e_0^{1−m} f^{(m)}(x*) + δ2 e_0^{3−2m}]|
≤ |(f^{(m)}(x*))^{−1} δ1 e_0^{1−m}| + |(f^{(m)}(x*)²)^{−1} δ2 e_0^{3−2m}|
≤ [(2m+p+2)(m−1)! K0 / (2 ∏_{i=1}^{m}(m+p+2−i))] |e_0|^{p+2} + [((m−1)!)²(2m+1) K0² / (2 ∏_{i=1}^{m}(m+p+1−i) ∏_{i=1}^{m}(m+p+2−i))] |e_0|^{3+2p}.  (20)

Together (18)–(20) with the definition of r*, we get

|e_1| ≤ ( [(2m+p+2)(m−1)! K0 / (2 ∏_{i=1}^{m}(m+p+2−i))] |e_0|^{p+2} + [((m−1)!)²(2m+1) K0² / (2 ∏_{i=1}^{m}(m+p+1−i) ∏_{i=1}^{m}(m+p+2−i))] |e_0|^{3+2p} ) / ( 1 − [(4+2m²+5p+p²+2m(2+p))(m−1)! K0 / (2 ∏_{i=1}^{m+1}(m+p+2−i))] |e_0|^{p+1} − [(2m²+(p+1)(2m+1))((m−1)!)² K0² / (2 ∏_{i=1}^{m+1}(m+p+2−i) ∏_{i=1}^{m}(m+p+1−i))] |e_0|^{2p+2} )

≤ ( [(2m+p+2)(m−1)! K0 / (2 ∏_{i=1}^{m}(m+p+2−i))] r*^{p+1} + [((m−1)!)²(2m+1) K0² / (2 ∏_{i=1}^{m}(m+p+1−i) ∏_{i=1}^{m}(m+p+2−i))] r*^{2p+2} ) / ( 1 − [(4+2m²+5p+p²+2m(2+p))(m−1)! K0 / (2 ∏_{i=1}^{m+1}(m+p+2−i))] r*^{p+1} − [(2m²+(p+1)(2m+1))((m−1)!)² K0² / (2 ∏_{i=1}^{m+1}(m+p+2−i) ∏_{i=1}^{m}(m+p+1−i))] r*^{2p+2} ) · |e_0|

= |e_0| < r*,

where the last equality follows from h(r*) = 0. So we have x_1 ∈ U(x*, r*). Furthermore, if x_0 ∈ U(x*, r*), we get

|e_1| ≤ ( [(2m+p+2)(m−1)! K0 / (2 ∏_{i=1}^{m}(m+p+2−i))] |e_0|^{p+1} + [((m−1)!)²(2m+1) K0² / (2 ∏_{i=1}^{m}(m+p+1−i) ∏_{i=1}^{m}(m+p+2−i))] |e_0|^{2p+2} ) / ( 1 − [(4+2m²+5p+p²+2m(2+p))(m−1)! K0 / (2 ∏_{i=1}^{m+1}(m+p+2−i))] r*^{p+1} − [(2m²+(p+1)(2m+1))((m−1)!)² K0² / (2 ∏_{i=1}^{m+1}(m+p+2−i) ∏_{i=1}^{m}(m+p+1−i))] r*^{2p+2} ) · |e_0|

≤ ( [(2m+p+2)(m−1)! K0 / (2 ∏_{i=1}^{m}(m+p+2−i))] r*^{p+1} + [((m−1)!)²(2m+1) K0² / (2 ∏_{i=1}^{m}(m+p+1−i) ∏_{i=1}^{m}(m+p+2−i))] r*^{2p+2} ) / ( 1 − [(4+2m²+5p+p²+2m(2+p))(m−1)! K0 / (2 ∏_{i=1}^{m+1}(m+p+2−i))] r*^{p+1} − [(2m²+(p+1)(2m+1))((m−1)!)² K0² / (2 ∏_{i=1}^{m+1}(m+p+2−i) ∏_{i=1}^{m}(m+p+1−i))] r*^{2p+2} ) · |e_0|^{p+2} / r*^{p+1}

= |e_0|^{p+2} / r*^{p+1},

again by h(r*) = 0.

Thus the error estimate (12) holds for n = 0. Now suppose the point x_k generated by Halley's method (2) is well defined and x_k ∈ U(x*, r*) holds for some fixed integer k ≥ 1. Then all statements above are satisfied if we replace x_0 by x_k, x_1 by x_{k+1}, e_0 by e_k and e_1 by e_{k+1}. Therefore, x_{k+1} is well defined, x_{k+1} ∈ U(x*, r*) and

|e_{k+1}| ≤ |e_k|^{p+2} / r*^{p+1}.

By induction, for any initial point x_0 ∈ U(x*, r*), the sequence {x_n} generated by Halley's method (2) is well defined, remains in U(x*, r*), and the error bound (12) holds. Hence, the sequence {x_n} converges to x* with order at least p + 2. For the uniqueness, the proof is the same as that in [23], so we omit it here. □

3. Numerical results

In this section, we provide some numerical tests to show the application of the convergence results obtained in Section 2. For easy comparison with the results in [21] for Halley's method, we use the examples therein.

Example 1 ([21]). Let D = (−π/2, π/2), and define the function f on D by

f(x) = cos(x) − 1.

Since f'(x) = −sin(x), f''(x) = −cos(x) and f'''(x) = sin(x), it is easy to see that f(0) = f'(0) = 0 and f''(0) = −1 ≠ 0. So x* = 0 is a root of f with multiplicity m = 2. For K0 and p, we have

|f''(x*)^{−1}(f'''(x) − f'''(x*))| = |sin(x)| ≤ |x|,

which implies K0 = 1 and p = 1. Thus from (11), the convergence radius of Halley's method is r* ≈ 0.993723, which is smaller than r* ≈ 1.26795 given by (6) (see Example 4.1 in [21]).

Example 2 ([21,23]). Let D = (1/2, 3/2), and define the function f on D by

f(x) = (x^{5/2} − 1)².

In [23], we have shown that x* = 1 is a root of f with multiplicity m = 2, p = 1/2 and K0 = 3/5 + (57/10)√2. Using (11), we have the convergence radius r* ≈ 0.183162, which is bigger than r* ≈ 0.123481 given by (6) (see Example 4.2 in [21]).

Remark 2. Examples 1 and 2 above show that (11) cannot always give a larger convergence radius than (6), and vice versa. It may therefore seem that our result is no better than that of Bi et al. However, the next examples will show the advantages of our result. As we have pointed out, the center-Hölder continuous condition is weaker than the Hölder continuous condition. The next example, a modification of Example 4.3 in [15], shows that the constant K > 0 may not exist. In this situation, the main result in [21] is invalid.

Example 3. Define the function f on D = R by

f(x) = ∫_0^x ( ∫_0^s (1 + G(t)) dt ) ds,

where

G(t) = ∫_0^t (1 + u sin(π/u)) du.

Then we get

f'(x) = ∫_0^x (1 + G(t)) dt,

and

f''(x) = 1 + G(x) = 1 + ∫_0^x (1 + t sin(π/t)) dt.

We can see that f(0) = f'(0) = 0, while f''(0) = 1 ≠ 0. Thus we can conclude that the root x* = 0 is of multiplicity m = 2. Moreover, we have

f'''(x) = 1 + x sin(π/x) for x ≠ 0, and f'''(0) = 1.

In [15], the authors have shown that K > 0 does not exist (see Eq. (4.31) therein), so for this function f, Eq. (6) cannot give any information about the convergence radius. However, we can obtain

|f''(x*)^{−1}(f'''(x) − f'''(x*))| = |x sin(π/x)| ≤ |x| = |x − x*|,

so K0 = 1 and p = 1. Thus, Eq. (11) is well defined and the same convergence radius as in Example 1 can be obtained.

The following example shows that the condition in [21] that f^{(m+1)}(x) is bounded is also sometimes too strict. In this situation, Eq. (6) cannot give any information about the convergence radius either.

Example 4 ([23]). Let D = R, and define the function f(x) on D by

f(x) = x²(x² − 1).

As shown in [23], x* = 0 is a root of f with multiplicity m = 2. For any M > 0, let |x| > M/12; then

|f^{(m)}(x*)^{−1} f^{(m+1)}(x)| = 12|x| > M,

which implies f^{(m+1)} (m = 2) is unbounded, and hence Eq. (6) is meaningless. On the other hand, (11) holds for p = 1 and K0 = 12. Thus the convergence radius is r* ≈ 0.286863.

Remark 3. These two examples show that not only the Hölder continuous condition, but also the condition that f^{(m+1)}(x) is bounded, is sometimes much too strong; hence our results have a wider range of application than those of Bi et al.
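For Example 4 the radius can even be written in closed form. By our own reduction of (11) at m = 2, p = 1, the coefficients of r^{p+1} and r^{2p+2} in 1 − h(r) collect to (11/12)K0 and (7/72)K0², so h(r) = 0 is a quadratic in u = r²:

```python
from math import sqrt

K0 = 12.0                      # from |f''(0)^{-1}(f'''(x) - f'''(0))| = 12|x|
c1 = 11.0 / 12.0 * K0          # coefficient of r^2 in 1 - h(r)  (m = 2, p = 1)
c2 = 7.0 / 72.0 * K0 ** 2      # coefficient of r^4
# Solve c2*u^2 + c1*u - 1 = 0 for u = r^2 and keep the positive root.
u = (-c1 + sqrt(c1 * c1 + 4.0 * c2)) / (2.0 * c2)
r_star = sqrt(u)               # ~ 0.286863, matching the value quoted above
```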


Acknowledgements

This work is supported by the National Natural Science Foundation of China (11271196, 11401296), the Natural Science Foundation of Jiangsu Province (BK20141008), the Natural Science Fund for universities in Jiangsu Province (14KJB110007) and the Ph.D. Research Foundation of Nantong University (14B25).

References

[1] E. Schröder, Über unendlich viele Algorithmen zur Auflösung der Gleichungen, Math. Ann. 2 (1870) 317–365.
[2] H. Kung, J. Traub, Optimal order of one-point and multipoint iteration, J. ACM 21 (1974) 643–651.
[3] H. Victory, B. Neta, A higher order method for multiple zeros of nonlinear functions, Int. J. Comput. Math. 12 (1983) 329–335.
[4] B. Neta, A. Johnson, High-order nonlinear solver for multiple roots, Comput. Math. Appl. 55 (2008) 2012–2017.
[5] C. Chun, B. Neta, A third-order modification of Newton's method for multiple roots, Appl. Math. Comput. 211 (2009) 474–479.
[6] C. Chun, H. Bae, B. Neta, New families of nonlinear third-order solvers for finding multiple roots, Comput. Math. Appl. 57 (2009) 1574–1582.
[7] B. Neta, M. Petković, Construction of optimal order nonlinear solvers using inverse interpolation, Appl. Math. Comput. 217 (2010) 2448–2455.
[8] B. Yun, New higher order methods for solving nonlinear equations with multiple roots, J. Comput. Appl. Math. 235 (2011) 1553–1555.
[9] X. Zhou, X. Chen, Y. Song, Constructing higher order methods for multiple roots of nonlinear equations, J. Comput. Appl. Math. 235 (2011) 4199–4206.
[10] X. Zhou, X. Chen, Y. Song, Families of third and fourth order methods for multiple roots of nonlinear equations, Appl. Math. Comput. 219 (2013) 6030–6038.
[11] B. Liu, X. Zhou, A new family of optimal fourth-order methods for multiple roots of nonlinear equations, Nonlinear Anal. Model. Control 18 (2013) 143–152.
[12] J. Traub, Iterative Methods for the Solution of Equations, Chelsea Publishing Company, New York, 1977.
[13] E. Hansen, M. Patrick, A family of root finding methods, Numer. Math. 27 (1977) 257–269.
[14] N. Osada, An optimal multiple root-finding method of order three, J. Comput. Appl. Math. 51 (1994) 131–133.
[15] H. Ren, I. Argyros, Convergence radius of the modified Newton method for multiple zeros under Hölder continuous derivative, Appl. Math. Comput. 217 (2010) 612–621.
[16] I.K. Argyros, Computational Theory of Iterative Methods, in: C.K. Chui, L. Wuytack (Eds.), Studies in Computational Mathematics, vol. 15, Elsevier, New York, 2007.
[17] X. Wang, The convergence ball on Newton's method (in Chinese), Chinese Science Bulletin, A Special Issue of Mathematics, Physics, Chemistry 25 (1980) 36–37.
[18] J. Traub, H. Wozniakowski, Convergence and complexity of Newton iteration for operator equation, J. ACM 26 (1979) 250–258.
[19] I. Argyros, On the convergence and application of Newton's method under weak Hölder continuity assumptions, Int. J. Comput. Math. 80 (2003) 767–780.
[20] X. Zhou, Y. Song, Convergence radius of Osada's method for multiple roots under Hölder and center-Hölder continuous conditions, in: ICNAAM 2011, AIP Conference Proceedings 1389 (2011) 1836–1839.
[21] W. Bi, H. Ren, Q. Wu, Convergence of the modified Halley's method for multiple zeros under Hölder continuous derivative, Numer. Algor. 58 (2011) 497–512.
[22] X. Zhou, X. Chen, Y. Song, On the convergence radius of the modified Newton's method for multiple roots under the center-Hölder condition, Numer. Algor. 65 (2014) 221–232.
[23] X. Zhou, Y. Song, Convergence radius of Osada's method under center-Hölder continuous condition, Appl. Math. Comput. 243 (2014) 809–816.
[24] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, New York, 1980.
Wuytack (Eds.), Series: Studies in Computational Mathematics, vol. 15, Elsevier Publ. Co., New York, U.S.A., 2007. X. Wang, The convergence ball on Newton’s method (in Chinese), Chinese Science Bulletin, A Special Issue of Mathematics, Physics, Chemistry. 25 (1980) 36–37. J. Traub, H. Wozniakowski, Convergence and complexity of Newton iteration for operator equation, J. ACM. 26 (1979) 250–258. I. Argyros, On the convergence and application of Newton’s method under weak Hölder continuity assumptions, Int. J. Comput. Math. 80 (2003) 767–780. X. Zhou, Y. Song, Convergence Radius of Osada’s Method for Multiple Roots under Hölder and Center–Hölder Continuous Conditions, ICNAAM 2011, AIP Conference Proceedings 1389 (2011) 1836–1839. W. Bi, H. Ren, Q. Wu, Convergence of the modified Halley’s method for multiple zeros under Hölder continuous derivative, Numer. Algor. 58 (2011) 497–512. X. Zhou, X. Chen, Y. Song, On the convergence radius of the modified Newton’s method for multiple roots under the Center-Hölder condition, Numer. Algor. 65 (2014) 221–232. X. Zhou, Y. Song, Convergence radius of Osada’s method under center-Hölder continuous condition, Appl. Math. Comput. 243 (2014) 809–816. J. Stoer, R. Bulirsch, Introduction to numerical analysis, Springer–Verlag, New York, 1980.