Numer Algor DOI 10.1007/s11075-009-9342-8 ORIGINAL PAPER
On new iterative method for solving systems of nonlinear equations Fadi Awawdeh
Received: 3 August 2009 / Accepted: 29 September 2009 © Springer Science + Business Media, LLC 2009
Abstract Solving systems of nonlinear equations is a relatively complicated problem for which a number of different approaches have been proposed. In this paper, we employ the homotopy analysis method (HAM) to derive a family of iterative methods for solving systems of nonlinear algebraic equations. Our approach yields second- and third-order iterative methods which are more efficient than their classical counterparts such as Newton's, Chebyshev's and Halley's methods.

Keywords Homotopy analysis method · Systems of nonlinear equations · Iterative methods

Mathematics Subject Classifications (2000) 65H20 · 65H10 · 65B99
1 Introduction

The solution of systems of algebraic equations has a well-developed mathematical and computational theory when the system is linear or consists of a single nonlinear equation. The situation is much more complicated when the equations in the system do not exhibit nice linear or polynomial properties. In this general case, both the mathematical theory and computational practice are far from a complete understanding of the solution process. Thus, we are left with numerical routines to determine the roots of the system.
F. Awawdeh (B) Hashemite University, Zarqa, Jordan
e-mail: [email protected]
The standard approach for the iterative solution of a nonlinear equation F(x) = 0 (where F : D ⊆ Rⁿ → Rⁿ is a smooth mapping, D is open and convex) can be interpreted as follows: Given an approximate root xn of F, solve the linear equation 0 = F(xn) + An(xn+1 − xn) in order to get an improved approximation xn+1 of the root. If the linear mapping An ∈ L(Rⁿ, Rⁿ) is chosen as an approximation of

∫₀¹ F′(xn + t(α − xn)) dt
(here, and in the sequel, α denotes a zero of F which is always assumed to be regular, i.e. F′(α) is continuously invertible), then the iterative application of this principle leads to Newton-like methods [5, 26]. Newton-like methods differ from each other only by the choice of An. Newton's method for systems requires a good initial guess x0 and generates a sequence

xk = xk−1 − F′(xk−1)⁻¹ F(xk−1)

that converges rapidly to a solution α if x0 is sufficiently close to α. A class of algorithms called quasi-Newton methods [6, 7] replaces the Jacobian matrix in Newton's method with an approximation that is updated at each iteration. Broyden's method is a quasi-Newton method that reduces the amount of computation at each step without significantly degrading the speed of convergence. Broyden's method also requires a good initial guess. As a way to obtain good initial guesses for Newton's and Broyden's methods, the Steepest Descent method is used. Some methods are based upon a quadratic approximation of F:

0 = F(xn) + F′(xn)(xn+1 − xn) + Bn(xn+1 − xn)²,   n = 0, 1, 2, . . . .

Here Bn is a symmetric bilinear operator which approximates

∫₀¹ (1 − t) F″(xn + t(α − xn)) dt.
These methods differ from each other by the choice of Bn as well as by the procedure chosen to approximate the solution of the quadratic equation. The well-known classical methods of Chebyshev and Halley [2, 25] both use Bn = F″(xn)/2, but they use different procedures to approximate the solution of the quadratic equation. Both methods are locally of Q-order 3 if α is a regular zero of F.
Trust-region, homotopy, and continuation methods are also used for nonlinear systems and are the subject of current research [4]. Many powerful methods have been presented [3, 5, 8–14, 20–23]. In this paper, the homotopy analysis method (HAM) [1, 16–19] is used to construct a more efficient family of iterative methods for solving systems of nonlinear algebraic equations: Given x0 ∈ D, compute yn, xn+1 from

0 = F(xn) + F′(xn)(yn − xn),
0 = F(xn) + F′(xn)(xn+1 − xn) − (h/2) F″(xn)(yn − xn)²,   n = 0, 1, 2, . . . .   (1)
Notice that for h = −1, the HAM method (1) gives rise to Chebyshev's method, and for h = 0 it gives rise to Newton's method. In actual computations the n-th step of (1) proceeds as follows:

1st Stage: Compute an LR-decomposition of F′(xn) by Gaussian elimination.
2nd Stage: Solve the linear system F′(xn)an = −F(xn).
3rd Stage: Solve the linear system F′(xn)bn = (h/2) F″(xn)an².
4th Stage: Set xn+1 = xn + an + bn.

Now, let us briefly indicate the specific advantages and disadvantages of the HAM method (1):

(a) Advantages: Only one LR-decomposition of F′(xn) is required in each step, and the method is locally of Q-order of convergence 2 or 3 for regular zeros of F.
(b) Disadvantages: The bilinear mapping (h/2)F″(xn) must be stored at each step.

Let us emphasize that iterative methods of the above type, i.e. methods which require the computation of F″, are by far not as relevant in practice as Newton-like methods. However, for special types of nonlinear systems of equations, such as those which arise from the discretization of nonlinear integral equations or nonlinear two-point boundary value problems, the methods under consideration may be advantageous. (Thus, the above-mentioned "disadvantage" is actually not as severe as it may seem at first glance, since in most cases where the application of our method is reasonable, the computation and storage of F″ are inexpensive.) The practical relevance of these methods has increased since computer-aided formula manipulation facilities became a common tool in numerical analysis. Unlike all previous techniques, the proposed method (1) provides us with a family of iterative methods in the auxiliary parameter h. As a result, the convergence region and rate of convergence depend upon the auxiliary parameter h.
Thus, both can be greatly enhanced by choosing a proper value of h. This provides a convenient way to adjust and control the convergence region and the rate of convergence of this technique. The value of h is determined by the h-curves suggested by Liao [16].
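As a concrete illustration of the four stages above, here is a minimal sketch (my own, not from the paper): the quadratic test system, the helper names, and the use of numpy.linalg.solve in place of an explicit LR-decomposition are all assumptions.

```python
import numpy as np

def ham_step(x, F, J, Fpp, h):
    """One step of method (1). A production code would compute one LR (LU)
    factorization of F'(x_n) and reuse it for both solves; np.linalg.solve
    is used here for brevity."""
    Jx = J(x)                                       # 1st stage: factor F'(x_n)
    a = np.linalg.solve(Jx, -F(x))                  # 2nd stage: F'(x_n) a_n = -F(x_n)
    b = np.linalg.solve(Jx, (h / 2.0) * Fpp(x, a))  # 3rd stage: F'(x_n) b_n = (h/2) F''(x_n) a_n^2
    return x + a + b                                # 4th stage

# hypothetical quadratic test system (not from the paper): x^2 + y^2 = 4, xy = 1
F = lambda z: np.array([z[0] ** 2 + z[1] ** 2 - 4.0, z[0] * z[1] - 1.0])
J = lambda z: np.array([[2 * z[0], 2 * z[1]], [z[1], z[0]]])
Fpp = lambda z, v: np.array([2 * (v[0] ** 2 + v[1] ** 2), 2 * v[0] * v[1]])  # F''(z)[v, v]

x0 = np.array([2.0, 0.5])
newton = x0 - np.linalg.solve(J(x0), F(x0))
assert np.allclose(ham_step(x0, F, J, Fpp, 0.0), newton)  # h = 0 recovers Newton's method
```

With h = −1 the same step is Chebyshev's method; iterating it from (2.0, 0.5) drives ‖F(x)‖ to machine-precision levels in a few steps.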
2 Numerical schemes

Consider the system of nonlinear algebraic equations

F(x) = 0,   (2)
where F(x) = (f1(x), f2(x), · · · , fn(x))ᵀ, x = (x(1), x(2), · · · , x(n))ᵀ and fi : Rⁿ → R is a nonlinear function. Assume that F has continuous second-order partial derivatives on a convex set D ⊆ Rⁿ, and that it has a locally unique root α ∈ D. Furthermore, assume that the Jacobian matrix F′(x) is invertible in a neighborhood of α and that p0 is an approximation of α. A second-order Taylor series expansion of the scalar-valued function fi(x), expanded about p0 and evaluated at x = p0 − β, is

fi(p0 − β) = fi(p0) − ∇fiᵀ(p0)β + (1/2!) βᵀ D²fi(p0) β + · · · ,

where β = (β(1), β(2), · · · , β(n)) and ∇fi(p0) is the gradient vector of fi at p0,

∇fiᵀ(p0) = ( ∂fi/∂x1, ∂fi/∂x2, · · · , ∂fi/∂xn ) |x=p0 ,
and D²fi is the Hessian matrix

D²fi = ⎡ ∂²fi/∂x1²     ∂²fi/∂x1∂x2   · · ·   ∂²fi/∂x1∂xn ⎤
       ⎢ ∂²fi/∂x2∂x1   ∂²fi/∂x2²     · · ·   ∂²fi/∂x2∂xn ⎥
       ⎢      ⋮              ⋮          ⋱          ⋮      ⎥
       ⎣ ∂²fi/∂xn∂x1   ∂²fi/∂xn∂x2   · · ·   ∂²fi/∂xn²   ⎦ .
In this line we can write

F(p0 − β) ≈ F(p0) − F′(p0)β + (1/2) D*F(p0)λ,   (3)
where F′(p0) is the Jacobian matrix of F at p0, D*F is the n × n(n+1)/2 matrix whose i-th row collects the second partial derivatives of fi in the order

( ∂²fi/∂x1², ∂²fi/∂x1∂x2, · · · , ∂²fi/∂x1∂xn, ∂²fi/∂x2², ∂²fi/∂x2∂x3, · · · , ∂²fi/∂x2∂xn, · · · , ∂²fi/∂xn² ),   i = 1, . . . , n,

and

λ = ( β(1)β(1), 2β(1)β(2), 2β(1)β(3), · · · , 2β(1)β(n), β(2)β(2), 2β(2)β(3), · · · , 2β(2)β(n), · · · , 2β(n−1)β(n), β(n)β(n) )ᵀ.
We are looking for β that satisfies F(p0 − β) = 0 or, what amounts to the same, an approximation β that satisfies

β − (1/2) F′(p0)⁻¹ D*F(p0)λ = F′(p0)⁻¹ F(p0).
We will apply the HAM to find β. Let h ≠ 0 be an auxiliary parameter and L an auxiliary linear operator with the property L[f(x)] = 0 when f(x) = 0. Then, using q ∈ [0, 1] as an embedding parameter, we construct the homotopy

(1 − q)L[w(q) − β0] − qhN[w(q)] = H[w(q); β0, h, q],   (4)

where N is the nonlinear operator associated with the above equation for β, N[w] = w − (1/2)F′(p0)⁻¹ D*F(p0)λ(w) − F′(p0)⁻¹ F(p0).
It should be emphasized that we have great freedom to choose the initial guess β0, the auxiliary linear operator L and the non-zero auxiliary parameter h. Enforcing the homotopy (4) to be zero gives H[w(q); β0, h, q] = 0. Now we have the so-called zero-order deformation equation

(1 − q)L[w(q) − β0] = qhN[w(q)].   (5)

When q = 0, the zero-order deformation equation (5) becomes

w(0) = β0,   (6)

and when q = 1, since h ≠ 0, the zero-order deformation equation (5) is equivalent to

w(1) = β.   (7)
Thus, according to (6) and (7), as the embedding parameter q increases from 0 to 1, w(q) varies continuously from the initial approximation β0 to the
exact solution β. Such a kind of continuous variation is called deformation in homotopy. By Taylor's theorem, w(q) can be expanded in a power series of q as follows:

w(q) = β0 + Σ_{m=1}^{∞} βm qᵐ,   (8)
where

βm = (1/m!) dᵐw(q)/dqᵐ |_{q=0} = ( βm(1), βm(2), · · · , βm(n) ).
If the initial guess β0, the auxiliary linear operator L and the nonzero auxiliary parameter h are properly chosen so that the power series (8) converges at q = 1, then we have the series solution

β = w(1) = β0 + Σ_{m=1}^{∞} βm.   (9)
The governing equation of βm can be derived by differentiating the zero-order deformation equation (5) m times with respect to q, then dividing by m!, and finally setting q = 0. Now we have the so-called mth-order deformation equation

L[βm − χm βm−1] = h [ βm−1 − (1/2) F′(p0)⁻¹ D*F(p0) B − (1 − χm) F′(p0)⁻¹ F(p0) ],   (10)

where

B = ( Σ_{k=0}^{m−1} βk(1)βm−1−k(1), 2Σ_{k=0}^{m−1} βk(1)βm−1−k(2), 2Σ_{k=0}^{m−1} βk(1)βm−1−k(3), · · · , 2Σ_{k=0}^{m−1} βk(1)βm−1−k(n), Σ_{k=0}^{m−1} βk(2)βm−1−k(2), 2Σ_{k=0}^{m−1} βk(2)βm−1−k(3), · · · , 2Σ_{k=0}^{m−1} βk(2)βm−1−k(n), · · · , Σ_{k=0}^{m−1} βk(n)βm−1−k(n) )ᵀ

(i.e. B is obtained from λ by replacing each product β(i)β(j) with the Cauchy product Σ_{k=0}^{m−1} βk(i)βm−1−k(j)), and

χm = 0 for m ≤ 1,   χm = 1 for m > 1.
For the details of the deduction, please refer to [16]. Choosing L[β] = β, we have for m ≥ 1

βm = (h + χm) βm−1 − (h/2) F′(p0)⁻¹ D*F(p0) B − h(1 − χm) F′(p0)⁻¹ F(p0).   (11)
Setting β0 = F′(p0)⁻¹ F(p0) and taking the first-order approximation of β in (9), we obtain

α = p0 − β ≈ p0 − β0 = p0 − F′(p0)⁻¹ F(p0).   (12)

We can write the iteration form of (12) as follows:

xn+1 = xn − F′(xn)⁻¹ F(xn),

which is the Newton–Raphson method. Using the second-order approximation of β in (9), we get

α = p0 − β ≈ p0 − (β0 + β1)
  = p0 − F′(p0)⁻¹ F(p0) + (h/2) F′(p0)⁻¹ D*F(p0) ( β0(1)β0(1), 2β0(1)β0(2), · · · , 2β0(1)β0(n), β0(2)β0(2), 2β0(2)β0(3), · · · , 2β0(2)β0(n), · · · , 2β0(n−1)β0(n), β0(n)β0(n) )ᵀ.   (13)
The iteration form of (13) can be given as follows:

xn+1 = xn − F′(xn)⁻¹ F(xn) + (h/2) F′(xn)⁻¹ D*F(xn) ( an(1)an(1), 2an(1)an(2), · · · , 2an(1)an(n), an(2)an(2), 2an(2)an(3), · · · , 2an(2)an(n), · · · , 2an(n−1)an(n), an(n)an(n) )ᵀ,   (14)

where an = (an(1), an(2), · · · , an(n))ᵀ = F′(xn)⁻¹ F(xn).
Iteration (14) can be written as follows: Given x0 ∈ D, compute yn, xn+1 from

0 = F(xn) + F′(xn)(yn − xn),
0 = F(xn) + F′(xn)(xn+1 − xn) − (h/2) F″(xn)(yn − xn)²,   n = 0, 1, 2, . . . .   (15)
Notice that for h = −1, method (15) gives rise to Chebyshev's method, a well-known method for solving nonlinear equations; see [12], for instance. So, method (15) can be seen as a Chebyshev-like method. Note that we have great freedom to choose the value of the auxiliary parameter h; it is determined by the h-curves suggested by Liao [16]. Let Rh denote the set of all values of h which ensure the convergence of (15). Represent h on the horizontal axis and the solution x of (15) on the vertical axis, and plot the curve of x versus h. Because the limit of all convergent sequences (15) is the same, there exists a horizontal line segment above the region h ∈ Rh; in other words, all values of h ∈ Rh give the same convergent solution. So, by plotting the curve of x versus h at a high enough order of approximation, one can find an approximation of the set Rh.
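The plateau described above can also be probed numerically: scan h over a grid, run (15) from the same x0 for each value, and keep the values that converge — a rough stand-in for reading Rh off a plotted h-curve. This is a sketch of my own; the quadratic test system and all names are assumptions.

```python
import numpy as np

def ham_iterate(F, J, Fpp, x0, h, iters=30, tol=1e-10):
    """Run method (15); return (final iterate, converged flag)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        try:
            Jx = J(x)
            a = np.linalg.solve(Jx, -F(x))                     # y_n - x_n
            x = x + a + np.linalg.solve(Jx, (h / 2.0) * Fpp(x, a))
        except np.linalg.LinAlgError:                          # hit a singular Jacobian
            return x, False
        if np.linalg.norm(F(x)) < tol:
            return x, True
    return x, False

# hypothetical test system (not from the paper): x^2 + y^2 = 4, xy = 1
F = lambda z: np.array([z[0] ** 2 + z[1] ** 2 - 4.0, z[0] * z[1] - 1.0])
J = lambda z: np.array([[2 * z[0], 2 * z[1]], [z[1], z[0]]])
Fpp = lambda z, v: np.array([2 * (v[0] ** 2 + v[1] ** 2), 2 * v[0] * v[1]])  # F''(z)[v, v]

# crude approximation of R_h: the grid values for which the iteration settles
Rh = [h for h in np.linspace(-2.0, 0.5, 26)
      if ham_iterate(F, J, Fpp, (2.0, 0.5), h)[1]]
```

All converged runs end at the same root, which is the flat segment of the h-curve the paper describes.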
3 Analysis of convergence

Before exploring the local convergence properties of the family (15), we state the following result on Taylor's expansion of vector functions [13].

Lemma 1 Let F : D ⊂ Rⁿ → Rⁿ be a Cᵖ function defined on D = {x : ‖x − a‖ < r}; then for any ‖v‖ ≤ r the following expression holds:

F(a + v) = F(a) + F′(a)v + (1/2) F″(a)vv + · · · + (1/p!) F⁽ᵖ⁾(a)v . . . v + Rp,

where

‖Rp‖ ≤ (‖v‖ᵖ / p!) sup_{x∈D} ‖F⁽ᵖ⁾(x) − F⁽ᵖ⁾(a)‖.
We can now state and prove the main result.

Theorem 1 Let F : D ⊂ Rⁿ → Rⁿ be a C⁴ function in an open convex set D ⊂ Rⁿ. Assume that there exists an α ∈ D such that F(α) = 0 and F′(α)⁻¹ exists. Then there exists an ε > 0 such that for any x0 ∈ U(α, ε) the sequence generated by (15) is well defined and converges to the zero α of F, and the process has order

(1) 2, if h ≠ −1;
(2) 3, if h = −1.
Proof We introduce the notations δ = xn+1 − α, ε = xn − α, H(xn) = F′(xn)⁻¹ F(xn), B(xn) = F′(xn)⁻¹ F″(xn). From (15), we obtain

δ = (ε − H(xn)) + (h/2) B(xn) H(xn) H(xn).

Using Lemma 1 and the facts H(α) = 0 and H′(α) = I, we represent H(xn) by its Taylor expansion:

H(xn) = H(α) + H′(α)ε + (1/2) H″(α)ε² + (1/6) H‴(α)ε³ + O(ε⁴)
      = ε + (1/2) H″(α)ε² + (1/6) H‴(α)ε³ + O(ε⁴).   (16)

Similarly, we express

B(xn) = B(α) + B′(α)ε + (1/2) B″(α)ε² + (1/6) B‴(α)ε³ + O(ε⁴).   (17)

Using (16) and (17), we obtain

δ = ε − [ ε + (1/2) H″(α)ε² + (1/6) H‴(α)ε³ + O(ε⁴) ]
  + (h/2) [ B(α) + B′(α)ε + (1/2) B″(α)ε² + (1/6) B‴(α)ε³ + O(ε⁴) ]
    × [ ε + (1/2) H″(α)ε² + (1/6) H‴(α)ε³ + O(ε⁴) ] [ ε + (1/2) H″(α)ε² + (1/6) H‴(α)ε³ + O(ε⁴) ].

Further, we obtain

δ = ((1 + h)/2) H″(α) ε² + [ −(1/6) H‴(α) + (h/4) B(α)H″(α) + (h/2) B′(α) ] ε³ + O(ε⁴),

where H″(α) = −F′(α)⁻¹ F″(α), H‴(α) = 2 F′(α)⁻¹ F″(α) F′(α)⁻¹ F″(α) + F′(α)⁻¹ F‴(α), B′(α) = F′(α)⁻¹ F‴(α) + F′(α)⁻¹ F″(α) F′(α)⁻¹ F″(α). For h = −1 the coefficient of ε² vanishes and the process is of order 3; for h ≠ −1 the ε² term remains and the process is of order 2. This implies the assertions in the statement of Theorem 1 and completes the proof.
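Theorem 1 can be checked numerically in the scalar case n = 1 (a sketch of my own; the choice F(x) = x² − 2 with α = √2 and the starting point are assumptions), estimating the order from the standard quotient ln(e_{k+1}/e_k) / ln(e_k/e_{k−1}):

```python
import math

def ham_scalar(x, h, steps):
    """Method (15) for n = 1 applied to F(x) = x^2 - 2 (F' = 2x, F'' = 2)."""
    xs = [x]
    for _ in range(steps):
        a = -(x * x - 2.0) / (2.0 * x)                    # Newton correction y_n - x_n
        x = x + a + (h / 2.0) * 2.0 * a * a / (2.0 * x)   # second-order correction
        xs.append(x)
    return xs

def order(xs, alpha):
    """ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}) for the last three iterates."""
    e = [abs(v - alpha) for v in xs]
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

alpha = math.sqrt(2.0)
rho_newton = order(ham_scalar(1.5, 0.0, 2), alpha)      # h = 0: estimate close to 2
rho_chebyshev = order(ham_scalar(1.5, -1.0, 2), alpha)  # h = -1: estimate close to 3
```

The two estimates land near 2 and 3 respectively, in line with the two cases of the theorem.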
4 Numerical examples

In order to demonstrate the performance of the introduced iterative method (15) as a novel solver for systems of nonlinear algebraic equations, four different problems were selected as test problems. We present the results of our comparison of method (15) (HAM) with the classical Newton's method (NM), Chebyshev's method (CM) and Halley's method (HM). The calculations were done using MATLAB. Our comparison of the methods is based upon the following criteria: number of iterations and computational order of convergence. We use the following stopping criterion for our computer programs:

‖xk+1 − xk‖ < ε,

where ε = 2.22e−16 is a MATLAB constant. The computational order of convergence ρ(i, k) for the sequences {xk(i)} is computed by [24]:

ρ(i, k) = ln( |x(i)_{k+1} − α(i)| / |x(i)_k − α(i)| ) / ln( |x(i)_k − α(i)| / |x(i)_{k−1} − α(i)| ),   i = 1, 2, . . . , n,  k ≥ 2.

We introduce the following notations: x0, an initial approximation; iter, number of iterations; ND, not defined; coc, computational order of convergence computed by ρavg(k) = average(ρ(1, k), . . . , ρ(n, k)). Throughout this section the values of h need not be optimal; they are arbitrary values from the set Rh.

Example 1 Consider the system:

0 = x³ − 3xy² − 1,
0 = 3x²y − y³ + 1.

This system has a zero α ≈ (−0.290514555, 1.084215081). The so-called h-curves of x and y for various initial approximations are shown in Fig. 1. It is clear from Fig. 1 that all values of h ∈ (−3, 1) are appropriate when x0 = (−1, 2) and the values h ∈ (−2.2, 0) are appropriate when x0 = (−3, 3). Clearly, as the initial guess gets farther from the required root, the range of appropriate values of h narrows. The value h = −2 is chosen from the h-curves, and the results are given in Table 1.

Example 2 Consider the system of three nonlinear equations in three unknowns:

0 = 3x − cos(yz) − 0.5,
0 = x² − 81(y + 0.1)² + sin z + 1.06,
0 = e^(−xy) + 20z + (10π − 3)/3.
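The computational order of convergence ρ(i, k) used above translates directly into code; this small sketch (the function name and the sample sequence are my own, not the paper's MATLAB code) computes ρ for one component:

```python
import math

def coc(xs, alpha, k):
    """Computational order of convergence rho(k) of a scalar sequence xs
    converging to alpha; uses xs[k-1], xs[k], xs[k+1]."""
    e = [abs(x - alpha) for x in xs]
    return math.log(e[k + 1] / e[k]) / math.log(e[k] / e[k - 1])

# a made-up sequence whose errors square at each step (quadratic convergence to 0)
xs = [1e-1, 1e-2, 1e-4, 1e-8]
rho = coc(xs, 0.0, 2)   # -> 2.0
```

For a system, ρavg(k) is simply the average of coc applied to each component sequence.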
Fig. 1 (a) h-curves of x (solid lines) and y (dashed lines) when x0 = (−1, 2), using five iterations. (b) h-curves of x (solid lines) and y (dashed lines) when x0 = (−3, 3), using five iterations.
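Example 1 is small enough to replicate directly. The sketch below is my own code (numpy assumed): it iterates method (15) with the paper's choice h = −2 from x0 = (−1, 2), writing out the contraction F″(x)[v, v] from the Hessians of the two component functions.

```python
import numpy as np

# Example 1: f1 = x^3 - 3xy^2 - 1, f2 = 3x^2 y - y^3 + 1
F = lambda z: np.array([z[0] ** 3 - 3 * z[0] * z[1] ** 2 - 1.0,
                        3 * z[0] ** 2 * z[1] - z[1] ** 3 + 1.0])
J = lambda z: np.array([[3 * z[0] ** 2 - 3 * z[1] ** 2, -6 * z[0] * z[1]],
                        [6 * z[0] * z[1], 3 * z[0] ** 2 - 3 * z[1] ** 2]])

def Fpp(z, v):
    """F''(z)[v, v], assembled from the Hessians of f1 and f2."""
    x, y = z
    return np.array([6 * x * v[0] ** 2 - 12 * y * v[0] * v[1] - 6 * x * v[1] ** 2,
                     6 * y * v[0] ** 2 + 12 * x * v[0] * v[1] - 6 * y * v[1] ** 2])

h = -2.0                        # value read off the h-curves in Fig. 1
x = np.array([-1.0, 2.0])
for _ in range(25):
    Jx = J(x)
    a = np.linalg.solve(Jx, -F(x))
    x = x + a + np.linalg.solve(Jx, (h / 2.0) * Fpp(x, a))
```

The iterates settle at the zero α ≈ (−0.290514555, 1.084215081) reported in the paper.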
This system has a zero α ≈ (0.49814468, −0.19960589, −0.52882597). The h-curves of z for various initial approximations are shown in Fig. 2. For x0 = (5, 5, 2) the proper values of h are h ∈ (−1, −0.55), and for x0 = (−15, −15, −15) they are h ∈ (0, 0.018). If the iterative method (15) converges when h = −1, then h = −1 is the best value of h, as shown in Theorem 1. Table 2 illustrates that if x0 is not sufficiently close to the actual root, Newton's method (NM), Chebyshev's method (CM) and Halley's method (HM) may diverge, and in this case we can still find a proper value of h that ensures the convergence of method (15).
Example 3 The third application we consider, originally described in [15], and also found in [9] and [10], concerns a model of two continuous non-adiabatic stirred tank reactors. These reactors are in a series, at steady state with a
Table 1 Numerical results of the solutions in Example 1

Method   x0 = (−1, 2): iter   ρavg(3)   x0 = (−3, 3): iter   ρavg(3)
NM       6                    1.70      9                    1.42
CM       4                    2.56      6                    1.80
HM       5                    3.13      6                    2.29
HAM      4 (h = −2)           2.24      5 (h = −2)           1.93
Fig. 2 (a) h-curve of z when x0 = (5, 5, 2), using ten iterations. (b) h-curve of z when x0 = (−15, −15, −15), using ten iterations.
recycle component, and have an exothermic first-order irreversible reaction. When certain variables are eliminated, the model results in a system of two nonlinear equations

f1 = (1 − R) [ D/(10(1 + β1)) − φ1 ] exp( 10φ1 / (1 + 10φ1/γ) ) − φ1,
f2 = φ1 − (1 + β1)φ2 + (1 − R) [ D/10 − β1φ1 − (1 + β2)φ2 ] exp( 10φ2 / (1 + 10φ2/γ) ),

in two unknowns φ1 and φ2. The unknowns represent the dimensionless temperatures in the two reactors, and both are bounded in the unit interval, i.e. φi ∈ [0, 1], i = 1, 2.
Table 2 Numerical results of the solutions in Example 2

Method   x0 = (5, 5, 2): iter   ρavg(3)   x0 = (−15, −15, −15): iter   ρavg(3)
NM       14                     1.08      Divergent                    0.22
CM       8                      2.84      Divergent                    ND
HM       8                      2.34      Divergent                    ND
HAM      6 (h = −0.6)           2.07      9 (h = 0.001)                1.89
Table 3 Numerical results of the solutions in Example 3

Method   x0 = (5, 5): iter   ρavg(3)   x0 = (−10, −5): iter   ρavg(3)
NM       52                  1.02      98                     1.94
CM       36                  1.03      Divergent              ND
HM       28                  1.04      Divergent              0.53
HAM      21 (h = −3)         1.60      13 (h = −2)            1.88
Floudas et al. [9] stated that "solving this system can be regarded as a challenging problem." The parameters γ, D, β1, β2 are set to 1000, 22, 2, and 2, respectively. With the recycle ratio parameter R = 0.94, we obtain Table 3. (The solution of this system in the unit interval φi ∈ [0, 1] is φ1 = 0.724233423 and φ2 = 0.245133062.)

Example 4 Consider the kinematic synthesis mechanism for automotive steering. This problem is originally described in [21]. The Ackerman steering mechanism is a four-bar mechanism for steering four-wheel vehicles. When a vehicle turns, the steered wheels need to be angled so that they are both at 90° with respect to a certain line; this means that the wheels will have to be at different angles with respect to the non-steering wheels. The Ackerman design arranges the wheels automatically by moving the steering pivot inward. Pramanik [21] stated that "the Ackerman design reveals progressive deviations from ideal steering with increasing ranges of motion," and instead considered a six-member mechanism. This produces the system of equations given, for i = 1, 2, 3, by

Gi(ψi, φi) = [Ei(x2 sin ψi − x3) − Fi(x2 sin φi − x3)]² + [Fi(1 + x2 cos φi) − Ei(x2 cos ψi − 1)]² − [(1 + x2 cos φi)(x2 sin ψi − x3)x1 − (x2 sin φi − x3)(x2 cos ψi − x3)x1]²,

where

Ei = x2(cos φi − cos φ0) − x2x3(sin φi − sin φ0) − (x2 sin φi − x3)x1

and

Fi = −x2 cos ψi − x2x3 sin ψi + x2 cos ψ0 + x1x3 + (x3 − x1)x2 sin ψ0.
Table 4 Angular data (in radians) for the automotive steering problem

i   ψi         φi
0   1.395417   1.746175
1   1.744482   2.036469
2   2.065623   2.239097
3   2.460067   2.460067
The unknowns are x1 (representing the normalized steering pivot rod radius), x2 (representing the normalized tire pivot rod radius), and x3 (representing the normalized length-direction distance from the steering rod pivot point to the tire pivot). The input parameters are the angles ψi and φi, for i = 1, 2, 3. When the angles ψi and φi are given as in Table 4, there are two roots for the system in the domain [0.5, 1]³. With initial guess (0.7, 0.7, 0.7), we obtain the first root α1 = (0.990385519, 0.640150295, 0.593280193) after four iterations (h = −1.5), and with initial guess (0.8, 0.5, 0.5), we obtain the second root α2 = (0.942663052, 0.579042243, 0.515760356) after three iterations (h = −1.5).

5 Conclusions

Our study presents a family of second- and third-order iterative methods for solving systems of nonlinear algebraic equations. The family depends on a real parameter h. The numerical examples show that our method is effective and efficient and provides highly accurate results in fewer iterations than some well-known methods. When the initial value x0 is not good, the HAM method requires far fewer iterations than the other methods, and even if the chosen initial guess leads to divergence of the other methods, we can still find the root efficiently (see Table 2). It is clearly illustrated that, by means of HAM, the convergence region and the rate of convergence can be enhanced and controlled through the auxiliary parameter h.
References

1. Abbasbandy, S., Tan, Y., Liao, S.J.: Newton-homotopy analysis method for nonlinear equations. Appl. Math. Comput. 188(2), 1794–1800 (2007)
2. Alefeld, G.: On the convergence of Halley's method. Am. Math. Mon. 88, 530–536 (1981)
3. Amat, S., Busquier, S., Gutiérrez, J.M.: Geometric constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 157, 197–205 (2003)
4. Allgower, E., Georg, K.: Numerical Continuation Methods: An Introduction. Springer, New York (1990)
5. Broyden, C.G.: A class of methods for solving nonlinear simultaneous equations. Math. Comput. 19, 577–593 (1965)
6. Broyden, C.G.: Quasi-Newton methods and their application to function minimization. Math. Comput. 21, 368–381 (1967)
7. Dennis, J.E., Moré, J.: Quasi-Newton methods: motivation and theory. SIAM Rev. 19, 46–84 (1977)
8. Kaya, D., El-Sayed, S.M.: Adomian's decomposition method applied to systems of nonlinear algebraic equations. Appl. Math. Comput. 154, 487–493 (2004)
9. Floudas, C.A., Pardalos, P.M., Adjiman, C., Esposito, W., Gumus, Z., Harding, S., Klepeis, J., Mayer, C., Schweiger, C.: Handbook of Test Problems in Local and Global Optimization. Kluwer Academic, Dordrecht (1999)
10. Floudas, C.A.: Recent advances in global optimization for process synthesis, design, and control: enclosure of all solutions. Comput. Chem. Eng. S, 963–973 (1999)
11. Golbabai, A., Javidi, M.: Newton-like iterative methods for solving system of non-linear equations. Appl. Math. Comput. 192, 546–551 (2007)
12. Gutiérrez, J.M., Hernández, M.A.: A family of Chebyshev–Halley type methods in Banach spaces. Bull. Aust. Math. Soc. 55, 113–130 (1997)
13. Nedzhibov, G.H.: A family of multi-point iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 222, 244–250 (2008)
14. Homeier, H.H.H.: A modified Newton method with cubic convergence: the multivariate case. J. Comput. Appl. Math. 169, 161–169 (2004)
15. Kubicek, M., Hoffman, H., Hlavacek, V., Sinkule, J.: Multiplicity and stability in a sequence of two nonadiabatic nonisothermal CSTR. Chem. Eng. Sci. 35, 987–996 (1980)
16. Liao, S.J.: Beyond Perturbation: Introduction to the Homotopy Analysis Method. Chapman & Hall/CRC, Boca Raton (2003)
17. Liao, S.J.: On the homotopy analysis method for nonlinear problems. Appl. Math. Comput. 147, 499–513 (2004)
18. Liao, S.J.: Notes on the homotopy analysis method: some definitions and theorems. Commun. Nonlinear Sci. Numer. Simul. (2008). doi:10.1016/j.cnsns.2008.04.013
19. Liao, S.J., Tan, Y.: A general approach to obtain series solutions of nonlinear differential equations. Stud. Appl. Math. 119, 297–355 (2007)
20. Burden, R.L., Faires, J.D.: Numerical Analysis, 8th edn. Thomson Brooks/Cole (2005)
21. Pramanik, S.: Kinematic synthesis of a six-member mechanism for automotive steering. ASME J. Mech. Des. 124, 642–645 (2002)
22. Stoer, J., Bulirsch, R.: Introduction to Numerical Analysis, 2nd edn. Texts in Applied Mathematics 12. Springer, New York (1992)
23. Traub, J.F.: Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs (1964)
24. Weerakoon, S., Fernando, T.G.I.: A variant of Newton's method with accelerated third-order convergence. Appl. Math. Lett. 13, 87–93 (2000)
25. Werner, W.: Iterative solution of systems of nonlinear equations based upon quadratic approximations. Comput. Math. Appl. 12A(3), 331–343 (1986)
26. Wu, X.Y.: A new continuation Newton-like method and its deformation. Appl. Math. Comput. 112(1), 75–78 (2000)