Applied Soft Computing 26 (2015) 143–148


The Bisection–Artificial Bee Colony algorithm to solve Fixed point problems

P. Mansouri a,b,∗, B. Asady a, N. Gupta b

a Department of Mathematics, Arak Branch, Islamic Azad University, Arak, Iran
b Department of Computer Science, Delhi University, Delhi, India

Article history: Received 29 September 2012; Received in revised form 18 May 2014; Accepted 3 September 2014; Available online 28 September 2014.

Keywords: Bisection method; Fixed point problems; Artificial Bee Colony algorithm.

∗ Corresponding author at: Department of Mathematics, Arak Branch, Islamic Azad University, Arak, Iran. Tel.: +98 9187590319.
http://dx.doi.org/10.1016/j.asoc.2014.09.001

Abstract: In this paper, we introduce a novel iterative method for finding the fixed point of a nonlinear function. To this end, we combine ideas proposed in the Artificial Bee Colony algorithm (Karaboga and Basturk, 2007) and the Bisection method (Burden and Douglas, 1985). The method is new and very efficient for solving a nonlinear equation. We illustrate the method on four benchmark functions and compare the results with other methods, such as the ABC, PSO, GA and Firefly algorithms. © 2014 Elsevier B.V. All rights reserved.

1. Introduction

Solving equations is one of the most important problems in engineering and science. In mathematics, the bisection method is a root-finding method that repeatedly bisects an interval and then selects the subinterval in which a root must lie for further processing, so that the range of possible solutions is halved at each iteration. This is a very simple and robust method, but it is also relatively slow. Thus, it is often used to obtain a rough approximation to a solution, which is then used as a starting point for more rapidly converging methods [1]. The bisection method is also called the binary search method because of its similarity to the binary search algorithm [1] in computer science. Bio-inspired algorithms are amongst the most powerful algorithms for optimization problems [2–7,18], especially for NP-hard problems such as the traveling salesman problem. The Particle Swarm Optimization (PSO) algorithm was developed by Kennedy and Eberhart in 1995 [8], based on swarm behavior (intelligence) such as the schooling of fish and the flocking of birds in nature. Though particle swarm optimization has many similarities with genetic algorithms, it is much simpler because it does not use mutation/crossover operators. Instead, it uses real-number randomness and global communication among the swarming


particles. In this sense, it is also easier to implement. The Firefly Algorithm (FA) was introduced by X.S. Yang in 2009 [9] for multimodal optimization applications; he compared the FA with other metaheuristic algorithms such as Particle Swarm Optimization (PSO). Motivated by the intelligent behavior of honey bees, Dervis Karaboga proposed the Artificial Bee Colony (ABC) algorithm [10,11] in 2005. It is as simple as the Particle Swarm Optimization (PSO) and Differential Evolution (DE) algorithms, the Genetic Algorithm (GA) [12] and the Biogeography-Based Optimization (BBO) algorithm, and it uses only common control parameters such as the colony size and the maximum cycle number. ABC, as an optimization tool, provides a population-based search procedure in which food positions are modified by the artificial bees over time; the bees' aim is to discover the places of food sources with a high nectar amount and, finally, the one with the highest nectar. In the ABC system, artificial bees fly around in a multidimensional search space and some of them (employed and onlooker bees) choose food sources, depending on their experience and the experience of their nest mates, and adjust their positions. The development of an ABC algorithm for solving the generalized assignment problem, which is known to be NP-hard, is presented in detail along with some comparisons in [13]. Some applications of the ABC algorithm to hard problems are presented in [14–17]. In this paper, we introduce a novel iterative method that combines the advantages of both the Bisection method and the Artificial Bee Colony algorithm to solve the hard fixed point problem.


Fig. 1. Fixed Point iteration scheme.

Fig. 2. Bisection method to compute the roots of a function.

In Section 2, the fixed point problem is defined and brief overviews of the ABC algorithm and the Bisection method are given. In Section 3, we explain and discuss our method. In Section 4, we compare the accuracy and complexity of the proposed method with other algorithms on four benchmark functions. In Section 5, we discuss the capabilities of the proposed method.

2. Preliminaries

In this section, we define the fixed point problem and briefly explain the ABC algorithm and the Bisection method.

2.1. Fixed point of a function

In mathematics, a fixed point (invariant point) of a function is a point that is mapped to itself by the function. In other words, a number c is a fixed point of a given function g if g(c) = c. A set of fixed points is sometimes called a fixed set. For example, the benchmark function g3(x) = 20 + e − 20e^(−0.2√(x^2)) − e^(cos(2x)), x ∈ [1, 21], has a fixed point (see Section 4), but not all functions have fixed points: the function g(x) = x + 1, x ∈ R, has no fixed point, since x is never equal to x + 1 for any real number. An iterative method for solving the equation g(x) = x is the recursive relation x_{i+1} = g(x_i), i = 0, 1, 2, . . ., with some initial guess x0. The algorithm stops when one of the following stopping criteria is met:

• D1: the total number of iterations reaches N, for some N fixed a priori.
• D2: |x_{i+1} − x_i| < ε for some ε fixed a priori (Fig. 1).
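To make the iteration and the stopping rules D1 and D2 concrete, a minimal sketch (not part of the original paper; function names and tolerances are illustrative only) could look as follows:

```python
import math

def fixed_point_iteration(g, x0, eps=1e-12, max_iter=1000):
    # Iterate x_{i+1} = g(x_i) until |x_{i+1} - x_i| < eps (criterion D2)
    # or until max_iter iterations have been performed (criterion D1).
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < eps:   # D2
            return x_next
        x = x_next
    return x                        # D1

# g(x) = cos(x) maps [0, 1] into itself; the iteration converges to c ~ 0.739085.
print(fixed_point_iteration(math.cos, 1.0))
```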

Fig. 3. Artificial Bee Colony algorithm.


Fig. 4. Bisection ABC algorithm.

Theorem 1. If g is continuous on [a, b] and g(x) ∈ [a, b] for all x ∈ [a, b], then g has a fixed point in [a, b].

Proof. See Chapter 2 of [1].

Theorem 2. If g(x) and its derivatives are continuous, |g′(c)| < 1 and g(c) = c, then there is an interval I = [c − δ, c + δ], δ > 0, such that the iterative scheme x_{k+1} = g(x_k) converges to c for every x0 ∈ I. Further, if g′(c) ≠ 0, then the convergence is linear. Alternatively, if g′(c) = g′′(c) = . . . = g^(p−1)(c) = 0 and g^(p)(c) ≠ 0, then the convergence is of order p.

Proof. See Theorem 2.4 of [1].

From Theorems 1 and 2, it follows that the iterative scheme is convergent if there exists a δ > 0 such that

|g′(x)| < 1,   ∀ x ∈ (c − δ, c + δ).
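As a standard illustration (not taken from the paper): for g(x) = cos(x) on [0, 1] we have g([0, 1]) = [cos 1, 1] ⊂ [0, 1] and |g′(x)| = |sin(x)| ≤ sin 1 ≈ 0.84 < 1 on the whole interval, so Theorems 1 and 2 guarantee that x_{k+1} = cos(x_k) converges to the unique fixed point c ≈ 0.7391 for every x0 ∈ [0, 1]; since g′(c) = −sin(c) ≈ −0.674 ≠ 0, the convergence is linear.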

However, the computation of δ is sometimes very difficult to perform; in other words, finding an interval I = [c − δ, c + δ] is difficult.

2.2. The Bisection method

The Bisection method is a numerical method for estimating the roots of a real-valued function. Given a continuous function f on an interval [a, b], where f(a) and f(b) have opposite signs, the problem is to find an x that satisfies f(x) = 0. Fig. 2 gives the bisection method for computing the roots of a function.

2.3. Artificial Bee Colony algorithm

The Artificial Bee Colony (ABC) algorithm is based on the intelligent foraging behavior of a honey bee swarm and was proposed by Karaboga in 2005 [11]. In the ABC model, the colony consists of three groups of bees: employed bees, onlookers and scouts. It is assumed that there is only one artificial employed bee for each food source; in other words, the number of employed bees in the colony is equal to the number of food sources around the hive. Employed bees go to their food source, come back to the hive and dance in this area. An employed bee whose food source has been abandoned becomes a scout and starts to search for a new food source. Onlookers watch the dances of the employed bees and choose food sources depending on those dances. The pseudo-code of the ABC algorithm is given in Fig. 3. In ABC, which is a population-based algorithm, the position of a food source represents a possible solution to the optimization problem and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. The number of employed bees is equal to the number of solutions in the population. In the first step, a randomly distributed initial population (food source positions) is generated. After initialization, the population is subjected to repeated cycles of the search processes of the employed, onlooker, and scout bees, respectively.

146

P. Mansouri et al. / Applied Soft Computing 26 (2015) 143–148

Fig. 5. The fixed point of functions g_i(x), i = 1, 2, 3, 4.

An employed bee produces a modification of the source position in her memory and discovers a new food source position. Provided that the nectar amount of the new one is higher than that of the previous source, the bee memorizes the new source position and forgets the old one; otherwise she keeps the position of the previous one in her memory. After all employed bees complete the search process, they share the position information of the sources with the onlookers on the dance area. Each onlooker evaluates the nectar information taken from all employed bees and then chooses a food source depending on the nectar amounts of the sources. As in the case of the employed bee, she produces a modification of the source position in her memory and checks its nectar amount; provided that its nectar is higher than that of the previous one, the bee memorizes the new position and forgets the old one. The abandoned sources are determined, and new sources are randomly produced by the artificial scouts to replace the abandoned ones.
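As a rough illustration of the employed/onlooker/scout cycle just described (a simplified one-dimensional minimizer written for this text, not the authors' implementation; names such as abc_minimize, n_food and limit are ours), one could write:

```python
import random

def abc_minimize(h, a, b, n_food=10, limit=20, max_cycles=100):
    # Minimize h on [a, b] with a simplified Artificial Bee Colony cycle.
    # A food source is a candidate x; its nectar (fitness) is 1 / (1 + h(x)), h >= 0.
    foods = [random.uniform(a, b) for _ in range(n_food)]
    trials = [0] * n_food
    best = min(foods, key=h)

    def neighbour(i):
        # v = x_i + phi * (x_i - x_k), phi in [-1, 1], k != i
        k = random.choice([j for j in range(n_food) if j != i])
        v = foods[i] + random.uniform(-1.0, 1.0) * (foods[i] - foods[k])
        return min(max(v, a), b)                 # keep the candidate inside [a, b]

    for _ in range(max_cycles):
        # Employed bee phase: greedy selection between a source and its neighbour.
        for i in range(n_food):
            v = neighbour(i)
            if h(v) < h(foods[i]):
                foods[i], trials[i] = v, 0
            else:
                trials[i] += 1
        # Onlooker bee phase: sources are chosen in proportion to their fitness.
        fits = [1.0 / (1.0 + h(x)) for x in foods]
        for _ in range(n_food):
            i = random.choices(range(n_food), weights=fits)[0]
            v = neighbour(i)
            if h(v) < h(foods[i]):
                foods[i], trials[i] = v, 0
            else:
                trials[i] += 1
        # Scout bee phase: abandon sources that have not improved for `limit` trials.
        for i in range(n_food):
            if trials[i] > limit:
                foods[i], trials[i] = random.uniform(a, b), 0
        best = min(best, min(foods, key=h), key=h)
    return best
```

Here n_food, limit and max_cycles play the roles of the colony size, the abandonment limit and the maximum cycle number mentioned above.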

Table 1
Some benchmark functions.

Function                                                   Domain
f1(x): x^2/4000 − cos(x) + 1 − x = 0                       [−20, 20]
f2(x): x^2 − 10cos(2x) + 10 − x = 0                        [−20, 1]
f3(x): 20 + e − 20e^(−0.2√(x^2)) − e^(cos(2x)) − x = 0     [1, 21]
f4(x): 418.9829 − x·sin(√x) − x = 0                        [400, 500]



Table 2
Comparative results of the performance of the GA, PSO, ABC, FA and BABC algorithms.

            g1(x)           g2(x)           g3(x)          g4(x)
GA    x*    −7.3765e−6      0               20.09812       490.0312407
      Mean  0.0021087       1.2478e−005     0.04974        0.3874
      SD    0.003990        1.639e−005      0.2264         3.1252
PSO   x*    2.41641e−009    0.00252         19.59          490.031
      Mean  2.41641e−009    −0.001260       8.153e−005     8.85668e−005
      SD    0.0             0.0             0.0            0.0
ABC   x*    6.9380e−008     8.55087e−06     20.098         490.031
      Mean  6.93801e−008    0.0             1.038e−005     8.33846e−007
      SD    0.0             0.0             0.0            0.0
FA    x*    6.32344e−006    −7.82218e−007   20.09834       490.031
      Mean  6.32342e−006    7.82339e−007    0.0013594      3.96786e−005
      SD    0.0             0.0             0.0            0.0
BABC  x*    −1.4478e−015    7.86354e−010    19.92          490.031
      Mean  1.44329e−015    7.86354e−010    2.30926e−013   5.68434e−013
      SD    0.0             0.0             0.0            0.0

Fig. 6. The result of the GA, PSO, ABC, FA algorithms and the proposed method to solve the fixed point problem of function g1.

Fig. 7. The result of the GA, PSO, ABC, FA algorithms and the proposed method to solve the fixed point problem of function g2.

3. The Bisection–Artificial Bee Colony algorithm

In this section, we introduce a novel iterative algorithm, obtained by combining the Bisection method and the ABC algorithm (BABC), to approximate the solution of a fixed point problem g(x) = x. We define a function f(x) = g(x) − x; thus the problem of finding the fixed points of g(x) is reduced to finding the roots of f(x). We further define a function h(x) = |f(x)|, so the problem of finding the roots of f(x) is in turn reduced to finding an x that minimizes h(x). The idea here is that, instead of taking the midpoint of the interval I (the interval I that includes the solution) as a candidate solution, the ABC algorithm is used to give a better approximation; i.e., given an interval I_k = [a_k, b_k], a candidate solution x_k is computed using the ABC algorithm. If f(x_k) = 0 we are done; otherwise we compute a new interval I_{k+1} ⊂ I_k depending on whether f(x_k)·f(a_k) < 0 or f(x_k)·f(b_k) < 0. The pseudo-code of the proposed method is given in Fig. 4. Clearly, in Fig. 4, at each iteration we obtain a random new approximation x_k of the solution of the equation by running the ABC algorithm on I_k, for each k ∈ N. Then I_{k+1} is either [a_k, (x_k + a_k)/2] (only b_k changes), [(a_k + x_k)/2, x_k] or [x_k, (x_k + b_k)/2] (both a_k and b_k change), or [(b_k + x_k)/2, b_k] (only a_k changes). So we get

I_{k+1} ⊆ I_k.   (1)

Thus, denoting by c the fixed point of g in [a, b],

c ∈ ∩_k I_k.   (2)

Therefore, the intersection of all the I_k's is nonempty and we have

0 ≤ |c − x_k| ≤ (b_k − a_k) ≤ l^k (b − a)   (3)

for all k and 0 < l < 1. But this means that

lim_{k→∞} (c − x_k) = 0,   (4)

that is,

lim_{k→∞} x_k = c.   (5)
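Before stating the convergence result formally, the flow above can be summarized in code. The sketch below reuses the abc_minimize routine from Section 2.3 and uses a simplified sign-based interval update (one reading of Fig. 4, not the authors' exact rule for choosing among the four sub-intervals):

```python
def babc_fixed_point(g, a, b, eps=1e-12, max_iter=100):
    # Find a fixed point of g on [a, b] via f(x) = g(x) - x and h(x) = |f(x)|.
    f = lambda x: g(x) - x
    h = lambda x: abs(f(x))
    x = (a + b) / 2.0
    for _ in range(max_iter):
        x = abc_minimize(h, a, b)   # ABC proposes a candidate x_k in I_k = [a_k, b_k]
        if h(x) < eps:              # f(x_k) ~ 0, so x_k is (numerically) a fixed point
            return x
        # Keep the part of I_k on which f still changes sign, so the interval
        # containing the fixed point shrinks from one iteration to the next.
        if f(a) * f(x) < 0:
            b = x
        else:
            a = x
    return x
```

Note that when g is continuous on [a, b] and g([a, b]) ⊆ [a, b], we have f(a) = g(a) − a ≥ 0 and f(b) = g(b) − b ≤ 0, so each updated interval still brackets a fixed point.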

From (3)–(5) we obtain the following theorem.

Theorem 3. If g is continuous on [a, b] and g(x) ∈ [a, b] for all x ∈ [a, b], then the BABC method converges to a fixed point of the function g in [a, b].

4. Numerical examples

In this section, we illustrate our algorithm on some examples and compare the results with other evolutionary optimization algorithms such as GA, PSO, FA and also ABC.


4.1. Example

Consider the following benchmark functions:

g1(x): x^2/4000 − cos(x) + 1 = x, x ∈ [−20, 20]   (6)

g2(x): 10 + x^2 − 10cos(2x) = x, x ∈ [−20, 1)   (7)

g3(x): 20 + e − 20e^(−0.2√(x^2)) − e^(cos(2x)) = x, x ∈ [1, 21]   (8)

g4(x): 418.9829 − x·sin(√x) = x, x ∈ [400, 500]   (9)

Table 1 summarizes these functions. The fixed points of the four functions g1, . . ., g4 are plotted in Fig. 5. It can be seen that the fixed points of g1 and g2 are attained at 0, g3 has a fixed point very close to 20, and g4 has a fixed point very close to 490. We show the details of solving f1(x) = 0 as follows. In the first step, a random initial value x0 ∈ I0 = [−20, 20] is generated with the ABC algorithm; then, with the algorithm proposed in Section 3, we obtain a new approximate value x1 ∈ I1 ⊆ I0 for the root, and we continue until we reach the best value of the root with arbitrary accuracy ε = 10^(−t), t ≫ 1.
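As an illustration only (assuming the abc_minimize and babc_fixed_point sketches given earlier), the four benchmark problems of Eqs. (6)–(9) can be coded directly, with cos(2x) and √(x^2) taken literally from the formulas above:

```python
import math

g1 = lambda x: x**2 / 4000 - math.cos(x) + 1
g2 = lambda x: 10 + x**2 - 10 * math.cos(2 * x)
g3 = lambda x: (20 + math.e - 20 * math.exp(-0.2 * math.sqrt(x**2))
                - math.exp(math.cos(2 * x)))
g4 = lambda x: 418.9829 - x * math.sin(math.sqrt(x))

# g1(0) = 0, so the fixed point of g1 on [-20, 20] is x = 0.
print(babc_fixed_point(g1, -20.0, 20.0))
```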

Results for the four functions are shown in Table 2 and in Figs. 6–9. The table and the figures show that BABC outperforms all the other algorithms on all the benchmark functions.

Fig. 8. The result of the GA, PSO, ABC, FA algorithms and the proposed method to solve the fixed point problem of function g3.

Fig. 9. The result of the GA, PSO, ABC, FA algorithms and the proposed method to solve the fixed point problem of function g4.

5. Conclusion

In this paper, we introduce a novel iterative method for finding a fixed point of a function g in a real interval [a, b] ⊆ R by using the Artificial Bee Colony algorithm and the Bisection method. If the function g is hard, it is sometimes difficult to determine a suitable initial value close to the location of a fixed point. The derivative method (take the derivative of g(x) − x and find its root) is also sometimes not useful, for various reasons: the derivative may not exist, it may be hard to compute, or finding the root of the derivative may itself be difficult. The ABC algorithm helps in finding a good initial value, and the proposed method does away with the need to compute the derivative. The use of the bisection step ensures that the interval including the fixed point shrinks at each iteration, and a value close to a fixed point with arbitrary accuracy ε = 10^(−t), t ≫ 1, is obtained. However, for some functions, especially those admitting derivatives, Newton's method may be a better choice. The proposed algorithm is easy to use and reliable, and the comparison with other algorithms shows that its accuracy is also good.

Acknowledgements

The authors are very grateful to the anonymous referees and editor for their comments and suggestions, which have been very helpful in improving the presentation of this paper.

References

[1] L.R. Burden, F.J. Douglas, Numerical Analysis, 3rd ed., 1985.
[2] A. Alizadegan, B. Asady, M. Ahmadpour, Two modified versions of artificial bee colony algorithm, Appl. Math. Comput. 225 (2013) 601–609.
[3] K. Deb, Optimisation for Engineering Design, Prentice-Hall, New Delhi, 1995.
[4] D.E. Goldberg, Genetic Algorithms in Search, Optimisation and Machine Learning, Addison Wesley, Reading, MA, 1989.
[5] J. Kennedy, R. Eberhart, Y. Shi, Swarm Intelligence, Academic Press, 2001.
[6] X.S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2008.
[7] X.S. Yang, Biology-derived algorithms in engineering optimization, in: Olarius, Zomaya (Eds.), Handbook of Bioinspired Algorithms and Applications, Chapman and Hall/CRC, 2005 (Chapter 32).
[8] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proc. of IEEE International Conference on Neural Networks, Piscataway, NJ, 1995, pp. 1942–1948.
[9] X.S. Yang, Firefly algorithms for multimodal optimization, in: Stochastic Algorithms: Foundations and Applications, Springer, Berlin, Heidelberg, 2009, pp. 169–178.
[10] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: Artificial Bee Colony (ABC) algorithm, J. Glob. Optimiz. 39 (2007) 459–471.
[11] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[12] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975.
[13] A. Baykasoğlu, L. Özbakır, P. Tapkan, Artificial bee colony algorithm and its application to generalized assignment problem, in: F.T.S. Chan, M.K. Tiwari (Eds.), Swarm Intelligence: Focus on Ant and Particle Swarm Optimization, 2007, ISBN 978-3-902613-09-7 (Chapter 8).
[14] P. Mansouri, B. Asady, N. Gupta, A novel iteration method for solve hard problems (nonlinear equations) with artificial bee colony algorithm, World Acad. Sci. Eng. Technol. 59 (2011) 594–596.
[15] P. Mansouri, B. Asady, N. Gupta, An approximation algorithm for fuzzy polynomial interpolation with Artificial Bee Colony algorithm, Appl. Soft Comput. 13 (2013) 1997–2002.
[16] P. Tapkan, L. Özbakır, A. Baykasoğlu, Solving fuzzy multiple objective generalized assignment problems directly via bees algorithm and fuzzy ranking, Exp. Syst. Appl. 40 (3) (2013) 892–898.
[17] B. Asady, P. Mansouri, N. Gupta, The modify version of Artificial Bee colony algorithm to solve real optimization problems, Int. J. Electr. Comput. Eng. 2 (4) (2012).
[18] A. Ochoa, L. Margain, A. Hernández, J. Ponce, A.D. Luna, A. Hernández, O. Castillo, Bat algorithm to improve a financial trust forest, in: 5th World Congress on Nature and Biologically Inspired Computing, Fargo, North Dakota, 2013.