An Improved Artificial Bee Colony Algorithm Based on Special Division and Intellective Search

He Huang*, Min Zhu** and Jin Wang***

Abstract: The artificial bee colony (ABC) algorithm is a strong global search algorithm that exhibits excellent exploration ability. The conventional ABC algorithm adopts employed bees, onlooker bees and scouts that cooperate with each other. However, its one-dimension-at-a-time search and greedy selection strategy cause slow convergence. To enhance its performance, in this paper we abandon the greedy selection method and propose an artificial bee colony algorithm with special division and intellective search (ABCIS). For higher food source search efficiency, different search strategies are adopted for different employed bees and onlooker bees. Experimental results on a series of benchmark functions demonstrate its effectiveness.

Keywords: Artificial Bee Colony, Global Search, Intellective Search, Special Division

1. Introduction

In nature, organisms accomplish complex tasks through swarm cooperation, and thanks to the interactions within the swarm, each individual's simple behavior can produce powerful collective capability. Inspired by this collective behavior, swarm intelligence, a branch of evolutionary computation, has received extensive attention in recent years. Among swarm intelligence algorithms, the artificial bee colony (ABC) algorithm has been successfully applied to many practical problems, such as industrial systems [1] and image processing [2], owing to its demonstrated competitiveness and ease of implementation. Compared with other evolutionary algorithms such as particle swarm optimization (PSO) and differential evolution (DE), the ABC algorithm shows strong global search ability but poor convergence speed [3-9]. A large number of researchers have therefore worked to enhance its performance. In particular, Kiran et al. [7] proposed an adaptive method for ABC. They pointed out that the main reason for the slow convergence of the ABC algorithm is its blind search equation, and they introduced the best position found so far by the whole population to guide each bee's search. Moreover, Li and Yang [8] took advantage of records of individuals' past successful search experience to guide future search and attain superior optimization performance.

In this paper, we first propose an intellective search strategy for both employed bees and onlooker bees. Then, to compensate for the deficiency of this strategy, a special division of labor, responsible for updating the best food source's position, is assigned to the corresponding employed bees and onlooker bees. Moreover, in the proposed ABCIS algorithm the greedy selection mechanism is abandoned and every food source's position is updated at each iteration, so the scout's role becomes unnecessary and is eliminated.

2. Artificial Bee Colony with Intellective Search and Special Division

In the conventional ABC, both employed bees and onlooker bees update their corresponding food sources by learning from a randomly selected neighbor. This process is rather blind. Moreover, the greedy selection strategy keeps making the food source positions more and more optimal, which discards much search experience. In the proposed ABCIS algorithm, food source positions are updated at every generation, guided by two elite positions with better fitness values. Mathematically, the update can be presented as:
(1)
[TeX:] $$trial_{ij}=\frac{Alpha_{j}+Beta_{j}}{2}+SR\left(\begin{array}{l}{(-1)^{n1}\left(c_{1}Alpha_{j}-rand(0,1)Food_{ij}\right)}\\ {+(-1)^{n2}\left(c_{2}Beta_{j}-rand(0,1)Food_{ij}\right)}\end{array}\right),$$

where Alpha is the food source position with the best fitness. In the proposed ABCIS algorithm, since the food sources are updated at every iteration, the best position found so far in history needs to be recorded and used to guide the bees' search. For this purpose, this paper borrows from the PSO algorithm and records each food source's personal best position as pbest. For the selection of Beta, we first randomly select a food source [TeX:] $$r(r \neq i)$$ from the colony and then compare the corresponding [TeX:] $$pbest_{r}$$ and [TeX:] $$pbest_{i}$$; the vector with the better fitness value is assigned to Beta. The search strategy in Eq. (1) is thus guided by two elite vectors, which accelerates convergence. n1 and n2 are two integers independently and randomly selected as 0 or 1. SR is the success rate. At the beginning of the search, SR is large and so is the step size, which helps the swarm perform global search and explore more extensive unknown regions. At the later stage of optimization, SR becomes small, which helps the bees fine-tune around the potential global optimum and enhances solution accuracy. The parameters [TeX:] $$c_{1}$$ and [TeX:] $$c_{2}$$ are generated independently by [TeX:] $$c_{1}=\frac{\text{rand} \cdot \text{rand}}{\text{rand}}$$ and [TeX:] $$c_{2}=\frac{\text{rand} \cdot \text{rand}}{\text{rand}}$$, where rand denotes a uniform random number generator. In addition, unlike the greedy selection mechanism of the traditional ABC algorithm, the proposed ABCIS algorithm updates the location of every food source at each iteration. After generating these two parameters, some undesired values may appear: if 10,000 numbers are generated (here we repeated the experiment 10,000 times), some values may exceed 1,000, which is not desirable in the search process. Thus this paper truncates them to the range [0,2] (when a generated value falls outside this range, it is regenerated). Fig. 1 shows the density distribution of 10,000 randomly generated values within this range.

In Eq. (1), all dimensions are updated simultaneously at each generation. By using Eq. (1) to generate new positions, employed bees and onlooker bees become more intellective. However, when the current food source is the global best position, both the Alpha and Beta vectors equal Foodi itself; Eq. (1) then cannot generate a new position, which wastes evaluations. Thus, a special division for the best food source is needed. The ABCIS algorithm uses the following equation to update the best food source's position:

(2)
[TeX:] $$trial_{ij}=Food_{mj}+(-1)^{n3}\left(c_{3}Food_{mj}-rand \cdot Food_{ij}\right),$$

where Foodm is a randomly selected food source different from the current (best) food source Foodi. Similar to n1 and n2 in Eq. (1), n3 is an integer selected from {0,1}. Similar to c1 and c2, c3 is also generated by [TeX:] $$\frac{rand \cdot rand}{rand}$$. In Eq. (2), only one randomly selected dimension is updated per iteration, as in the traditional ABC, and the best food source's position can be updated at each iteration. The new food source location is then randomly chosen between triali and Foodm. This random selection strategy improves the global search ability of the ABCIS algorithm.
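To make the two update rules concrete, the following is a minimal Python sketch of Eq. (1) and Eq. (2), not the authors' reference implementation. It assumes minimization, takes Alpha as the best personal-best position recorded so far, draws rand(0,1) independently per dimension in Eq. (1), leaves the computation of the success rate SR to the caller, and uses an even coin flip for the random choice between triali and Foodm in Eq. (2), since the exact probability is not specified above. The function names and array layout are illustrative.

```python
import numpy as np

def sample_coefficient(rng, limit=2.0):
    # c = rand * rand / rand, regenerated whenever it falls outside [0, limit]
    # (the truncation range [0, 2] described above).
    while True:
        c = rng.random() * rng.random() / max(rng.random(), 1e-12)
        if c <= limit:
            return c

def intellective_candidate(i, food, pbest, pbest_fit, success_rate, rng):
    """Eq. (1): trial for food source i, all dimensions updated at once."""
    sn, dim = pbest.shape
    alpha = pbest[np.argmin(pbest_fit)]                    # best personal-best position (assumption)
    r = rng.choice([k for k in range(sn) if k != i])       # random neighbour r != i
    beta = pbest[r] if pbest_fit[r] < pbest_fit[i] else pbest[i]
    c1, c2 = sample_coefficient(rng), sample_coefficient(rng)
    n1, n2 = rng.integers(0, 2), rng.integers(0, 2)
    return (alpha + beta) / 2.0 + success_rate * (
        (-1) ** n1 * (c1 * alpha - rng.random(dim) * food[i])
        + (-1) ** n2 * (c2 * beta - rng.random(dim) * food[i])
    )

def special_division_update(i_best, food, rng):
    """Eq. (2): single-dimension update of the best food source."""
    sn, dim = food.shape
    m = rng.choice([k for k in range(sn) if k != i_best])  # random partner m != i_best
    j = rng.integers(0, dim)                               # one randomly selected dimension
    n3 = rng.integers(0, 2)
    c3 = sample_coefficient(rng)
    trial = food[i_best].copy()
    trial[j] = food[m, j] + (-1) ** n3 * (c3 * food[m, j] - rng.random() * food[i_best, j])
    # New location chosen randomly between trial_i and Food_m; a 50/50 choice is assumed here.
    return trial if rng.random() < 0.5 else food[m].copy()
```

A caller would keep food, pbest and pbest_fit as arrays of shape (SN, D), (SN, D) and (SN,), replace every non-best food source with its Eq. (1) trial at each generation (no greedy selection), update pbest whenever a trial improves on it, and apply the Eq. (2) update to the current best food source.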
3. Experiment

To investigate the effectiveness of the proposed ABCIS algorithm, comprehensive experiments are conducted in this section.

3.1 Benchmark Functions

To verify the proposed algorithm's efficacy, this paper adopts a set of classical benchmark functions that are widely used in the literature. Their detailed information, including mathematical expression, search range, and optimal value, is provided in Table 1. Among them, the first five benchmark functions are unimodal, meaning that each has only one local minimum, which is also the global minimum. The last three functions are multimodal, with numerous local optima and global optima that are difficult to reach.

Table 1.
In this subsection, experiments are conducted to compare the competitiveness of the proposed ABCIS with other evolutionary algorithms. The conventional PSO, DE and ABC algorithms and five ABC variants, namely GABC, qABC [6], OCABC, ABCVSS [7] and ABCM [8], are adopted for comparison. The population size is set to 40 and every algorithm is repeated for 50 independent trials for fairness. Experiments are conducted on 30 dimensions, with the maximum iteration number set to 3,000. The maximum number of function evaluations is set as the product of the population size and the maximum iteration number; when it is reached, the algorithm stops. After 50 trials, the final average fitness values (the first row) and standard deviations (the second row) on these fourteen benchmarks are recorded, as presented in Table 2. In addition, the Wilcoxon rank-sum test [10] is conducted to assess statistical significance, with the significance level set to 5%. The symbol "=" means the proposed ABCIS algorithm attains results similar to those of the corresponding compared algorithm, while "+" means the compared algorithm performs better than ABCIS and "–" means it performs worse. These outcomes are also provided in Table 2. Furthermore, Fig. 2 gives the convergence curves of these nine evolutionary algorithms on some typical benchmarks.

The results indicate that the proposed ABCIS algorithm exhibits the best optimization performance on almost all of the adopted benchmarks and dimensions. For the unimodal functions, the proposed ABCIS algorithm obtains the optimal value on several benchmarks and achieves the best accuracy on the remaining problems, which indicates that guidance by the two elite vectors improves the convergence speed. This can also be verified in the convergence curves presented in Fig. 2. For the multimodal functions, the proposed ABCIS algorithm also achieves strong results, which may be attributed to the update strategy for the global best food source. Further analysis is given in the following subsection.

Table 2.
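As an aside, the "+/–/=" bookkeeping described above can be reproduced with a few lines of SciPy; the sketch below is only an assumed way such a comparison could be coded (the helper name compare_runs and the use of mean final fitness to decide the winning side are not taken from the paper).

```python
from scipy.stats import ranksums

def compare_runs(abcis_results, other_results, alpha=0.05):
    """Wilcoxon rank-sum test at the 5% level; returns '+', '-' or '='
    for the compared algorithm versus ABCIS (minimization assumed)."""
    _, p = ranksums(abcis_results, other_results)
    if p >= alpha:
        return "="                     # no statistically significant difference
    mean = lambda xs: sum(xs) / len(xs)
    return "+" if mean(other_results) < mean(abcis_results) else "-"
```

Each input would simply be the list of 50 final fitness values obtained by the corresponding algorithm.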
In this subsection, we aim to verify the effectiveness of each component of ABCIS; in particular, the total-dimension update strategy is examined. For the experimental settings, the population size and the test dimension are chosen as 40 and 10, respectively. Each experiment is repeated for 50 trials, and the average fitness values (the first row) and standard deviations (the second row) are recorded. In the proposed ABCIS algorithm, all food sources except the best one are updated on every dimension at each generation. To verify the effectiveness of this component, this subsection designs another ABC variant, named ABCIS_sd (ABCIS with single-dimension update), for comparison. The comparison results are stated in Table 3, from which it can be observed that the total-dimension update strategy achieves much better solution accuracy on the unimodal functions.

4. Conclusion

To overcome the slow convergence of the conventional artificial bee colony algorithm, this paper first proposes an intellective search strategy for the bees' food source search and then, to compensate for the drawback of this strategy, introduces a special division for the global best food source, which improves the solution accuracy. Experimental results show that the proposed ABCIS algorithm achieves substantial improvements on both unimodal and multimodal functions.

Biography

He Huang
https://orcid.org/0000-0003-3617-7062
He received the B.S. degree from Hangzhou Dianzi University, China, in 2001 and the M.S. degree from Hangzhou Dianzi University, China, in 2006. He now works at the Department of Information Technology, Wenzhou Vocational & Technical College, China, as an associate professor. His research interests mainly include optimization algorithm design and artificial intelligence.

Biography

Min Zhu
https://orcid.org/0000-0003-0601-5476
She received the B.S. degree from Nanjing University of Posts and Telecommunications, China, in 2002, the M.S. degree from Beijing University of Posts and Telecommunications, China, in 2005, and the Ph.D. degree from Nanjing University of Posts and Telecommunications, China, in 2018. She now works at the College of Information Science and Technology, Zhejiang Shuren University, as a lecturer. Her research interests mainly include routing protocols and optimization algorithm design.

Biography

Jin Wang
https://orcid.org/0000-0002-6516-6787
He received the B.S. and M.S. degrees from Nanjing University of Posts and Telecommunications, China, in 2002 and 2005, respectively, and the Ph.D. degree from Kyung Hee University, Korea, in 2010. He is now a professor in the College of Information Engineering, Yangzhou University. His research interests mainly include routing algorithm design, performance evaluation and optimization for wireless ad hoc and sensor networks. He is a member of IEEE and ACM.

References