An Improved Artificial Bee Colony Algorithm Based on Special Division and Intellective Search

He Huang* , Min Zhu** and Jin Wang***


Abstract: The artificial bee colony (ABC) algorithm is a strong global search algorithm that exhibits excellent exploration ability. The conventional ABC algorithm adopts employed bees, onlooker bees, and scouts that cooperate with each other. However, its one-dimensional, greedy search strategy causes slow convergence. To enhance its performance, this paper abandons the greedy selection method and proposes an artificial bee colony algorithm with special division and intellective search (ABCIS). To raise the efficiency of food source search, different search strategies are adopted for different employed bees and onlooker bees. Experimental results on a series of benchmark functions demonstrate its effectiveness.

Keywords: Artificial Bee Colony, Global Search, Intellective Search, Special Division

1. Introduction

In nature, organisms accomplish complex tasks through swarm cooperation; owing to the interactions within the swarm, each individual's simple behavior can collectively exhibit powerful capability.

Inspired by this group behavior, swarm intelligence, a branch of evolutionary computation, has received extensive attention in recent years. Among swarm intelligence algorithms, the artificial bee colony (ABC) algorithm has been successfully applied to many practical problems, such as industrial systems [1] and image processing [2], owing to its demonstrated competitiveness and ease of implementation. Compared with other evolutionary algorithms such as particle swarm optimization (PSO) and differential evolution (DE), the ABC algorithm shows strong global search ability but poor convergence speed [3-9]. A large number of researchers have therefore worked to enhance its performance. In particular, Kiran et al. [7] proposed an adaptive method for ABC; they pointed out that the main reason for the slow convergence of the ABC algorithm is its blind search equation, and thus introduced the best position found so far by the whole population to guide each bee's search. Moreover, Li and Yang [8] exploited records of individuals' past successful search experience to guide future search and attain superior optimization performance.

In this paper, we first propose an intellective search strategy for both employed bees and onlooker bees. Then, to compensate for the deficiency of this strategy, a new division of labor, responsible for updating the best food source's position, is assigned to the corresponding employed bees and onlooker bees. In addition, the proposed ABCIS algorithm abandons the greedy selection mechanism and updates each food source's position at every iteration, so the scout's role becomes unnecessary and is eliminated.

2. Artificial Bee Colony with Intellective Search and Special Division

In the conventional ABC, both employed bees and onlooker bees update their corresponding food sources by learning from a randomly selected neighbor. This process is somewhat blind. Moreover, the greedy selection strategy makes all food source positions increasingly optimal, which discards much search experience. In the proposed ABCIS algorithm, food source positions are updated at each generation, guided by two elite positions with better fitness values. Mathematically, the update can be expressed as:

[TeX:] $$trial_{i j}=\frac{Alpha_{j}+Beta_{j}}{2}+SR\left( \begin{array}{l}{(-1)^{n_{1}}\left(c_{1} Alpha_{j}-rand(0,1) Food_{i j}\right)} \\ {+(-1)^{n_{2}}\left(c_{2} Beta_{j}-rand(0,1) Food_{i j}\right)} \end{array} \right),$$

where Alpha is the food source position with the best fitness. Since food sources in the proposed ABCIS algorithm are updated at every iteration, the best position found so far must be recorded and used to guide the bees' search. For this purpose, this paper borrows from the PSO algorithm and records each food source's personal best position as pbest. To select Beta, we first randomly choose a food source [TeX:] $$r(r \neq i)$$ from the colony and then compare [TeX:] $$p b e s t_{r} \text { and } p b e s t_{i}$$; the vector with the better fitness value is assigned to Beta. The search strategy in Eq. (1) is thus guided by two elite vectors, which accelerates convergence. n1 and n2 are two integers independently and randomly set to 0 or 1. SR is the success rate: at the beginning of the search, SR is large and so is the step size, which helps the swarm perform global search and explore extensive unknown regions; in the later stage of optimization, SR becomes small and helps the bees fine-tune around the potential global optimum, which enhances solution accuracy. The parameters [TeX:] $$c_{1} \text { and } c_{2}$$ are generated independently by [TeX:] $$c_{1}=\frac{\text {rand} \cdot \mathrm{rand}}{\text {rand}}$$ and [TeX:] $$c_{2}=\frac{\text {rand} \cdot \mathrm{rand}}{\text {rand}}$$, where rand denotes a uniform random generator. In addition, unlike the greedy selection mechanism of the traditional ABC algorithm, the proposed ABCIS algorithm updates each food source's location at every iteration.
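The intellective search step above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function and variable names (`intellective_update`, `pbest_fit`, the minimization convention for fitness) are our assumptions, and the coefficient generator follows the truncation to [0,2] described later in this section.

```python
import random

def gen_c():
    """c = rand*rand/rand, regenerated until it lies in [0, 2] (see text)."""
    while True:
        d = random.random()
        if d > 0.0:                      # random() may return exactly 0.0
            c = random.random() * random.random() / d
            if c <= 2.0:
                return c

def intellective_update(food_i, pbest, pbest_fit, best_idx, i, SR):
    """One application of Eq. (1) to food source i (all dimensions updated).

    food_i    : current position of food source i (list of floats)
    pbest     : personal-best positions, one per food source
    pbest_fit : their fitness values (lower is better -- an assumption)
    best_idx  : index of the best food source found so far (Alpha)
    SR        : success rate, used as the adaptive step-size factor
    """
    alpha = pbest[best_idx]
    # Beta: the fitter of pbest_r (random r != i) and pbest_i
    r = random.choice([k for k in range(len(pbest)) if k != i])
    beta = pbest[r] if pbest_fit[r] < pbest_fit[i] else pbest[i]

    trial = []
    for j in range(len(food_i)):
        n1, n2 = random.randint(0, 1), random.randint(0, 1)
        step = ((-1) ** n1 * (gen_c() * alpha[j] - random.random() * food_i[j])
                + (-1) ** n2 * (gen_c() * beta[j] - random.random() * food_i[j]))
        trial.append((alpha[j] + beta[j]) / 2 + SR * step)
    return trial
```

Note that, unlike the classic ABC neighbor equation, every dimension j is perturbed in one call, and both perturbation terms are anchored on elite vectors rather than a random neighbor.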

Besides, after generating these two parameters, some undesired values may appear. That is, if 10,000 numbers are generated (here we repeated the experiment 10,000 times), some values may exceed 1,000, which is undesirable in the search process. This paper therefore truncates them to the range [0,2] (when a generated value falls outside this range, it is regenerated). Fig. 1 shows the density distribution of 10,000 randomly generated values within this range.
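The truncation described above amounts to rejection sampling, under our reading of "it will be regenerated". A small sketch, which also lets one reproduce the empirical shape of Fig. 1 (most of the mass lies near zero):

```python
import random

def gen_c():
    """Sample c = rand*rand/rand, rejecting values outside [0, 2]."""
    while True:
        denom = random.random()
        if denom == 0.0:          # guard: random() may return exactly 0.0
            continue
        c = random.random() * random.random() / denom
        if c <= 2.0:
            return c

# Draw 10,000 values, as in the paper's density experiment
samples = [gen_c() for _ in range(10_000)]
```

Analytically, the untruncated ratio falls in [0,1] with probability 3/4 and in (1,2] with probability only 1/8, which matches the heavy concentration near zero visible in Fig. 1.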

Fig. 1.
Density distribution of generator [TeX:] $$\frac{\text {rand} \cdot \mathrm{rand}}{\text {rand}}$$ within the range of [0,2].

In Eq. (1), all dimensions are updated simultaneously at each generation. By generating new positions with Eq. (1), employed bees and onlooker bees become more intellective. However, when the current food source is the global best position, both the Alpha and Beta vectors equal Foodi itself; Eq. (1) then cannot generate a new position, which wastes function evaluations. Thus, a new division of labor for the best food source is needed. The ABCIS algorithm updates the best food source's position with the following equation:

[TeX:] $$trial_{i j}=\text { Food}_{m j}+(-1)^{n_{3}}\left(c_{3} \text { Food }_{m j}-rand \cdot \text { Food }_{i j}\right),$$

where Foodm is a randomly selected food source different from Foodi. Similar to n1 and n2 in Eq. (1), n3 is an integer randomly selected from {0,1}; similar to c1 and c2, c3 is also generated by [TeX:] $$\frac{r a n d \cdot r a n d}{r a n d}$$. In Eq. (2), one randomly selected dimension is updated per iteration, as in the traditional ABC, and the best food source's position can thus be updated at each iteration. The new food source location is then randomly selected between triali and Foodm; this random selection strategy improves the global search ability of the ABCIS algorithm.
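The special division for the best food source can be sketched as below. Again this is an illustrative reading of Eq. (2), not the authors' code; the 50/50 choice between the trial vector and Foodm is our interpretation of "randomly selected between".

```python
import random

def gen_c():
    """c = rand*rand/rand, regenerated until it lies in [0, 2]."""
    while True:
        d = random.random()
        if d > 0.0:
            c = random.random() * random.random() / d
            if c <= 2.0:
                return c

def best_source_update(food, i_best):
    """One application of Eq. (2) to the best food source.

    food   : list of positions (each a list of floats); names are illustrative
    i_best : index of the current global best food source
    """
    # Food_m: a random food source different from Food_i
    m = random.choice([k for k in range(len(food)) if k != i_best])
    j = random.randrange(len(food[i_best]))      # one random dimension only
    n3 = random.randint(0, 1)

    trial = list(food[i_best])
    trial[j] = food[m][j] + (-1) ** n3 * (gen_c() * food[m][j]
                                          - random.random() * food[i_best][j])
    # New location chosen at random between trial_i and Food_m
    return trial if random.random() < 0.5 else list(food[m])
```

In contrast to Eq. (1), only one coordinate is perturbed here, which keeps the best source's exploitation local while the random selection injects diversity.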

3. Experiment

To investigate the effectiveness of the proposed ABCIS algorithm, comprehensive experiments are conducted in this section.

3.1 Benchmarks’ Illustration

To verify the proposed algorithm's efficacy, this paper adopts classical benchmark functions that are widely used in the literature. Their detailed information, including mathematical expression, search range, and optimal value, is provided in Table 1. Among them, the unimodal functions have only one local minimum point, which is also the global minimum point, while the multimodal functions have numerous local optima and a global optimum that is difficult to reach.

Table 1.
Detailed information of fourteen benchmark functions
3.2 Comparison Experiments with Other State-of-the-Art Algorithms

In this subsection, experiments are conducted to compare the proposed ABCIS algorithm's competitiveness with other evolutionary algorithms. The conventional PSO, DE, and ABC algorithms and five ABC variants, namely GABC, qABC [6], OCABC, ABCVSS [7], and ABCM [8], are adopted for comparison.

In this comparison, the population size is set to 40, and all algorithms are repeated for 50 trials for fairness. Experiments are conducted on 30 dimensions, with the maximum iteration number set to 3,000; the maximum number of function evaluations is the product of the population size and the maximum iteration number. When the maximum number of function evaluations is reached, the algorithm stops, and after 50 trials the final average fitness values (first row) and standard deviations (second row) on the benchmarks are recorded, as presented in Table 2. The Wilcoxon rank-sum test [10] is also conducted to demonstrate statistical significance, with the significance level set to 5%. The symbol "=" means the proposed ABCIS algorithm attains results similar to those of the corresponding compared algorithm, "+" means the compared algorithm performs better than ABCIS, and "–" means worse. These outcomes are also provided in Table 2. Further, Fig. 2 gives the convergence curves of the nine evolutionary algorithms on some typical benchmarks.
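For reference, the pairwise comparison above can be reproduced with a two-sided Wilcoxon rank-sum test on the 50 trial results of each algorithm pair. The sketch below uses only the standard library and the normal approximation (no tie correction); in practice one would typically call scipy.stats.ranksums instead.

```python
import math
from statistics import NormalDist

def ranksum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.

    x, y : the two samples (e.g., 50 final fitness values per algorithm).
    Ties receive midranks; the variance is not tie-corrected.
    """
    combined = sorted((v, src) for src, vals in ((0, x), (1, y)) for v in vals)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        r = (i + j + 1) / 2                 # average 1-based rank of the tie block
        for k in range(i, j):
            ranks[k] = r
        i = j
    # Rank sum of the first sample, compared to its null mean and deviation
    w = sum(ranks[k] for k, (v, src) in enumerate(combined) if src == 0)
    n, m = len(x), len(y)
    mu = n * (n + m + 1) / 2
    sigma = math.sqrt(n * m * (n + m + 1) / 12)
    z = (w - mu) / sigma
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

A p-value below 0.05 then maps to "+" or "–" depending on which algorithm's average is better, and 0.05 or above maps to "=".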

The results indicate that the proposed ABCIS algorithm exhibits the best optimization performance on almost all of the benchmarks and dimensions. For the unimodal functions, ABCIS obtains the optimal value on several benchmarks and achieves the best accuracy on the other problems, which suggests that guidance by the two elite vectors improves convergence speed; this can also be verified in the convergence curves presented in Fig. 2. For the multimodal functions, ABCIS also achieves strong results, which may be attributed to the update strategy for the global best food source. Further demonstration is discussed in the following subsection.

Table 2.
Comparison results (D=30)
Fig. 2.
Convergence curves (D=30) of nine evolutionary algorithms on some typical benchmarks.
3.3 Each Component of ABCIS’s Effectiveness

In this subsection, we aim to verify the effectiveness of each component of ABCIS; here, the total-dimension update strategy is considered. For the experimental settings, the population size and the test dimension are chosen as 40 and 10, respectively. Each trial is repeated 50 times, and the average fitness values (first row) and standard deviations (second row) are recorded.

In the proposed ABCIS algorithm, all food sources except the best one are updated over all dimensions at each generation. To verify this component's effectiveness, this subsection designs another ABC variant, named ABCIS_sd (ABCIS with single-dimension update), for comparison. The comparison results are stated in Table 3, from which it can be observed that the total-dimension update strategy achieves much better solution accuracy on the unimodal functions.
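The structural difference between the two variants compared here can be shown in a few lines. This is a schematic contrast only; `step` stands in for the Eq. (1) increment and is not part of the paper.

```python
import random

def update_all_dims(pos, step):
    """Total-dimension update (ABCIS): every coordinate moves each generation."""
    return [p + step(j) for j, p in enumerate(pos)]

def update_one_dim(pos, step):
    """Single-dimension update (ABCIS_sd, as in classic ABC): one random
    coordinate moves per generation."""
    new = list(pos)
    j = random.randrange(len(pos))
    new[j] = pos[j] + step(j)
    return new
```

On a D-dimensional problem, the single-dimension variant needs on the order of D times as many generations to move every coordinate, which is consistent with the accuracy gap reported in Table 3.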

Table 3.
Effectiveness demonstration – total dimension update strategy

4. Conclusion

To overcome the slow convergence of the conventional artificial bee colony algorithm, this paper first proposes an intellective search strategy for bees searching for food sources, and then, to compensate for that strategy's drawback, introduces a special division of labor for the global best food source, which improves solution accuracy. Experimental results show that the proposed ABCIS algorithm achieves great improvements on both unimodal and multimodal functions.


This paper is supported by the Wenzhou Science and Technology Bureau Program (No. G20140010) and the State Administration of Work Safety’s Key technology projects (No. zhejiang-0005-2015AQ).


He Huang

He received B.S. degree from Hangzhou Dianzi University, China, in 2001 and M.S. degree from Hangzhou Dianzi University, China, in 2006. Now, he works at Department of Information Technology, Wenzhou Vocational & Technical College, China as associate professor. His research interests mainly include optimization algorithm design and artificial intelligence.


Min Zhu

She received B.S. degree from Nanjing University of Posts and Telecommunications, China in 2002, M.S. degree from Beijing University of Posts and Telecommunications, China in 2005, and received Ph.D. degree from Nanjing University of Posts and Telecommunications, China in 2018. Now, she works at College of Information Science and Technology, Zhejiang Shuren University as lecturer. Her research interests mainly include routing protocol and optimization algorithm design.


Jin Wang

He received the B.S. and M.S. degrees from Nanjing University of Posts and Telecommunications, China in 2002 and 2005, respectively. He received Ph.D. degree from Kyung Hee University Korea in 2010. Now, he is a professor in the College of Information Engineering, Yangzhou University. His research interests mainly include routing algorithm design, performance evaluation and optimization for wireless ad hoc and sensor networks. He is a Member of IEEE and ACM.


  • 1 H. Chen, L. Ma, M. He, X. Wang, X. Liang, L. Sun, M. Huang, "Artificial bee colony optimizer based on bee life-cycle for stationary and dynamic optimization," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 2, pp. 327-346, 2017. doi:[[[10.1109/TSMC.2016.2558045]]]
  • 2 A. Bose, K. Mali, "Fuzzy-based artificial bee colony optimization for gray image segmentation," Signal, Image and Video Processing, vol. 10, no. 6, pp. 1089-1096, 2016. doi:[[[10.1007/s11760-016-0863-z]]]
  • 3 J. Vanus, J. Machac, R. Martinek, P. Bilik, J. Zidek, J. Nedoma, M. Fajkus, "The design of an indirect method for the human presence monitoring in the intelligent building," Human-centric Computing and Information Sciences, vol. 8, no. 28, 2018.doi:[[[10.1186/s13673-018-0151-8]]]
  • 4 X. Zhang, X. Zhang, L. Wang, "Antenna design by an adaptive variable differential artificial bee colony algorithm," IEEE Transactions on Magnetics, vol. 54, no. 3, pp. 1-4, 2018.doi:[[[10.1109/tmag.2017.2747095]]]
  • 5 Z. Ye, M. Zhu, J. Wang, "On modification and application of the artificial bee colony algorithm," Journal of Information Processing Systems, vol. 14, no. 2, pp. 448-454, 2018.doi:[[[10.3745/JIPS.01.0025]]]
  • 6 D. Karaboga, B. Gorkemli, "A quick artificial bee colony (qABC) algorithm and its performance on optimization problems," Applied Soft Computing, vol. 23, pp. 227-238, 2014.doi:[[[10.1016/j.asoc.2014.06.035]]]
  • 7 M. S. Kiran, H. Hakli, M. Gunduz, H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140-157, 2015.doi:[[[10.1016/j.ins.2014.12.043]]]
  • 8 X. Li, G. Yang, "Artificial bee colony algorithm with memory," Applied Soft Computing, vol. 41, pp. 362-372, 2016.doi:[[[10.1016/j.asoc.2015.12.046]]]
  • 9 W. Wu, B. Wang, Z. Deng, H. Zhang, "Secure beamforming for full-duplex wireless powered communication systems with self-energy recycling," IEEE Wireless Communications Letters, vol. 6, no. 2, pp. 146-149, 2017.doi:[[[10.1109/LWC.2016.2642954]]]
  • 10 S. Wang, J. Xie, Y. Zheng, J. Wang, T. Jiang, "A method of coupling expected patch log likelihood and guided filtering for image de-noising," Journal of Information Processing Systems, vol. 14, no. 2, pp. 552-562, 2018.doi:[[[10.3745/JIPS.02.0085]]]

Table 1.

Detailed information of fourteen benchmark functions
F Mathematical formula Range Optimal value
f1 [TeX:] $$\sum_{i=1}^{n} x_{i}^{2}$$ [-100,100] 0
f2 [TeX:] $$\sum_{i=1}^{n} i^{*} x_{i}^{2}$$ [-100,100] 0
f3 [TeX:] $$\sum_{i=1}^{n} i x_{i}^{4}+r a n d o m[0,1)$$ [-1.28,1.28] 0
f4 [TeX:] $$\sum_{i=1}^{n}\left|x_{i}\right|+\prod_{i=1}^{n}\left|x_{i}\right|$$ [-10,10] 0
f5 [TeX:] $$\max \left(\left|x_{1}\right|,\left|x_{2}\right|, \ldots,\left|x_{n}\right|\right)$$ [-100,100] 0
f6 [TeX:] $$\sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_{j}\right)^{2}$$ [-100,100] 0
f7 [TeX:] $$10^{6} x_{1}^{2}+\sum_{i=2}^{n} x_{i}^{2}$$ [-100,100] 0

Table 2.

Comparison results (D=30)
f1 Avg. fitness value 2.05E-17 1.88E-47 5.29E-16 3.83E-16 0
Standard deviation 5.06E-17 4.89E-47 7.13E-17 9.45E-17 0
f2 Avg. fitness value 1.68E-16 1.28E-46 5.22E-16 3.98E-16 0
Standard deviation 2.08E-16 1.67E-46 9.79E-17 8.55E-17 0
f3 Avg. fitness value 2.20E-02 5.55E-03 3.97E-02 1.99E-02 2.76E-04
Standard deviation 8.44E-03 1.68E-03 1.08E-02 5.98E-03 1.20E-04
f4 Avg. fitness value 6.65E-13 1.96E-25 1.29E-15 1.21E-15 0
Standard deviation 5.57E-13 1.93E-25 1.28E-16 1.47E-16 0
f5 Avg. fitness value 2.20E-02 5.55E-03 3.97E-02 1.99E-02 2.76E-04
Standard deviation 8.44E-03 1.68E-03 1.08E-02 5.98E-03 1.20E-04
f6 Avg. fitness value 6.76E-17 1.02E-47 5.00E-16 3.69E-16 0
Standard deviation 1.67E-16 1.24E-47 8.35E-17 7.14E-17 0
f7 Avg. fitness value 3.63E+00 9.28E+00 2.19E+00 2.45E-01 5.79E-02
Standard deviation 1.07E+00 6.66E+00 6.04E-01 5.08E-02 7.29E-02

Table 3.

Effectiveness demonstration – total dimension update strategy
f1 f2 f3 f4 f5
ABCIS Avg. fitness value 0 0 2.27E-04 2.09E-243 6.45E-129
Standard deviation 0 0 1.62E-04 0 2.31E-128
ABCIS_sd Avg. fitness value 4.68E-60 2.07E-59 2.78E-04 2.49E-33 1.58E-02
Standard deviation 1.64E-59 6.32E-59 1.43E-04 3.35E-33 1.02E-02