1. Introduction
Optimization theory has made great progress in recent years [1], and swarm intelligence algorithms have attracted extensive attention. Their common goal is to seek the optimal solution to a given problem [2]. In 2009, the gravitational search algorithm (GSA) was proposed by Rashedi et al. [3], inspired by Newtonian gravity. Through the gravitational interaction between agents in the population, the agents attract one another to generate swarm intelligence and complete the optimization search. The algorithm has a strong exploitation ability, and its convergence accuracy and convergence rate are significantly superior to those of several other algorithms [4-6]. Because of its simple concept, few parameters, and easy implementation, it has attracted growing attention from scholars and is widely used in many engineering fields, such as production scheduling [7].
Many improvements have been proposed to further increase the efficiency of GSA. Rashedi et al. [8] combined binary coding with the gravitational search algorithm and proposed a binary gravitational search algorithm, in which the particle velocity determines the probability of a change in particle position; this expanded the application scope of the gravitational search algorithm. The authors of [9] and [10] combined the particle swarm optimization (PSO) algorithm with the gravitational algorithm, improving the performance of GSA. Yang et al. [11] proposed an immune GSA based on the basic framework of GSA, combined with the information-processing mechanism of the immune system. To increase population diversity and avoid premature convergence, several scholars introduced the idea of chaos into the GSA: cat chaotic mapping was introduced into GSA in [12], which replaced the random initialization of the original GSA population with cat chaotic initialization and applied small chaotic perturbations to jump out of local optima. Gao et al. [13] replaced the original random sequence with a chaotic sequence generated by logistic mapping and used chaos as the population's local search method. A universal GSA based on adaptive chaotic mutation was proposed in [14], which introduced the concepts of average particle distance and chaotic search mutation into the algorithm; boundary mutation constraint handling was adopted, and the local exploration ability of the algorithm was enhanced. Xu and Wang [15] proposed a weight-based gravitational search algorithm: during the iterative process, a weight related to the mass of the current particles is added to the inertial mass of each particle, which effectively improves the accuracy of the algorithm. Zhang and Gong [16] and Li et al. [17] introduced a differential mutation strategy when updating individual particle positions, and both showed that the optimization performance of the algorithm is improved by the differential evolution strategy.
Although GSA performs well compared with some traditional methods, it still faces problems when solving single-objective optimization problems. In this paper, a sine map jumping gravitational search algorithm based on asynchronous learning (SIN-GSA) is proposed. The main contributions of this paper are as follows:
(1) By introducing learning factors, the diversity of the population is improved and premature convergence of the algorithm is avoided.
(2) An improved mapping method based on the sine function is proposed. The sine value of the particle velocity is mapped to the probability of a change in particle position, which enhances the convergence accuracy of the algorithm.
(3) A particle jumping mechanism is adopted. This jumping strategy prevents particles from falling into local optima.
The remainder of this article is organized as follows: the GSA is described in Section 2. The improved SIN-GSA is presented in Section 3. Simulation experiments and results analysis are presented in Section 4. Finally, conclusions and future research directions are given in Section 5.
2. Gravitational Search Algorithm
The GSA has four elements: agent position, active gravitational mass, passive gravitational mass, and inertial mass. Consider a system with N agents (masses). The position of the [TeX:] $$i^{t h}$$ agent is defined as:
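In the standard formulation of Rashedi et al. [3], this position vector takes the form:

[TeX:] $$X_{i}=\left(x_{i}^{1}, \ldots, x_{i}^{d}, \ldots, x_{i}^{D}\right), \quad i=1,2, \ldots, N$$ (1)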
where [TeX:] $$i=1,2, \ldots, N$$, and [TeX:] $$x_{i}^{d}$$ is the position of the [TeX:] $$i^{t h}$$ agent in the [TeX:] $$d^{t h}$$ dimension. At the [TeX:] $$t^{t h}$$ iteration, the force acting on agent "i" from agent "j" is as follows:
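In the standard GSA [3], with [TeX:] $$R_{i j}(t)$$ denoting the Euclidean distance between agents i and j, and [TeX:] $$M_{a j}$$ and [TeX:] $$M_{p i}$$ the active and passive gravitational masses, this force reads:

[TeX:] $$F_{i j}^{d}(t)=G(t) \frac{M_{p i}(t) \times M_{a j}(t)}{R_{i j}(t)+\varepsilon}\left(x_{j}^{d}(t)-x_{i}^{d}(t)\right)$$ (2)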
where [TeX:] $$\varepsilon$$ is a small constant, and G(t) is the gravitational constant at time t, which is related to the age of the universe (i.e., the iteration count) as follows:
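In the standard GSA [3], the gravitational constant decays exponentially over the iterations:

[TeX:] $$G(t)=G_{0} e^{-\alpha t / T}$$ (3)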
where [TeX:] $$G_{0}$$ is the initial gravitational constant, [TeX:] $$\alpha$$ is a given constant, [TeX:] $$t$$ is the current iteration number, and [TeX:] $$T$$ is the maximum iteration number.
Assuming that the gravitational mass and the inertial mass are equal, the mass of each agent can be updated according to appropriate rules. The gravitational mass and inertial mass of each individual are calculated as follows:
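In the standard GSA [3], these masses are computed from the fitness values as:

[TeX:] $$m_{i}(t)=\frac{f i t_{i}(t)-\operatorname{worst}(t)}{\operatorname{best}(t)-\operatorname{worst}(t)}, \quad M_{i}(t)=\frac{m_{i}(t)}{\sum_{j=1}^{N} m_{j}(t)}$$ (4)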
where [TeX:] $$M_{i i}$$ is the inertial mass of the [TeX:] $$i^{t h}$$ agent, [TeX:] $$f i t_{i}(t)$$ represents the fitness value of the agent at time t, worst(t) is the fitness value of the agent with the smallest mass, and best(t) is the fitness value of the agent with the largest mass. Taking the global minimization problem as an example, best(t) and worst(t) can be defined as follows:
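For a global minimization problem, the standard definitions [3] are:

[TeX:] $$\operatorname{best}(t)=\min _{j \in\{1, \ldots, N\}} f i t_{j}(t), \quad \operatorname{worst}(t)=\max _{j \in\{1, \ldots, N\}} f i t_{j}(t)$$ (5)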
On the basis of Newton’s second law, the acceleration of agent i in [TeX:] $$d^{t h}$$ dimension at time [TeX:] $$t$$ is as follows:
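In the standard GSA [3], [TeX:] $$F_{i}^{d}(t)$$ is the randomly weighted sum of the pairwise forces acting on agent i, and the acceleration is:

[TeX:] $$a_{i}^{d}(t)=\frac{F_{i}^{d}(t)}{M_{i i}(t)}, \quad F_{i}^{d}(t)=\sum_{j=1, j \neq i}^{N} \operatorname{rand}_{j} F_{i j}^{d}(t)$$ (6)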
During the iteration of agents, the speed and position update method of agent [TeX:] $$i$$ can be defined as:
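In the standard GSA [3], the updates referred to in Section 3.1 as formulas (7) and (8) read:

[TeX:] $$v_{i}^{d}(t+1)=\operatorname{rand}_{i} \times v_{i}^{d}(t)+a_{i}^{d}(t)$$ (7)

[TeX:] $$x_{i}^{d}(t+1)=x_{i}^{d}(t)+v_{i}^{d}(t+1)$$ (8)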
where [TeX:] $$\operatorname{rand}_{i}$$ takes a value in the interval [0,1] and is used in the speed update formula to increase the randomness of the agent search.
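The mass, force, velocity, and position computations of this section can be sketched in Python. This is a minimal illustration of the basic GSA step, not the authors' code; the function name and the defaults G0 = 100 and alpha = 20 are illustrative assumptions:

```python
import numpy as np

def gsa_step(X, fitness, t, T, V=None, G0=100.0, alpha=20.0, eps=1e-12, rng=None):
    """One iteration of the basic GSA update (a sketch following Rashedi et al. [3]).
    X: (N, D) agent positions; V: (N, D) velocities; t: current iteration."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    V = np.zeros((N, D)) if V is None else V
    fit = np.array([fitness(x) for x in X])
    best, worst = fit.min(), fit.max()            # minimization problem
    m = (fit - worst) / (best - worst + eps)      # raw masses in [0, 1]
    M = m / (m.sum() + eps)                       # normalized masses
    G = G0 * np.exp(-alpha * t / T)               # decaying gravitational constant
    A = np.zeros((N, D))
    for i in range(N):
        for j in range(N):
            if i != j:
                R = np.linalg.norm(X[i] - X[j])   # Euclidean distance R_ij
                # randomly weighted pairwise attraction; M_pi cancels in a = F / M
                A[i] += rng.random() * G * M[j] * (X[j] - X[i]) / (R + eps)
    V = rng.random((N, D)) * V + A                # velocity update
    return X + V, V                               # position update
```

Starting from a random population and calling `gsa_step` once per iteration reproduces the attraction-driven search described above.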
3. Sinusoidal Map Jumping Gravity Search Algorithm based on Asynchronous Learning
3.1 Asynchronous Learning Factors
The update formulas for agent velocity and displacement in the iterative process of GSA are shown as formulas (7) and (8). Each agent depends only on the gravitational forces between agents for optimization and is affected only by current position information, which indicates that the algorithm lacks memory. At the beginning of the iteration, the agents are evenly distributed in the search space. As the iteration progresses, once a better solution is found, the surrounding agents gather toward it. As the agents continue to gather, in the later stages of the iteration the agents gathered around a local optimal solution have almost the same inertial mass, the attractive forces among them are almost equal, the population diversity disappears, and the algorithm stagnates.
To alleviate this deficiency of the GSA, in which population diversity is reduced in the late stage of iterations, the concept of a learning factor is introduced into the optimization process of the GSA. By adjusting the learning factors, the memory and information-sharing capabilities of the population's agents during evolution are tuned. Through each individual's retention of its own position information and the exchange of elite-individual information during the population iteration process, population diversity is improved and premature convergence is avoided. Learning factor [TeX:] $$c_{1}$$ represents the agent's learning from its own evolutionary history, which is called memory; it preserves the agent's own information as much as possible, so that population diversity is maintained and the overall exploitation capability is enhanced. Individuals should also enhance their ability to learn from and communicate with the best individuals, that is, to share information and enhance local exploration capability. Therefore, learning factor [TeX:] $$c_{2}$$ represents the agent's learning from the population's evolutionary mechanism, and it can effectively alleviate the stagnation of the GSA. The agents obtain the optimal solution through memory and information sharing. SIN-GSA uses the currently obtained optimal solution to guide agents with large inertial mass toward the global optimal direction, thus preventing all agents from prematurely converging on a single solution.
The two learning factors change differently with time during the optimization process, so they are called asynchronously changing learning factors. In the early stage of evolution, the self-learning ability should be stronger to avoid the loss of promising solutions; in the later stage, the population-learning ability should be stronger to avoid local optima. The formulas for the learning factors are as follows:
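A plausible asynchronous time-varying form, with [TeX:] $$c_{1}$$ decreasing from [TeX:] $$c_{i n i}$$ to [TeX:] $$c_{f i n}$$ while [TeX:] $$c_{2}$$ increases over the same range (one of several variants used in the literature), is:

[TeX:] $$c_{1}(t)=c_{i n i}-\left(c_{i n i}-c_{f i n}\right) \frac{t}{T}$$ (9)

[TeX:] $$c_{2}(t)=c_{f i n}+\left(c_{i n i}-c_{f i n}\right) \frac{t}{T}$$ (10)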
where, [TeX:] $$c_{i n i}$$ is the initial learning ability, [TeX:] $$c_{f i n}$$ is the learning ability at the end of the iteration, [TeX:] $$t$$ is the current iteration number, and [TeX:] $$T$$ is the maximum iteration number.
3.2 Sine Function Mapping
To further improve the convergence performance of the GSA, a sine function mapping strategy is proposed. With the use of the sine function, the sine value of the agent velocity is mapped to the probability that the agent position will change, and the performance of the algorithm is improved.
The search speed of the agents changes from fast to slow during the optimization process. When an agent's speed is fast, the current position of the agent has not reached the optimal position, so the optimal value needs to be approached as quickly as possible; when an agent's speed is slow, the position of the agent is close to the optimal position. When the optimal position is reached, the speed of the agent becomes zero. On the basis of these observations, a sine function mapping strategy is proposed to improve the convergence performance of the GSA. The sine mapping function is shown in formula (11):
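A piecewise form consistent with the behavior described below ([TeX:] $$|\sin v|$$ inside [TeX:] $$\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$$ and 1 outside) is:

[TeX:] $$f(v)=\left\{\begin{array}{ll} |\sin (v)|, & |v| \leq \frac{\pi}{2} \\ 1, & \text { otherwise } \end{array}\right.$$ (11)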
where [TeX:] $$v$$ is the velocity value of the agent, and [TeX:] $$f(v)$$ is the sine value of the velocity mapped to the probability that the agent's position vector will change. When the velocity value is within the interval [TeX:] $$\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$$, the sine value of the velocity is mapped to the probability of a position change. In this algorithm, a mandatory position update strategy is adopted. When the absolute value of the speed is large, a larger probability value is given to the agent, which increases the convergence speed of the algorithm. When the absolute value of the speed is small, a smaller probability is given to the agent, which improves the convergence accuracy of the algorithm. When the speed is outside the interval [TeX:] $$\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$$, the probability of a position change is 1.
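The mapping described above can be sketched as follows (a minimal illustration consistent with the description of formula (11); the function name is ours):

```python
import math

def position_change_probability(v):
    """Map an agent's velocity to a position-change probability:
    |sin(v)| inside [-pi/2, pi/2], and 1 outside that interval."""
    if abs(v) > math.pi / 2:
        return 1.0
    return abs(math.sin(v))
```

The mandatory position update would then fire with probability `f(v)` for each velocity component, e.g. `if rng.random() < position_change_probability(v): ...`.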
3.3 Jumping Mechanism
In the GSA, each agent adjusts its own speed and position according to the gravitational forces it receives, and it is constrained by its own state and by the global optimum. Multiple extreme points are present in multimodal problems, which is why the agents gather at a local optimum when they are close to it. Once the agents fall into a local optimum, it is difficult for them to jump out of the region and explore new unknown regions. Therefore, the Levy flight mechanism is introduced, which gives the locally optimal agent the ability to explore new areas.
Levy flight [18] is a random walk search method that can easily produce drastic changes during the search process, enabling the algorithm to jump out of local optima. After the [TeX:] $$t^{t h}$$ iteration, the position of the best agent of the population is retained, and a Levy search is performed on it. The step of the Levy flight search is calculated as follows:
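In Mantegna's algorithm, the standard way of generating Levy-distributed steps, the step is:

[TeX:] $$x=\frac{u}{|v|^{1 / \beta}}$$ (12)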
where [TeX:] $$x$$ is the Levy flight step, and [TeX:] $$u$$ and [TeX:] $$v$$ follow normal distributions: [TeX:] $$u \sim N\left(0, \sigma^{2}\right), v \sim N(0,1)$$. [TeX:] $$\sigma$$ is given as follows:
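In Mantegna's algorithm, [TeX:] $$\sigma$$ is given by:

[TeX:] $$\sigma=\left[\frac{\Gamma(1+\beta) \sin (\pi \beta / 2)}{\Gamma\left(\frac{1+\beta}{2}\right) \beta \cdot 2^{(\beta-1) / 2}}\right]^{1 / \beta}$$ (13)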
where [TeX:] $$\beta$$ takes a value in (0, 2], and [TeX:] $$\Gamma$$ is the gamma function. The search path [TeX:] $$\operatorname{Levy}(\varepsilon)$$ of the Levy flight can be determined by the above two formulas.
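The two formulas above can be combined into a step generator. This is a sketch of Mantegna's method; beta = 1.5 is a common illustrative choice, not necessarily the value used by the authors:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one Levy-flight step length via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma)   # u ~ N(0, sigma^2)
    v = rng.gauss(0.0, 1.0)     # v ~ N(0, 1)
    return u / abs(v) ** (1 / beta)
```

A retained best agent would then be perturbed per dimension by such a step (possibly scaled), giving it a chance to escape the local region through occasional long jumps.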
3.4 Pseudo-Code of SIN-GSA
Algorithm 1 is the pseudo-code of the SIN-GSA.
Algorithm 1. Sinusoidal map jumping gravity search algorithm based on asynchronous learning
4. Experiment and Analysis
4.1 Test Functions and Evaluation Criteria
To evaluate the performance of SIN-GSA, 18 test functions with different characteristics are selected. The details of the test functions are shown in Table 1. The functions are divided into three groups: [TeX:] $$F_{1}-F_{7}$$ are high-dimensional unimodal functions, which are used to test the optimization accuracy of the algorithms; [TeX:] $$F_{8}-F_{13}$$ are high-dimensional multimodal functions, which are used to test the global search performance of the algorithms and their ability to avoid premature convergence; and [TeX:] $$F_{14}-F_{18}$$ are low-dimensional multimodal functions, which are used to test the robustness of the algorithms.
The following performance indicators are mainly involved:
(1) Solution accuracy: the best objective value obtained when the algorithm reaches a given number of evaluations. The closer the solution is to the theoretical optimal value, the better.
(2) Convergence speed: measured by the optimal solution that can be obtained within the same number of evaluations, or by the number of evaluations required to reach the optimal solution.
Each algorithm was executed 30 times on each test function to obtain statistical results. With the maximum number of iterations set to 1000, the mean, best, and standard deviation (Std) of the solutions are reported.
4.2 Comparison of Convergence Accuracy
The proposed SIN-GSA is compared with GSA [3] and PSO-GSA [10]. In this experiment, the mean, best and Std values were obtained by GSA, PSO-GSA and SIN-GSA. The experimental results are reported in Table 2, and the best results are highlighted in bold.
Table 2. Experimental results of convergence accuracy
To display the optimization process of the algorithms intuitively, the iteration curves for some of the test functions are provided in Fig. 1. In Fig. 1, the abscissa represents the number of iterations, and the ordinate represents the average fitness value (logarithm with e as the base).
The analysis of Table 2 and Fig. 1 indicates that the high-dimensional unimodal functions [TeX:] $$\left(F_{1}-F_{7}\right)$$ examine the algorithms' global search capability. Among the seven test functions, SIN-GSA makes five functions converge to the theoretical value of zero. Although [TeX:] $$F_{5}$$ and [TeX:] $$F_{7}$$ did not converge to the theoretical value of 0, the best, mean, and standard deviation values obtained within 1000 iterations are significantly improved. As shown in Table 2, the proposed SIN-GSA has the strongest global optimization ability.
[TeX:] $$F_{8}-F_{13}$$ are high-dimensional multimodal functions with many local extreme points, which are used to test the ability of an algorithm to avoid premature convergence. [TeX:] $$F_{9}$$ is a typical nonlinear multimodal function: there are many local extreme points in its search space, and its peaks show a jumping shape, which increases the search difficulty for the algorithm. Table 2 shows that better values were obtained by the proposed algorithm for the benchmark functions [TeX:] $$F_{9}-F_{11}$$. In comparison, neither the GSA nor the PSO-GSA can obtain the ideal values for these test functions.
The low-dimensional multimodal functions [TeX:] $$\left(F_{14}-F_{18}\right)$$ have relatively few local extrema, and the global search is easier. As shown in Table 2, when the low-dimensional multimodal functions are solved, every index of SIN-GSA reaches the minimum value, and the theoretical optimal values can be obtained.
Fig. 1. Test functions convergence diagram.
4.3 Comparison of Convergence Speed
When considering the convergence rate, we use the accuracy of the solution at the same number of iterations. [TeX:] $$F_{1}-F_{13}$$ are high-dimensional functions, for which the optimal solutions were recorded when the number of iterations reached 500, 1000, and 1500. The optimal solutions of the low-dimensional functions [TeX:] $$\left(F_{14}-F_{18}\right)$$ were recorded when the number of function evaluations reached 400, 600, and 800. Tables 3 and 4 show the experimental results.
Under the same number of evaluations, the convergence accuracy of the proposed SIN-GSA is significantly improved compared with that of the GSA and PSO-GSA. SIN-GSA converges to the theoretical optimal value faster than the original algorithm. The convergence speed of the proposed algorithm is significantly improved, especially when the population size is increased.
Table 3. Experimental results of convergence speed [TeX:] $$\left(F_{1}-F_{13}\right)$$
Table 4. Experimental results of convergence speed [TeX:] $$\left(F_{14}-F_{18}\right)$$
5. Conclusion
The SIN-GSA based on asynchronous learning is proposed to address the insufficient convergence accuracy of the GSA. The main work of this paper is summarized as follows. (1) With the introduction of a learning mechanism into GSA, particles evolve on their own while continuing to learn from outstanding particles in the population; they remember their own evolutionary information and that of the optimal particle during the evolution process, which maintains the memory and sharing of evolutionary information, improves population diversity, and avoids premature convergence. (2) The concept of sine function mapping is introduced into GSA: the sine function maps the particle velocity to the probability of a position change, giving the particles strong position-change information and improving the convergence accuracy and speed of the algorithm. (3) With the introduction of the Levy flight concept into GSA, the Levy flight strategy perturbs particles during the search, changes the particle search path, strengthens the algorithm's local search, jumps out of locally optimal areas, and avoids falling into local optimal solutions. (4) Representative test functions with different peak shapes were selected for simulation experiments, and comparison with other improved algorithms shows that SIN-GSA has better optimization performance.
In future work, SIN-GSA can be extended to handle combinatorial optimization and constrained optimization problems. In addition, we can also employ SIN-GSA for solving more complex real-world problems.
Acknowledgement
This research is funded by the Jilin City Project of Scientific and Technological Innovation Development (No. 20190302202).