1. Introduction
The constrained optimization problem is a class of mathematical problems that arises widely in engineering applications, such as image processing, network communication, and resource allocation optimization. Traditional deterministic optimization algorithms usually use gradient-based search methods to solve constrained optimization problems. These methods have several disadvantages: they require a good initial value, they are applicable only when the objective function and the constraint conditions are differentiable, and the solution obtained is mostly a local optimum [1,2]. In recent years, constrained optimization methods based on swarm intelligence algorithms have become the main approach to solving constrained optimization problems because of two advantages. First, they do not depend on the analytical form of the objective function. Second, they can converge to the global optimum with high probability and have strong robustness [3,4].
However, almost all swarm intelligence algorithms are designed to solve unconstrained optimization problems. They lack a clear and effective mechanism to restrict the population search to the feasible domain, so constrained optimization problems cannot be solved directly and must be handled with certain constraint-handling techniques. Therefore, many researchers are working on constraint-handling techniques, evolutionary strategies, and combinations of the two to improve the ability of swarm intelligence algorithms to solve constrained optimization problems [5-7].
Constraint-handling techniques, as an important part of constrained optimization algorithms, are developing rapidly, ranging from special operator methods to Deb's rules and the ε constrained method. Studies that combine these constraint-handling techniques with swarm intelligence algorithms have shown significant results. However, the accuracy and robustness of constrained optimization algorithms still need to be improved further. A modified artificial bee colony algorithm (MABC) is proposed in [8], in which the artificial bee colony (ABC) algorithm uses Deb's rules, consisting of three simple heuristic rules and a probabilistic selection scheme that ranks feasible solutions by their fitness values and infeasible solutions by their constraint violation degrees. The results show that it performs poorly on some functions. The authors of [9] propose εDE, which combines the differential evolution algorithm with the ε constrained method, but εDE requires more function evaluations. The authors of [10] propose an optimization algorithm based on an adaptive ε constrained method. They introduce a new individual comparison criterion and a new ε setting method, which improve the search efficiency and robustness of the algorithm to a certain extent. However, it performs poorly on complex high-dimensional constrained optimization problems with many equality constraints. An interior penalty rule based evolutionary algorithm (IPES) is proposed in [11], in which interior penalty functions are used to evaluate feasible solutions and constraint violation degrees. Experiments show that the convergence accuracy of IPES is poor. An improved artificial bee colony (I-ABC) algorithm for constrained optimization problems is proposed in [12]. Better infeasible solutions and a periodic boundary handling method are used to enhance the algorithm's evolutionary ability. However, experiments on benchmark functions show that the optimization ability of I-ABC still needs improvement.
In summary, the constraint conditions of constrained optimization problems and the topological structure of the feasible and infeasible domains are very complex. Therefore, constrained optimization algorithms need a stronger ability to explore and maintain population diversity, and the ability of constraint-handling techniques to process infeasible solutions also needs to be improved.
In this paper, in order to solve constrained optimization problems better, an improved symbiotic organisms search algorithm with a mixed strategy based on the adaptive ε constrained method (ε_SOSMS) is proposed. The ε setting method is adjusted according to real-time information from the evolutionary process to enhance the use of infeasible solutions and maintain population diversity. The individual updating strategy and evolutionary strategies of symbiotic organisms search (SOS) are improved to strengthen the search ability, and the selection of optimal individuals is adjusted according to the proportions of feasible and infeasible individuals. Finally, numerical experiments on 13 benchmark functions show that ε_SOSMS not only converges to the global optimal solution but also has better robustness.
The remainder of this paper is arranged as follows. Section 2 describes constrained optimization problems. Section 3 introduces the SOS algorithm. Section 4 presents the improved SOS algorithm for constrained optimization problems. Section 5 reports the experiments and discussion. Section 6 concludes the paper.
2. Constrained Optimization Problems
2.1 Problem Description
Without loss of generality, the minimized constrained optimization problem can be expressed as Eq. (1).
where x is an n-dimensional variable whose i-th component lies between the lower bound [TeX:] $$l_{i}$$ and an upper bound. [TeX:] $$f(x)$$ is the objective function. [TeX:] $$g_{i}$$ are the inequality constraints, and m is the number of inequality constraints. [TeX:] $$h_{j}$$ are the equality constraints, and n is the number of equality constraints. The optimal solution of this problem is the minimum value of the objective function under the premise of satisfying all constraints.
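Eq. (1) itself is not reproduced in this copy. Based on the symbol definitions above, the standard minimized form reads as follows (the upper-bound symbols [TeX:] $$u_{i}$$ are an assumption of this reconstruction):

```latex
\begin{aligned}
\min_{x}\; & f(x), \qquad x = (x_1, \dots, x_n) \\
\text{s.t.}\; & g_i(x) \le 0, \quad i = 1, \dots, m \\
& h_j(x) = 0, \quad j = 1, \dots, n \\
& l_i \le x_i \le u_i, \quad i = 1, \dots, n
\end{aligned}
```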
2.2 ε Constrained Method
The ε constrained method is one of the best constraint-handling methods. It uses the parameter ε to distinguish individuals, extending the population's search by allowing more infeasible individuals with small constraint violation degrees to participate in evolution; therefore, the search ability of the algorithm is enhanced.
The constraint violation degree, which represents the degree to which a solution violates the constraint conditions, is calculated as Eq. (2):
In solving constrained optimization problems, equality constraints are usually transformed into inequality constraints by Eq. (3):
where the tolerance δ generally takes a small value and is set to the constant 0.0001, as in the literature [13].
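As a sketch, the constraint violation degree of Eq. (2), with equality constraints relaxed into inequalities per Eq. (3), is commonly computed as the sum of the positive parts of all constraint violations. The exact aggregation used by Eqs. (2)-(3) is not reproduced in this copy, so treat the following as an illustration of that common form:

```python
def violation_degree(x, ineq_constraints, eq_constraints, delta=1e-4):
    """Constraint violation degree: sum of the positive inequality
    violations plus the equality violations relaxed per Eq. (3) into
    |h_j(x)| - delta <= 0, with delta = 0.0001 as in the text."""
    g_viol = sum(max(0.0, g(x)) for g in ineq_constraints)
    h_viol = sum(max(0.0, abs(h(x)) - delta) for h in eq_constraints)
    return g_viol + h_viol
```

A feasible solution has a violation degree of exactly zero; any positive value measures how far the solution lies outside the feasible domain.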
The ε constrained method shows better results than other constraint-handling methods in dealing with the relationship between feasible and infeasible individuals. Its individual selection strategy is as follows. When the constraint violation degrees of two individuals are both greater than ε, the individual with the smaller constraint violation degree is selected. When the constraint violation degrees of two individuals are both smaller than ε, the individual with the smaller fitness value is selected. When the constraint violation degree of one individual is smaller than ε and that of the other is greater than ε, the individual whose constraint violation degree is smaller than ε is selected. The method of setting the parameter ε described in [14] is shown in Eqs. (4), (5), and (6).
where [TeX:] $$X_{\theta}$$ is the θ-th individual in ascending order of constraint violation degree among all individuals. S represents the population size. T is the maximum iteration number. t is a fixed number that is generally set by experiments. [TeX:] $$\Phi$$ is the constraint violation degree.
Eq. (4) shows that the parameter ε decreases as the number of iterations increases until it reaches zero. In the pre-evolutionary phase, infeasible individuals with large violation degrees are allowed to participate in evolution. As the iteration progresses, the number of feasible individuals in the evolution population increases and ε keeps getting smaller. The constraint violation degree that allows an infeasible solution to participate in evolution gradually decreases until all individuals in the evolution population are feasible solutions.
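The ε comparison rule above, together with a decreasing ε level of the kind used in [14], can be sketched as follows. Since Eqs. (4)-(6) are not reproduced in this copy, the schedule below is the widely used form with a control iteration Tc and exponent cp, both of which are assumptions of this sketch:

```python
def epsilon_better(a, b, eps):
    """Epsilon constrained comparison: each individual is a
    (fitness, violation) pair; smaller fitness is better."""
    fa, ga = a
    fb, gb = b
    if ga <= eps and gb <= eps:   # both within the eps level: compare fitness
        return fa < fb
    if ga > eps and gb > eps:     # both outside: compare violation degrees
        return ga < gb
    return ga <= eps              # mixed: prefer the one within the eps level

def epsilon_schedule(eps0, t, Tc, cp=5):
    """Illustrative eps level at iteration t: starts at eps0 (e.g. the
    violation of the theta-th ranked individual) and decays to 0 at Tc."""
    if t >= Tc:
        return 0.0
    return eps0 * (1.0 - t / Tc) ** cp
```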
3. Symbiotic Organisms Search
As a relatively new swarm intelligence optimization algorithm, SOS was proposed in 2014 [15]. Three kinds of update mechanisms are established in SOS by simulating the interactions between different organisms in nature: the mutualism phase, the commensalism phase, and the parasitism phase. The three update mechanisms are performed in a loop until the termination condition is reached, and then the optimal solution is output. The update mechanisms are as follows.
3.1 Mutualism Phase
In this phase, a biological individual [TeX:] $$X_{i}$$ interacts with another individual [TeX:] $$X_{j}$$ selected randomly from the population [TeX:] $$(j \neq i)$$. Both of them promote their development under the guidance of the current optimal individual and produce new individuals [TeX:] $$X_{\text {inew}} \text { and } X_{\text {jnew}}$$. The update methods are shown as Eqs. (7) and (8).
where rand(0,1) is a random number in [0,1], and [TeX:] $$X_{b e s t}$$ represents the individual with the smallest fitness value. [TeX:] $$B F_{1} \text { and } B F_{2}$$ are benefit factors determined randomly as either 1 or 2. [TeX:] $$X_{\text {inew}}$$ is the updated result of [TeX:] $$X_{i}$$ according to formula (8). The one with the better fitness value is preserved, and the same applies to [TeX:] $$X_{\text {jnew}} \text { and } X_{j}.$$
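Since Eqs. (7)-(8) are not rendered in this copy, a sketch of the standard SOS mutualism update, consistent with the variable definitions above, is:

```python
import numpy as np

def mutualism(X_i, X_j, X_best, rng):
    """Mutualism update of SOS (Eqs. (7)-(8)): both organisms move
    toward the best individual via their mutual vector; BF1 and BF2
    are benefit factors randomly set to 1 or 2."""
    mutual = (X_i + X_j) / 2.0
    bf1 = rng.integers(1, 3)  # benefit factor in {1, 2}
    bf2 = rng.integers(1, 3)
    X_inew = X_i + rng.random(X_i.shape) * (X_best - bf1 * mutual)
    X_jnew = X_j + rng.random(X_j.shape) * (X_best - bf2 * mutual)
    return X_inew, X_jnew
```

In the full algorithm, each new vector is kept only if it improves on its parent.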
3.2 Commensalism Phase
The individual [TeX:] $$X_{i}$$ in the population carries out the commensalism phase according to Eq. (9).
where rand(-1,1) is a random number in [-1,1]. The remaining variables have the same meanings as in the mutualism phase. [TeX:] $$X_{\text {inew}}$$ is compared with [TeX:] $$X_{i}$$, and the one with the better fitness value is retained.
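Since Eq. (9) is not rendered in this copy, a sketch of the standard SOS commensalism step, matching the description above, is:

```python
import numpy as np

def commensalism(X_i, X_j, X_best, rng):
    """Commensalism update of SOS (Eq. (9)): X_i moves along the
    direction (X_best - X_j) scaled by rand(-1, 1); X_j is unchanged."""
    step = rng.uniform(-1.0, 1.0, size=X_i.shape)
    return X_i + step * (X_best - X_j)
```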
3.3 Parasitic Phase
The individual update mechanism in the parasitic phase is as follows. A Parasite_Vector is created in the search space by duplicating individual [TeX:] $$X_{i}$$, and then randomly selected dimensions are modified using random numbers. A randomly selected individual [TeX:] $$X_{j}(j \neq i)$$ is compared with the Parasite_Vector. If the Parasite_Vector has a better fitness value, it is saved; otherwise, [TeX:] $$X_{j}$$ is kept unchanged.
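The construction of the Parasite_Vector described above can be sketched as follows (how many dimensions are mutated is drawn at random here, which is an assumption of this sketch):

```python
import numpy as np

def parasitism(X_i, lower, upper, rng):
    """Parasitic phase of SOS: duplicate X_i, then overwrite randomly
    chosen dimensions with random values inside the search bounds. The
    resulting Parasite_Vector then competes with a random X_j."""
    parasite = X_i.copy()
    n = X_i.size
    k = rng.integers(1, n + 1)                 # number of dimensions to mutate
    dims = rng.choice(n, size=k, replace=False)
    parasite[dims] = rng.uniform(lower[dims], upper[dims])
    return parasite
```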
The flowchart of SOS is shown in Fig. 1.
4. Symbiotic Organisms Search Algorithm for Constrained Optimization Problems
SOS was proposed for unconstrained optimization problems. In order to solve constrained optimization problems, an improved symbiotic organisms search algorithm with a mixed strategy based on the adaptive ε constrained method (ε_SOSMS) is proposed in this paper.
4.1 Framework of ε_SOSMS
The framework of ε_SOSMS is as follows.
Step 1: Initialize the parameters, including the population size, search range, and so on, and randomly generate the initial population.
Step 2: Calculate the fitness and constraint violation degree of each individual.
Step 3: Select the optimal individual and update the individuals according to the mutualism strategy of Section 4.3.
Step 4: Update the individuals according to the commensalism strategy of Section 4.3.
Step 5: Update the individuals according to the parasitic phase of Section 3.
Step 6: Select the individuals that form the evolutionary population as in Section 4.2.2.
Step 7: If the termination criterion is not satisfied, go to Step 3. Otherwise, stop and output the optimal solution.
The method of selecting the optimal individual used in the update steps of Steps 3, 4, and 5 is detailed in Section 4.3.2. The individual comparison method used in the update strategies is given in Section 4.2, and the details of individual renewal in the mutualism and commensalism phases are given in Section 4.3.1.
4.2 Improved ε Constrained Method
The ε constrained method consists of the ε setting method and the individual selection strategy, both of which are improved as follows.
4.2.1 Adaptive ε constrained method
Many studies have shown that the optimal solution of a constrained optimization problem often lies on the boundary of the feasible domain, where infeasible solutions, regarded as poor individuals, would normally not be preserved. Actually, individuals with small constraint violation degrees are the link between the feasible and infeasible regions. Allowing individuals with small constraint violation degrees to participate in evolution can enhance boundary search capabilities and provide direction information for the evolution of the algorithm. However, too many infeasible individuals, or individuals with large constraint violation degrees, can also harm evolution. Therefore, the reasonable treatment of feasible and infeasible solutions plays an important role in evolution. In previous studies, the method of setting the parameter ε depends only on the number of evolution iterations. It does not make full use of the information available from feasible individuals and from infeasible individuals with small constraint violation degrees during the iteration process, which affects the evolutionary efficiency of the algorithm to some extent. Based on this, this paper presents a new method of setting the parameter ε, as shown in Eq. (10).
where [TeX:] $$G_{\max } \text { and } G_{\min }$$ are the maximum and minimum values of the constraint violation degrees of all individuals, G' is the average value of all individuals' constraint violation degrees, t is the current iteration number, and T is the maximum number of iterations. The remaining quantity in Eq. (10) is the proportion of feasible individuals in the current iteration.
As shown in Eq. (10), the improved ε setting method adapts according to the constraint violation degrees and the number of iterations. It relies not only on the number of iterations but also makes full use of the effective information of infeasible solutions. Through this adaptive adjustment, the balance between feasible and infeasible individuals and the relationship between constraint violation degrees and fitness are better handled, so the search ability of the algorithm is improved.
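Eq. (10) is not rendered in this copy. Purely as an illustration of the ingredients described above (average violation degree, iteration count, and feasible proportion), an adaptive ε in that spirit could look like the following; this is not the paper's exact formula:

```python
import numpy as np

def adaptive_epsilon(violations, t, T):
    """Illustrative adaptive epsilon in the spirit of Eq. (10): it
    scales with the average violation degree of the population and
    shrinks both as iterations progress and as the proportion of
    feasible individuals grows. NOT the paper's exact Eq. (10)."""
    v = np.asarray(violations, dtype=float)
    g_avg = v.mean()                  # G': average violation degree
    p_feasible = (v == 0.0).mean()    # proportion of feasible individuals
    return g_avg * (1.0 - t / T) * (1.0 - p_feasible)
```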
4.2.2 Individual selection strategy
In order to ensure that some better infeasible individuals are selected so that the algorithm moves toward the feasible domain faster, a new individual selection strategy for the ε constrained method is proposed in this paper, according to the three possible cases of the merged population formed by combining the new population and the original population: all individuals are infeasible; feasible and infeasible individuals exist simultaneously; and all individuals are feasible. The details are as follows.
(1) All individuals are infeasible. All individuals are ranked in ascending order of constraint violation degree, and the top N individuals with the smallest constraint violation degrees constitute the new evolution population.
(2) Feasible and infeasible individuals exist simultaneously. It is well known that how to handle the relationship between infeasible and feasible individuals is the key to solving constrained optimization problems. Based on this viewpoint, both the fitness value and the constraint violation degree are taken into account to select the better individuals when the merged population includes both feasible and infeasible individuals. The details are as follows.
Because the individual's fitness value and constraint violation degree lack a consistent measure, they need to be normalized. The fitness value and constraint violation degree of individual [TeX:] $$X_{i}$$ are normalized by Eqs. (11) and (12), respectively, and the normalized fitness value and constraint violation degree are summed as in Eq. (13). The summed results are then sorted in ascending order, and the top N individuals constitute the new evolution population.
(3) All individuals are feasible. In this case, the constrained optimization problem becomes an unconstrained optimization problem. Therefore, the N individuals with the smallest fitness values are selected to constitute the new evolution population.
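The three selection cases above can be sketched as follows. Since Eqs. (11)-(13) are not rendered in this copy, min-max normalization is assumed here:

```python
import numpy as np

def select_population(fit, viol, N):
    """Form the next evolution population of size N from the merged
    population, following the three cases of Section 4.2.2. Returns
    the indices of the selected individuals."""
    fit = np.asarray(fit, dtype=float)
    viol = np.asarray(viol, dtype=float)
    feasible = viol == 0.0
    if not feasible.any():                 # case (1): all infeasible
        order = np.argsort(viol)
    elif feasible.all():                   # case (3): all feasible
        order = np.argsort(fit)
    else:                                  # case (2): mixed, Eqs. (11)-(13)
        fn = (fit - fit.min()) / (fit.max() - fit.min() + 1e-12)
        gn = (viol - viol.min()) / (viol.max() - viol.min() + 1e-12)
        order = np.argsort(fn + gn)        # sum of normalized measures
    return order[:N]
```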
4.3 Improved Mutualism Phase and Commensalism Phase
4.3.1 Improved formulas of individual renewal
In SOS, the individuals learn from the optimal individual in the mutualism and commensalism phases, as in Eq. (8) and Eq. (9). Although this accelerates convergence, it reduces the diversity of the population to a certain extent. Moreover, when there are too many infeasible individuals in the population, it is necessary to move the individuals toward the feasible domain. In this paper, in order to enhance the population's exploitation ability, a differential vector is added to Eq. (8) and Eq. (9) in the mutualism and commensalism phases to form Eq. (14) and Eq. (15), respectively.
where [TeX:] $$X_{\text {best }}$$ represents the optimal individual, [TeX:] $$X_{c}$$ represents the individual with the smallest constraint violation degree, and [TeX:] $$X_{j} \text { and } X_{r}$$ are random individuals [TeX:] $$(r \neq i \neq j).$$
According to the above search strategy, the selected individuals, which have smaller constraint violation degrees or fitness values, carry the best evolutionary information of the current population. Learning from them ensures that the population always moves closer to the optimal location and makes infeasible individuals gradually transform into feasible individuals.
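Since Eqs. (14)-(15) are not rendered in this copy, the following illustrates the general idea of augmenting a mutualism-style step with a differential vector (X_j - X_r) to preserve diversity; the exact coefficients and the role of X_c in the paper's equations are not reproduced here:

```python
import numpy as np

def mutualism_with_diff(X_i, X_j, X_r, X_best, rng):
    """Illustrative differential-vector augmentation in the spirit of
    Eq. (14): the usual step toward X_best plus a scaled difference of
    two random individuals, which injects population diversity."""
    mutual = (X_i + X_j) / 2.0
    bf = rng.integers(1, 3)                       # benefit factor in {1, 2}
    diff = rng.random(X_i.shape) * (X_j - X_r)    # differential vector term
    return X_i + rng.random(X_i.shape) * (X_best - bf * mutual) + diff
```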
4.3.2 The selection of optimal individuals
As described in Section 4.3.1, Eq. (14) and Eq. (15) need the optimal individual [TeX:] $$X_{\text {best}}.$$ Considering that the individuals in the current population may fall into three cases (all individuals are infeasible; feasible and infeasible individuals exist simultaneously; all individuals are feasible), [TeX:] $$X_{\text {best }}$$ is selected according to the following three methods.
(1) All individuals are infeasible. In this case, the individuals should be guided toward the feasible domain as soon as possible. The constraint violation degree represents the relationship between an individual and the feasible domain: the smaller the constraint violation degree, the closer the individual is to the feasible region. Therefore, the individual with the smallest constraint violation degree is selected as [TeX:] $$X_{\text {best }}.$$
(2) Feasible and infeasible individuals exist simultaneously. The individual with the smallest fitness value can provide direction information for population evolution, while an individual with a small constraint violation degree usually carries better location information and can enhance the boundary search capability of the algorithm. Based on these considerations, [TeX:] $$X_{\text {best}}$$ is selected by Eq. (16) in this case.
where X(min(G)) represents the individual with the smallest constrained violation degree, and X(min(f)) represents the individual with the smallest fitness value in the current iteration.
(3) All individuals are feasible. When all individuals are feasible, the constrained optimization problem is transformed into an unconstrained optimization problem, and the individual with the smallest fitness value is selected as [TeX:] $$X_{\text {best}}.$$
The pseudo code of the method of selecting the optimal individual [TeX:] $$X_{\text {best }}$$ is shown below.
In the above pseudo code, percent is the number of feasible individuals divided by the total number of individuals, and p1 is a random number. The first three lines correspond to the situation in which all individuals are infeasible, lines 4 to 10 correspond to feasible and infeasible individuals existing simultaneously, and lines 11 to 13 correspond to all individuals being feasible.
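A Python rendering of the selection described above is sketched below. In the mixed case, Eq. (16) is modeled as a probabilistic choice between X(min(f)) and X(min(G)); treating a fixed probability p1 as the threshold for the random draw is an assumption of this sketch:

```python
import numpy as np

def select_best(fit, viol, rng, p1=0.8):
    """Select the optimal individual X_best per Section 4.3.2.
    Returns the index of the chosen individual."""
    fit = np.asarray(fit, dtype=float)
    viol = np.asarray(viol, dtype=float)
    feasible = viol == 0.0
    if not feasible.any():            # (1) all infeasible: min violation
        return int(np.argmin(viol))
    if feasible.all():                # (3) all feasible: min fitness
        return int(np.argmin(fit))
    if rng.random() < p1:             # (2) mixed: Eq. (16), X(min(f))
        return int(np.argmin(fit))
    return int(np.argmin(viol))       # (2) mixed: Eq. (16), X(min(G))
```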
Table 1. Details of the 13 benchmark functions
5. Experiments and Discussion
5.1 Test Functions
The performance of the proposed ε_SOSMS is tested on 13 well-known benchmark functions [16]. Table 1 exhibits the details of the 13 benchmark functions. In Table 1, D is the dimension of the variables, ρ is the estimated ratio between the feasible region and the search space, LI is the number of linear inequality constraints, NI is the number of nonlinear inequality constraints, NE is the number of nonlinear equality constraints, and a is the number of constraints active at the optimal solution.
5.2 Experimental Setup
In order to verify the performance of ε_SOSMS, it is compared with ε_SOS, MABC [8], εDE [9], IPES [11], and I-ABC [12]. ε_SOS is an improved SOS algorithm based only on the improved ε constrained method of Section 4.2. The parameters of ε_SOSMS are set as follows: δ=0.0001, n=1.1, p1=0.8, p2=0.9. For each test function, each algorithm runs 30 times independently, and the population size of each algorithm is 50. εDE is stopped when the number of function evaluations reaches [TeX:] $$2.0 \times 10^{5}$$, and the other algorithms are stopped when the number of function evaluations reaches [TeX:] $$2.4 \times 10^{4}.$$ All experiments are executed on Windows 8.1 with an Intel Core i5-3337U CPU at 1.8 GHz and 4.0 GB RAM.
5.3 Experimental Results
The experimental results of ε_SOSMS and the other algorithms are summarized in Table 2. Best, worst, mean, and variance represent the best value, the worst value, the average value, and the standard deviation of the 30 independent runs, respectively. The first three indicators are used to compare convergence accuracy, and the variance is used to compare stability.
Table 2. Performance comparison of algorithms
From Table 2, some conclusions can be drawn. First, for all benchmark functions, ε_SOSMS achieves better results than ε_SOS, which shows that the improvement of the evolutionary strategy of SOS in Section 4.3 is effective. Second, ε_SOSMS can find the optimal values of the 12 test functions other than g02. εDE fails to find the theoretical optimal value only on g13; however, εDE requires significantly more fitness evaluations than ε_SOSMS. I-ABC finds the theoretical optimal values on only 9 test functions; compared with I-ABC, ε_SOSMS obtains a poorer result only on the g02 function. MABC can find the theoretical optimal values of 6 benchmark functions. In summary, the proposed ε_SOSMS is significantly superior to the other four algorithms in the ability to find theoretical optimal values. Third, except for the function g10, the variance of ε_SOSMS is almost equal to 0. In addition, for most benchmark functions, the variance obtained by ε_SOSMS is smaller than that of the other algorithms, which shows that the proposed algorithm has better stability.
Table 3. The number of fitness evaluations
In order to verify the convergence speed of ε_SOSMS, it is compared with MABC, which has the best optimization performance among the above comparison algorithms. The two algorithms run 30 times independently, and the results are shown in Table 3. FES represents the average of the minimum number of function evaluations needed to find the theoretical optimal value, and SR represents the success rate of reaching the theoretical optimal value in the 30 independent experiments. NA indicates that the algorithm does not find the theoretical optimal value within [TeX:] $$2.4 \times 10^{4}$$ function evaluations.
From Table 3, ε_SOSMS can find the optimal value on 12 benchmark functions, and its success rate on 10 of the test functions is 100%. MABC can find the optimal value on only 9 benchmark functions, and its success rate is 100% on only 5 test functions. This shows that ε_SOSMS is superior to MABC in robustness. In addition, except for g02 and g11, the number of function evaluations needed by ε_SOSMS to find the theoretical optimal value is significantly smaller than that of MABC, which shows that ε_SOSMS is superior to MABC in convergence speed.
The above experiments show that ε_SOSMS has obvious advantages in optimization ability and convergence speed.
6. Conclusions
In this paper, an improved symbiotic organisms search algorithm with a mixed strategy based on the adaptive ε constrained method (ε_SOSMS) is proposed. First, an adaptive ε setting method is proposed to take full advantage of infeasible individuals, which improves the search capability and helps the algorithm avoid falling into local optima. Second, the evolutionary strategy of SOS is improved to use the optimal individual to accelerate convergence and strengthen the search capability. Experiments on 13 benchmark functions show that ε_SOSMS performs better than the other algorithms.
Acknowledgement
This work was supported in part by the National Natural Science Foundation of China (No. 61501107), the Education Department of Jilin Province science and technology research project of the "13th Five-Year" in 2016 (No. 95), and the Project of Scientific and Technological Innovation Development of Jilin (No. 201750219, 201750227).