Only the Cuckoo Search Algorithm is to be applied, on 20 challenges chosen from the list below. Nothing else is required.
Optimization Functions
For this project, 40 challenging benchmark test functions (Liang, Qu et al. 2013; Liang, Qu et al. 2013; Wahab, Nefti-Meziani et al. 2015) have been collected. You are required to implement a nature-inspired algorithm to find the optimal value of any 20 optimization functions of your choice from the list given below for your class project.
Several characteristics of a test function shape the complexity of its landscape, such as modality, separability, and scalability (Dieterich and Hartke 2012). Modality is defined by the number of peaks in the function landscape. A function is called separable if the variables of the solution are independent and can be optimized separately; inseparable functions, on the other hand, are much harder to solve because the variables are interrelated and affect each other. Functions that can be extended to arbitrary dimensionality are known as scalable; otherwise they are called non-scalable. Scalability often adds considerable complexity to the search space; for example, some test-function landscapes turn from unimodal into multimodal as the number of dimensions is scaled up.
Some additional features can make a function landscape even more challenging when searching for minima, such as flat surfaces (less informative areas), basins, and narrow curved valleys. Moreover, some algorithms are known to exploit shortcomings (Liang, Suganthan et al. 2005) of popularly used test functions, such as a global optimum with the same value for every variable (Zhong, Liu et al. 2004), or a global optimum lying at the center of the landscape, at the origin (Zhong, Liu et al. 2004), along the axes (Leung and Wang 2001; Van den Bergh and Engelbrecht 2004), or on the bounds (Coello, Pulido et al. 2004). Conventional test functions are therefore rotated and/or shifted before being used to test optimization algorithms (Liang, Suganthan et al. 2005; Qin, Huang et al. 2009; Liang, Qu et al. 2013; Yu and Li 2015). In addition to using rotated and shifted functions, we hybridized functions following (Liang, Qu et al. 2013) to generate highly complex, real-world-like optimization problems that test the extended capacity of the algorithms. The rotation matrices, shifting vectors, and shuffled-index vectors used for rotation, shifting, and hybridization, respectively, were collected from the shared link of data used in CEC 2014.
Among the 40 functions, 27 are classical, whereas 13 are rotated and/or shifted and hybridized to construct modified functions. Based on the three basic properties mentioned above (modality, separability, and scalability), we classify the 27 classical functions into six groups (Groups I through VI). The remaining 13 modified functions are categorized into three further groups according to their complexity. The functions in each group are listed below with the search range [lb,ub] of the solution variable X, the global optimum X^*, and the function value at the global optimum, f(X^*):
Group I: Unimodal, Separable and Scalable functions
Sphere function, f_Sphere(X) = ∑_{i=1}^{d} x_i^2
[lb,ub] = [-100,100]^d, X^* = [0,…,0]^T, f_Sphere(X^*) = 0
Cigar or Bent Cigar function, f_Cigar(X) = x_1^2 + 10^6 ∑_{i=2}^{d} x_i^2
[lb,ub] = [-100,100]^d, X^* = [0,…,0]^T, f_Cigar(X^*) = 0
Discus function, f_Discus(X) = 10^6 x_1^2 + ∑_{i=2}^{d} x_i^2
[lb,ub] = [-100,100]^d, X^* = [0,…,0]^T, f_Discus(X^*) = 0
Rotated Hyper-ellipsoid (RHE) function, f_RHE(X) = ∑_{i=1}^{d} ∑_{j=1}^{i} x_j^2
[lb,ub] = [-100,100]^d, X^* = [0,…,0]^T, f_RHE(X^*) = 0
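Assuming NumPy and a 1-D array for X, the four Group I definitions translate directly to code (a sketch; the function names are my own):

```python
import numpy as np

def f_sphere(x):
    return float(np.sum(x ** 2))

def f_cigar(x):
    # x_1^2 plus 10^6 times the sum of the remaining squares
    return float(x[0] ** 2 + 1e6 * np.sum(x[1:] ** 2))

def f_discus(x):
    return float(1e6 * x[0] ** 2 + np.sum(x[1:] ** 2))

def f_rhe(x):
    # sum_{i=1}^{d} sum_{j=1}^{i} x_j^2 == sum of cumulative sums of squares
    return float(np.sum(np.cumsum(x ** 2)))
```

Each returns 0 at X^* = [0,…,0], matching the listed optima.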
Group II: Unimodal, Inseparable and Non-scalable
Zettl function, f_Zettl(X) = (1/4)x_1 + (x_1^2 - 2x_1 + x_2^2)^2
[lb,ub] = [-5,5]^d, X^* = [-0.0299, 0]^T, f_Zettl(X^*) = -0.003791237
Leon function, f_Leon(X) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2
[lb,ub] = [-1.2,1.2]^d, X^* = [1,1]^T, f_Leon(X^*) = 0
Easom function, f_Easom(X) = -cos(x_1)cos(x_2) e^{-(x_1-π)^2 - (x_2-π)^2}
[lb,ub] = [-100,100]^d, X^* = [π,π]^T, f_Easom(X^*) = -1
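Since these Group II functions are non-scalable (d = 2), plain `math` suffices for a sketch (function names are mine):

```python
import math

def f_zettl(x):
    return 0.25 * x[0] + (x[0] ** 2 - 2 * x[0] + x[1] ** 2) ** 2

def f_leon(x):
    return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

def f_easom(x):
    # sharp negative peak at (pi, pi), nearly flat elsewhere
    return (-math.cos(x[0]) * math.cos(x[1])
            * math.exp(-(x[0] - math.pi) ** 2 - (x[1] - math.pi) ** 2))
```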
Group III: Unimodal, Inseparable and Scalable
Zakharov function, f_Zakharov(X) = ∑_{i=1}^{d} x_i^2 + ((1/2)∑_{i=1}^{d} i·x_i)^2 + ((1/2)∑_{i=1}^{d} i·x_i)^4
[lb,ub] = [-5,10]^d, X^* = [0,…,0]^T, f_Zakharov(X^*) = 0
Schwefel 1.2 function, f_Schwefel1.2(X) = ∑_{i=1}^{d} (∑_{j=1}^{i} x_j)^2
[lb,ub] = [-100,100]^d, X^* = [0,…,0]^T, f_Schwefel1.2(X^*) = 0
Schwefel 2.2 function, f_Schwefel2.2(X) = ∑_{i=1}^{d} |x_i| + ∏_{i=1}^{d} |x_i|
[lb,ub] = [-100,100]^d, X^* = [0,…,0]^T, f_Schwefel2.2(X^*) = 0
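The Group III definitions as a NumPy sketch (names are mine; note that Schwefel 1.2 squares prefix sums of x, whereas RHE in Group I sums prefix sums of x^2):

```python
import numpy as np

def f_zakharov(x):
    s = 0.5 * np.sum(np.arange(1, len(x) + 1) * x)  # (1/2) sum i*x_i
    return float(np.sum(x ** 2) + s ** 2 + s ** 4)

def f_schwefel_1_2(x):
    # sum over i of (x_1 + ... + x_i)^2
    return float(np.sum(np.cumsum(x) ** 2))

def f_schwefel_2_2(x):
    a = np.abs(x)
    return float(np.sum(a) + np.prod(a))
```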
Group IV: Multimodal, Separable and Scalable
Rastrigin function
f_Rastrigin(X) = 10d + ∑_{i=1}^{d} [x_i^2 - 10cos(2πx_i)]
[lb,ub] = [-5.2,5.2]^d, X^* = [0,…,0]^T, f_Rastrigin(X^*) = 0
Schwefel 2.26 function
f_Schwefel2.26(X) = 418.9829d - ∑_{i=1}^{d} x_i sin(√|x_i|)
[lb,ub] = [-500,500]^d, X^* = [420.9687,…,420.9687]^T, f_Schwefel2.26(X^*) = 0
Michalewicz function
f_Michalewicz(X) = -∑_{i=1}^{d} sin(x_i) sin^{2s}(i·x_i^2/π), steepness s = 10
[lb,ub] = [0,π]^d, X^* = [2.20,1.57]^T for d = 2
f_Michalewicz(X^*) = -1.8013, -4.687658, -9.66015 for d = 2, 5, 10
Styblinski-Tang (ST) function
f_ST(X) = (1/2)∑_{i=1}^{d} (x_i^4 - 16x_i^2 + 5x_i)
[lb,ub] = [-5,5]^d, X^* = [-2.903534,…,-2.903534]^T, f_ST(X^*) = -39.16599d
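The Group IV definitions as a NumPy sketch (function names are mine):

```python
import numpy as np

def f_rastrigin(x):
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def f_schwefel_2_26(x):
    return float(418.9829 * len(x) - np.sum(x * np.sin(np.sqrt(np.abs(x)))))

def f_michalewicz(x, s=10):
    i = np.arange(1, len(x) + 1)
    return float(-np.sum(np.sin(x) * np.sin(i * x ** 2 / np.pi) ** (2 * s)))

def f_styblinski_tang(x):
    return float(0.5 * np.sum(x ** 4 - 16 * x ** 2 + 5 * x))
```

Evaluating at the listed optima reproduces the stated values to the precision they are quoted at (e.g. Schwefel 2.26 leaves a tiny residual because 420.9687 is itself rounded).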
Group V: Multimodal, Inseparable and non-Scalable
Schaffer F2 function
f_SchafferF2(X) = 0.5 + (sin^2(x_1^2 - x_2^2) - 0.5)/[1 + 0.001(x_1^2 + x_2^2)]^2
[lb,ub] = [-100,100]^d, X^* = [0,0]^T, f_SchafferF2(X^*) = 0
Schaffer F6 function
f_SchafferF6(X) = 0.5 + (sin^2(√(x_1^2 + x_2^2)) - 0.5)/[1 + 0.001(x_1^2 + x_2^2)]^2
[lb,ub] = [-100,100]^d, X^* = [0,0]^T, f_SchafferF6(X^*) = 0
Bird function
f_Bird(X) = (x_1 - x_2)^2 + sin(x_1) e^{[1-cos(x_2)]^2} + cos(x_2) e^{[1-sin(x_1)]^2}
[lb,ub] = [-2π,2π]^d, X^* = [4.701056, 3.152946]^T or X^* = [-1.582142, -3.130247]^T, f_Bird(X^*) = -106.7645367198034
Levy 13 function
f_Levy13(X) = sin^2(3πx_1) + (x_1 - 1)^2 [1 + sin^2(3πx_2)] + (x_2 - 1)^2 [1 + sin^2(2πx_2)]
[lb,ub] = [-10,10]^d, X^* = [1,1]^T, f_Levy13(X^*) = 0
Carrom Table (CT) function
f_CarromTable(X) = -(1/30) e^{2|1 - √(x_1^2 + x_2^2)/π|} cos^2(x_1) cos^2(x_2)
[lb,ub] = [-10,10]^d, X^* = [±9.646157, ±9.646157]^T, f_CarromTable(X^*) = -24.1568155
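These five two-dimensional Group V definitions as a NumPy sketch (names are mine):

```python
import numpy as np

def f_schaffer_f2(x):
    num = np.sin(x[0] ** 2 - x[1] ** 2) ** 2 - 0.5
    return float(0.5 + num / (1 + 0.001 * (x[0] ** 2 + x[1] ** 2)) ** 2)

def f_schaffer_f6(x):
    num = np.sin(np.sqrt(x[0] ** 2 + x[1] ** 2)) ** 2 - 0.5
    return float(0.5 + num / (1 + 0.001 * (x[0] ** 2 + x[1] ** 2)) ** 2)

def f_bird(x):
    return float((x[0] - x[1]) ** 2
                 + np.sin(x[0]) * np.exp((1 - np.cos(x[1])) ** 2)
                 + np.cos(x[1]) * np.exp((1 - np.sin(x[0])) ** 2))

def f_levy13(x):
    return float(np.sin(3 * np.pi * x[0]) ** 2
                 + (x[0] - 1) ** 2 * (1 + np.sin(3 * np.pi * x[1]) ** 2)
                 + (x[1] - 1) ** 2 * (1 + np.sin(2 * np.pi * x[1]) ** 2))

def f_carrom_table(x):
    r = np.sqrt(x[0] ** 2 + x[1] ** 2)
    return float(-np.exp(2 * abs(1 - r / np.pi))
                 * np.cos(x[0]) ** 2 * np.cos(x[1]) ** 2 / 30)
```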
Group VI: Multimodal, Inseparable and Scalable
Ackley function
f_Ackley(X) = -a·e^{-b√((1/d)∑_{i=1}^{d} x_i^2)} - e^{(1/d)∑_{i=1}^{d} cos(c·x_i)} + a + e
a = 20, b = 0.2, c = 2π
[lb,ub] = [-32,32]^d, X^* = [0,…,0]^T, f_Ackley(X^*) = 0
Rosenbrock function
f_Rosenbrock(X) = ∑_{i=1}^{d-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]
[lb,ub] = [-30,30]^d, X^* = [1,…,1]^T, f_Rosenbrock(X^*) = 0
Griewank function
f_Griewank(X) = ∑_{i=1}^{d} x_i^2/4000 - ∏_{i=1}^{d} cos(x_i/√i) + 1
[lb,ub] = [-600,600]^d, X^* = [0,…,0]^T, f_Griewank(X^*) = 0
Sine Envelope Sine Wave (SESW) function
f_SESW(X) = ∑_{i=1}^{d-1} [(sin^2(√(x_i^2 + x_{i+1}^2)) - 0.5)/[1 + 0.001(x_i^2 + x_{i+1}^2)]^2 + 0.5]
[lb,ub] = [-100,100]^d, X^* = [0,…,0]^T, f_SESW(X^*) = 0
Trigonometric function
f_Trigonometric(X) = ∑_{i=1}^{d} [d + i(1 - cos x_i) - sin x_i - ∑_{j=1}^{d} cos x_j]^2
[lb,ub] = [-1000,1000]^d, X^* = [0,…,0]^T, f_Trigonometric(X^*) = 0
Levy function
f_Levy(X) = sin^2(πy_1) + ∑_{i=1}^{d-1} (y_i - 1)^2 [1 + 10 sin^2(πy_{i+1})] + (y_d - 1)^2 [1 + sin^2(2πy_d)], where y_i = 1 + (1/4)(x_i + 1)
[lb,ub] = [-50,50]^d, X^* = [-1,…,-1]^T, f_Levy(X^*) = 0
Schaffer F7 function
f_SchafferF7(X) = [(1/(d-1))∑_{i=1}^{d-1} (√(y_i) + √(y_i) sin(50 y_i^{0.2}))]^2, where y_i = √(x_i^2 + x_{i+1}^2)
[lb,ub] = [-100,100]^d, X^* = [0,…,0]^T, f_SchafferF7(X^*) = 0
Lunacek function
f_Lunacek(X) = min(∑_{i=1}^{d} (x_i - μ_1)^2, d + s·∑_{i=1}^{d} (x_i - μ_2)^2) + 10∑_{i=1}^{d} [1 - cos(2π(x_i - μ_1))]
μ_1 = 2.5, s = 1 - 1/(2√(d+20) - 8.2), μ_2 = -√((μ_1^2 - 1)/s)
[lb,ub] = [-10,10]^d, X^* = [μ_1,…,μ_1]^T, f_Lunacek(X^*) = 0
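A NumPy sketch of several Group VI definitions (names are mine; note the shifted Levy variant used above, with y_i = 1 + (x_i + 1)/4 and X^* = [-1,…,-1]):

```python
import numpy as np

def f_ackley(x, a=20.0, b=0.2, c=2 * np.pi):
    d = len(x)
    return float(-a * np.exp(-b * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(c * x)) / d) + a + np.e)

def f_rosenbrock(x):
    return float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1) ** 2))

def f_griewank(x):
    i = np.arange(1, len(x) + 1)
    return float(np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1)

def f_levy(x):
    y = 1 + (np.asarray(x) + 1) / 4      # shifted variant: X* = [-1,...,-1]
    return float(np.sin(np.pi * y[0]) ** 2
                 + np.sum((y[:-1] - 1) ** 2 * (1 + 10 * np.sin(np.pi * y[1:]) ** 2))
                 + (y[-1] - 1) ** 2 * (1 + np.sin(2 * np.pi * y[-1]) ** 2))

def f_schaffer_f7(x):
    y = np.sqrt(x[:-1] ** 2 + x[1:] ** 2)
    return float((np.sum(np.sqrt(y) * (1 + np.sin(50 * y ** 0.2)))
                  / (len(x) - 1)) ** 2)

def f_lunacek(x, mu1=2.5):
    x, d = np.asarray(x), len(x)
    s = 1 - 1 / (2 * np.sqrt(d + 20) - 8.2)
    mu2 = -np.sqrt((mu1 ** 2 - 1) / s)
    return float(min(np.sum((x - mu1) ** 2), d + s * np.sum((x - mu2) ** 2))
                 + 10 * np.sum(1 - np.cos(2 * np.pi * (x - mu1))))
```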
Group VII: Rotated functions
Rotating the landscape of a function using an orthogonal rotation matrix R (Salomon 1996) prevents the optimum from lying along the coordinate axes. Here, we rotate 7 scalable functions (2 unimodal and 5 multimodal).
Rotated Cigar function, f_R-Cigar(Z) = f_Cigar(Z), Z = R×X
[lb,ub] = [-100,100]^d, X^* = [0,…,0]^T, f_R-Cigar(X^*) = 0
Rotated Discus function, f_R-Discus(Z) = f_Discus(Z), Z = R×X
[lb,ub] = [-100,100]^d, X^* = [0,…,0]^T, f_R-Discus(X^*) = 0
Rotated Rastrigin function
f_R-Rastrigin(Z) = f_Rastrigin(Z), Z = R×(X×(5.2/100))
[lb,ub] = [-5.2,5.2]^d, X^* = [0,…,0]^T, f_R-Rastrigin(X^*) = 0
Rotated Ackley function
f_R-Ackley(Z) = f_Ackley(Z), Z = R×(X×(32/100))
[lb,ub] = [-32,32]^d, X^* = [0,…,0]^T, f_R-Ackley(X^*) = 0
Rotated Griewank function
f_R-Griewank(Z) = f_Griewank(Z), Z = R×(X×(600/100))
[lb,ub] = [-600,600]^d, X^* = [0,…,0]^T, f_R-Griewank(X^*) = 0
Rotated Schaffer F7 function
f_R-SchafferF7(Z) = f_SchafferF7(Z), Z = R×X
[lb,ub] = [-100,100]^d, X^* = [0,…,0]^T, f_R-SchafferF7(X^*) = 0
Rotated Lunacek function
f_R-Lunacek(Z) = f_Lunacek(Z), Z = R×(X×(10/100))
[lb,ub] = [-10,10]^d, X^* = [μ_1,…,μ_1]^T, f_R-Lunacek(X^*) = 0
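The rotated construction above can be sketched as a wrapper: an orthogonal matrix R (generated here by QR factorization as a stand-in for the shared CEC 2014 matrices) is applied to the scaled input before evaluating the base function.

```python
import numpy as np

def rotated(f, R, scale=1.0):
    """Return the rotated function x -> f(R @ (x * scale))."""
    return lambda x: f(R @ (np.asarray(x, dtype=float) * scale))

def rastrigin(z):
    return float(10 * len(z) + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z)))

rng = np.random.default_rng(0)
R, _ = np.linalg.qr(rng.normal(size=(5, 5)))   # random orthogonal matrix

f_r_rastrigin = rotated(rastrigin, R, scale=5.2 / 100)
```

Since R @ 0 = 0, the global optimum stays at the origin, as listed above, but it no longer lies along the coordinate axes of the rotated landscape.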
Group VIII: Shifted and Rotated functions
The shifting operation moves the global optimum from its old position (o_old) to a new random position (o_new) using Eq. 1, so as to avoid the limitations of identical variable values at the optimum or an optimum at the center of the landscape.
f(Z), Z = R×(X - o_new + o_old)    (1)
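Eq. 1 as code (a sketch): at X = o_new, the argument Z reduces to R×o_old, so the base function is evaluated at its original optimum. The matrix R and the vectors o_new, o_old below are illustrative stand-ins for the shared CEC 2014 data; the sphere's old optimum is the origin, so o_old = 0 here.

```python
import numpy as np

def shifted_rotated(f, R, o_new, o_old, scale=1.0):
    """Return x -> f(R @ ((x - o_new + o_old) * scale)), per Eq. 1."""
    return lambda x: f(R @ ((np.asarray(x, dtype=float) - o_new + o_old) * scale))

def sphere(z):
    return float(np.sum(z ** 2))

rng = np.random.default_rng(42)
R, _ = np.linalg.qr(rng.normal(size=(4, 4)))
o_new = rng.uniform(-50, 50, 4)    # new random optimum position
o_old = np.zeros(4)                # sphere's original optimum

g = shifted_rotated(sphere, R, o_new, o_old)
```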
Shifted and Rotated Rastrigin function
f_SR-Rastrigin(Z) = f_Rastrigin(Z), Z = R×((X - o_new + o_old)×(5.2/100))
[lb,ub] = [-5.2,5.2]^d, X^* = [0,…,0]^T, f_SR-Rastrigin(X^*) = 0
Shifted and Rotated Ackley function
f_SR-Ackley(Z) = f_Ackley(Z), Z = R×((X - o_new + o_old)×(32/100))
[lb,ub] = [-32,32]^d, X^* = [0,…,0]^T, f_SR-Ackley(X^*) = 0
Shifted and Rotated Lunacek function
f_SR-Lunacek(Z) = f_Lunacek(Z), Z = R×((X - o_new + o_old)×(10/100))
[lb,ub] = [-10,10]^d, X^* = [μ_1,…,μ_1]^T, f_SR-Lunacek(X^*) = 0
Group IX: Hybrid functions
A hybrid function, defined by Eq. 2, represents real-world optimization problems in which different subcomponents of the variables can have different characteristics. The variables are randomly shuffled and divided into subcomponents, and a different basic function is applied to each subcomponent. In our hybrid functions, the basic functions are additionally rotated.
f(Z) = f_1(R_1 Z_1) + f_2(R_2 Z_2) + … + f_n(R_n Z_n)    (2)
Here,
Z = [Z_1, Z_2, …, Z_n]
Z_1 = [x_{S_1}, …, x_{S_{d_1}}], Z_2 = [x_{S_{d_1+1}}, …, x_{S_{d_1+d_2}}], …, Z_n = [x_{S_{d_1+…+d_{n-1}+1}}, …, x_{S_d}]
n is the number of basic functions
S = random_permutation(1:d)
f_i is the i-th basic function
d_i is the dimension of the i-th basic function, with ∑_{i=1}^{n} d_i = d
p_i is the percentage of the variables assigned to the i-th basic function
d_1 = ⌈p_1×d⌉, d_2 = ⌈p_2×d⌉, …, d_n = d - ∑_{i=1}^{n-1} d_i
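The construction of Eq. 2 can be sketched as follows: shuffle the variable indices, split them into subcomponents by the percentages p_i, rotate each piece, and sum the basic functions. The rotation matrices here are random orthogonal stand-ins for the shared CEC 2014 data, and the three basic functions are illustrative.

```python
import numpy as np

def make_hybrid(funcs, pcts, d, seed=0):
    rng = np.random.default_rng(seed)
    S = rng.permutation(d)                        # shuffled-indices vector
    sizes = [int(np.ceil(p * d)) for p in pcts[:-1]]
    sizes.append(d - sum(sizes))                  # d_n = d - sum of earlier d_i
    splits = np.split(S, np.cumsum(sizes)[:-1])   # index set per subcomponent
    Rs = [np.linalg.qr(rng.normal(size=(len(s), len(s))))[0] for s in splits]
    def f(x):
        x = np.asarray(x, dtype=float)
        return sum(fi(R @ x[idx]) for fi, R, idx in zip(funcs, Rs, splits))
    return f

sphere = lambda z: float(np.sum(z ** 2))
rastrigin = lambda z: float(10 * len(z) + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z)))
griewank = lambda z: float(np.sum(z ** 2) / 4000
                           - np.prod(np.cos(z / np.sqrt(np.arange(1, len(z) + 1)))) + 1)

h = make_hybrid([rastrigin, sphere, griewank], [1/3, 1/3, 1/3], d=30)
```

Because each basic function here has its optimum at the origin and R @ 0 = 0, the hybrid evaluates to 0 at X = [0,…,0].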
Hybrid function – 1
f_Hybrid-1(Z) = f_Rastrigin(R_1 Z_1) + f_Cigar(R_2 Z_2) + f_Griewank(R_3 Z_3)
[p_1,p_2,p_3 ]=[1⁄3,1⁄3,1⁄3] for d = 30
[p_1,p_2,p_3 ]=[2⁄5,1⁄5,2⁄5] for d = 50
[lb,ub]=[-100,100]^d, f_(Hybrid-1) (X^* )=0
Hybrid function – 2
f_Hybrid-2(Z) = f_Sphere(R_1 Z_1) + f_Ackley(R_2 Z_2) + f_SchafferF7(R_3 Z_3)
[p_1,p_2,p_3 ]=[1⁄3,1⁄3,1⁄3] for d = 30
[p_1,p_2,p_3 ]=[2⁄5,1⁄5,2⁄5] for d = 50
[lb,ub]=[-100,100]^d, f_(Hybrid-2) (X^* )=0
Hybrid function – 3
f_Hybrid-3(Z) = f_Discus(R_1 Z_1) + f_Rosenbrock(R_2 Z_2) + f_Griewank(R_3 Z_3)
[p_1,p_2,p_3 ]=[1⁄3,1⁄3,1⁄3] for d = 30
[p_1,p_2,p_3 ]=[2⁄5,1⁄5,2⁄5] for d = 50
[lb,ub]=[-100,100]^d, f_(Hybrid-3) (X^* )=0
References:
Liang, J., et al. (2013). "Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization." Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou China and Technical Report, Nanyang Technological University, Singapore.
Liang, J., et al. (2013). "Problem definitions and evaluation criteria for the CEC 2013 special session on real-parameter optimization." Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China and Nanyang Technological University, Singapore, Technical Report 201212.
Wahab, M. N. A., et al. (2015). "A Comprehensive Review of Swarm Optimization Algorithms." PLoS ONE.
Dieterich, J. M. and B. Hartke (2012). "Empirical review of standard benchmark functions using evolutionary global optimization." arXiv preprint arXiv:1207.4318.
Liang, J.-J., et al. (2005). Novel composition test functions for numerical global optimization. Proceedings 2005 IEEE Swarm Intelligence Symposium, 2005. SIS 2005., IEEE.
Zhong, W., et al. (2004). "A multiagent genetic algorithm for global numerical optimization." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 34(2): 1128-1141.
Leung, Y.-W. and Y. Wang (2001). "An orthogonal genetic algorithm with quantization for global numerical optimization." IEEE Transactions on Evolutionary Computation 5(1): 41-53.
Van den Bergh, F. and A. P. Engelbrecht (2004). "A cooperative approach to particle swarm optimization." IEEE Transactions on Evolutionary Computation 8(3): 225-239.
Coello, C. A. C., et al. (2004). "Handling multiple objectives with particle swarm optimization." IEEE Transactions on Evolutionary Computation 8(3): 256-279.
Qin, A. K., et al. (2009). "Differential evolution algorithm with strategy adaptation for global numerical optimization." IEEE Transactions on Evolutionary Computation 13(2): 398-417.
Yu, J. J. Q. and V. O. K. Li (2015). "A Social Spider Algorithm for Global Optimization." Applied Soft Computing 30: 614-627.