Optimization Functions
For this project, 40 challenging benchmark test functions (Liang, Qu et al. 2013, Liang, Qu et al. 2013, Wahab, Nefti-Meziani et al. 2015) have been collected. You are required to implement a nature-inspired algorithm to find the optimal value of any 20 optimization functions of your choice from the list given below for your class project.
There are several characteristics of test functions that determine the complexity of a function's landscape, such as modality, separability, and scalability (Dieterich and Hartke 2012). Modality is defined by the number of ambiguous peaks in the function landscape. A function is called separable if the variables of the solution are independent and can therefore be optimized separately. Inseparable functions, on the other hand, are considerably more difficult to solve because the variables are interrelated and affect each other. Functions that can be extended to arbitrary dimensionality are known as scalable; otherwise, they are called non-scalable. Scalability often adds considerable complexity to the search space; for example, some test-function landscapes that are unimodal in low dimensions become multimodal when the number of dimensions is scaled up.
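To make separability concrete, the following minimal Python sketch (assuming only NumPy; the `coordinate_search` helper is illustrative, not part of the project materials) optimizes one variable at a time. This works on the separable Sphere function but tends to stall on the non-separable Rosenbrock function, whose terms couple neighbouring variables.

```python
import numpy as np

def sphere(x):
    # Separable: each x_i contributes independently to the sum.
    return np.sum(x**2)

def rosenbrock(x):
    # Non-separable: every term couples x_i with x_{i+1}.
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

def coordinate_search(f, x, step=0.1, sweeps=200):
    """Greedy coordinate-wise search: adjust one variable at a time."""
    x = x.copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            for cand in (x[i] - step, x[i] + step):
                trial = x.copy()
                trial[i] = cand
                if f(trial) < f(x):
                    x = trial
    return x

x0 = np.full(5, 3.0)
print(sphere(coordinate_search(sphere, x0)))          # ~0: separable, solved per-coordinate
print(rosenbrock(coordinate_search(rosenbrock, x0)))  # typically stalls above 0
```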
There are some additional features that can make a function landscape even more challenging when searching for minima, such as flat surfaces (less informative areas), basins, and narrow curved valleys. However, some algorithms have been found to exploit several shortcomings (Liang, Suganthan et al. 2005) of popularly used test functions, such as the global optimum having the same value for every variable (Zhong, Liu et al. 2004), or the global optimum lying at the center of the landscape, at the origin (Zhong, Liu et al. 2004), along the axes (Leung and Wang 2001, Van den Bergh and Engelbrecht 2004), or on the bounds (Coello, Pulido et al. 2004). For this reason, conventional test functions are rotated and/or shifted before being used to test optimization algorithms (Liang, Suganthan et al. 2005, Qin, Huang et al. 2009, Liang, Qu et al. 2013, Liang, Qu et al. 2013, Yu and Li 2015). In addition to using rotated and shifted functions, we hybridized functions according to (Liang, Qu et al. 2013) to generate highly complex real-world optimization problems that test the extended capacity of the algorithms. We collected the rotation matrix, shifting vector, and shuffled-indices vector for rotation, shifting, and hybridization, respectively, from a shared link of the data used in CEC 2014[1].
Among the 40 functions, 27 are classical functions, whereas 13 are rotated and/or shifted and hybridized to construct modified functions. Depending on the three basic properties (modality, separability, and scalability) mentioned above, we classify the 27 classical functions into six groups (Group I through VI). The remaining 13 modified functions are categorized into three groups according to their complexity. The functions in the different groups are listed below, each with its search range $[lb, ub]$ for the solution variable $X$, its global optimum $X^*$, and the function value at the global optimum, $f(X^*)$:
$f_{\text{Sphere}}(X) = \sum_{i=1}^{d} x_i^2$
$[lb, ub] = [-100, 100]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{Sphere}}(X^*) = 0$
$f_{\text{Cigar}}(X) = x_1^2 + 10^6 \sum_{i=2}^{d} x_i^2$
$[lb, ub] = [-100, 100]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{Cigar}}(X^*) = 0$
$f_{\text{Discus}}(X) = 10^6 x_1^2 + \sum_{i=2}^{d} x_i^2$
$[lb, ub] = [-100, 100]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{Discus}}(X^*) = 0$
$f_{\text{RHE}}(X) = \sum_{i=1}^{d} \sum_{j=1}^{i} x_j^2$ (Rotated Hyper-Ellipsoid)
$[lb, ub] = [-100, 100]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{RHE}}(X^*) = 0$
$f_{\text{Zettl}}(X) = \left(x_1^2 + x_2^2 - 2x_1\right)^2 + 0.25\,x_1$
$[lb, ub] = [-5, 5]^d$, $X^* = [-0.0299, 0]^T$, $f_{\text{Zettl}}(X^*) = -0.003791237$
$f_{\text{Leon}}(X) = 100\left(x_2 - x_1^3\right)^2 + (1 - x_1)^2$
$[lb, ub] = [-1.2, 1.2]^d$, $X^* = [1, 1]^T$, $f_{\text{Leon}}(X^*) = 0$
$f_{\text{Easom}}(X) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$
$[lb, ub] = [-100, 100]^d$, $X^* = [\pi, \pi]^T$, $f_{\text{Easom}}(X^*) = -1$
$f_{\text{Zakharov}}(X) = \sum_{i=1}^{d} x_i^2 + \left(\sum_{i=1}^{d} 0.5\,i\,x_i\right)^2 + \left(\sum_{i=1}^{d} 0.5\,i\,x_i\right)^4$
$[lb, ub] = [-5, 10]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{Zakharov}}(X^*) = 0$
$f_{\text{Schwefel1.2}}(X) = \sum_{i=1}^{d} \left(\sum_{j=1}^{i} x_j\right)^2$
$[lb, ub] = [-100, 100]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{Schwefel1.2}}(X^*) = 0$
$f_{\text{Schwefel2.2}}(X) = \sum_{i=1}^{d} |x_i| + \prod_{i=1}^{d} |x_i|$
$[lb, ub] = [-100, 100]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{Schwefel2.2}}(X^*) = 0$
$f_{\text{Rastrigin}}(X) = 10d + \sum_{i=1}^{d} \left[x_i^2 - 10\cos(2\pi x_i)\right]$
$[lb, ub] = [-5.2, 5.2]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{Rastrigin}}(X^*) = 0$
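As a reference point for implementing the classical functions, here is a direct NumPy transcription of the Rastrigin definition above (a minimal sketch, assuming only NumPy is available):

```python
import numpy as np

def rastrigin(x):
    """f(X) = 10d + sum_i [x_i^2 - 10 cos(2 pi x_i)]; global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

# Sanity checks against the listed optimum.
print(rastrigin(np.zeros(10)))      # 0.0 at X* = [0, ..., 0]
print(rastrigin(np.full(10, 4.5)))  # far from the optimum, a much larger value
```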
$f_{\text{Schwefel2.6}}(X) = 418.9829\,d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{|x_i|}\right)$
$[lb, ub] = [-500, 500]^d$, $X^* = [420.9687, \ldots, 420.9687]^T$, $f_{\text{Schwefel2.6}}(X^*) = 0$
$f_{\text{Michalewicz}}(X) = -\sum_{i=1}^{d} \sin(x_i)\sin^{2s}\left(\frac{i\,x_i^2}{\pi}\right)$, steepness $s = 10$
$[lb, ub] = [0, \pi]^d$; $X^* = [2.20, 1.57]^T$ for $d = 2$
$f_{\text{Michalewicz}}(X^*) = -1.8013, -4.687658, -9.66015$ for $d = 2, 5, 10$, respectively
$f_{\text{ST}}(X) = \frac{1}{2}\sum_{i=1}^{d}\left(x_i^4 - 16x_i^2 + 5x_i\right)$
$[lb, ub] = [-5, 5]^d$, $X^* = [-2.903534, \ldots, -2.903534]^T$, $f_{\text{ST}}(X^*) = -39.16599\,d$
$f_{\text{SchafferF2}}(X) = 0.5 + \frac{\sin^2\left(x_1^2 - x_2^2\right) - 0.5}{\left[1 + 0.001\left(x_1^2 + x_2^2\right)\right]^2}$
$[lb, ub] = [-100, 100]^d$, $X^* = [0, 0]^T$, $f_{\text{SchafferF2}}(X^*) = 0$
$f_{\text{SchafferF6}}(X) = 0.5 + \frac{\sin^2\left(\sqrt{x_1^2 + x_2^2}\right) - 0.5}{\left[1 + 0.001\left(x_1^2 + x_2^2\right)\right]^2}$
$[lb, ub] = [-100, 100]^d$, $X^* = [0, 0]^T$, $f_{\text{SchafferF6}}(X^*) = 0$
$f_{\text{Bird}}(X) = (x_1 - x_2)^2 + \sin(x_1)\,e^{(1 - \cos x_2)^2} + \cos(x_2)\,e^{(1 - \sin x_1)^2}$
$[lb, ub] = [-2\pi, 2\pi]^d$, $X^* = [4.701056, 3.152946]^T$ or $X^* = [-1.582142, -3.130247]^T$, $f_{\text{Bird}}(X^*) = -106.7645367198034$
$f_{\text{Levy13}}(X) = \sin^2(3\pi x_1) + (x_1 - 1)^2\left[1 + \sin^2(3\pi x_2)\right] + (x_2 - 1)^2\left[1 + \sin^2(2\pi x_2)\right]$
$[lb, ub] = [-10, 10]^d$, $X^* = [1, 1]^T$, $f_{\text{Levy13}}(X^*) = 0$
$f_{\text{CarromTable}}(X) = -\frac{1}{30}\,e^{2\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|}\cos^2(x_1)\cos^2(x_2)$
$[lb, ub] = [-10, 10]^d$, $X^* = [\pm 9.646157, \pm 9.646157]^T$, $f_{\text{CarromTable}}(X^*) = -24.1568155$
$f_{\text{Ackley}}(X) = -a\,e^{-b\sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2}} - e^{\frac{1}{d}\sum_{i=1}^{d}\cos(c\,x_i)} + a + e$
$a = 20$, $b = 0.2$, $c = 2\pi$
$[lb, ub] = [-32, 32]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{Ackley}}(X^*) = 0$
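The Ackley definition, with the parameters listed above, translates directly into code (again a sketch, assuming only NumPy):

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """Ackley function with the parameters listed above (a=20, b=0.2, c=2*pi)."""
    x = np.asarray(x, dtype=float)
    d = x.size
    term1 = -a * np.exp(-b * np.sqrt(np.sum(x**2) / d))
    term2 = -np.exp(np.sum(np.cos(c * x)) / d)
    return term1 + term2 + a + np.e

print(ackley(np.zeros(30)))  # ~0 (to floating-point precision) at X* = [0, ..., 0]
```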
$f_{\text{Rosenbrock}}(X) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$
$[lb, ub] = [-30, 30]^d$, $X^* = [1, \ldots, 1]^T$, $f_{\text{Rosenbrock}}(X^*) = 0$
$f_{\text{Griewank}}(X) = \sum_{i=1}^{d}\frac{x_i^2}{4000} - \prod_{i=1}^{d}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$
$[lb, ub] = [-600, 600]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{Griewank}}(X^*) = 0$
$f_{\text{SESW}}(X) = \sum_{i=1}^{d-1}\left[\frac{\sin^2\left(\sqrt{x_{i+1}^2 + x_i^2}\right) - 0.5}{\left[1 + 0.001\left(x_{i+1}^2 + x_i^2\right)\right]^2} + 0.5\right]$
$[lb, ub] = [-100, 100]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{SESW}}(X^*) = 0$
$f_{\text{Trigonometric}}(X) = \sum_{i=1}^{d}\left[m + i\left(1 - \cos x_i\right) - \sin x_i - \sum_{j=1}^{d}\cos x_j\right]^2$, where $m = d$
$[lb, ub] = [-1000, 1000]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{Trigonometric}}(X^*) = 0$
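A note for implementers: the inner sum $\sum_j \cos x_j$ does not depend on $i$, so it should be computed once, keeping each evaluation $O(d)$ instead of $O(d^2)$. A sketch (assuming only NumPy, and taking $m = d$ as required for $f(X^*) = 0$):

```python
import numpy as np

def trigonometric(x, m=None):
    """Trigonometric function; the inner cosine sum is hoisted out of the loop."""
    x = np.asarray(x, dtype=float)
    d = x.size
    m = d if m is None else m        # assumption: m = d, consistent with f(X*) = 0
    i = np.arange(1, d + 1)
    cos_sum = np.sum(np.cos(x))      # constant across the outer sum: compute once
    terms = m + i * (1.0 - np.cos(x)) - np.sin(x) - cos_sum
    return np.sum(terms**2)

print(trigonometric(np.zeros(10)))   # 0.0 at X* = [0, ..., 0]
```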
$f_{\text{Levy}}(X) = \sin^2(\pi y_1) + \sum_{i=1}^{d-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_i + 1)\right] + (y_d - 1)^2\left[1 + \sin^2(2\pi y_d)\right]$, where $y_i = 1 + \frac{x_i + 1}{4}$
$[lb, ub] = [-50, 50]^d$, $X^* = [-1, \ldots, -1]^T$, $f_{\text{Levy}}(X^*) = 0$
$f_{\text{SchafferF7}}(X) = \left[\frac{1}{d-1}\sum_{i=1}^{d-1}\left(\sqrt{y_i} + \sqrt{y_i}\,\sin\left(50\,y_i^{0.2}\right)\right)\right]^2$, where $y_i = \sqrt{x_i^2 + x_{i+1}^2}$
$[lb, ub] = [-100, 100]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{SchafferF7}}(X^*) = 0$
$f_{\text{Lunacek}}(X) = \min\left(\sum_{i=1}^{d}(x_i - \mu_1)^2,\; 1 \cdot d + s\sum_{i=1}^{d}(x_i - \mu_2)^2\right) + 10\sum_{i=1}^{d}\left[1 - \cos\left(2\pi(x_i - \mu_1)\right)\right]$
$\mu_1 = 2.5$, $s = 1 - \frac{1}{2\sqrt{d + 20} - 8.2}$, $\mu_2 = -\sqrt{\frac{\mu_1^2 - 1}{s}}$
$[lb, ub] = [-10, 10]^d$, $X^* = [\mu_1, \ldots, \mu_1]^T$, $f_{\text{Lunacek}}(X^*) = 0$
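Because $s$ and $\mu_2$ depend on the dimensionality $d$, Lunacek's function is easy to implement incorrectly. A sketch following the definition above (assuming only NumPy):

```python
import numpy as np

def lunacek(x, mu1=2.5):
    """Lunacek bi-Rastrigin; global minimum 0 at X* = [mu1, ..., mu1]."""
    x = np.asarray(x, dtype=float)
    d = x.size
    s = 1.0 - 1.0 / (2.0 * np.sqrt(d + 20.0) - 8.2)
    mu2 = -np.sqrt((mu1**2 - 1.0) / s)
    sphere1 = np.sum((x - mu1)**2)                 # first (global) funnel
    sphere2 = 1.0 * d + s * np.sum((x - mu2)**2)   # second (deceptive) funnel
    penalty = 10.0 * np.sum(1.0 - np.cos(2.0 * np.pi * (x - mu1)))
    return min(sphere1, sphere2) + penalty

print(lunacek(np.full(10, 2.5)))  # 0.0 at the centre of the first funnel
```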
Rotating the landscape of a function with an orthogonal rotation matrix $R$ (Salomon 1996) prevents the optimum from lying along the coordinate axes. Here, we rotate 7 scalable functions (2 unimodal and 5 multimodal).
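The actual rotation matrices come from the CEC 2014 data mentioned earlier; for quick local testing, a random orthogonal $R$ can be generated via QR decomposition, as in this sketch (the `random_rotation` and `rotated` helpers are illustrative names, not part of the provided data):

```python
import numpy as np

def random_rotation(d, seed=0):
    """Random orthogonal matrix via QR decomposition of a Gaussian matrix."""
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))  # sign fix so columns have a canonical orientation

def rotated(f, R, scale=1.0):
    """Wrap a basic function f as f(R (scale * X))."""
    return lambda x: f(R @ (scale * np.asarray(x, dtype=float)))
```

For example, `rotated(rastrigin, random_rotation(30), scale=5.2/100)` matches the R-Rastrigin definition below.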
$f_{\text{R-Cigar}}(Z) = f_{\text{Cigar}}(Z)$, $Z = R \times X$
$[lb, ub] = [-100, 100]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{R-Cigar}}(X^*) = 0$
$f_{\text{R-Discus}}(Z) = f_{\text{Discus}}(Z)$, $Z = R \times X$
$[lb, ub] = [-100, 100]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{R-Discus}}(X^*) = 0$
$f_{\text{R-Rastrigin}}(Z) = f_{\text{Rastrigin}}(Z)$, $Z = R \times \left(\frac{5.2}{100} X\right)$
$[lb, ub] = [-5.2, 5.2]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{R-Rastrigin}}(X^*) = 0$
$f_{\text{R-Ackley}}(Z) = f_{\text{Ackley}}(Z)$, $Z = R \times \left(\frac{32}{100} X\right)$
$[lb, ub] = [-32, 32]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{R-Ackley}}(X^*) = 0$
$f_{\text{R-Griewank}}(Z) = f_{\text{Griewank}}(Z)$, $Z = R \times \left(\frac{600}{100} X\right)$
$[lb, ub] = [-600, 600]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{R-Griewank}}(X^*) = 0$
$f_{\text{R-SchafferF7}}(Z) = f_{\text{SchafferF7}}(Z)$, $Z = R \times X$
$[lb, ub] = [-100, 100]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{R-SchafferF7}}(X^*) = 0$
$f_{\text{R-Lunacek}}(Z) = f_{\text{Lunacek}}(Z)$, $Z = R \times \left(\frac{10}{100} X\right)$
$[lb, ub] = [-10, 10]^d$, $X^* = [\mu_1, \ldots, \mu_1]^T$, $f_{\text{R-Lunacek}}(X^*) = 0$
The shifting operation moves the global optimum to a new random position $o_{new}$ from the old optimum $o_{old}$ using Eq. 1, which avoids the limitations of the optimum having the same value for every variable or lying at the center of the landscape.

$f(Z), \quad Z = R \times (X - o_{new} + o_{old})$  (1)
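A minimal wrapper implementing Eq. 1, including the per-function scaling used in the definitions below (assuming NumPy arrays for $R$, $o_{new}$, and $o_{old}$; the name `shifted_rotated` is illustrative):

```python
import numpy as np

def shifted_rotated(f, R, o_new, o_old, scale=1.0):
    """Eq. 1 with optional scaling: f(Z), Z = R ((X - o_new + o_old) * scale)."""
    def wrapped(x):
        z = R @ (scale * (np.asarray(x, dtype=float) - o_new + o_old))
        return f(z)
    return wrapped

# e.g. sr_rastrigin = shifted_rotated(rastrigin, R, o_new, o_old, scale=5.2/100)
```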
$f_{\text{SR-Rastrigin}}(Z) = f_{\text{Rastrigin}}(Z)$, $Z = R \times \left(\frac{5.2}{100}\left(X - o_{new} + o_{old}\right)\right)$
$[lb, ub] = [-5.2, 5.2]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{SR-Rastrigin}}(X^*) = 0$
$f_{\text{SR-Ackley}}(Z) = f_{\text{Ackley}}(Z)$, $Z = R \times \left(\frac{32}{100}\left(X - o_{new} + o_{old}\right)\right)$
$[lb, ub] = [-32, 32]^d$, $X^* = [0, \ldots, 0]^T$, $f_{\text{SR-Ackley}}(X^*) = 0$
$f_{\text{SR-Lunacek}}(Z) = f_{\text{Lunacek}}(Z)$, $Z = R \times \left(\frac{10}{100}\left(X - o_{new} + o_{old}\right)\right)$
$[lb, ub] = [-10, 10]^d$, $X^* = [\mu_1, \ldots, \mu_1]^T$, $f_{\text{SR-Lunacek}}(X^*) = 0$
A hybrid function, defined by Eq. 2, represents real-world optimization problems in which different subcomponents of the variables have different characteristics. For these functions, the variables are randomly shuffled and divided into subcomponents, and a different basic function is applied to each subcomponent. In our hybrid functions, we have additionally rotated the basic functions; a construction sketch in code follows the parameter definitions below.
$f(Z) = f_1(R_1 Z_1) + f_2(R_2 Z_2) + \ldots + f_n(R_n Z_n)$  (2)
Here:
$Z = [Z_1, Z_2, \ldots, Z_n]$ is the shuffled solution vector, split into $n$ subcomponents
$Z_1 = \left[x_{S_1}, \ldots, x_{S_{d_1}}\right]$ is the first subcomponent; each subsequent $Z_i$ takes the next $d_i$ shuffled indices
$n$ is the number of basic functions
$S = \text{random\_permutation}(1{:}d)$ is the shuffled index vector
$f_i$ is the $i$-th basic function
$d_i$ is the dimensionality of the $i$-th subcomponent, with $\sum_{i=1}^{n} d_i = d$
$p_i$ is the proportion of variables assigned to the $i$-th basic function
$d_1 = p_1 \times d$, $d_2 = p_2 \times d$, $\ldots$, $d_n = d - \sum_{i=1}^{n-1} d_i$
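Putting Eq. 2 together, a hedged construction sketch (reusing the `rastrigin` and `random_rotation` helpers sketched earlier; `hybrid` is an illustrative name, and rounding the fractional subcomponent sizes with a ceiling is an assumption):

```python
import numpy as np

def hybrid(basics, proportions, rotations, S):
    """Eq. 2: shuffle X with index vector S, split it into subcomponents sized by
    the proportions p_i, rotate each part, and sum the basic functions."""
    def f(x):
        x = np.asarray(x, dtype=float)
        d = x.size
        sizes = [int(np.ceil(p * d)) for p in proportions[:-1]]
        sizes.append(d - sum(sizes))               # last subcomponent takes the remainder
        parts = np.split(x[S], np.cumsum(sizes)[:-1])
        return sum(fi(Ri @ zi) for fi, Ri, zi in zip(basics, rotations, parts))
    return f

# Hypothetical wiring for Hybrid-1 at d = 30 (the real S and R_i come from the CEC data):
# S = np.random.permutation(30)
# Rs = [random_rotation(10, seed=i) for i in range(3)]
# hybrid1 = hybrid([rastrigin, cigar, griewank], [1/3, 1/3, 1/3], Rs, S)
```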
$f_{\text{Hybrid-1}}(Z) = f_{\text{Rastrigin}}(R_1 Z_1) + f_{\text{Cigar}}(R_2 Z_2) + f_{\text{Griewank}}(R_3 Z_3)$
$[p_1, p_2, p_3] = \left[\frac{1}{3}, \frac{1}{3}, \frac{1}{3}\right]$ for $d = 30$; $[p_1, p_2, p_3] = \left[\frac{2}{5}, \frac{1}{5}, \frac{2}{5}\right]$ for $d = 50$
$[lb, ub] = [-100, 100]^d$, $f_{\text{Hybrid-1}}(X^*) = 0$
$f_{\text{Hybrid-2}}(Z) = f_{\text{Sphere}}(R_1 Z_1) + f_{\text{Ackley}}(R_2 Z_2) + f_{\text{SchafferF7}}(R_3 Z_3)$
$[p_1, p_2, p_3] = \left[\frac{1}{3}, \frac{1}{3}, \frac{1}{3}\right]$ for $d = 30$; $[p_1, p_2, p_3] = \left[\frac{2}{5}, \frac{1}{5}, \frac{2}{5}\right]$ for $d = 50$
$[lb, ub] = [-100, 100]^d$, $f_{\text{Hybrid-2}}(X^*) = 0$
$f_{\text{Hybrid-3}}(Z) = f_{\text{Discus}}(R_1 Z_1) + f_{\text{Rosenbrock}}(R_2 Z_2) + f_{\text{Griewank}}(R_3 Z_3)$
$[p_1, p_2, p_3] = \left[\frac{1}{3}, \frac{1}{3}, \frac{1}{3}\right]$ for $d = 30$; $[p_1, p_2, p_3] = \left[\frac{2}{5}, \frac{1}{5}, \frac{2}{5}\right]$ for $d = 50$
$[lb, ub] = [-100, 100]^d$, $f_{\text{Hybrid-3}}(X^*) = 0$