This article shares how to implement the grey wolf optimization (GWO) algorithm in Python and Matlab. The method is very practical, so it is presented here for reference; follow along for a closer look.
1 The basic idea of the grey wolf optimization algorithm
The grey wolf optimization algorithm (GWO) is a swarm intelligence optimization algorithm. Its distinguishing feature is that a small number of wolves with absolute authority lead the rest of the pack toward the prey. Before looking at the characteristics of GWO, it is necessary to understand the hierarchy within a grey wolf pack.
Grey wolves are generally divided into four levels: the first level is denoted α, the second β, the third δ, and the fourth ω. Under this hierarchy, wolf α has absolute authority over wolves β, δ and ω; wolf β has absolute authority over wolves δ and ω; and wolf δ has absolute authority over wolf ω.
2 The hunting process of grey wolves
The GWO optimization process consists of establishing the social hierarchy of the grey wolves and then tracking, encircling and attacking the prey; the details are as follows.
2.1 Social hierarchy
When designing GWO, the first step is to build the social hierarchy model of the wolf pack. The fitness of each individual in the population is calculated, and the three wolves with the best fitness are labelled α, β and δ; the remaining wolves are all labelled ω. In other words, the social ranks in the pack, from high to low, are α, β, δ and ω. The optimization process of GWO is guided mainly by the best three solutions (α, β, δ) in each generation; a minimal sketch of this ranking step follows.
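As a minimal sketch of the ranking step (the names positions and fitness are illustrative, not from the original listing; lower fitness is assumed better), the three leaders can be picked out with numpy.argsort:

import numpy as np

# hypothetical example: 5 wolves in a 3-dimensional search space
positions = np.random.uniform(-100, 100, size=(5, 3))
fitness = np.sum(positions ** 2, axis=1)   # sphere objective, lower is better

order = np.argsort(fitness)                # indices sorted from best to worst
alpha_pos = positions[order[0]]            # best solution  -> alpha
beta_pos = positions[order[1]]             # second best    -> beta
delta_pos = positions[order[2]]            # third best     -> delta
# every remaining wolf is an omega and follows the three leaders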
2.2 Encircling the prey
Grey wolf packs gradually approach and encircle their prey according to the following formulas:

D = |C · X_p(t) − X(t)|
X(t + 1) = X_p(t) − A · D

In the formulas, t is the current iteration, A and C are coefficient vectors, and X_p and X are the position vectors of the prey and of the grey wolf, respectively. A and C are calculated as:

A = 2a · r1 − a
C = 2 · r2

In the formulas, a is the convergence factor, which decreases linearly from 2 to 0 as the iterations proceed, and r1 and r2 are random vectors uniformly distributed in [0, 1].
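As a small illustration of how these coefficients behave (not part of the original listing): since A is drawn from [−a, a] and a decays from 2 to 0, early iterations allow |A| > 1 while later ones force |A| < 1, whereas C stays in [0, 2] throughout:

import random

Max_iter = 1000
for l in range(Max_iter):
    a = 2 - l * (2 / Max_iter)        # convergence factor: 2 -> 0 linearly
    r1, r2 = random.random(), random.random()
    A = 2 * a * r1 - a                # uniform on [-a, a]
    C = 2 * r2                        # uniform on [0, 2] for the whole run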
2.3 Hunting
The other grey wolves X_i in the pack update their positions according to the positions X_α, X_β and X_δ of α, β and δ:

D_α = |C1 · X_α − X|
D_β = |C2 · X_β − X|
D_δ = |C3 · X_δ − X|

In the formulas, D_α, D_β and D_δ denote the distances between the current wolf and α, β and δ, respectively; X_α, X_β and X_δ are the current positions of α, β and δ; C1, C2 and C3 are random vectors; and X is the current position of the grey wolf.

The position update formula of an individual grey wolf is as follows:

X1 = X_α − A1 · D_α
X2 = X_β − A2 · D_β
X3 = X_δ − A3 · D_δ
X(t + 1) = (X1 + X2 + X3) / 3
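A minimal one-dimensional sketch of this update (the helper leader_pull and all numeric values are illustrative assumptions, not from the original article):

import random

def leader_pull(leader_pos, x, a):
    # contribution of one leader (alpha, beta or delta) to the new position
    r1, r2 = random.random(), random.random()
    A = 2 * a * r1 - a
    C = 2 * r2
    D = abs(C * leader_pos - x)       # distance to the leader
    return leader_pos - A * D

x = 4.0                               # current wolf position (one dimension)
a = 1.0                               # convergence factor at some mid-run iteration
x_alpha, x_beta, x_delta = 0.5, 0.8, 1.1   # hypothetical leader positions
x_next = (leader_pull(x_alpha, x, a)
          + leader_pull(x_beta, x, a)
          + leader_pull(x_delta, x, a)) / 3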
2.4 Attacking the prey
When modelling the attack on the prey, it follows from the formulas in 2.2 that as the value of a decreases, the range of A shrinks accordingly. In other words, A is a random vector in the interval [−a, a] (note: in the original author's first paper this was given as [−2a, 2a] and corrected to [−a, a] in a later paper), where a decreases linearly during the iterations. When A lies in the interval [−1, 1], the next position of a search agent can be anywhere between its current position and the prey.
2.5 Searching for prey
Grey wolves rely mainly on the information of α, β and δ to find prey. They first disperse to search for the prey's location and then converge to attack it. In the model, divergence is obtained through |A| > 1, which drives the search agent away from the prey and enables GWO to search globally. The other search coefficient in GWO is C. As the formula in 2.2 shows, C is a vector of random values in the interval [0, 2]; it assigns a random weight to the prey, emphasizing it (|C| > 1) or de-emphasizing it (|C| < 1). This gives GWO random search behaviour during optimization and helps prevent the algorithm from falling into local optima. Note that C does not decrease linearly: it remains random throughout the iterations, which helps the algorithm escape local optima and is especially important in the later stage of the run.
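To see the exploration/exploitation split concretely, one can count how often |A| > 1 in the first and second half of a run (an illustrative sketch, not part of the original code):

import random

Max_iter = 1000
explore = [0, 0]                      # exploration counts: first half, second half
for l in range(Max_iter):
    a = 2 - l * (2 / Max_iter)
    A = 2 * a * random.random() - a
    if abs(A) > 1:                    # |A| > 1 pushes the wolf away from the prey
        explore[l * 2 // Max_iter] += 1
print(explore)                        # all counts fall in the first half, since a <= 1 afterwards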
3 Implementation steps and program block diagram

3.1 Steps
Step1: Initialize the population: the population size N, the maximum number of iterations Max_iter, and the control parameters a, A and C.
Step2: Randomly initialize the positions X of the grey wolves according to the upper and lower bounds of the variables.
Step3: Calculate the fitness of each wolf; save the position of the wolf with the best fitness as X_α, the second best as X_β, and the third best as X_δ.
Step4: Update the position X of each individual grey wolf.
Step5: Update the parameters a, A and C.
Step6: Calculate the fitness of each wolf and update the recorded positions of the three leading wolves.
Step7: Check whether the maximum number of iterations Max_iter has been reached; if so, stop and return X_α as the final optimal solution, otherwise go to Step4.
3.2 Program block diagram
The flowchart in the original article follows the steps above: initialize the population, evaluate fitness and rank α, β and δ, update the wolf positions and the parameters a, A and C, and loop until Max_iter is reached.
4 Python code implementation

# ==== import libraries ====
import random
import numpy

# Complete code: see the WeChat official account "Beauty of Power Systems and Algorithms";
# keyword: grey wolf algorithm

def GWO(objf, lb, ub, dim, SearchAgents_no, Max_iter):
    # ==== initialize alpha, beta and delta ====
    Alpha_pos = numpy.zeros(dim)       # position vector with dim entries
    Alpha_score = float("inf")         # float("inf") is positive infinity; every finite number is smaller
    Beta_pos = numpy.zeros(dim)
    Beta_score = float("inf")
    Delta_pos = numpy.zeros(dim)
    Delta_score = float("inf")

    # ==== expand scalar bounds to one bound per dimension ====
    # isinstance(object, type) returns True if the object has the given type
    if not isinstance(lb, list):
        lb = [lb] * dim                # e.g. [-100, -100, ..., -100], dim entries
    if not isinstance(ub, list):
        ub = [ub] * dim

    # ==== initialize the positions of all wolves ====
    Positions = numpy.zeros((SearchAgents_no, dim))
    for i in range(dim):               # SearchAgents_no x dim matrix, uniform in [lb, ub]
        Positions[:, i] = numpy.random.uniform(0, 1, SearchAgents_no) * (ub[i] - lb[i]) + lb[i]

    Convergence_curve = numpy.zeros(Max_iter)

    # ==== iterative optimization ====
    for l in range(0, Max_iter):
        for i in range(0, SearchAgents_no):
            # pull search agents that left the search space back to the bounds;
            # numpy.clip limits each element to [lb[j], ub[j]]
            for j in range(dim):
                Positions[i, j] = numpy.clip(Positions[i, j], lb[j], ub[j])

            # evaluate fitness and update Alpha, Beta and Delta
            # (this part was elided in the original listing; the standard GWO ranking update is used here)
            fitness = objf(Positions[i, :])
            if fitness < Alpha_score:
                Alpha_score = fitness
                Alpha_pos = Positions[i, :].copy()
            elif fitness < Beta_score:
                Beta_score = fitness
                Beta_pos = Positions[i, :].copy()
            elif fitness < Delta_score:
                Delta_score = fitness
                Delta_pos = Positions[i, :].copy()

        a = 2 - l * (2 / Max_iter)     # a decreases linearly from 2 to 0

        for i in range(0, SearchAgents_no):
            for j in range(0, dim):
                r1 = random.random()   # random float in [0, 1)
                r2 = random.random()
                A1 = 2 * a * r1 - a    # Equation (3.3)
                C1 = 2 * r2            # Equation (3.4)
                # D_alpha: distance between the candidate wolf and the Alpha wolf
                D_alpha = abs(C1 * Alpha_pos[j] - Positions[i, j])
                X1 = Alpha_pos[j] - A1 * D_alpha   # position component suggested by Alpha

                r1 = random.random()
                r2 = random.random()
                A2 = 2 * a * r1 - a
                C2 = 2 * r2
                D_beta = abs(C2 * Beta_pos[j] - Positions[i, j])
                X2 = Beta_pos[j] - A2 * D_beta

                r1 = random.random()
                r2 = random.random()
                A3 = 2 * a * r1 - a
                C3 = 2 * r2
                D_delta = abs(C3 * Delta_pos[j] - Positions[i, j])
                X3 = Delta_pos[j] - A3 * D_delta

                # the candidate wolf moves to the average of the three suggestions
                Positions[i, j] = (X1 + X2 + X3) / 3

        Convergence_curve[l] = Alpha_score
        if l % 1 == 0:                 # print the result of each iteration
            print("iteration " + str(l) + ": best fitness " + str(Alpha_score))

    return Alpha_pos

# ==== test function (sphere) ====
def F1(x):
    s = numpy.sum(x ** 2)
    return s

# ==== main program ====
func_details = ["F1", -100, 100, 30]
function_name = func_details[0]
Max_iter = 1000            # number of iterations
lb = -100                  # lower bound
ub = 100                   # upper bound
dim = 30                   # dimension of the search space
SearchAgents_no = 5        # number of wolves
x = GWO(F1, lb, ub, dim, SearchAgents_no, Max_iter)
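To optimize a different objective, only the objf argument needs to change; for example, with the Rastrigin function (an illustrative benchmark, not from the original article, reusing GWO and numpy from the listing above):

def rastrigin(x):
    # highly multimodal benchmark; global minimum 0 at the origin
    return numpy.sum(x ** 2 - 10 * numpy.cos(2 * numpy.pi * x) + 10)

best = GWO(rastrigin, -5.12, 5.12, 30, 5, 1000)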
5 Matlab implementation

% main program GWO
clear
close all
clc
%% Complete code: see the WeChat official account "Beauty of Power Systems and Algorithms";
%% keyword: grey wolf algorithm
SearchAgents_no = 30;      % population size
dim = 10;                  % particle dimension
Max_iter = 1000;           % number of iterations
ub = 5;
lb = -5;
% initialize the positions of the three leading wolves
Alpha_pos = zeros(1, dim); Alpha_score = inf;
Beta_pos = zeros(1, dim);  Beta_score = inf;
Delta_pos = zeros(1, dim); Delta_score = inf;
Positions = rand(SearchAgents_no, dim) .* (ub - lb) + lb;   % random initial population
Convergence_curve = zeros(Max_iter, 1);
% start loop
for l = 1:Max_iter
    for i = 1:size(Positions, 1)
        %% pull search agents that left the search space back to the bounds
        Flag4ub = Positions(i, :) > ub;
        Flag4lb = Positions(i, :) < lb;
        Positions(i, :) = (Positions(i, :) .* (~(Flag4ub + Flag4lb))) + ub .* Flag4ub + lb .* Flag4lb;
        fitness = sum(Positions(i, :) .^ 2);   % objective function (sphere, as in the Python version)
        % update Alpha, Beta and Delta (the original listing is truncated here;
        % this is the standard GWO ranking update)
        if fitness < Alpha_score
            Alpha_score = fitness; Alpha_pos = Positions(i, :);
        elseif fitness > Alpha_score && fitness < Beta_score
            Beta_score = fitness; Beta_pos = Positions(i, :);
        elseif fitness > Beta_score && fitness < Delta_score
            Delta_score = fitness; Delta_pos = Positions(i, :);
        end
    end
    % the position-update loop mirrors the Python implementation in section 4
    Convergence_curve(l) = Alpha_score;
end