Today I will talk about how to implement a simple genetic algorithm from scratch in Python. Many people are not familiar with the technique, so this article walks through it step by step; I hope you get something out of it.
The genetic algorithm is a stochastic global optimization algorithm. Together with artificial neural networks, it may be one of the most popular and best-known biologically inspired algorithms. It is an evolutionary algorithm: inspired by the evolutionary-biology theory of natural selection, it performs optimization using a binary representation and simple operators based on genetic recombination and genetic mutation.
Genetic Algorithm
The genetic algorithm is a stochastic global search optimization algorithm inspired by the evolutionary-biology theory of natural selection. Specifically, it draws on the so-called new synthesis, which combines an understanding of genetics with the theory of evolution.
The algorithm uses analogs of genetic representation (bit strings), fitness (function evaluation), genetic recombination (crossover of bit strings), and mutation (flipping bits). It works by first creating a population of random bit strings of fixed size. The main loop of the algorithm is repeated for a fixed number of iterations, or until no further improvement in the best solution is seen over a given number of iterations. One iteration of the algorithm corresponds to one evolutionary generation.
First, the population of bit strings (candidate solutions) is evaluated using the objective function. The objective-function value of each candidate serves as its fitness, which may be minimized or maximized. Parents are then selected based on their fitness. A given candidate solution may be used as a parent zero or more times. A simple and effective selection method is to draw k candidates from the population at random and pick the member of the group with the best fitness. This is called tournament selection, where k is a hyperparameter set to a value such as 3. This simple approach approximates the more costly fitness-proportionate selection scheme.
Parents are used as the basis for generating candidates for the next generation, and one parent is needed for every position in the population.
The parents are then taken in pairs and used to create two children. Recombination is performed using the crossover operator: a random split point is chosen on the bit string, and a child is created from the bits of the first parent up to the split point and the bits of the second parent from the split point to the end of the string. The process is then reversed to create the second child.
For example, two parents:
Parent1 = 00000
Parent2 = 11111
May result in two crossed-over children:
Child1 = 00011
Child2 = 11100
This is called one-point crossover; many other variants of the operator exist.
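To make the slicing mechanics concrete, here is a minimal sketch (not part of the tutorial's code) that reproduces the example above with the split point fixed at 3; the implementation developed below chooses the split point at random:

# one-point crossover on the example parents, with the split point fixed at 3
p1 = [0, 0, 0, 0, 0]  # Parent1 = 00000
p2 = [1, 1, 1, 1, 1]  # Parent2 = 11111
pt = 3
c1 = p1[:pt] + p2[pt:]  # Child1 = [0, 0, 0, 1, 1]
c2 = p2[:pt] + p1[pt:]  # Child2 = [1, 1, 1, 0, 0]
print(c1, c2)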
Crossover is applied probabilistically to each pair of parents, meaning that in some cases copies of the parents are passed on as children rather than the output of the recombination operator. Crossover is controlled by a hyperparameter set to a large value, such as 80% or 90%. Mutation involves flipping bits in the child candidate solutions that have just been created. Typically, the mutation rate is set to 1/L, where L is the length of the bit string.
For example, if a problem uses a bit string with 20 bits, a good default mutation rate would be (1/20) = 0.05, or 5%.
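As a quick sanity check on this rule of thumb: with r_mut = 1/L, the expected number of flipped bits per child is L * (1/L) = 1, regardless of string length. The short simulation below (a sketch using numpy.random.rand, which the full examples later in this article already import) confirms this empirically:

# with r_mut = 1/L, about one bit flips per child on average
from numpy.random import rand

L = 20
r_mut = 1.0 / L
trials = 10000
# count how many of the L per-bit draws fall below the mutation rate, per trial
flips = sum((rand(L) < r_mut).sum() for _ in range(trials))
print(flips / trials)  # prints a value close to 1.0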
This defines a simple genetic algorithm process. This is a large research field, and many extensions have been made to the algorithm.
Now that we are familiar with the simple genetic algorithm process, let's take a look at how to implement it from scratch.
Genetic Algorithm From Scratch
In this section, we will develop an implementation of the genetic algorithm. The first step is to create a population of random bit strings. We could use boolean values True and False, string values "0" and "1", or integer values 0 and 1. In this case we will use integer values. We can generate an array of integer values in a range using the randint() function from numpy.random, and we can specify a range that starts at 0 and is less than 2, i.e. 0 or 1. For simplicity, we will also represent a candidate solution as a list rather than a NumPy array. An initial population of random bit strings can be created as follows, where "n_pop" is a hyperparameter that controls the population size and "n_bits" is a hyperparameter that defines the number of bits in a single candidate solution:
# initial population of random bitstring
pop = [randint(0, 2, n_bits).tolist() for _ in range(n_pop)]
Next, we can enumerate over a fixed number of algorithm iterations, in this case controlled by a hyperparameter named "n_iter".
...
# enumerate generations
for gen in range(n_iter):
    ...
The first step of the iteration of the algorithm is to evaluate all candidate solutions.
We will use a function named objective() as a generic objective function and call it to get a fitness score, which we will minimize.
# evaluate all candidates in the population
scores = [objective(c) for c in pop]
We can then select the parent that will be used to create the child.
Tournament selection can be implemented as a function that takes the population and returns one selected parent. The k value is fixed at 3 via a default argument, but you can experiment with different values if you like.
# tournament selection
def selection(pop, scores, k=3):
    # first random selection
    selection_ix = randint(len(pop))
    for ix in randint(0, len(pop), k-1):
        # check if better (e.g. perform a tournament)
        if scores[ix] < scores[selection_ix]:
            selection_ix = ix
    return pop[selection_ix]

We can then call this function once for each position in the population to create a list of parents.

# select parents
selected = [selection(pop, scores) for _ in range(n_pop)]

We can then create the next generation. This first requires a function to perform crossover. The function takes two parents and the crossover rate. The crossover rate is a hyperparameter that determines whether crossover is performed or not; if not, the parents are copied into the next generation. It is a probability, typically with a large value close to 1.0.

The crossover() function below implements this using a random number in the range [0,1] to determine whether crossover is performed, then selecting a valid split point if crossover is to be performed.

# crossover two parents to create two children
def crossover(p1, p2, r_cross):
    # children are copies of parents by default
    c1, c2 = p1.copy(), p2.copy()
    # check for recombination
    if rand() < r_cross:
        # select crossover point that is not on the end of the string
        pt = randint(1, len(p1)-2)
        # perform crossover
        c1 = p1[:pt] + p2[pt:]
        c2 = p2[:pt] + p1[pt:]
    return [c1, c2]

We also need a function to perform mutation. This procedure simply flips bits with a low probability controlled by the "r_mut" hyperparameter.

# mutation operator
def mutation(bitstring, r_mut):
    for i in range(len(bitstring)):
        # check for a mutation
        if rand() < r_mut:
            # flip the bit
            bitstring[i] = 1 - bitstring[i]

We can then loop over the list of parents and create a list of children to be used as the next generation, calling the crossover and mutation functions as needed.

# create the next generation
children = list()
for i in range(0, n_pop, 2):
    # get selected parents in pairs
    p1, p2 = selected[i], selected[i+1]
    # crossover and mutation
    for c in crossover(p1, p2, r_cross):
        # mutation
        mutation(c, r_mut)
        # store for next generation
        children.append(c)

We can tie all of this together into a function named genetic_algorithm() that takes the name of the objective function and the hyperparameters of the search, and returns the best solution found during the search.

# genetic algorithm
def genetic_algorithm(objective, n_bits, n_iter, n_pop, r_cross, r_mut):
    # initial population of random bitstring
    pop = [randint(0, 2, n_bits).tolist() for _ in range(n_pop)]
    # keep track of best solution
    best, best_eval = 0, objective(pop[0])
    # enumerate generations
    for gen in range(n_iter):
        # evaluate all candidates in the population
        scores = [objective(c) for c in pop]
        # check for new best solution
        for i in range(n_pop):
            if scores[i] < best_eval:
                best, best_eval = pop[i], scores[i]
                print(">%d, new best f(%s) = %.3f" % (gen, pop[i], scores[i]))
        # select parents
        selected = [selection(pop, scores) for _ in range(n_pop)]
        # create the next generation
        children = list()
        for i in range(0, n_pop, 2):
            # get selected parents in pairs
            p1, p2 = selected[i], selected[i+1]
            # crossover and mutation
            for c in crossover(p1, p2, r_cross):
                # mutation
                mutation(c, r_mut)
                # store for next generation
                children.append(c)
        # replace population
        pop = children
    return [best, best_eval]
Now that we have developed an implementation of the genetic algorithm, let's explore how to apply it to an objective function.
Genetic Algorithm for OneMax
In this section, we apply the genetic algorithm to a binary-string-based optimization problem. The problem is called OneMax, and it evaluates a binary string based on the number of 1s in the string. For example, a bit string with 20 bits scores 20 when the string is all 1s. Given that we implemented the genetic algorithm to minimize the objective function, we can negate this evaluation so that large positive values become large negative values. The onemax() function below does this; it takes a bit string of integer values as input and returns the negative sum of the values.
# objective function
def onemax(x):
    return -sum(x)
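As a quick check of the sign convention (a usage sketch, not part of the original listing), the all-ones string of 20 bits should return the best possible, i.e. most negative, score:

# sanity check: the all-ones string gets the best (most negative) score
print(onemax([1] * 20))  # -20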
Next, we can configure the search.
The search will run for 100 iterations, and we will use 20 bits in our candidate solutions, meaning the optimal fitness is -20.0.
The population size will be 100, and we will use a crossover rate of 90% and a mutation rate of 5%. This configuration was chosen after a little trial and error.
# define the total iterations
n_iter = 100
# bits
n_bits = 20
# define the population size
n_pop = 100
# crossover rate
r_cross = 0.9
# mutation rate
r_mut = 1.0 / float(n_bits)
You can then invoke the search and report the best results.
# perform the genetic algorithm search
best, score = genetic_algorithm(onemax, n_bits, n_iter, n_pop, r_cross, r_mut)
print('Done!')
print('f(%s) = %f' % (best, score))
Taken together, a complete example of applying genetic algorithm to the OneMax objective function is listed below.
# genetic algorithm search of the onemax optimization problem
from numpy.random import randint
from numpy.random import rand

# objective function
def onemax(x):
    return -sum(x)

# tournament selection
def selection(pop, scores, k=3):
    # first random selection
    selection_ix = randint(len(pop))
    for ix in randint(0, len(pop), k-1):
        # check if better (e.g. perform a tournament)
        if scores[ix] < scores[selection_ix]:
            selection_ix = ix
    return pop[selection_ix]

# crossover two parents to create two children
def crossover(p1, p2, r_cross):
    # children are copies of parents by default
    c1, c2 = p1.copy(), p2.copy()
    # check for recombination
    if rand() < r_cross:
        # select crossover point that is not on the end of the string
        pt = randint(1, len(p1)-2)
        # perform crossover
        c1 = p1[:pt] + p2[pt:]
        c2 = p2[:pt] + p1[pt:]
    return [c1, c2]

# mutation operator
def mutation(bitstring, r_mut):
    for i in range(len(bitstring)):
        # check for a mutation
        if rand() < r_mut:
            # flip the bit
            bitstring[i] = 1 - bitstring[i]

# genetic algorithm
def genetic_algorithm(objective, n_bits, n_iter, n_pop, r_cross, r_mut):
    # initial population of random bitstring
    pop = [randint(0, 2, n_bits).tolist() for _ in range(n_pop)]
    # keep track of best solution
    best, best_eval = 0, objective(pop[0])
    # enumerate generations
    for gen in range(n_iter):
        # evaluate all candidates in the population
        scores = [objective(c) for c in pop]
        # check for new best solution
        for i in range(n_pop):
            if scores[i] < best_eval:
                best, best_eval = pop[i], scores[i]
                print(">%d, new best f(%s) = %.3f" % (gen, pop[i], scores[i]))
        # select parents
        selected = [selection(pop, scores) for _ in range(n_pop)]
        # create the next generation
        children = list()
        for i in range(0, n_pop, 2):
            # get selected parents in pairs
            p1, p2 = selected[i], selected[i+1]
            # crossover and mutation
            for c in crossover(p1, p2, r_cross):
                # mutation
                mutation(c, r_mut)
                # store for next generation
                children.append(c)
        # replace population
        pop = children
    return [best, best_eval]

# define the total iterations
n_iter = 100
# bits
n_bits = 20
# define the population size
n_pop = 100
# crossover rate
r_cross = 0.9
# mutation rate
r_mut = 1.0 / float(n_bits)
# perform the genetic algorithm search
best, score = genetic_algorithm(onemax, n_bits, n_iter, n_pop, r_cross, r_mut)
print('Done!')
print('f(%s) = %f' % (best, score))
Running the example will report the best results found along the way, and then give the final best solution at the end of the search, which we hope is the best solution.
Note: your results may be different due to the randomness of the algorithm or evaluation program, or due to differences in numerical accuracy. Consider running the example several times and compare the average results.
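If you want repeatable runs while experimenting, one option (a sketch, not part of the tutorial's code) is to seed NumPy's global random number generator before creating the initial population, since randint() and rand() here come from numpy.random:

# optional: seed numpy's global RNG to make a run repeatable
from numpy.random import seed
seed(1)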
In this case, we can see that the search found the optimal solution after about eight generations.
> 0, new best f([1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0]) = -14.000
> 0, new best f([1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0]) = -15.000
> 1, new best f([1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1]) = -16.000
> 2, new best f([0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) = -19.000
> 8, new best f([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) = -20.000
Done!
f([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) = -20.000000
Genetic Algorithm for Continuous Function Optimization
Optimizing the OneMax function is not very interesting; we are more likely to want to optimize a continuous function. For example, we can define an x^2 minimization function that takes two input variables and has its optimum at f(0, 0) = 0.0.
# objective function
def objective(x):
    return x[0]**2.0 + x[1]**2.0
We can minimize this function with the genetic algorithm. First, we must define the bounds of each input variable.
# define range for input
bounds = [[-5.0, 5.0], [-5.0, 5.0]]
We will take the "n_bits" hyperparameter as the number of bits per input variable to the objective function and set it to 16 bits.
# bits per variable
n_bits = 16
This means that, given the two input variables, our actual bit string will have (16 * 2) = 32 bits.
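It is worth noting the precision this encoding implies: with 16 bits over the range [-5.0, 5.0], decoded values land on a fixed grid whose step is the range width divided by 2^16. A quick calculation (a sketch using the bounds defined above):

# resolution of a 16-bit encoding over [-5.0, 5.0]
n_bits = 16
lower, upper = -5.0, 5.0
step = (upper - lower) / 2**n_bits
print(step)  # 0.000152587890625

You can see this granularity in the run output later in this section, where every decoded value is a multiple of this step. Because the bit string is now 32 bits long, we must also update the mutation rate accordingly: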
# mutation rate
r_mut = 1.0 / (float(n_bits) * len(bounds))
Next, we need to make sure the initial population creates random bit strings that are large enough.
# initial population of random bitstring
pop = [randint(0, 2, n_bits*len(bounds)).tolist() for _ in range(n_pop)]
Finally, before using the objective function to evaluate each bit string, we need to decode these bit strings into numbers.
We can do this by first decoding each substring to an integer, then scaling the integer to the desired range. This gives a vector of values in the range, which can then be provided to the objective function for evaluation.
The decode() function below implements this, taking the bounds of the function, the number of bits per variable, and a bit string as input, and returning a list of decoded real values.
# decode bitstring to numbers
def decode(bounds, n_bits, bitstring):
    decoded = list()
    largest = 2**n_bits
    for i in range(len(bounds)):
        # extract the substring
        start, end = i * n_bits, (i * n_bits)+n_bits
        substring = bitstring[start:end]
        # convert bitstring to a string of chars
        chars = ''.join([str(s) for s in substring])
        # convert string to integer
        integer = int(chars, 2)
        # scale integer to desired range
        value = bounds[i][0] + (integer/largest) * (bounds[i][1] - bounds[i][0])
        # store
        decoded.append(value)
    return decoded
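As a quick usage check (a hand-built sketch, not part of the original listing): the 16-bit substring 1000000000000000 decodes to the integer 32768, which scales to the midpoint of [-5.0, 5.0], while an all-zeros substring decodes to the lower bound:

# quick check of decode(): midpoint and lower bound of [-5.0, 5.0]
bounds = [[-5.0, 5.0], [-5.0, 5.0]]
bitstring = [1] + [0] * 15 + [0] * 16  # variable 1: 1000...0, variable 2: all zeros
print(decode(bounds, 16, bitstring))  # [0.0, -5.0]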
We can then call it at the beginning of the algorithm loop to decode the population, and then evaluate the decoded version of the population.
# decode population
decoded = [decode(bounds, n_bits, p) for p in pop]
# evaluate all candidates in the population
scores = [objective(d) for d in decoded]
Taken together, a complete example of a genetic algorithm for continuous function optimization is listed below.
# genetic algorithm search for continuous function optimization
from numpy.random import randint
from numpy.random import rand

# objective function
def objective(x):
    return x[0]**2.0 + x[1]**2.0

# decode bitstring to numbers
def decode(bounds, n_bits, bitstring):
    decoded = list()
    largest = 2**n_bits
    for i in range(len(bounds)):
        # extract the substring
        start, end = i * n_bits, (i * n_bits)+n_bits
        substring = bitstring[start:end]
        # convert bitstring to a string of chars
        chars = ''.join([str(s) for s in substring])
        # convert string to integer
        integer = int(chars, 2)
        # scale integer to desired range
        value = bounds[i][0] + (integer/largest) * (bounds[i][1] - bounds[i][0])
        # store
        decoded.append(value)
    return decoded

# tournament selection
def selection(pop, scores, k=3):
    # first random selection
    selection_ix = randint(len(pop))
    for ix in randint(0, len(pop), k-1):
        # check if better (e.g. perform a tournament)
        if scores[ix] < scores[selection_ix]:
            selection_ix = ix
    return pop[selection_ix]

# crossover two parents to create two children
def crossover(p1, p2, r_cross):
    # children are copies of parents by default
    c1, c2 = p1.copy(), p2.copy()
    # check for recombination
    if rand() < r_cross:
        # select crossover point that is not on the end of the string
        pt = randint(1, len(p1)-2)
        # perform crossover
        c1 = p1[:pt] + p2[pt:]
        c2 = p2[:pt] + p1[pt:]
    return [c1, c2]

# mutation operator
def mutation(bitstring, r_mut):
    for i in range(len(bitstring)):
        # check for a mutation
        if rand() < r_mut:
            # flip the bit
            bitstring[i] = 1 - bitstring[i]

# genetic algorithm
def genetic_algorithm(objective, bounds, n_bits, n_iter, n_pop, r_cross, r_mut):
    # initial population of random bitstring
    pop = [randint(0, 2, n_bits*len(bounds)).tolist() for _ in range(n_pop)]
    # keep track of best solution
    best, best_eval = 0, objective(decode(bounds, n_bits, pop[0]))
    # enumerate generations
    for gen in range(n_iter):
        # decode population
        decoded = [decode(bounds, n_bits, p) for p in pop]
        # evaluate all candidates in the population
        scores = [objective(d) for d in decoded]
        # check for new best solution
        for i in range(n_pop):
            if scores[i] < best_eval:
                best, best_eval = pop[i], scores[i]
                print(">%d, new best f(%s) = %f" % (gen, decoded[i], scores[i]))
        # select parents
        selected = [selection(pop, scores) for _ in range(n_pop)]
        # create the next generation
        children = list()
        for i in range(0, n_pop, 2):
            # get selected parents in pairs
            p1, p2 = selected[i], selected[i+1]
            # crossover and mutation
            for c in crossover(p1, p2, r_cross):
                # mutation
                mutation(c, r_mut)
                # store for next generation
                children.append(c)
        # replace population
        pop = children
    return [best, best_eval]

# define range for input
bounds = [[-5.0, 5.0], [-5.0, 5.0]]
# define the total iterations
n_iter = 100
# bits per variable
n_bits = 16
# define the population size
n_pop = 100
# crossover rate
r_cross = 0.9
# mutation rate
r_mut = 1.0 / (float(n_bits) * len(bounds))
# perform the genetic algorithm search
best, score = genetic_algorithm(objective, bounds, n_bits, n_iter, n_pop, r_cross, r_mut)
print('Done!')
decoded = decode(bounds, n_bits, best)
print('f(%s) = %f' % (decoded, score))
Running the example reports the best decoded result each time an improvement is found, as well as the best decoded solution at the end of the run.
Note: your results may be different due to the randomness of the algorithm or evaluation program, or due to differences in numerical accuracy. Consider running the example several times and compare the average results.
In this case, we can see that the algorithm finds an input very close to f(0.0, 0.0) = 0.0.
> 0, new best f([-0.785064697265625, -0.807647705078125]) = 1.268621
> 0, new best f([0.385894775390625, 0.342864990234375]) = 0.266471
> 1, new best f([-0.342559814453125, -0.1068115234375]) = 0.128756
> 2, new best f([-0.038909912109375, 0.30242919921875]) = 0.092977
> 2, new best f([0.145721435546875, 0.1849365234375]) = 0.055436
> 3, new best f([0.14404296875, -0.029754638671875]) = 0.021634
> 5, new best f([0.066680908203125, 0.096435546875]) = 0.013746
> 5, new best f([-0.036468505859375, -0.10711669921875]) = 0.012804
> 6, new best f([-0.038909912109375, -0.099639892578125]) = 0.011442
> 7, new best f([-0.033111572265625, 0.09674072265625]) = 0.010455
> 7, new best f([0.036468505859375, 0.05584716796875]) = 0.004449
> 10, new best f([0.058746337890625, 0.008087158203125]) = 0.003517
> 10, new best f([-0.031585693359375, 0.008087158203125]) = 0.001063
> 12, new best f([0.022125244140625, 0.008087158203125]) = 0.000555
> 13, new best f([0.022125244140625, 0.00701904296875]) = 0.000539
> 13, new best f([-0.013885498046875, 0.008087158203125]) = 0.000258
> 16, new best f([-0.011444091796875, 0.00518798828125]) = 0.000158
> 17, new best f([-0.0115966796875, 0.00091552734375]) = 0.000135
> 17, new best f([-0.004730224609375, 0.00335693359375]) = 0.000034
> 20, new best f([-0.004425048828125, 0.00274658203125]) = 0.000027
> 21, new best f([-0.002288818359375, 0.00091552734375]) = 0.000006
> 22, new best f([-0.001983642578125, 0.00091552734375]) = 0.000005
> 22, new best f([-0.001983642578125, 0.0006103515625]) = 0.000004
> 24, new best f([-0.001373291015625, 0.001068115234375]) = 0.000003
> 25, new best f([-0.001373291015625, 0.00091552734375]) = 0.000003
> 26, new best f([-0.001373291015625, 0.0006103515625]) = 0.000002
> 27, new best f([-0.001068115234375, 0.0006103515625]) = 0.000002
> 29, new best f([-0.000152587890625, 0.00091552734375]) = 0.000001
> 33, new best f([-0.0006103515625, 0.0]) = 0.000000
> 34, new best f([-0.000152587890625, 0.00030517578125]) = 0.000000
> 43, new best f([-0.00030517578125, 0.0]) = 0.000000
> 60, new best f([-0.000152587890625, 0.000152587890625]) = 0.000000
> 65, new best f([-0.000152587890625, 0.0]) = 0.000000
Done!
f([-0.000152587890625, 0.0]) = 0.000000

After reading the above, do you have a better understanding of how to implement a simple genetic algorithm from scratch in Python?