2025-03-29 Update From: SLTechnology News & Howtos (shulou.com), Development
This article explains in detail how to implement the ant colony algorithm in Python. The editor finds it very practical and shares it for reference; I hope you gain something from reading it.
1. Introduction
The intelligence exhibited by biological groups in nature has drawn wide scholarly attention in recent decades. By simulating the collective behavior of simple organisms, researchers have proposed swarm intelligence algorithms. Among them, Ant Colony Optimization (ACO), which simulates the foraging behavior of ant colonies, and Particle Swarm Optimization (PSO), which simulates the movement of bird flocks, are the two most important swarm intelligence algorithms.
The ant colony algorithm (ACA) is a bionic evolutionary algorithm inspired by the natural world. It was proposed in the early 1990s by the Italian scholars M. Dorigo, V. Maniezzo and A. Colorni, who modeled the collective pathfinding behavior of real ants. Ants can find the shortest path from nest to food source without any external hint, and can adaptively search for new paths and make new choices as the environment changes. The underlying reason is that, while foraging, ants release a special secretion, a pheromone (also called an ectohormone), which gradually evaporates over time. The probability that a later ant chooses a path is proportional to the pheromone intensity on that path at that moment. As more ants take a path, more pheromone accumulates on it, which in turn makes the path more attractive to subsequent ants. Strong pheromone trails attract still more ants, forming a positive feedback mechanism through which the colony eventually finds the shortest path.
The earliest ant colony algorithm is the Ant System (AS). Researchers have improved the ant system with different strategies, producing various versions of the algorithm that have been applied successfully to optimization, including the traveling salesman problem (TSP), assignment problems and job-shop scheduling, with good experimental results. The ant colony algorithm features distributed computation, no central control and indirect communication between distributed individuals, so it combines easily with other optimization methods. It solves complex problems through cooperation among simple individuals and has been widely applied to optimization. The algorithm is also relatively easy to implement: it involves no complex mathematical operations and places few demands on computer hardware or software, so its study is significant both in theory and in practice.
At present, many researchers and institutions worldwide study the theory and application of the ant colony algorithm, which has become a hot topic in computational intelligence. Although the algorithm does not yet have a rigorous theoretical foundation, as a relatively new evolutionary algorithm it has shown strong vitality in intelligent optimization and related fields.
2 Ant colony algorithm theory
The ant colony algorithm is a bionic algorithm that simulates how ants find paths in nature. As they move, ants deposit pheromone on the paths they traverse to transmit information, and other ants perceive this substance and use it to guide their own movement. The collective behavior of a colony of many ants therefore exhibits positive feedback: the more ants have taken a path, the more likely newcomers are to choose it.
3 Theoretical diagram of the algorithm
(1) In nature, food sources are scattered randomly around the nest, yet through coordination, division of labor and cooperation the colony always finds the shortest path to a food source. In reality we observe large numbers of ants forming a nearly straight line between nest and food rather than curves, circles or other shapes, as shown in figure (a).
(2) The colony can not only complete complex tasks but also adapt to changes in the environment. For example, when an obstacle suddenly appears on the colony's route, the ants are at first distributed evenly: regardless of path length, each path is chosen with equal probability, as shown in figure (b).
(3) As they move, ants deposit pheromone on the paths they pass and can sense both its presence and its intensity, using it to guide their direction of motion: ants tend to move toward higher pheromone concentrations. In the same amount of time, more pheromone accumulates on the shorter path, so the number of ants choosing the shorter path increases, as shown in figure (c).
(4) Because the collective behavior of many ants exhibits positive feedback, the more ants take a path, the higher the probability that later ants choose it. Individual ants search for food through this information exchange mechanism and eventually travel along the shortest path, as shown in figure (d).
4 Artificial ant colony optimization process
Based on how a real colony selects the optimal path while foraging, an artificial ant colony can be constructed to solve optimization problems such as the TSP. In an artificial colony, a work unit with simple functionality is treated as an ant. The similarity to real colonies is that both give priority to paths with high pheromone concentration: a shorter path accumulates a higher pheromone concentration and is eventually selected by all ants, yielding the final optimization result. The differences are that artificial ants have a certain memory and can recall the nodes already visited, and that when choosing the next path they search deliberately, according to an algorithmic rule, rather than blindly. In the TSP, for example, the distance from the current city to each candidate next city is known in advance.
In the artificial ant colony algorithm for the TSP, m ants move between adjacent nodes of the graph, so solutions to the problem are constructed asynchronously. The one-step transition probability of each ant is determined by two parameters on each edge of the graph: the pheromone value, also called the pheromone trail, and the visibility, an a priori heuristic value.
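A minimal sketch of this one-step transition rule, assuming the standard Ant System form (pheromone weighted by an exponent α, visibility by β; the function name and example values are illustrative, not from the original):

```python
import numpy as np

def transition_probs(tau, eta, visited, alpha=1.0, beta=2.0):
    """Probability of moving from the current city to each candidate city.

    tau: 1-D array of pheromone values on the edges to each city
    eta: 1-D array of visibilities (1/distance) to each city
    visited: boolean mask of cities already visited (memory: excluded)
    """
    weights = (tau ** alpha) * (eta ** beta)
    weights[visited] = 0.0            # an ant never revisits a city
    return weights / weights.sum()

# toy example: 4 candidate cities, city 0 already visited
tau = np.array([1.0, 2.0, 1.0, 0.5])
eta = np.array([0.5, 0.25, 1.0, 1.0])
p = transition_probs(tau, eta, np.array([True, False, False, False]))
```

The resulting vector `p` is then sampled (e.g. roulette-wheel) to pick the next node.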
Pheromone is updated in two ways. One is evaporation: the pheromone on all paths decreases at a fixed rate, simulating how pheromone evaporates over time in natural colonies. The other is reinforcement: pheromone is added to edges whose evaluation is "good", i.e. edges the ants have walked along.
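Both update steps can be sketched for the TSP case (a sketch assuming the ant-cycle deposit rule Q/L; the function name and constants are illustrative):

```python
import numpy as np

def update_pheromone(tau, tours, lengths, rho=0.5, Q=100.0):
    """Evaporate pheromone on every edge, then reinforce the edges each ant used.

    tau: n x n pheromone matrix; tours: list of city sequences;
    lengths: tour length of each ant; rho is the evaporation rate.
    """
    tau *= (1.0 - rho)                  # evaporation on all paths
    for tour, L in zip(tours, lengths):
        deposit = Q / L                 # shorter tour -> larger deposit
        for a, b in zip(tour, tour[1:] + tour[:1]):
            tau[a, b] += deposit
            tau[b, a] += deposit        # symmetric TSP
    return tau
```

Evaporation keeps old (possibly poor) decisions from dominating forever, while reinforcement implements the positive feedback described above.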
An ant moves to its next target by a random rule: using the information stored at the current node, it computes the probability of each reachable next node and moves according to those probabilities. Repeating this process brings the colony closer and closer to the optimal solution.
During the search, or after a solution is found, the ant evaluates the quality of the solution or partial solution and stores this evaluation in the pheromone of the relevant edges.
This approach differs from the traditional programming model. Its advantage is that it avoids lengthy explicit planning: the program finds good configurations by random operations under a few rules. The first path the program finds to the target may not be optimal, but through the pheromone principle of foraging ants it keeps revising the original route, making it shorter and shorter. The longer the program runs (that is, given enough iterations and enough ants), the more likely the resulting path is close to the optimal path. This resembles how we distill a best path from countless examples; in effect, it is a self-learning process.
The essence of this optimization process is:
Selection mechanism: the more pheromones in the path, the greater the probability of being selected.
Update mechanism: the pheromones on the path will increase as the ants pass by, and at the same time gradually evaporate and disappear over time.
Coordination mechanism: ants actually communicate with each other and work together through secretions.
The ant colony algorithm makes full use of these mechanisms of selection, update and coordination: the optimal solution emerges from information exchange and cooperation among individuals, which gives the algorithm a strong ability to find good solutions.
In fact, no individual ant needs global knowledge, as one might think. Each ant only attends to local information in a very small range and makes decisions by a few simple rules based on that information. Yet when countless ants act together, complex behavior emerges; this is how artificial life and complexity science explain such phenomena. So what are these simple rules? They are described below.
1. Scope:
The range an ant observes is a grid world. Each ant has a speed-radius parameter (usually 3), so it can observe, and move within, a 3×3 neighborhood of cells.
2. Environment:
An ant's environment is a virtual world containing obstacles, other ants and pheromones. There are two kinds of pheromone: food pheromone, deposited by ants that have found food, and nest pheromone, deposited by ants that have found the nest. Each ant perceives only the environment within its range. The environment makes pheromone disappear at a fixed rate.
3. Foraging rules:
Each ant looks for food within its perceptual range; if it sees food, it heads straight for it. Otherwise it checks for pheromone, compares which cell within range has the most, and moves toward the cell with more pheromone; with a small probability each ant "makes a mistake" and does not move to the cell with the most pheromone. The rules for finding the nest are the same, except that the ant responds to nest pheromone and ignores food pheromone.
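The foraging rule above (follow the strongest pheromone, but err with small probability) can be sketched as follows (a toy model; the function name, cell representation and error probability are illustrative):

```python
import random

def choose_cell(cells, pheromone, err_prob=0.1, rng=random):
    """Pick the perceivable cell with the most pheromone, but with a small
    'mistake' probability pick a random cell instead (keeps exploration alive)."""
    if rng.random() < err_prob:
        return rng.choice(cells)
    return max(cells, key=lambda c: pheromone.get(c, 0.0))
```

The same rule serves for nest-finding by passing the nest-pheromone map instead of the food-pheromone map.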
4. Movement rules:
Each ant moves in the direction of the most pheromone. When there is no pheromone nearby, it continues in its original direction of motion, with a small random perturbation. To avoid going in circles, an ant remembers the cells it has recently visited; if the next cell to visit was passed recently, it tries to avoid it.
5. Obstacle avoidance rules:
If an ant's direction of motion is blocked by an obstacle, it randomly chooses another direction; if pheromone is present to guide it, it follows the foraging rules.
6. Spreading pheromone rules:
An ant releases the most pheromone at the moment it finds food or the nest; as it moves away from that point, the amount of pheromone it deposits decreases.
According to these rules, ants have no direct relationship with one another; each ant interacts only with the environment, and the ants are in fact connected to each other through the pheromone link. For example, when an ant finds food it does not tell the others directly; it deposits pheromone into the environment, and when other ants pass nearby they sense the pheromone and follow its guidance to the food.
So how on earth do ants find food?
When no ant has yet found food, the environment contains no useful pheromone, so why do ants still find food fairly effectively? This is due to their movement rules, especially when no pheromone is present. First, an ant keeps a certain inertia, moving forward as much as possible (initially in a random but fixed direction) rather than spinning or jittering pointlessly in place. Second, its motion has some randomness: although it holds a direction, it does not move in a perfectly straight line like a particle but with random perturbation. The movement is thus purposeful, maintaining the original direction yet still exploring. In particular, when the ant hits an obstacle it immediately changes direction, which can be seen as a selection process: the environment's obstacles make one direction right and another wrong. This explains why individual ants can still find well-hidden food in complex maps such as mazes. Of course, once one ant finds food, the others soon follow the pheromone to it.
How do ants find the shortest path?
This is due to pheromone and to the environment (specifically, the computer clock). Places with more pheromone attract more passing ants, so more ants gather there. Suppose two roads lead from the nest to the food, and initially both carry equally many ants (it does not even matter if the longer road carries more). Ants that reach the end of a road come straight back, so a round trip on the short road takes less time; trips repeat at a higher frequency, more ants pass per unit time, more pheromone is deposited, more ants are attracted, and still more pheromone accumulates. The long road behaves in exactly the opposite way, so ants increasingly gather on the shorter path, and the shortest path is approximately found. One might raise the issue of local versus global shortest paths. In fact the ants keep approaching the global shortest path, because ants make mistakes: with some probability an ant does not go where the pheromone is strongest but finds another way, which can be understood as a kind of innovation. If that innovation shortens the route, then by the principle just described it will attract more ants.
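The two-road argument above can be sketched in a few lines (a toy model, not from the original article; path lengths, ant count, evaporation rate and the frequency-as-deposit shortcut are all illustrative assumptions):

```python
import random

def two_path_simulation(short_len=1, long_len=3, ants=100, steps=200,
                        rho=0.1, seed=0):
    """Each step, every ant picks a path with probability proportional to its
    pheromone; each path then receives a deposit proportional to its traffic
    divided by its length (a shorter path means more round trips per unit
    time, hence more pheromone)."""
    rng = random.Random(seed)
    tau = [1.0, 1.0]                      # [short path, long path]
    for _ in range(steps):
        counts = [0, 0]
        for _ in range(ants):
            p_short = tau[0] / (tau[0] + tau[1])
            counts[0 if rng.random() < p_short else 1] += 1
        tau[0] = (1 - rho) * tau[0] + counts[0] / short_len
        tau[1] = (1 - rho) * tau[1] + counts[1] / long_len
    return tau

tau = two_path_simulation()
# positive feedback drives nearly all pheromone onto the shorter path
```

Starting from equal pheromone, the higher deposit rate of the short path wins the feedback loop, matching the verbal argument.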
5 Basic ant colony algorithm and its flow
5.1 Ant colony algorithm formula
The ant colony algorithm combines the positive feedback principle with a heuristic. When choosing a path, an ant uses not only the pheromone on the path but also the reciprocal of the inter-city distance as a heuristic factor. Experimental results show that the ant-cycle model performs better than the ant-quantity and ant-density models, because ant-cycle updates the pheromone on a path with global information (the complete tour), while ant-quantity and ant-density use only local information.
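The formulas this section refers to were lost from the page; as a sketch in standard Ant System notation (symbols assumed from the literature, not from the original): the probability that ant k moves from city i to city j, and the pheromone update, are

```latex
p_{ij}^{k} = \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}
                  {\sum_{s \in \mathrm{allowed}_k} \tau_{is}^{\alpha}\,\eta_{is}^{\beta}},
\qquad j \in \mathrm{allowed}_k,
\\[1em]
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k=1}^{m} \Delta\tau_{ij}^{k},
\qquad
\Delta\tau_{ij}^{k} =
\begin{cases}
Q/L_k & \text{ant-cycle: ant } k \text{ used edge } (i,j) \text{ in its tour}\\
Q/d_{ij} & \text{ant-quantity: ant } k \text{ crossed } (i,j) \text{ this step}\\
Q & \text{ant-density: ant } k \text{ crossed } (i,j) \text{ this step}\\
0 & \text{otherwise}
\end{cases}
```

Here η_ij = 1/d_ij is the visibility, ρ the evaporation rate, Q a constant and L_k the length of ant k's tour; ant-cycle deposits only after a complete tour (global information), while ant-quantity and ant-density deposit per step (local information), which matches the comparison above.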
5.2 Ant Colony algorithm Program Summary
(1) Parameter initialization
To find the shortest path, the program's parameters must first be initialized: the colony size m, the pheromone importance factor α, the heuristic function importance factor β, the pheromone evaporation coefficient ρ, and the maximum number of iterations iter_max; the iteration counter starts at 1.
(2) construct the solution space.
Each ant is placed at a random starting location, and the next location to visit is computed from the formula for ant visiting behavior, until every ant has visited all locations.
(3) updating pheromones
The total path length Lk of each ant is calculated, the optimal path in the current cycle is recorded, and the pheromone concentration on the connection path between locations is updated according to the formula.
(4) judging termination
If the number of iterations has not reached the maximum, clear the ants' path records and return to step (2); otherwise, terminate and output the best path found.
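Steps (1)-(4) above can be sketched as a compact ant-cycle ACO for the TSP (an illustrative implementation; the function name, parameter values and data layout are assumptions, not from the original):

```python
import numpy as np

def aco_tsp(dist, m=20, alpha=1.0, beta=2.0, rho=0.5, Q=100.0,
            iter_max=100, seed=0):
    """Minimal ant-cycle ACO for the TSP.
    dist: n x n symmetric distance matrix."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    eta = 1.0 / (dist + np.eye(n))        # (1) visibility; eye avoids 1/0
    tau = np.ones((n, n))                 #     uniform initial pheromone
    best_tour, best_len = None, np.inf
    for _ in range(iter_max):
        tours, lengths = [], []
        for _ in range(m):                # (2) each ant constructs a tour
            tour = [int(rng.integers(n))]
            while len(tour) < n:
                i = tour[-1]
                w = (tau[i] ** alpha) * (eta[i] ** beta)
                w[tour] = 0.0             # never revisit a city
                tour.append(int(rng.choice(n, p=w / w.sum())))
            L = sum(dist[a, b] for a, b in zip(tour, tour[1:] + tour[:1]))
            tours.append(tour)
            lengths.append(L)
            if L < best_len:              # record the best path so far
                best_tour, best_len = tour, L
        tau *= (1 - rho)                  # (3) evaporation ...
        for tour, L in zip(tours, lengths):
            for a, b in zip(tour, tour[1:] + tour[:1]):
                tau[a, b] += Q / L        # ... then ant-cycle reinforcement
                tau[b, a] += Q / L
    return best_tour, best_len            # (4) loop ends at iter_max
```

Each ant's visited list doubles as the "record" cleared between iterations, since a fresh tour list is built every pass.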
5.3 flowchart
6 Case realization
6.1 Case 1
Find the minimum of the function f(x, y) = 20(x² − y²)² − (1 − y)² − 3(1 + y)² + 0.3, where x ∈ [−5, 5] and y ∈ [−5, 5]. This is a function with multiple local extrema.
6.2 Python implementation

```python
import numpy as np
import matplotlib.pyplot as plt

# ============ ant colony algorithm for the extremum of a function ============

# ---- fitness function ----
def func(x, y):
    value = 20 * np.power(x * x - y * y, 2) - np.power(1 - y, 2) \
            - 3 * np.power(1 + y, 2) + 0.3
    return value

# ---- initialize parameters ----
m = 20               # number of ants
G_max = 200          # maximum number of iterations
Rho = 0.9            # pheromone evaporation coefficient
P0 = 0.2             # transfer probability constant
XMAX, XMIN = 5, -5   # bounds of search variable x
YMAX, YMIN = 5, -5   # bounds of search variable y

X = np.zeros(shape=(m, 2))        # ant colony positions, shape (20, 2)
Tau = np.zeros(shape=(m,))        # pheromone of each ant
P = np.zeros(shape=(G_max, m))    # state transition matrix
fitness_value_list = []           # best objective value recorded per iteration

# ---- randomly set the initial position of each ant ----
for i in range(m):
    X[i, 0] = np.random.uniform(XMIN, XMAX, 1)[0]   # initialize x
    X[i, 1] = np.random.uniform(YMIN, YMAX, 1)[0]   # initialize y
    Tau[i] = func(X[i, 0], X[i, 1])

step = 0.1  # local search step size
for NC in range(G_max):                   # traverse each generation
    lamda = 1 / (NC + 1)
    BestIndex = np.argmin(Tau)            # index of the best ant
    Tau_best = Tau[BestIndex]             # best pheromone
    # ---- state transition probability: distance from the best pheromone ----
    for i in range(m):
        P[NC, i] = np.abs(Tau_best - Tau[i]) / np.abs(Tau_best) + 0.01
    # ---- position update ----
    for i in range(m):
        if P[NC, i] < P0:   # local search around the current position
            temp1 = X[i, 0] + (2 * np.random.random() - 1) * step * lamda
            temp2 = X[i, 1] + (2 * np.random.random() - 1) * step * lamda
        else:               # global search
            temp1 = X[i, 0] + (XMAX - XMIN) * (np.random.random() - 0.5)
            temp2 = X[i, 1] + (YMAX - YMIN) * (np.random.random() - 0.5)
        # keep the new position inside the search bounds
        temp1 = min(max(temp1, XMIN), XMAX)
        temp2 = min(max(temp2, YMIN), YMAX)
        # decide whether the ant moves: keep the better position
        if func(temp1, temp2) < func(X[i, 0], X[i, 1]):
            X[i, 0], X[i, 1] = temp1, temp2
    # ---- update pheromone: (1 - Rho) * Tau[i] remains after evaporation ----
    for i in range(m):
        Tau[i] = (1 - Rho) * Tau[i] + func(X[i, 0], X[i, 1])
    index = np.argmin(Tau)                # index of the minimum value
    fitness_value_list.append(func(X[index, 0], X[index, 1]))

# ---- print results ----
min_index = np.argmin(Tau)                # index of the optimum
print('optimal variable x:', X[min_index, 0])
print('optimal variable y:', X[min_index, 1])
print('optimal objective function value:', func(X[min_index, 0], X[min_index, 1]))
plt.plot(fitness_value_list, label='iteration curve')
plt.legend()
plt.show()
```

6.3 Results

optimal variable x: 5.0
optimal variable y: 5.0
optimal objective function value: -123.7
6.4 Case 2
Minimize the function f(x, y) = 2A + x² − A·cos(2πx) + y² − A·cos(2πy) with A = 10 (a Rastrigin-type function), subject to the constraints x + y ≤ 6 and 3x − 2y ≤ 5, which are handled through penalty terms.
6.5 Python implementation

```python
# ---- import related libraries ----
import numpy as np
from tqdm import tqdm            # progress bar
import matplotlib.pyplot as plt

# ---- parameters (matched to case 1) ----
m = 20               # number of ants
G_max = 200          # maximum number of iterations
Rho = 0.9            # pheromone evaporation coefficient
P0 = 0.2             # transfer probability constant
XMAX, XMIN = 5, -5
YMAX, YMIN = 5, -5
step = 0.1           # local search step size
P = np.zeros(shape=(G_max, m))   # state transition matrix
fitness_value_list = []          # best objective value per iteration

# ---- objective function ----
def calc_f(X):
    """Objective function value of one ant; X has dimension 2."""
    A = 10
    pi = np.pi
    x, y = X[0], X[1]
    return 2 * A + x ** 2 - A * np.cos(2 * pi * x) + y ** 2 - A * np.cos(2 * pi * y)

# ---- penalty function ----
def calc_e(X):
    """Penalty term of one ant; X has dimension 2."""
    ee = 0
    e1 = X[0] + X[1] - 6          # first constraint: x + y <= 6
    ee += max(0, e1)
    e2 = 3 * X[0] - 2 * X[1] - 5  # second constraint: 3x - 2y <= 5
    ee += max(0, e2)
    return ee

# ---- selection between parent and child ----
def update_best(parent, parent_fitness, parent_e, child, child_fitness, child_e):
    """Return the better of parent and child with its fitness and penalty.
    The penalty threshold (0.00001 here) should be chosen per problem."""
    # rule 1: neither violates the constraints -> take the lower fitness
    if parent_e <= 0.00001 and child_e <= 0.00001:
        if parent_fitness <= child_fitness:
            return parent, parent_fitness, parent_e
        return child, child_fitness, child_e
    # rule 2: only the parent is feasible
    if parent_e <= 0.00001 and child_e > 0.00001:
        return parent, parent_fitness, parent_e
    # rule 3: only the child is feasible
    if parent_e > 0.00001 and child_e <= 0.00001:
        return child, child_fitness, child_e
    # rule 4: both infeasible -> take the smaller penalty
    if parent_e <= child_e:
        return parent, parent_fitness, parent_e
    return child, child_fitness, child_e

# ---- initialization ----
def initialization():
    X = np.zeros(shape=(m, 2))
    Tau = np.zeros(shape=(m,))
    for i in range(m):
        X[i, 0] = np.random.uniform(XMIN, XMAX, 1)[0]   # initialize x
        X[i, 1] = np.random.uniform(YMIN, YMAX, 1)[0]   # initialize y
        Tau[i] = calc_f(X[i])
    return X, Tau

# ---- position update ----
def position_update(NC, P, X):
    lamda = 1 / (NC + 1)
    for i in range(m):                    # traverse each ant
        if P[NC, i] < P0:                 # local search
            temp1 = X[i, 0] + (2 * np.random.random() - 1) * step * lamda
            temp2 = X[i, 1] + (2 * np.random.random() - 1) * step * lamda
        else:                             # global search
            temp1 = X[i, 0] + (XMAX - XMIN) * (np.random.random() - 0.5)
            temp2 = X[i, 1] + (YMAX - YMIN) * (np.random.random() - 0.5)
        # out-of-bounds positions are re-drawn at random
        if temp1 < XMIN or temp1 > XMAX:
            temp1 = np.random.uniform(XMIN, XMAX, 1)[0]
        if temp2 < YMIN or temp2 > YMAX:
            temp2 = np.random.uniform(YMIN, YMAX, 1)[0]
        # decide whether the ant moves: keep the better of parent and child
        children = np.array([temp1, temp2])       # child ant
        children_fit = calc_f(children)
        children_e = calc_e(children)
        parent = X[i]                             # parent ant
        parent_fit = calc_f(parent)
        parent_e = calc_e(parent)
        pbesti, pbest_fitness, pbest_e = update_best(
            parent, parent_fit, parent_e, children, children_fit, children_e)
        X[i] = pbesti
    return X

# ---- pheromone update ----
def Update_information(Tau, X):
    for i in range(m):
        # (1 - Rho) * Tau[i] is the information kept after evaporation
        Tau[i] = (1 - Rho) * Tau[i] + calc_f(X[i])
    return Tau

# ---- main function ----
def main():
    X, Tau = initialization()             # initialize colony X and pheromone Tau
    for NC in tqdm(range(G_max)):         # traverse each generation
        BestIndex = np.argmin(Tau)
        Tau_best = Tau[BestIndex]         # best pheromone
        for i in range(m):                # state transition probability
            P[NC, i] = np.abs(Tau_best - Tau[i]) / np.abs(Tau_best) + 0.01
        X = position_update(NC, P, X)     # position update, X.shape = (20, 2)
        Tau = Update_information(Tau, X)  # pheromone update
        index = np.argmin(Tau)            # record the best objective value
        fitness_value_list.append(calc_f(X[index]))
    # ---- print results ----
    min_index = np.argmin(Tau)
    print('optimal variable x:', X[min_index, 0])
    print('optimal variable y:', X[min_index, 1])
    print('optimal objective function value:', calc_f(X[min_index]))
    print('penalty term of the optimal variable:', calc_e(X[min_index]))
    # ---- visualization ----
    plt.plot(fitness_value_list, label='iteration curve')
    plt.legend()
    plt.show()

if __name__ == '__main__':
    main()
```

6.6 Results
100%|██████████| 200/200 [00:00]