This article introduces how to use a simulated annealing algorithm in Python. It works through a concrete example step by step, including the difficulties people commonly run into in practice; I hope you read it carefully and find it useful.
I. Introduction
The idea behind the annealing algorithm is the physical process its name describes: when steel is heated and then slowly cooled, atoms in high-energy (high internal energy) states are unstable, and as heat radiates away toward the low-temperature surroundings, the internal energy of the material stops decreasing and its atoms settle into a stable, ordered state. This physical process is a useful reference for finding good solutions to random, complex problems, and turning it into an algorithm gives simulated annealing; see other materials for the full details.
II. Calculation equation
The function we want to study is f(x) = (x - 2) * (x + 3) * (x + 8) * (x - 9), a quartic (fourth-degree) polynomial in one variable, i.e. a higher-order equation. Because the quartic opens upward, it has no maximum over an unbounded interval, so instead we look for the minimum point on a bounded interval and call that the optimal solution.
Solution 1:
There is no doubt that this can be solved mathematically by differentiating and solving f'(x) = 0, but doing so by hand is fairly tedious and error-prone, so I won't go into the details; readers with time can try it themselves.
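For reference, here is a minimal sketch of that derivative approach using sympy (sympy and the variable names are my addition, not part of the original article):
import sympy as sp

x = sp.symbols('x')
f = (x - 2) * (x + 3) * (x + 8) * (x - 9)
# f'(x) is a cubic; its real roots are the candidate extrema of f
for r in sp.real_roots(sp.diff(f, x)):
    xv = float(r)
    print(xv, float(f.subs(x, xv)))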
Solution 2:
This is the brute-force approach, and we only search for the optimal solution on the interval [-10, 10]. We take 200 points (the integers from -100 to 99) and divide each by 10 (so we also get non-integer abscissas), compute the ordinate f(x) for each, take min{f(x)}, use the list's index() method to find the position of that minimum, and then plot the result.
Here is the code:
import matplotlib.pyplot as plt

# 200 evenly spaced abscissas on [-10, 10): -10.0, -9.9, ..., 9.9
list_x = []
for i in range(-100, 100):
    list_x.append(i / 10)
print("abscissa is: ", list_x)
print(len(list_x))

# corresponding ordinates f(x)
list_y = []
for x in list_x:
    y = (x - 2) * (x + 3) * (x + 8) * (x - 9)
    list_y.append(y)
print("ordinate is: ", list_y)

# the minimum found here (-1549.6875 at x = 6.5) matches the expected result
num = min(list_y)
print("Min is: ", num)
print("optimal solution x: ", list_x[list_y.index(num)])

plt.plot(list_x, list_y, label='NM')
plt.xlabel('X')    # abscissa title
plt.ylabel('Y')    # ordinate title
plt.legend()       # show the label set above
plt.savefig('C:/Users/zhengyong/Desktop/1.png')    # author's local path
plt.show()
Running this produces the plot of f(x) on [-10, 10], and the optimal point found on the grid is (6.5, -1549.6875). We note this result first, and then use the annealing algorithm to see whether it can find the same answer.
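As a quick sanity check (my addition), substituting x = 6.5 directly into f gives
f(6.5) = (6.5 - 2)(6.5 + 3)(6.5 + 8)(6.5 - 9) = 4.5 × 9.5 × 14.5 × (-2.5) = -1549.6875,
which matches the minimum printed by the code (keep in mind this is the minimum over the 0.1-spaced grid, not the exact minimizer of the continuous function).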
Solution 3:
Let's look at the plot obtained by the method in Solution 2 and use it to walk through the core idea of the annealing algorithm.
First, we pick a random solution in [-10, 10] as the initial solution. Suppose, for example, we land on the highest point in [-2.5, 2.5], call it point 1, with abscissa x1 and ordinate y1. We then add or subtract a random amount to the abscissa of this point (note that the size of this value matters a great deal; call it the random step for now) to get a new abscissa x2, compute its corresponding ordinate y2, and compare it with the previous ordinate by setting delta = y2 - y1. If the new ordinate is smaller than the original one (which will be the case here provided the random step is small enough), we accept the move and assign the new x2 to x1, so x2 becomes the current point. This process is repeated iter_num times, where the right value of iter_num depends on the size of the interval.
The whole process above is carried out at one temperature. After this process is over, we update the temperature again with the temperature update formula, and then repeat the above steps.
A common temperature-update formula is the geometric schedule T(t) = a * T(t-1), with 0.85 ≤ a ≤ 0.99. The temperature can also be computed from a decay formula such as T(t) = T0 / (1 + ln t), t = 1, 2, 3, ...; both are simple state-update rules.
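As a small illustration (my own sketch, with example values rather than anything from the original article), the first few temperatures under each schedule look like this:
import math

T0, a = 1000.0, 0.95
geometric = [T0]
for t in range(1, 6):
    geometric.append(a * geometric[-1])                           # T(t) = a * T(t-1)
logarithmic = [T0 / (1 + math.log(t)) for t in range(1, 7)]       # T(t) = T0 / (1 + ln t)
print(geometric)     # 1000.0, 950.0, 902.5, ...
print(logarithmic)   # 1000.0, ~590.6, ~476.5, ...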
In other words, no matter which random point we start from, the search can always keep moving in the direction of improvement (as long as it is not already at an optimum).
Second, point 2 behaves the same way, except that it is a local optimum; so what mechanism lets the search jump out of a local optimum?
Suppose the current point is (x3, y3) and the step above produces (x4, y4). Around point 2 the resulting delta must be greater than 0, so what do we do? When delta is greater than 0 we still accept this apparently non-optimal point with a certain probability; this rule is called the Metropolis criterion, and it accepts the new point with probability P = exp(-delta / (kT)).
Here the y values play the role of the energy E, T is the current temperature, and k is a constant (K = 1 in the code below). If delta is less than 0 the new value is accepted with probability 1; otherwise it is accepted with probability P. Each accepted uphill step is small and its probability is small, but over many iterations these steps accumulate, so the search can climb out of the basin around point 2, move step by step toward point 1, and eventually reach the final optimal solution.
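Written out as code, the acceptance rule looks roughly like this (a minimal sketch; the helper name is mine, and the main program below inlines the same test with K = 1):
import math, random

def metropolis_accept(delta, T, k=1.0):
    # always accept an improvement; accept a worse point with probability exp(-delta/(k*T))
    if delta < 0:
        return True
    return random.random() < math.exp(-delta / (k * T))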
Finally, point 3 needs no separate explanation; it is handled the same way as above.
Then we have the code:
# The annealing algorithm, rewritten by myself, for the equation (x - 2) * (x + 3) * (x + 8) * (x - 9)
# (the class below is defined but not actually used)
import numpy as np
import matplotlib.pyplot as plt

# Basic parameters:
# T_start initial temperature, T_stop stop temperature,
# iter_num iterations per temperature, Q temperature decay factor
class Tuihuo_alg():
    def __init__(self, T_start, iter_num, T_stop, Q, xx, init_x):
        self.T_start = T_start
        self.iter = iter_num
        self.T_stop = T_stop
        self.Q = Q
        self.xx = xx
        self.init_x = init_x

if __name__ == '__main__':
    def cal_x2y(x):
        return (x - 2) * (x + 3) * (x + 8) * (x - 9)

    T_start = 1000
    iter_num = 1000
    T_stop = 1
    Q = 0.95
    K = 1
    l_boundary = -10
    r_boundary = 10

    # initial value: a random point in (-10, 10)
    xx = np.linspace(l_boundary, r_boundary, 300)
    yy = cal_x2y(xx)
    init_x = 10 * (2 * np.random.rand() - 1)
    print("init_x:", init_x)
    t = Tuihuo_alg(T_start, iter_num, T_stop, Q, xx, init_x)
    val_list = [init_x]

    while T_start > T_stop:
        for i in range(iter_num):
            init_y = cal_x2y(init_x)
            # (2 * np.random.rand() - 1) lies in (-1, 1), so this is a random
            # step that can go either left or right
            new_x = init_x + (2 * np.random.rand() - 1)
            # (the original listing breaks off here; the remainder is reconstructed
            # from the Metropolis rule and cooling schedule described above)
            if l_boundary <= new_x <= r_boundary:
                delta = cal_x2y(new_x) - init_y
                if delta < 0 or np.random.rand() < np.exp(-delta / (K * T_start)):
                    init_x = new_x
                    val_list.append(init_x)
        # temperature update: T(t) = Q * T(t-1)
        T_start = Q * T_start

    print("optimal x:", init_x, "f(x):", cal_x2y(init_x))
    plt.plot(xx, yy, label='f(x)')
    plt.scatter(init_x, cal_x2y(init_x), c='r', label='annealing result')
    plt.legend()
    plt.show()
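As an optional cross-check (my addition, not part of the original article), SciPy ships a ready-made annealing variant, scipy.optimize.dual_annealing, which should land near the same minimum:
from scipy.optimize import dual_annealing

def f(x):
    return (x[0] - 2) * (x[0] + 3) * (x[0] + 8) * (x[0] - 9)

result = dual_annealing(f, bounds=[(-10, 10)])
print(result.x, result.fun)   # expect roughly x ≈ 6.49 and f ≈ -1549.7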