
How to write python simple batch gradient descent code

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

How do you write simple batch gradient descent code in Python? Many beginners are unsure where to start, so this article walks through the idea and a complete working example. Hopefully it helps you solve the problem.

Simple batch gradient descent code

It is built on the batch gradient descent update formula (shown as an image in the original post).
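Reconstructed in standard notation (a sketch based on the code below, where the residual is written as y minus the prediction), the update rule is:

```latex
\theta_j := \theta_j + \alpha \cdot \frac{1}{m} \sum_{i=1}^{m}
  \left( y^{(i)} - h_\theta\!\left(x^{(i)}\right) \right) x_j^{(i)},
\qquad
h_\theta(x) = \theta_0 + \theta_1 x, \quad x_0^{(i)} = 1 .
```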

alpha is the learning rate, a hyperparameter set from outside. If it is too large, the iterates oscillate (or even diverge); if it is too small, learning becomes slow. In practice alpha has to be tuned.
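A minimal toy sketch of this trade-off (my own example, not from the original post): gradient descent on f(theta) = theta**2, whose gradient is 2*theta. A small alpha shrinks theta steadily toward 0, while a too-large alpha makes the sign of theta flip each step and the magnitude grow:

```python
def descend(alpha, steps=50, theta=1.0):
    """Run `steps` gradient-descent updates on f(theta) = theta**2."""
    history = [theta]
    for _ in range(steps):
        theta = theta - alpha * 2 * theta  # gradient of theta**2 is 2*theta
        history.append(theta)
    return history

small = descend(alpha=0.01)  # update factor 0.98: slow, steady shrink toward 0
large = descend(alpha=1.1)   # update factor -1.2: oscillates and blows up
```

With alpha = 1.1 each step multiplies theta by (1 - 2*1.1) = -1.2, so the sign alternates while the magnitude grows: exactly the oscillation the text warns about.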

Note the sign in front of the alpha term: it flips depending on whether the residual is written as (y - h_theta(x)) or as (h_theta(x) - y).

x_j denotes the j-th feature (dimension) of a sample.

Here is the code section

import numpy as np

# Data (the exact numbers are garbled in the source; these values are a
# reconstruction of the same data used in the linear_model example)
x = np.array([4, 8, 5, 10])
y = np.array([20, 50, 30, 30])

# Univariate linear regression: h_theta(x) = theta0 + theta1 * x
# First initialize the coefficients theta0 and theta1
theta0, theta1 = 0, 0
# alpha is a hyperparameter of gradient descent; initialize it to 0.01
alpha = 0.01
# m is the number of samples that appears in the gradient formula
m = len(x)

# Stopping condition: stop once gradient descent meets the requirement.
# Option 1: fix the number of iterations, e.g. stop after 5000 iterations.
# Option 2 (used here): choose epsilon and compute the MSE (mean squared
# error, one of the standard linear-regression metrics); stop when the
# change in MSE <= epsilon. The smaller epsilon is, the more iterations
# are needed and the more accurate the result.
epsilon = 0.0000001

# Errors used by the stopping test, and an iteration counter
error0, error1 = 0, 0
cnt = 0

def h_theta_x(x):
    return theta0 + theta1 * x

# Iterate with a while loop
while True:
    cnt += 1
    # The two gradients are cleared before each pass over the data
    diff = [0, 0]
    for i in range(m):
        diff[0] += (y[i] - h_theta_x(x[i])) * 1
        diff[1] += (y[i] - h_theta_x(x[i])) * x[i]
    theta0 = theta0 + alpha * diff[0] / m
    theta1 = theta1 + alpha * diff[1] / m
    # Print the theta values ("%s" formats each value as a string)
    print("theta0:%s,theta1:%s" % (theta0, theta1))
    # Compute the MSE (error1 must be reset each iteration -- the
    # original code forgot this)
    error1 = 0
    for i in range(m):
        error1 += (y[i] - h_theta_x(x[i])) ** 2
    error1 /= m
    # The source text is cut off here; the stopping test it describes is:
    if abs(error1 - error0) <= epsilon:
        break
    error0 = error1
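For comparison, here is a vectorized sketch of the same batch gradient descent using NumPy whole-array operations instead of the inner for loops (the data values and hyperparameters are the ones assumed above; variable names are my own):

```python
import numpy as np

# Same assumed data and hyperparameters as the loop version
x = np.array([4.0, 8.0, 5.0, 10.0])
y = np.array([20.0, 50.0, 30.0, 30.0])
m = len(x)
alpha, epsilon = 0.01, 1e-7

theta0 = theta1 = 0.0
prev_mse = 0.0

while True:
    # Residuals y - h_theta(x) for all samples at once
    residual = y - (theta0 + theta1 * x)
    theta0 += alpha * residual.sum() / m        # gradient step for theta0
    theta1 += alpha * (residual * x).sum() / m  # gradient step for theta1
    mse = ((y - (theta0 + theta1 * x)) ** 2).mean()
    if abs(mse - prev_mse) <= epsilon:          # same stopping rule
        break
    prev_mse = mse

print("theta0:%s,theta1:%s" % (theta0, theta1))
```

Because the whole batch is processed in one array expression, this version avoids Python-level loops over the samples and makes the per-parameter gradient steps easy to read.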
