

How to use the stochastic gradient descent method of logistic regression in Python


This article mainly demonstrates "how to use the stochastic gradient descent method of logistic regression in Python". The content is simple and clear; I hope it can help you resolve your doubts. Let me lead you through studying this article.

Stochastic gradient descent

Stochastic gradient descent (Stochastic Gradient Descent, SGD) is a variant of full-batch gradient descent designed to improve computational efficiency. In essence, we expect the result of stochastic gradient descent to be close to that of full-batch gradient descent; the advantage of SGD is that each gradient can be computed much faster, because each update uses a single randomly chosen sample rather than the whole dataset.
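To make the contrast concrete, here is a minimal sketch (not from the original article; the data is synthetic and all names are illustrative) comparing a full-batch gradient with a single-sample stochastic gradient for the linear model y ≈ beta[0] + beta[1]·x that the code below fits:

```python
import numpy as np

# Synthetic data for illustration only.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 1000)
y = 3.0 + 2.0 * x + rng.normal(0, 0.1, size=x.shape)

def full_batch_grad(beta, x, y):
    # Averages the squared-error gradient over ALL samples: O(n) per update.
    err = beta[0] + beta[1] * x - y
    return np.array([2 * np.mean(err), 2 * np.mean(x * err)])

def stochastic_grad(beta, x, y):
    # Uses ONE randomly chosen sample: O(1) per update, but noisier.
    r = rng.integers(0, len(x))
    err = beta[0] + beta[1] * x[r] - y[r]
    return np.array([2 * err, 2 * x[r] * err])

beta = np.array([0.0, 0.0])
print(full_batch_grad(beta, x, y))  # one O(n) gradient
print(stochastic_grad(beta, x, y))  # one O(1) gradient (noisy)
```

Each stochastic step costs O(1) instead of O(n), so many more updates fit into the same compute budget, at the price of noisier individual steps.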

Code

The full script below loads the data, defines the stochastic gradient, the update rule, and the RMSE loss, iterates until the change in loss falls below the threshold tol_L, and finally compares the result against sklearn's LinearRegression.

```python
import os

import numpy as np
import pandas as pd

os.getcwd()  # e.g. F:\pythonProject3\data\data\train.csv
# dataset_path = '..'

# An application of (stochastic) gradient descent; compare the full-batch version.
# This is a regression problem.
# We are given the number of new questions and answers per day on a large US
# Q&A community from October 1, 2010 to November 30, 2016.
# The task is to forecast the number of new questions and answers per day
# on the site from December 1, 2016 to May 1, 2017.

train = pd.read_csv('..\\train.csv')  # import the data
# train = pd.read_csv('train.csv')
test = pd.read_csv('..\\test.csv')
submit = pd.read_csv('..\\sample_submit.csv')

path2 = os.path.abspath('.')
print("path2@", path2)
path3 = os.path.abspath('..')
print("path3@", path3)
print(train)

# initial settings
beta = [1, 1]  # initial point
alpha = 0.2    # learning rate, i.e. step size
tol_L = 0.1    # threshold, i.e. precision

# normalize x; train is a two-dimensional table
max_x = max(train['id'])
x = train['id'] / max_x  # divide every id by max_x
y = train['questions']   # the 'questions' column is the target

print("train['id'] #\n", train['id'])
print("x #\n", x)
print("max_x#", max_x)

# compute the (stochastic) gradient, i.e. the direction to move in
def compute_grad_SGD(beta, x, y):
    """
    :param beta: the current point (initially the starting point)
    :param x: the independent variable
    :param y: the true values
    :return: the gradient as a numpy array
    """
    grad = [0, 0]
    r = np.random.randint(0, len(x))  # pick one random sample index in [0, len(x))
    grad[0] = 2. * np.mean(beta[0] + beta[1] * x[r] - y[r])           # gradient w.r.t. beta[0]
    grad[1] = 2. * np.mean(x[r] * (beta[0] + beta[1] * x[r] - y[r]))  # gradient w.r.t. beta[1]
    return np.array(grad)

# compute where the next point is
def update_beta(beta, alpha, grad):
    """
    :param beta: the current point
    :param alpha: learning rate, i.e. step size
    :param grad: the gradient
    :return: the updated point
    """
    new_beta = np.array(beta) - alpha * grad
    return new_beta

# define the root mean square error (RMSE) loss
def rmse(beta, x, y):
    # beta[0] + beta[1] * x is the prediction, y is the true value
    squared_err = (beta[0] + beta[1] * x - y) ** 2
    res = np.sqrt(np.mean(squared_err))
    return res

# first step
grad = compute_grad_SGD(beta, x, y)    # compute the gradient
loss = rmse(beta, x, y)                # compute the loss
beta = update_beta(beta, alpha, grad)  # update to the next point
loss_new = rmse(beta, x, y)            # compute the loss at the next point

# start iterating
i = 1
while np.abs(loss_new - loss) > tol_L:
    beta = update_beta(beta, alpha, grad)
    grad = compute_grad_SGD(beta, x, y)
    if i % 100 == 0:  # single-sample steps are noisy, so check the loss only every 100 rounds
        loss = loss_new
        loss_new = rmse(beta, x, y)
        print('Round %s Diff RMSE %s' % (i, abs(loss_new - loss)))
    i += 1

print('Coef: %s\nIntercept: %s' % (beta[1], beta[0]))
res = rmse(beta, x, y)
print('Our RMSE: %s' % res)

# compare with sklearn's LinearRegression
from sklearn.linear_model import LinearRegression

lr = LinearRegression()
lr.fit(train[['id']], train[['questions']])
print('Sklearn Coef: %s' % lr.coef_[0][0])
print('Sklearn Intercept: %s' % lr.intercept_[0])

res = rmse([936.051219649, 2.19487084], train['id'], y)
print('Sklearn RMSE: %s' % res)
```

That is all the content of this article on "how to use the stochastic gradient descent method of logistic regression in Python". Thank you for reading! I hope the shared content has helped you; if you want to learn more, welcome to follow the industry information channel.
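As a postscript: the title mentions logistic regression, but the worked example above fits a linear model with an RMSE loss. For readers who want the logistic case, here is a minimal, hypothetical sketch of the same single-sample update applied to logistic regression; the synthetic data and every name in it are illustrative assumptions, not part of the original task:

```python
import numpy as np

# Hypothetical illustration: SGD for logistic regression.
# For one sample (x_r, y_r) with y_r in {0, 1}, the gradient of the log loss
# is (sigmoid(beta . x_r) - y_r) * x_r.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # synthetic binary labels
X = np.hstack([np.ones((500, 1)), X])      # prepend a bias column

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

beta = np.zeros(3)
alpha = 0.1  # learning rate (step size)
for _ in range(10000):
    r = rng.integers(0, len(X))  # one random sample per step
    grad = (sigmoid(X[r] @ beta) - y[r]) * X[r]
    beta -= alpha * grad

print('Learned coefficients:', beta)
```

The structure is identical to the linear example: pick one random sample, compute its gradient (here, of the log loss instead of the squared error), and step against it.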
