
How to Implement Regression with a PyTorch Neural Network

This article explains in detail how to implement regression with a neural network built in PyTorch. The editor finds it quite practical and shares it here for your reference; I hope you will get something out of it.

1. Introduction

We introduced the basics of neural networks earlier. The main uses of a neural network are prediction and classification; now let's build our first neural network and use it to fit a regression.

2. Neural Network Construction

2.1 Preparation work

To build the fitting network and plot the results, we need several Python libraries.

import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

# 100 evenly spaced points in [-5, 5], shaped as a column of samples
x = torch.unsqueeze(torch.linspace(-5, 5, 100), dim=1)
# cubic targets with a little uniform noise
y = x.pow(3) + 0.2 * torch.rand(x.size())

Since we are fitting a curve, we of course need some data. I chose 100 evenly spaced points in the interval [-5, 5] and arranged them along a cubic function, adding a little random noise.
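
As a quick sanity check (my own addition, not part of the original code), you can confirm the tensor shapes before training; torch.unsqueeze turns the 1-D output of linspace into a column of 100 single-feature samples:

print(x.shape)  # torch.Size([100, 1]) -- 100 samples, 1 feature each
print(y.shape)  # torch.Size([100, 1]) -- matching noisy targets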

2.2 Building the network

We define a class that inherits from the Module class packaged in torch. First we decide the number of neurons in the input, hidden, and output layers. After calling the parent constructor, we use torch.nn.Linear() to build a linear transformation from the input layer to the hidden layer, and another from the hidden layer to the output layer predict. We then define the forward-propagation function forward(), using relu() as the activation function, and finally return the result of predict.

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)   # input -> hidden
        self.predict = torch.nn.Linear(n_hidden, n_output)   # hidden -> output

    def forward(self, x):
        x = F.relu(self.hidden(x))   # ReLU activation on the hidden layer
        return self.predict(x)

net = Net(1, 20, 1)
print(net)

optimizer = torch.optim.Adam(net.parameters(), lr=0.2)
loss_func = torch.nn.MSELoss()
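
For reference, the same architecture can also be written with torch.nn.Sequential instead of a custom subclass. This equivalent sketch is my addition, not part of the original article:

net2 = torch.nn.Sequential(
    torch.nn.Linear(1, 20),   # input layer -> hidden layer, as in Net(1, 20, 1)
    torch.nn.ReLU(),          # same role as F.relu in forward()
    torch.nn.Linear(20, 1),   # hidden layer -> output layer
)
print(net2)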

The framework of the network is now built: we pass in the number of neurons for each of the three layers, and then define the optimizer. Here I chose Adam over stochastic gradient descent (SGD), because Adam is an improved version of SGD and performs better in most cases. We hand it the parameters of the neural network (parameters) and set the learning rate. The learning rate is usually chosen to be less than 1 and has to be tuned by experience and repeated debugging. Finally, we choose mean squared error (MSE) to compute the loss.
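
If you want to compare against plain SGD, only the optimizer line changes. This one-line swap is my own sketch; note that the learning rate of 0.2 is carried over from the Adam setup above and would likely need retuning for SGD:

optimizer = torch.optim.SGD(net.parameters(), lr=0.2)   # plain SGD in place of Adam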

2.3 Training the network

Next, we train the network we have just built. I trained for 2000 rounds (epochs). In each round we first compute the prediction, then calculate the loss, then zero the gradients, then back-propagate the loss (backward), and finally take an optimizer step; in the end we obtain the best-fitting curve.

for t in range(2000):
    prediction = net(x)                  # forward pass
    loss = loss_func(prediction, y)      # mean squared error against the targets
    optimizer.zero_grad()                # clear the gradients from the last step
    loss.backward()                      # back-propagate the loss
    optimizer.step()                     # update the weights
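
After training, it is easy to check the fit on fresh inputs. This small evaluation snippet is my addition, not part of the original tutorial; the test values in x_new are arbitrary:

with torch.no_grad():                      # no gradients needed for evaluation
    x_new = torch.tensor([[1.5], [3.0]])   # arbitrary test inputs
    print(net(x_new))                      # should be close to x_new ** 3
    print('final loss:', loss_func(net(x), y).item())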

3. Effect

The plotting code below visualizes the fitting process.

plt.ion()   # interactive mode so the figure refreshes during training
for t in range(2000):
    prediction = net(x)
    loss = loss_func(prediction, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if t % 5 == 0:   # redraw every 5 steps
        plt.cla()
        plt.scatter(x.data.numpy(), y.data.numpy(), s=10)
        plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-', lw=2)
        plt.text(2, -100, 'Loss=%.4f' % loss.data.numpy(),
                 fontdict={'size': 10, 'color': 'red'})
        plt.pause(0.1)
plt.ioff()
plt.show()
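
If you are running without an interactive display (for example, on a headless server), a non-interactive alternative is to skip plt.ion()/plt.pause() and save only the final figure. This variant is my own sketch, and the filename is just a placeholder:

plt.scatter(x.data.numpy(), y.data.numpy(), s=10)
plt.plot(x.data.numpy(), net(x).data.numpy(), 'r-', lw=2)
plt.savefig('regression_fit.png')   # hypothetical output path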

The final result: the red fitted curve closely tracks the scattered cubic data points, with the current loss displayed on the plot.

4. Complete code

import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

# noisy cubic data: 100 points in [-5, 5]
x = torch.unsqueeze(torch.linspace(-5, 5, 100), dim=1)
y = x.pow(3) + 0.2 * torch.rand(x.size())

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)
        self.predict = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x):
        x = F.relu(self.hidden(x))
        return self.predict(x)

net = Net(1, 20, 1)
print(net)

optimizer = torch.optim.Adam(net.parameters(), lr=0.2)
loss_func = torch.nn.MSELoss()

plt.ion()
for t in range(2000):
    prediction = net(x)
    loss = loss_func(prediction, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if t % 5 == 0:
        plt.cla()
        plt.scatter(x.data.numpy(), y.data.numpy(), s=10)
        plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-', lw=2)
        plt.text(2, -100, 'Loss=%.4f' % loss.data.numpy(),
                 fontdict={'size': 10, 'color': 'red'})
        plt.pause(0.1)
plt.ioff()
plt.show()

This is the end of the article on how to implement regression with a neural network based on PyTorch. I hope the content above was helpful and that you learned something from it. If you think the article is good, please share it so more people can see it.
