How to solve the problem of a PyTorch gradient being None after backpropagation

This article explains why a tensor's .grad can be None after calling backward() in PyTorch, and how to fix it. The explanation is simple and the examples are short, so it should be easy to follow and apply.
Broken code (the printed grad is None):

    a = torch.ones((2, 2), requires_grad=True).to(device)
    b = a.sum()
    b.backward()
    print(a.grad)  # None

Because .to(device) is an operation recorded by autograd, a is no longer a leaf tensor, so autograd does not populate a.grad.
The corrected code keeps a as a leaf tensor and assigns the device copy to a separate variable:

    a = torch.ones((2, 2), requires_grad=True)
    c = a.to(device)
    b = c.sum()
    b.backward()
    print(a.grad)  # tensor([[1., 1.], [1., 1.]])
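As a quick check, here is a minimal runnable sketch of both patterns. The copy=True argument is my addition, not part of the original code: a plain .to(device) returns the same tensor when it is already on that device, so copy=True stands in for a real device move and makes the broken pattern reproducible even on a CPU-only machine.

    import torch

    # Use whatever device is available.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Broken: the copy produced by .to() is not a leaf, so its .grad stays None.
    a = torch.ones((2, 2), requires_grad=True).to(device, copy=True)
    print(a.is_leaf)   # False
    a.sum().backward()
    print(a.grad)      # None (PyTorch also emits a UserWarning here)

    # Fixed: keep the leaf tensor and move a copy to the device separately.
    a = torch.ones((2, 2), requires_grad=True)
    c = a.to(device, copy=True)
    c.sum().backward()
    print(a.grad)      # tensor([[1., 1.], [1., 1.]])

Checking tensor.is_leaf is a quick way to diagnose this class of problem: only leaf tensors with requires_grad=True get their .grad filled in by default.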
A similar mistake:

    self.miu = torch.nn.Parameter(torch.ones(self.dimensional)) * 0.01

Here the multiplication runs after the Parameter is constructed, so self.miu ends up holding the non-leaf product rather than the Parameter itself. It should be:

    self.miu = torch.nn.Parameter(torch.ones(self.dimensional) * 0.01)
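A small sketch of why the order matters; the module name Demo and the dim argument are illustrative placeholders, not from the original code:

    import torch

    class Demo(torch.nn.Module):
        def __init__(self, dim=3):
            super().__init__()
            # Wrong: * 0.01 runs after the Parameter is built, so this attribute
            # holds a plain non-leaf tensor and is never registered as a parameter.
            self.bad = torch.nn.Parameter(torch.ones(dim)) * 0.01
            # Right: scale first, then wrap, so the attribute is a leaf Parameter.
            self.good = torch.nn.Parameter(torch.ones(dim) * 0.01)

    m = Demo()
    print(type(m.bad))    # <class 'torch.Tensor'>
    print(type(m.good))   # <class 'torch.nn.parameter.Parameter'>
    print([n for n, _ in m.named_parameters()])  # ['good']
    m.bad.sum().backward()
    print(m.bad.grad)     # None -- the gradient flows to the hidden Parameter

Note the second consequence: the misplaced version is not listed by named_parameters(), so an optimizer built from model.parameters() would never update it.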
Supplement: PyTorch returning a None gradient

In PyTorch 2.4.0, if you apply the view or reshape method to a tensor, then even with requires_grad set, the reshaped tensor x has no gradient after backpropagation: x.grad is None. I am not sure whether other versions behave the same way. This matches the leaf-tensor behavior described above: view and reshape produce non-leaf tensors, whose gradients are not retained by default.
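A minimal sketch of this behavior, plus the standard workaround of retain_grad() (the variable names are illustrative):

    import torch

    x = torch.ones(4, requires_grad=True)
    y = x.view(2, 2)      # y has a grad_fn, so it is not a leaf tensor
    y.sum().backward()
    print(x.grad)         # tensor([1., 1., 1., 1.]) -- the leaf does get a grad
    print(y.grad)         # None -- non-leaf grads are not kept by default

    # If the gradient of the reshaped tensor itself is needed, keep it explicitly:
    x.grad = None
    z = x.reshape(2, 2)
    z.retain_grad()
    z.sum().backward()
    print(z.grad)         # tensor([[1., 1.], [1., 1.]])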
Supplement: points to note about gradient backpropagation in PyTorch

Within one training iteration, the optimizer.zero_grad() statement can go almost anywhere, as long as it comes before loss.backward(). Its job is to reset the gradients to zero; otherwise they accumulate across iterations. loss.backward() backpropagates and computes the gradients, and optimizer.step() has the optimizer update the parameters automatically:
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
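Putting the three calls in context, here is a minimal training-loop sketch; the model, loss function, data, and learning rate are all illustrative, not from the original article:

    import torch

    model = torch.nn.Linear(4, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.MSELoss()
    inputs = torch.randn(8, 4)
    targets = torch.randn(8, 1)

    for epoch in range(5):
        optimizer.zero_grad()                    # clear accumulated gradients first
        loss = loss_fn(model(inputs), targets)
        loss.backward()                          # compute gradients
        optimizer.step()                         # update parameters
        print(epoch, loss.item())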