
How to use with torch.no_grad() in PyTorch


This article explains how to use with torch.no_grad() in PyTorch. The method introduced here is simple, fast, and practical, so interested readers may wish to follow along.

1. About with

with is Python's context-manager statement. Put simply, whenever you need a fixed pair of entry and exit operations around some code, you can place that code inside a with block. A typical example is writing a file, which must be opened and then closed.

The following is an example of using with for file writing.

with open(filename, 'w') as sh:
    sh.write("#!/bin/bash\n")
    sh.write("#$ -N " + 'IC' + altas + str(patientNumber) + altas + '\n')
    sh.write("#$ -o " + pathSh + altas + 'log.log\n')
    sh.write("#$ -e " + pathSh + altas + 'err.log\n')
    sh.write('source ~/.bashrc\n')
    sh.write('. "/home/kjsun/anaconda3/etc/profile.d/conda.sh"\n')
    sh.write('conda activate python27\n')
    sh.write('echo "to python"\n')
    sh.write('echo "finish"\n')

The with statement opens the file and binds the returned object to the variable after as (sh here); the code block then runs, and when it exits the file is closed automatically, even if an exception occurs, so an explicit sh.close() is unnecessary.
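For intuition, the with statement above behaves roughly like the following try/finally pattern. This is a simplified sketch of the context-manager protocol, not the exact machinery Python executes:

# Roughly equivalent to: with open(filename, 'w') as sh: ...
f = open(filename, 'w')          # the context manager's __enter__ runs;
sh = f                           # its return value is bound to the name after "as"
try:
    sh.write("#!/bin/bash\n")    # the body of the with block
finally:
    f.close()                    # __exit__ runs on the way out, even if the body raised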

2. About with torch.no_grad():

When using PyTorch, not every operation needs to build a computation graph (the record of the computation used later for gradient backpropagation). By default, operations on tensors that track gradients do build the graph; wrapping code in with torch.no_grad(): forces the operations inside the block not to build it.
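A minimal sketch of this behaviour (the tensors x, y, z are made up for illustration): the same multiplication produces a graph-tracked result outside the block and an untracked one inside it.

import torch

x = torch.ones(2, requires_grad=True)

y = x * 2                # graph is built: y.grad_fn is a MulBackward node
with torch.no_grad():
    z = x * 2            # graph is NOT built: z.grad_fn is None

print(y.grad_fn)                   # <MulBackward0 object at 0x...>
print(z.grad_fn, z.requires_grad)  # None False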

Below are examples of the same test loop with and without it.

(1) Using with torch.no_grad():

correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
print(outputs)

Running result:

Accuracy of the network on the 10000 test images: 55 %
tensor([[-2.9141, -3.8210,  2.1426,  3.0883,  2.6363,  2.6878,  2.8766,  0.3396,
         -4.7505, -3.8502],
        [-1.4012, -4.5747,  1.8557,  3.8178,  1.1430,  3.9522, -0.4563,  1.2740,
         -3.7763, -3.3633],
        [ 1.3090,  0.1812,  0.4852,  0.1315,  0.5297, -0.3215, -2.0045,  1.0426,
         -3.2699, -0.5084],
        [-0.5357, -1.9851, -0.2835, -0.3110,  2.6453,  0.7452, -1.4148,  5.6919,
         -6.3235, -1.6220]])

Note that the outputs tensor here has no grad_fn attribute.

(2) Without with torch.no_grad():

The same loop, this time run without the context manager:

correct = 0
total = 0
for data in testloader:
    images, labels = data
    outputs = net(images)
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
print(outputs)

The results are as follows:

Accuracy of the network on the 10000 test images: 55 %
tensor([[-2.9141, -3.8210,  2.1426,  3.0883,  2.6363,  2.6878,  2.8766,  0.3396,
         -4.7505, -3.8502],
        [-1.4012, -4.5747,  1.8557,  3.8178,  1.1430,  3.9522, -0.4563,  1.2740,
         -3.7763, -3.3633],
        [ 1.3090,  0.1812,  0.4852,  0.1315,  0.5297, -0.3215, -2.0045,  1.0426,
         -3.2699, -0.5084],
        [-0.5357, -1.9851, -0.2835, -0.3110,  2.6453,  0.7452, -1.4148,  5.6919,
         -6.3235, -1.6220]], grad_fn=<...>)

As you can see, the output now carries a grad_fn attribute, which means the result is part of a computation graph and operations such as gradient backpropagation can be carried out on it. The numerical results are identical in both cases, however; the only difference is whether the computation graph is recorded.
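The practical consequence shows up when you try to backpropagate. A minimal sketch (the tensor names are illustrative): .backward() succeeds on the tracked result but fails on one computed under no_grad():

import torch

x = torch.ones(3, requires_grad=True)

loss = (x * 2).sum()       # tracked: loss.grad_fn exists
loss.backward()            # gradients flow back into x.grad
print(x.grad)              # tensor([2., 2., 2.])

with torch.no_grad():
    loss2 = (x * 2).sum()  # untracked: loss2.grad_fn is None
# loss2.backward() would raise RuntimeError:
# "element 0 of tensors does not require grad and does not have a grad_fn"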


At this point, I believe you have a deeper understanding of how to use with torch.no_grad() in PyTorch. You might as well try it out in practice.
