How does PyTorch show the order of calls

This article mainly introduces how to show the calling order in PyTorch. Many readers have questions about this in daily work, so the editor has consulted various materials and put together a simple, easy-to-use method. I hope it helps clear up your doubts about showing the calling order in PyTorch. Now, please follow the editor and study it!

Summary: the code experiment shows that during backpropagation, gradients are computed in the reverse of the forward computation order. This can be observed directly from the execution order of the hook functions, and inferred from the order of the entries in the list of saved gradients.
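Before the full experiment, here is a minimal sketch of the same observation (the tensors x and y and their hooks are illustrative examples, not part of the experiment that follows): x is created first and y is computed from it, yet during y.backward() the hook registered on y fires before the hook registered on x, i.e. in the reverse of the forward order.

import torch

x = torch.randn(3, requires_grad=True)             # forward step 1: create x
y = (x * 2).sum()                                   # forward step 2: compute y from x

x.register_hook(lambda grad: print("hook on x"))    # hook on the earlier tensor
y.register_hook(lambda grad: print("hook on y"))    # hook on the later tensor

y.backward()
# Expected print order: "hook on y" first, then "hook on x" --
# the reverse of the order in which x and y were produced in the forward pass.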

The code experiment shows:

import torch

print(torch.__version__)  # 1.2.0+cu92
torch.manual_seed(seed=20200910)
gradients = list()

# ----------------------------------------------------------
def grad_hook_x0(grad):
    print("\nExecuting the custom hook function for x0 ...")
    print("Saving the gradient of x0 ...")
    gradients.append(grad)
    print("Hook function for x0 finished.\n")
    # return grad

x0 = torch.randn(2, 3, 4, 5, 6, 7, requires_grad=True)
print('x0.shape:', x0.shape)  # x0.shape: torch.Size([2, 3, 4, 5, 6, 7])
# print('x0:\n', x0)
x0.register_hook(grad_hook_x0)

# ----------------------------------------------------------
def grad_hook_x1(grad):
    print("\nExecuting the custom hook function for x1 ...")
    print("Saving the gradient of x1 ...")
    gradients.append(grad)
    print("Hook function for x1 finished.\n")
    # return grad

x1 = torch.sum((4 * x0 + 18.0), dim=(0, 1))
x1.retain_grad()  # x1 is a non-leaf tensor, so .grad is only kept if retain_grad() is called
print('x1.shape:', x1.shape)  # x1.shape: torch.Size([4, 5, 6, 7])
# print('x1:\n', x1)
x1.register_hook(grad_hook_x1)

# ----------------------------------------------------------
def grad_hook_x2(grad):
    print("\nExecuting the custom hook function for x2 ...")
    print("Saving the gradient of x2 ...")
    gradients.append(grad)
    print("Hook function for x2 finished.\n")
    # return grad

x2 = torch.sum(x1, dim=(1, 2)) * 10.0
x2.retain_grad()
print('x2.shape:', x2.shape)  # x2.shape: torch.Size([4, 7])
# print('x2:\n', x2)
x2.register_hook(grad_hook_x2)

# ----------------------------------------------------------
def grad_hook_loss(grad):
    print("\nExecuting the custom hook function for loss ...")
    print("Saving the gradient of loss ...")
    gradients.append(grad)
    print("Hook function for loss finished.\n")
    # return grad

loss = torch.mean(x2)
loss.retain_grad()
print('loss.shape:', loss.shape)  # loss.shape: torch.Size([])
print('loss:', loss)  # loss: tensor(32403.7344, grad_fn=<MeanBackward0>)
loss.register_hook(grad_hook_loss)

# ----------------------------------------------------------
loss.backward()  # this line executes the registered hook functions

# The hooks ran in the order loss, x2, x1, x0, so the saved gradients
# line up with the tensors listed in reverse forward order.
tensors_list = [loss, x2, x1, x0]
print('The length of the gradients list is:', len(gradients))
print('The length of the tensors_list list is:', len(tensors_list))
for g, t in zip(gradients, tensors_list):
    print(torch.equal(g, t.grad), g.shape == t.grad.shape == t.shape, g.shape, t.grad.shape, t.shape)

Console output result:

1.2.0+cu92
x0.shape: torch.Size([2, 3, 4, 5, 6, 7])
x1.shape: torch.Size([4, 5, 6, 7])
x2.shape: torch.Size([4, 7])
loss.shape: torch.Size([])
loss: tensor(32403.7344, grad_fn=<MeanBackward0>)

Executing the custom hook function for loss ...
Saving the gradient of loss ...
Hook function for loss finished.

Executing the custom hook function for x2 ...
Saving the gradient of x2 ...
Hook function for x2 finished.

Executing the custom hook function for x1 ...
Saving the gradient of x1 ...
Hook function for x1 finished.

Executing the custom hook function for x0 ...
Saving the gradient of x0 ...
Hook function for x0 finished.

The length of the gradients list is: 4
The length of the tensors_list list is: 4
True True torch.Size([]) torch.Size([]) torch.Size([])
True True torch.Size([4, 7]) torch.Size([4, 7]) torch.Size([4, 7])
True True torch.Size([4, 5, 6, 7]) torch.Size([4, 5, 6, 7]) torch.Size([4, 5, 6, 7])
True True torch.Size([2, 3, 4, 5, 6, 7]) torch.Size([2, 3, 4, 5, 6, 7]) torch.Size([2, 3, 4, 5, 6, 7])

At this point, the study of "how to show the calling order of PyTorch" is over. I hope it has helped resolve your doubts. Combining theory with practice is the best way to learn, so go and try it yourself! If you want to keep learning more, please continue to follow the site; the editor will keep working hard to bring you more practical articles!
