Printing the Parameter Information Inside the PyTorch Optimizer

This article shows what PyTorch prints when you inspect the parameter groups inside an optimizer. The example below builds a small convolutional network, wraps its parameters in an Adam optimizer, and then prints the keys, the value types, and the learning rate stored in each entry of optimizer.param_groups.
Code:
import time
import torch
import torch.optim as optim

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = torch.nn.Sequential(
            # input: torch.Size([64, 1, 28, 28])
            # Conv2d's main parameters are the number of input channels, the number
            # of output channels, the kernel size, the stride, and the padding.
            # output dimension = 1 + (input dimension - kernel size + 2*padding) / stride
            torch.nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
            # output: torch.Size([64, 64, 28, 28])
            torch.nn.ReLU(),
            # output: torch.Size([64, 64, 28, 28])
            torch.nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            # output: torch.Size([64, 128, 28, 28])
            torch.nn.ReLU(),
            # MaxPool2d's main parameters are the pooling window size, the stride,
            # and the padding.
            torch.nn.MaxPool2d(stride=2, kernel_size=2),
            # output: torch.Size([64, 128, 14, 14])
        )
        self.dense = torch.nn.Sequential(
            # input: torch.Size([64, 14*14*128])
            # signature: torch.nn.Linear(in_features, out_features, bias=True)
            torch.nn.Linear(14*14*128, 1024),
            # output: torch.Size([64, 1024])
            torch.nn.ReLU(),
            # torch.nn.Dropout guards a convolutional network against overfitting
            # during training: it zeroes part of the values flowing between the two
            # adjacent layers with a given random probability, weakening those
            # connections so the trained model does not depend too heavily on any
            # one set of weights. The probability can be set explicitly; if nothing
            # is set, the default is 0.5.
            torch.nn.Dropout(p=0.5),
            torch.nn.Linear(1024, 10),
            # output: torch.Size([64, 10])
        )

    def forward(self, x):
        # input: torch.Size([64, 1, 28, 28])
        x = self.conv1(x)
        # output: torch.Size([64, 128, 14, 14])
        # view() flattens each sample into one row: torch.Size([64, 14*14*128])
        x = x.view(-1, 14*14*128)
        x = self.dense(x)
        # output: torch.Size([64, 10])
        return x

model = Model()
lr = 0.005
optimizer = optim.Adam(model.parameters(), lr=lr)
for param_group in optimizer.param_groups:
    print(param_group.keys())
    # print(type(param_group))
    print([type(value) for value in param_group.values()])
    print('View learning rate:', param_group['lr'])
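To confirm the shape comments, apply the formula from the code: each convolution gives 1 + (28 - 3 + 2*1) / 1 = 28, and the 2x2 max pool halves that to 14. Here is a minimal smoke test (not part of the original article) that pushes a dummy MNIST-sized batch through the Model class defined above:

import torch

# Hypothetical check, assuming a batch of 64 single-channel 28x28 images.
x = torch.randn(64, 1, 28, 28)
model = Model()
model.eval()                 # disable Dropout for a deterministic forward pass
with torch.no_grad():
    out = model(x)
print(out.shape)             # expected: torch.Size([64, 10])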
The printed result:
dict_keys(['params', 'lr', 'betas', 'eps', 'weight_decay', 'amsgrad'])
[<class 'list'>, <class 'float'>, <class 'tuple'>, <class 'float'>, <class 'int'>, <class 'bool'>]
View learning rate: 0.005

Each param_group is an ordinary dict: 'params' holds the list of parameter tensors, while 'lr', 'betas', 'eps', 'weight_decay', and 'amsgrad' hold the hyperparameters Adam uses for that group. This concludes the study of the printed parameter information inside the PyTorch optimizer. Pairing theory with practice is the best way to learn, so go and try it yourself!
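Because each param_group is a plain dict, the same loop that reads the learning rate can also write it; assigning to param_group['lr'] changes the rate used by subsequent optimizer steps. A minimal sketch of manual learning-rate decay (an extension not in the original article; the decay factor is an arbitrary example value):

# Hypothetical extension: scale the learning rate of every parameter group.
decay = 0.1
for param_group in optimizer.param_groups:
    param_group['lr'] = param_group['lr'] * decay
    print('New learning rate:', param_group['lr'])   # 0.005 * 0.1 = 0.0005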