This article walks through how weight decay and Dropout work in PyTorch. Many people run into questions about these two regularization techniques in practice, so let's work through a concrete example step by step. I hope you find it useful!
The demo below fits a four-layer MLP to noisy linear data twice, once with plain SGD and once with SGD plus weight_decay=1e-2 (L2 regularization), logging parameters and gradients to TensorBoard and plotting both fits at regular intervals:

import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from tools import set_seed  # local course helper module for seeding
from torch.utils.tensorboard import SummaryWriter

set_seed(1)  # set random seed for reproducibility

n_hidden = 200
max_iter = 2000
disp_interval = 200
lr_init = 0.01


def gen_data(num_data=10, x_range=(-1, 1)):
    w = 1.5
    train_x = torch.linspace(*x_range, num_data).unsqueeze_(1)
    train_y = w * train_x + torch.normal(0, 0.5, size=train_x.size())
    test_x = torch.linspace(*x_range, num_data).unsqueeze_(1)
    test_y = w * test_x + torch.normal(0, 0.3, size=test_x.size())
    return train_x, train_y, test_x, test_y


train_x, train_y, test_x, test_y = gen_data(num_data=10, x_range=(-1, 1))


class MLP(nn.Module):
    def __init__(self, neural_num):
        super(MLP, self).__init__()
        self.linears = nn.Sequential(
            nn.Linear(1, neural_num),
            nn.ReLU(inplace=True),
            nn.Linear(neural_num, neural_num),
            nn.ReLU(inplace=True),
            nn.Linear(neural_num, neural_num),
            nn.ReLU(inplace=True),
            nn.Linear(neural_num, 1),
        )

    def forward(self, x):
        return self.linears(x)


net_normal = MLP(neural_num=n_hidden)
net_weight_decay = MLP(neural_num=n_hidden)

optim_normal = torch.optim.SGD(net_normal.parameters(), lr=lr_init, momentum=0.9)
optim_wdecay = torch.optim.SGD(net_weight_decay.parameters(), lr=lr_init,
                               momentum=0.9, weight_decay=1e-2)

loss_fun = torch.nn.MSELoss()  # mean squared error loss

writer = SummaryWriter(comment='test', filename_suffix='test')

for epoch in range(max_iter):
    pred_normal, pred_wdecay = net_normal(train_x), net_weight_decay(train_x)
    loss_normal = loss_fun(pred_normal, train_y)
    loss_wdecay = loss_fun(pred_wdecay, train_y)

    optim_normal.zero_grad()
    optim_wdecay.zero_grad()

    loss_normal.backward()
    loss_wdecay.backward()

    optim_normal.step()  # parameter update
    optim_wdecay.step()

    if (epoch + 1) % disp_interval == 0:
        # log gradient and weight histograms for both networks
        for name, layer in net_normal.named_parameters():
            writer.add_histogram(name + '_grad_normal', layer.grad, epoch)
            writer.add_histogram(name + '_data_normal', layer, epoch)
        for name, layer in net_weight_decay.named_parameters():
            writer.add_histogram(name + '_grad_weight_decay', layer.grad, epoch)
            writer.add_histogram(name + '_data_weight_decay', layer, epoch)

        test_pred_normal = net_normal(test_x)
        test_pred_wdecay = net_weight_decay(test_x)

        plt.scatter(train_x.data.numpy(), train_y.data.numpy(),
                    c='blue', s=50, alpha=0.3, label='train')
        plt.scatter(test_x.data.numpy(), test_y.data.numpy(),
                    c='red', s=50, alpha=0.3, label='test')
        plt.plot(test_x.data.numpy(), test_pred_normal.data.numpy(),
                 lw=3, label='no weight decay')
        plt.plot(test_x.data.numpy(), test_pred_wdecay.data.numpy(),
                 lw=3, label='weight decay')
        plt.text(-0.25, -1.5, 'no weight decay loss={:.6f}'.format(loss_normal.item()),
                 fontdict={'size': 15, 'color': 'red'})
        plt.text(-0.25, -2, 'weight decay loss={:.6f}'.format(loss_wdecay.item()),
                 fontdict={'size': 15, 'color': 'red'})

        plt.ylim((-2.5, 2.5))
        plt.legend()
        plt.title('Epoch: {}'.format(epoch + 1))
        plt.show()
        plt.close()
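To inspect the logged histograms, note that SummaryWriter writes event files under ./runs by default when no log directory is given; run tensorboard --logdir=runs and open the address it prints (usually http://localhost:6006).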
Homework

1. Which line in PyTorch's SGD implementation applies weight decay, and what is the corresponding mathematical formula?

2. In PyTorch, how does Dropout scale the activations during training?
Answers

1. Weight decay is switched on when the optimizer is built, optim_wdecay = torch.optim.SGD(net_weight_decay.parameters(), lr=lr_init, momentum=0.9, weight_decay=1e-2), and it takes effect inside optim_wdecay.step(). In torch/optim/sgd.py (the exact line varies by version), the decay term is folded into the gradient before the update, d_p = d_p.add(p, alpha=weight_decay), so the update is w <- w - lr * (dL/dw + lambda * w), which is gradient descent on the regularized objective L(w) + (lambda/2) * ||w||^2. (See the first sketch after the answers.)
2. Dropout randomly deactivates units: during training each hidden unit is zeroed with probability p, and the surviving activations are stretched by a factor of 1/(1-p) (inverted dropout). This keeps the expected value of each output unit the same as without dropout, so the computation of an output unit does not depend on which hidden units happened to be dropped, and at test time the layer can simply pass values through unscaled. (See the second sketch below.)
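For answer 1, here is a minimal sketch (an illustration of the update rule, not PyTorch's actual source) showing that one step of torch.optim.SGD with weight_decay and no momentum matches the manual update w <- w - lr * (grad + lambda * w):

import torch

torch.manual_seed(0)
lr, lam = 0.1, 1e-2

# Two identical parameters: one updated by the optimizer, one by hand.
w_opt = torch.randn(3, requires_grad=True)
w_man = w_opt.detach().clone().requires_grad_(True)

opt = torch.optim.SGD([w_opt], lr=lr, weight_decay=lam)  # momentum omitted

# Any scalar loss will do; here grad of (w**2).sum() is 2*w.
(w_opt ** 2).sum().backward()
(w_man ** 2).sum().backward()

opt.step()  # optimizer applies: w <- w - lr * (grad + lam * w)

with torch.no_grad():
    w_man -= lr * (w_man.grad + lam * w_man)  # the same update, written out

print(torch.allclose(w_opt, w_man))  # True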
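And for answer 2, a minimal sketch of nn.Dropout's training-time scaling (the printed values reflect the default inverted-dropout behavior):

import torch
import torch.nn as nn

torch.manual_seed(1)
p = 0.5
drop = nn.Dropout(p=p)
x = torch.ones(2, 5)

drop.train()    # training mode: drop with probability p, rescale survivors
print(drop(x))  # entries are either 0 or 1/(1-p) = 2.0

drop.eval()     # eval mode: identity, no rescaling needed
print(drop(x))  # all ones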
This concludes our look at weight decay and Dropout in PyTorch. Thank you for reading, and I hope it helps in your own projects!