This article explains weight decay and L2-norm regularization in PyTorch through a small hands-on experiment: fitting a high-dimensional linear model with and without an L2 penalty and comparing the training and test losses.
Let's run a high-dimensional linear regression experiment.
Suppose the true data-generating equation is (matching the dataset code below):

y = 0.01 * (x_1 + x_2 + ... + x_200) + 0.5 + e,  where the noise e is drawn from N(0, 0.01^2)
Suppose there are 200 features, with 20 training samples and 20 test samples each.
Simulated dataset:

import torch
import matplotlib.pyplot as plt

num_train, num_test = 20, 20
num_features = 200
true_w = torch.ones((num_features, 1), dtype=torch.float32) * 0.01
true_b = torch.tensor(0.5)
samples = torch.normal(0, 1, (num_train + num_test, num_features))
noise = torch.normal(0, 0.01, (num_train + num_test, 1))
labels = samples.matmul(true_w) + true_b + noise
train_samples, train_labels = samples[:num_train], labels[:num_train]
test_samples, test_labels = samples[num_train:], labels[num_train:]

Define the loss function with a regularization term:

def loss_function(predict, label, w, lambd):
    # Mean squared error plus an L2 penalty on the weights,
    # weighted by the hyperparameter lambd
    loss = (predict - label) ** 2
    loss = loss.mean() + lambd * (w ** 2).mean()
    return loss

Plotting helper:

def semilogy(x_val, y_val, x_label, y_label, x2_val=None, y2_val=None, legend=None):
    plt.figure(figsize=(3, 3))
    plt.xlabel(x_label)
    plt.ylabel(y_label)
    plt.semilogy(x_val, y_val)
    if x2_val is not None and y2_val is not None:
        plt.semilogy(x2_val, y2_val)
        plt.legend(legend)
    plt.show()

Fit and plot:

def fit_and_plot(train_samples, train_labels, test_samples, test_labels, num_epoch, lambd):
    w = torch.normal(0, 1, (train_samples.shape[-1], 1), requires_grad=True)
    b = torch.tensor(0., requires_grad=True)
    optimizer = torch.optim.Adam([w, b], lr=0.05)
    train_loss, test_loss = [], []
    for epoch in range(num_epoch):
        predict = train_samples.matmul(w) + b
        epoch_train_loss = loss_function(predict, train_labels, w, lambd)
        optimizer.zero_grad()
        epoch_train_loss.backward()
        optimizer.step()
        with torch.no_grad():  # no gradients needed for evaluation
            test_predict = test_samples.matmul(w) + b
            epoch_test_loss = loss_function(test_predict, test_labels, w, lambd)
        train_loss.append(epoch_train_loss.item())
        test_loss.append(epoch_test_loss.item())
    semilogy(range(1, num_epoch + 1), train_loss, 'epoch', 'loss',
             range(1, num_epoch + 1), test_loss, ['train', 'test'])
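To reproduce the comparison, call fit_and_plot twice, once without and once with the penalty. The epoch count (100) and penalty strength (lambd=3) below are illustrative choices, not values stated in the original article:

fit_and_plot(train_samples, train_labels, test_samples, test_labels, num_epoch=100, lambd=0)  # no regularization: overfits the 20 samples
fit_and_plot(train_samples, train_labels, test_samples, test_labels, num_epoch=100, lambd=3)  # with L2 penalty: lower test loss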
It can be seen that in the model with the regularization term, the loss on the test set does decrease, whereas without it the 200-parameter model simply memorizes the 20 training samples.
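For the common case where the penalty is the squared L2 norm, PyTorch optimizers can apply it directly through the weight_decay argument, so the manual penalty term in loss_function is not needed. Note that the optimizer adds weight_decay * w to the gradient, which corresponds to a sum-based penalty of (weight_decay / 2) * ||w||^2 rather than the mean-based penalty above, so the numerical strengths are not identical; with Adam the decay also interacts with the adaptive scaling, and torch.optim.AdamW applies a decoupled decay instead. A minimal sketch (the lr and weight_decay values are illustrative):

# Per-parameter groups let us decay the weights but not the bias
optimizer = torch.optim.SGD([
    {'params': [w], 'weight_decay': 3.0},  # L2 decay on the weights only
    {'params': [b], 'weight_decay': 0.0},  # leave the bias undecayed
], lr=0.05)

# Training then uses the plain MSE loss; the optimizer handles the penalty:
loss = ((train_samples.matmul(w) + b - train_labels) ** 2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()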
This concludes the walkthrough of weight decay and L2-norm regularization in PyTorch. Pairing the theory with the experiment above should make the effect of the penalty concrete; try varying lambd to see it for yourself.