
Implementation of Pytorch Multilayer Perceptron

2025-04-08 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

This article explains how to implement a multilayer perceptron (MLP) in Pytorch. The content is simple, clear, and easy to follow; please work through it step by step with the editor to study how the model is built and trained.
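Before the full training script, it helps to see the forward computation that a single-hidden-layer MLP performs: H = ReLU(X·W1 + b1), then O = H·W2 + b2. The NumPy sketch below is illustrative only; the parameter names and the random inputs are made up to show the shapes involved, not taken from the article's model.

```python
import numpy as np

def relu(z):
    # elementwise max(z, 0), the MLP's hidden-layer activation
    return np.maximum(z, 0)

rng = np.random.default_rng(0)
num_inputs, num_hiddens, num_outputs = 784, 256, 10

# hypothetical parameters, small random values just to illustrate shapes
W1 = rng.standard_normal((num_inputs, num_hiddens)) * 0.01
b1 = np.zeros(num_hiddens)
W2 = rng.standard_normal((num_hiddens, num_outputs)) * 0.01
b2 = np.zeros(num_outputs)

X = rng.standard_normal((4, num_inputs))  # a fake batch of 4 flattened images

H = relu(X @ W1 + b1)  # hidden layer: shape (4, 256)
O = H @ W2 + b2        # output logits: shape (4, 10)
print(O.shape)         # (4, 10)
```

The PyTorch code that follows builds exactly this structure out of two `nn.Linear` layers with an `nn.ReLU` in between.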

import torch
from torch import nn
from torch.nn import init
import torchvision
from torchvision import transforms

num_inputs = 784
num_outputs = 10
num_hiddens = 256

# Fashion-MNIST: training and test images of size 1 x 28 x 28
mnist_train = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', train=True,
                                                download=True, transform=transforms.ToTensor())
mnist_test = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', train=False,
                                               download=True, transform=transforms.ToTensor())

batch_size = 256
train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True)
test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False)

def evaluate_accuracy(data_iter, net):
    acc_sum, n = 0.0, 0
    for X, y in data_iter:
        acc_sum += (net(X).argmax(dim=1) == y).float().sum().item()
        n += y.shape[0]
    return acc_sum / n

def train(net, train_iter, test_iter, loss, num_epochs, batch_size,
          params=None, lr=None, optimizer=None):
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
        for X, y in train_iter:
            y_hat = net(X)
            l = loss(y_hat, y).sum()
            # clear gradients before the backward pass
            if optimizer is not None:
                optimizer.zero_grad()
            elif params is not None and params[0].grad is not None:
                for param in params:
                    param.grad.data.zero_()
            l.backward()
            optimizer.step()  # the "concise implementation of softmax regression" section uses this
            train_l_sum += l.item()
            train_acc_sum += (y_hat.argmax(dim=1) == y).sum().item()
            n += y.shape[0]
        test_acc = evaluate_accuracy(test_iter, net)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f'
              % (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc))

class Flatten(nn.Module):
    def __init__(self):
        super(Flatten, self).__init__()
    def forward(self, x):
        # reshape (batch, 1, 28, 28) images into (batch, 784) vectors
        return x.view(x.shape[0], -1)

net = nn.Sequential(Flatten(),
                    nn.Linear(num_inputs, num_hiddens),
                    nn.ReLU(),
                    nn.Linear(num_hiddens, num_outputs))

for params in net.parameters():
    init.normal_(params, mean=0, std=0.01)

loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.5)
num_epochs = 5
train(net, train_iter, test_iter, loss, num_epochs, batch_size, None, None, optimizer)

Thank you for reading. The above covers the implementation of a multilayer perceptron in Pytorch. After studying this article, I believe you have a deeper understanding of how to implement a multilayer perceptron in Pytorch; the specific usage still needs to be verified in practice. The editor will continue to push more articles on related knowledge points for you; welcome to follow!
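As a side note, recent PyTorch releases ship a built-in `nn.Flatten` module that can stand in for the hand-written flatten module in the article. The sketch below is a minimal shape check on a fake batch (the random input tensor is illustrative, not real Fashion-MNIST data), showing that the network maps a batch of 1 x 28 x 28 images to 10 class logits:

```python
import torch
from torch import nn

num_inputs, num_hiddens, num_outputs = 784, 256, 10

# nn.Flatten() flattens all dims after the batch dim by default:
# (batch, 1, 28, 28) -> (batch, 784)
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(num_inputs, num_hiddens),
    nn.ReLU(),
    nn.Linear(num_hiddens, num_outputs),
)

X = torch.rand(4, 1, 28, 28)  # a fake batch of 4 Fashion-MNIST-sized images
logits = net(X)
print(logits.shape)           # torch.Size([4, 10])
```

Checking shapes this way before launching a full training run is a cheap way to catch wiring mistakes between layers.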

Welcome to subscribe to "Shulou Technology Information" to get the latest news, interesting stories, and hot topics in the IT industry, and to keep up with the latest Internet, technology, and IT industry news and trends.




© 2024 shulou.com SLNews company. All rights reserved.
