This article shares how to compute the accuracy of a model in PyTorch. I hope you will get something out of it after reading. Let's go through it together.
Method 1: compute the accuracy directly inside the epoch loop
Introduction: this code is excerpted from a LeNet5 training script.
import torch
import torch.nn as nn

def train_model(model, train_loader):
    optimizer = torch.optim.Adam(model.parameters())
    loss_func = nn.CrossEntropyLoss()
    EPOCHS = 5
    for epoch in range(EPOCHS):
        correct = 0
        for batch_idx, (X_batch, y_batch) in enumerate(train_loader):
            optimizer.zero_grad()
            # Q: how are X_batch and y_batch separated?
            # A: the DataLoader yields them as matched pairs; only the sample
            #    order may be shuffled (see torch.utils.data.ipynb)
            output = model(X_batch.float())  # .float() casts the inputs to float32
            loss = loss_func(output, y_batch)
            loss.backward()
            optimizer.step()
            # Total correct predictions:
            # the 1 means "max over each row" (dim=1), and [1] keeps only the
            # indices of the maxima, i.e. the predicted classes
            predicted = torch.max(output.data, 1)[1]
            correct += (predicted == y_batch).sum()
            # print(correct)
            if batch_idx % 100 == 0:
                print('Epoch: {} [{}/{} ({:.0f}%)]  Loss: {:.6f}  Accuracy: {:.3f}%'.format(
                    epoch, batch_idx * len(X_batch), len(train_loader.dataset),
                    100. * batch_idx / len(train_loader), loss.data.item(),
                    float(correct * 100) / (float(BATCH_SIZE) * (batch_idx + 1))))

if __name__ == '__main__':
    myModel = LeNet5()
    print(myModel)
    train_model(myModel, train_loader)
    evaluate(myModel, test_loader, BATCH_SIZE)
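The evaluate function called under __main__ is not included in this excerpt. Below is a minimal sketch of what it might look like; only the name and call signature come from the code above, and the body is an assumption that mirrors the training loop's accuracy logic:

def evaluate(model, test_loader, BATCH_SIZE):
    # hypothetical implementation; reuses the same argmax-based accuracy as train_model
    model.eval()
    correct = 0
    with torch.no_grad():
        for X_batch, y_batch in test_loader:
            output = model(X_batch.float())
            predicted = torch.max(output.data, 1)[1]
            correct += (predicted == y_batch).sum().item()
    print('Test Accuracy: {:.3f}%'.format(100. * correct / len(test_loader.dataset)))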
Method 2: build a function, then call it inside the epoch loop
Introduction: this code is excerpted from an analysis of the Titanic dataset.
import datetime
import pandas as pd
import torch

epochs = 10
log_step_freq = 30
dfhistory = pd.DataFrame(columns=['epoch', 'loss', metric_name, 'val_loss', 'val_' + metric_name])
print('Start Training...')
nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
print('=' * 8 + '%s' % nowtime)

for epoch in range(1, epochs + 1):
    # 1. training loop
    net.train()
    loss_sum = 0.0
    metric_sum = 0.0
    step = 1
    for step, (features, labels) in enumerate(dl_train, 1):
        # zero the gradients
        optimizer.zero_grad()
        # forward pass and loss
        predictions = net(features)
        loss = loss_func(predictions, labels)
        metric = metric_func(predictions, labels)
        # backward pass and parameter update
        loss.backward()
        optimizer.step()
        # print batch-level log
        loss_sum += loss.item()
        metric_sum += metric.item()
        if step % log_step_freq == 0:
            print(('[Step = %d] loss: %.3f, ' + metric_name + ': %.3f%%')
                  % (step, loss_sum / step, 100 * metric_sum / step))

    # 2. validation loop
    net.eval()
    val_loss_sum = 0.0
    val_metric_sum = 0.0
    val_step = 1
    for val_step, (features, labels) in enumerate(dl_valid, 1):
        # turn off gradient calculation
        with torch.no_grad():
            pred = net(features)
            val_loss = loss_func(pred, labels)
            val_metric = metric_func(pred, labels)
        val_loss_sum += val_loss.item()
        val_metric_sum += val_metric.item()

    # 3. logging
    info = (epoch, loss_sum / step, 100 * metric_sum / step,
            val_loss_sum / val_step, 100 * val_metric_sum / val_step)
    dfhistory.loc[epoch - 1] = info

    # print epoch-level log
    print(('EPOCH = %d, loss = %.3f, ' + metric_name + ' = %.3f%%, '
           'val_loss = %.3f, val_' + metric_name + ' = %.3f%%') % info)
    nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    print('\n' + '=' * 8 + '%s' % nowtime)

print('Finished Training...')
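The loop above assumes that net, optimizer, loss_func, metric_func, metric_name, dl_train, and dl_valid were defined beforehand. As a sketch of the metric pieces, assuming a binary classifier with sigmoid outputs as in the Titanic example (the names match the loop above, but the bodies are assumptions):

import torch

metric_name = 'accuracy'

def metric_func(predictions, labels):
    # hypothetical binary accuracy: threshold sigmoid outputs at 0.5
    pred_classes = (predictions > 0.5).float()
    return (pred_classes == labels).float().mean()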
Addendum: implementing Top-1 and Top-5 accuracy in PyTorch
What Top-1 and Top-5 mean may not have been clear before, but it is actually simple: they are two measures of accuracy. Top-1 is the ordinary accuracy, while Top-5 is a more lenient measure than Top-1, since the true class only needs to appear among the five highest-scoring predictions.
Concretely, suppose there are 10 classes, and for each sample the classifier outputs 10 probability values that sum to 1. Top-1 accuracy is the frequency with which the class with the largest probability is the correct one. Top-5 accuracy sorts the probabilities from largest to smallest, takes the first five, checks whether the correct class appears among those five, and computes the frequency of that.
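As a quick illustration with made-up probabilities for a single sample of a 10-class problem, the Top-1 prediction can miss while the true class still appears among the top five:

import torch

probs = torch.tensor([[0.02, 0.05, 0.40, 0.10, 0.08, 0.12, 0.06, 0.07, 0.05, 0.05]])
label = torch.tensor([5])  # hypothetical true class

top1 = probs.argmax(dim=1)           # tensor([2]): Top-1 is wrong
top5 = probs.topk(5, dim=1).indices  # contains class 5: Top-5 counts it as correct
print((top1 == label).item())                         # False
print((label.view(-1, 1) == top5).any(dim=1).item())  # True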
The PyTorch implementation is as follows (device is assumed to be defined elsewhere):

def evaluteTop1(model, loader):
    model.eval()
    correct = 0
    total = len(loader.dataset)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        with torch.no_grad():
            logits = model(x)
            pred = logits.argmax(dim=1)
            correct += torch.eq(pred, y).sum().float().item()
            # correct += torch.eq(pred, y).sum().item()
    return correct / total

def evaluteTop5(model, loader):
    model.eval()
    correct = 0
    total = len(loader.dataset)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        with torch.no_grad():
            logits = model(x)
            maxk = max((1, 5))
            y_resize = y.view(-1, 1)
            _, pred = logits.topk(maxk, 1, True, True)
            correct += torch.eq(pred, y_resize).sum().float().item()
    return correct / total
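Assuming model, test_loader, and device are defined elsewhere, the two functions would be called like this:

top1_acc = evaluteTop1(model, test_loader)
top5_acc = evaluteTop5(model, test_loader)
print('Top-1: %.4f, Top-5: %.4f' % (top1_acc, top5_acc))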
Note:
y_resize = y.view(-1, 1) is the critical step. For the comparison that accumulates correct, pred and y_resize must have broadcastable shapes. The original y has shape [128], where 128 is the batch size, while the pred returned by topk has shape [128, 5] (the top-5 indices; for CIFAR-10 the logits themselves are [128, 10]). So y must be reshaped to [128, 1]. Do not write y.view(128, 1) directly, though: when traversing the whole dataset, the last batch is usually smaller than 128, so the first size in view() is set to -1 (inferred) and only the second size is fixed to 1. A standalone check of this broadcasting behavior is sketched below.
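A small standalone check of this broadcasting behavior, with made-up top-5 indices and labels:

import torch

pred = torch.tensor([[2, 5, 3, 4, 7],
                     [1, 0, 6, 2, 9]])        # shape [2, 5]: top-5 class indices per sample
y = torch.tensor([5, 3])                      # shape [2]
y_resize = y.view(-1, 1)                      # shape [2, 1]; broadcasts against pred
print(torch.eq(pred, y_resize).sum().item())  # 1: only the first label is in its top-5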
Supplement: usage of the topk function
torch.topk(input, k, dim=None, largest=True, sorted=True, out=None) -> (Tensor, LongTensor)
Returns the k largest values of the input tensor input along the given dimension dim.
If dim is not specified, the last dimension of input is used by default.
If largest is False, the k smallest values are returned instead.
Returns a tuple (values, indices), where indices holds the positions of the returned elements in the original input tensor input.
Setting the boolean sorted to True guarantees that the k returned values are themselves sorted.
Parameters:
input (Tensor) - the input tensor
k (int) - the "k" in "top-k"
dim (int, optional) - the dimension to sort along
largest (bool, optional) - controls whether the largest or smallest elements are returned
sorted (bool, optional) - controls whether the returned values are sorted
out (tuple, optional) - optional output buffer of (Tensor, LongTensor)
Example
Suppose the output of a neural network, with two classes and batch_size=4, is as follows:
import torch

output = torch.tensor([[-5.4783,  0.2298],
                       [-4.2573, -0.4794],
                       [-0.1070, -5.1511],
                       [-0.1785, -4.3339]])
The operation to get its Top-1 value is as follows:
maxk = max((1,))  # take Top-1 accuracy; for both Top-1 and Top-5, change to max((1, 5))
_, pred = output.topk(maxk, 1, True, True)
Among the topk arguments, maxk is the number of top values to take (1 for Top-1), dim=1 means the values are taken along each row, and largest=True means the maximum values are returned.
The results are as follows:

_
tensor([[ 0.2298],
        [-0.4794],
        [-0.1070],
        [-0.1785]])

pred
tensor([[1],
        [1],
        [0],
        [0]])
_ holds the Top-1 values, and pred holds the indices of the maxima (size 4x1); pred is usually transposed before being compared with the ground-truth labels.
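As a quick check of the largest flag, running topk with largest=False on the same output tensor returns the smallest values instead:

vals, idx = output.topk(1, dim=1, largest=False)
# vals: tensor([[-5.4783], [-4.2573], [-5.1511], [-4.3339]])
# idx:  tensor([[0], [0], [1], [1]])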
After reading this article, I believe you have a solid understanding of how to compute model accuracy in PyTorch. Thank you for reading!