This article mainly introduces how to calculate F1-Score and AUC from the outputs of a PyTorch model after training. It has some reference value; interested readers can follow along, and I hope you learn something useful from it.
1. Calculate F1-Score
For binary classification, assuming a batch size of 64 and two output classes, the model's output for one batch has shape torch.Size([64, 2]). The first step is therefore to take the index of the maximum value in each row of this two-dimensional matrix, collect those predictions and the corresponding labels in two lists, and finally compute F1 with the f1_score function from sklearn. The code is as follows:
import numpy as np
from sklearn.metrics import f1_score
from tqdm import tqdm

prob_all = []
label_all = []
for data, label in tqdm(train_data_loader):
    prob = model(data)  # the model's predicted output for the batch, shape [batch_size, 2]
    prob = prob.detach().cpu().numpy()  # detach, move to CPU, then convert to numpy
                                        # (if you train on the CPU, the .cpu() call is a no-op)
    prob_all.extend(np.argmax(prob, axis=1))  # index of the maximum value in each row, i.e. the predicted class
    label_all.extend(label)

print("F1-Score:{:.4f}".format(f1_score(label_all, prob_all)))
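As a quick, self-contained sanity check (with made-up labels and predictions instead of a real model and DataLoader), f1_score can be called directly on two plain lists:

from sklearn.metrics import f1_score

# hypothetical ground-truth labels and argmax predictions for 8 samples
label_all = [0, 1, 1, 0, 1, 0, 1, 1]
prob_all = [0, 1, 0, 0, 1, 1, 1, 1]

# 4 true positives, 1 false positive, 1 false negative:
# precision = 4/5, recall = 4/5, so F1 = 0.8000
print("F1-Score:{:.4f}".format(f1_score(label_all, prob_all)))

By default f1_score scores the positive class (label 1), which matches the binary setup used here; multi-class F1 needs an explicit average argument.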
2. Calculate AUC

To calculate AUC, the roc_auc_score() function from sklearn is used.
Input parameters:
y_true: the ground-truth labels, with shape (n_samples,) or (n_samples, n_classes). For binary classification the shape is (n_samples,); for the multi-label case it is (n_samples, n_classes).
y_score: the target scores, with shape (n_samples,) or (n_samples, n_classes). For binary classification the shape is (n_samples,), and "the score must be the score of the class with the greater label". In plain terms, this is the second column of the model's scores. For example, if the model's score for one sample is the array [0.98361117, 0.01638886], where the index is the class, then "the score of the class with the greater label" is the score at index 1, namely 0.01638886, i.e. the predicted score of the positive class.
average='macro': for binary classification this parameter can be ignored. For multi-class problems, 'micro' computes the metric globally by treating each element of the label indicator matrix as a label; 'macro' computes the metric for each label and takes their unweighted mean, which does not take label imbalance into account; 'weighted' computes the metric for each label and takes their mean weighted by support (the number of true instances for each label).
sample_weight=None: sample weights, with shape (n_samples,); default None.
max_fpr=None:
multi_class='raise': (the multi-class case will be explained in the next article)
labels=None:
Output:
auc: a float value.
from sklearn.metrics import roc_auc_score
from tqdm import tqdm

prob_all = []
label_all = []
for data, label in tqdm(train_data_loader):
    prob = model(data)  # the model's predicted output for the batch, shape [batch_size, 2]
    prob_all.extend(prob[:, 1].detach().cpu().numpy())  # prob[:, 1] is the second column of each row:
                                                        # y_score expects the score of the class with the
                                                        # greater label, i.e. the value at index 1,
                                                        # not the argmax index itself
    label_all.extend(label)

print("AUC:{:.4f}".format(roc_auc_score(label_all, prob_all)))
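To see the "score of the class with the greater label" rule on concrete numbers, here is a minimal self-contained sketch with made-up two-column softmax outputs (no model or DataLoader involved):

import numpy as np
from sklearn.metrics import roc_auc_score

# hypothetical softmax outputs for 4 samples: column 0 = negative class, column 1 = positive class
prob = np.array([[0.90, 0.10],
                 [0.60, 0.40],
                 [0.65, 0.35],
                 [0.20, 0.80]])
label = np.array([0, 0, 1, 1])

# pass only the positive-class column as y_score
print("AUC:{:.4f}".format(roc_auc_score(label, prob[:, 1])))  # 0.7500

Three of the four positive/negative score pairs are ranked correctly (0.35 < 0.40 is the only inversion), so the AUC is 3/4 = 0.75.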
Addendum: some pitfalls when training models with PyTorch
1. Image reading
Images read with OpenCV from Python and from C++ can differ, because the Python and C++ sides may use different OpenCV versions and therefore different decoding libraries, which leads to different decoded results.
2. Image transformation
The resize operations of PIL (as used by PyTorch/torchvision) and of OpenCV produce different results. Using PIL during training but OpenCV during prediction can therefore lead to noticeably different outputs, especially in detection and segmentation tasks.
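A minimal sketch of this mismatch (assuming Pillow and opencv-python are installed; the 8x8 test image is made up) compares bilinear downscaling of the same array in the two libraries:

import numpy as np
import cv2
from PIL import Image

# a hypothetical 8x8 grayscale image with a hard vertical edge
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 255

# resize to 4x4 with bilinear interpolation in both libraries
pil_out = np.array(Image.fromarray(img).resize((4, 4), Image.BILINEAR))
cv_out = cv2.resize(img, (4, 4), interpolation=cv2.INTER_LINEAR)

# the element-wise results generally differ, so stick to one library for both training and inference
print(np.array_equal(pil_out, cv_out))
print(np.abs(pil_out.astype(int) - cv_out.astype(int)).max())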
3. Numerical calculation
PyTorch's torch.exp and the exp function in C++ can disagree: an input difference on the order of 1e-6 to 1e-5 can become an output difference on the order of 1e-3, which deserves special attention in high-precision calculations. For example, the two inputs 5.601597 and 5.601601 become 270.85862343143174 and 270.85970686809225 after exp.
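The size of that gap follows directly from the derivative of the exponential: around x ≈ 5.6, exp(x) ≈ 271, so an input difference of about 4e-6 is amplified into an output difference of about 1e-3. A minimal sketch reproducing the two numbers above (in double precision):

import torch

a = torch.tensor(5.601597, dtype=torch.float64)
b = torch.tensor(5.601601, dtype=torch.float64)
print(torch.exp(a).item())                   # ~270.8586...
print(torch.exp(b).item())                   # ~270.8597...
print((torch.exp(b) - torch.exp(a)).item())  # ~1.08e-3 from an input difference of 4e-6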