
What are the evaluation metrics for classification algorithms?


This article introduces the evaluation metrics commonly used for classification algorithms. These are situations many people run into in practice, so the examples below walk through how to handle them; I hope you read carefully and come away with something useful.

The commonly used evaluation metrics for classification (Classification) algorithms mainly include accuracy (Accuracy), precision and recall, and the ROC curve with its AUC.

Classification is an important class of problems in machine learning, and many important algorithms are devoted to solving it, such as decision trees, support vector machines, and so on.

Common classification models include logistic regression, decision trees, naive Bayes, SVM, neural networks, and so on. The evaluation metrics for these models include the following:

TPR, FPR & TNR (confusion matrix)

What is a confusion matrix (Confusion matrix)? The name is apt: beginners are easily confused by it. The confusion matrix tabulates predicted classes against actual classes, and a number of well-known evaluation metrics are derived from it.

In a binary classification problem, each instance is either positive (positive) or negative (negative), which gives four possible outcomes. If an instance is positive and is predicted as positive, it is a true positive (True Positive). If an instance is negative but is predicted as positive, it is a false positive (False Positive). Correspondingly, if an instance is negative and is predicted as negative, it is a true negative (True Negative), and a positive instance predicted as negative is a false negative (False Negative).

True Positive (TP): a positive sample correctly predicted as positive by the model.

True Negative (TN): a negative sample correctly predicted as negative by the model.

False Positive (FP): a negative sample incorrectly predicted as positive by the model (a false alarm).

False Negative (FN): a positive sample incorrectly predicted as negative by the model (a miss).
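
As a minimal sketch (the labels below are made up for illustration, not taken from the article), the four counts can be tallied by hand in Python:

# Tallying TP, TN, FP, FN by hand for a toy set of binary labels
y_true = [1, 1, 0, 0, 1, 0, 1, 0]   # actual classes (1 = positive, 0 = negative)
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]   # predicted classes

TP = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # positive predicted positive
TN = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # negative predicted negative
FP = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # negative predicted positive
FN = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # positive predicted negative

print(TP, TN, FP, FN)  # 3 3 1 1 for the labels above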

Evaluation metrics

True Positive Rate (TPR), also known as sensitivity:

TPR = TP / (TP + FN)

= number of positive samples correctly predicted as positive / total number of actual positive samples

True Negative Rate (TNR), also known as specificity:

TNR = TN / (TN + FP)

= number of negative samples correctly predicted as negative / total number of actual negative samples

False Positive Rate (FPR):

FPR = FP / (FP + TN)

= number of negative samples incorrectly predicted as positive / total number of actual negative samples

False Negative Rate (FNR):

FNR = FN / (TP + FN)

= number of positive samples incorrectly predicted as negative / total number of actual positive samples
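
Continuing the sketch above with assumed counts, the four rates follow directly from these formulas:

# Computing the four rates from assumed counts (the values from the sketch above)
TP, TN, FP, FN = 3, 3, 1, 1

TPR = TP / (TP + FN)   # true positive rate (sensitivity)
TNR = TN / (TN + FP)   # true negative rate (specificity)
FPR = FP / (FP + TN)   # false positive rate
FNR = FN / (TP + FN)   # false negative rate

print(TPR, TNR, FPR, FNR)  # 0.75 0.75 0.25 0.25; note TPR + FNR = 1 and TNR + FPR = 1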

Precision:

P = TP / (TP + FP); it reflects the proportion of true positives among all instances the classifier labels as positive.

Accuracy (Accuracy):

A = (TP + TN) / (P + N) = (TP + TN) / (TP + FN + FP + TN)

It reflects the classifier's ability to judge the whole sample set: positives are judged as positive and negatives as negative.

Recall (Recall), also known as the True Positive Rate:

R = TP / (TP + FN) = 1 - FN / (TP + FN); it reflects the proportion of actual positive samples that are correctly identified.

from sklearn.metrics import confusion_matrix

# y_pred: predicted labels, y_true: actual labels
# (the original example values were garbled in transcription; these are illustrative)
y_pred = [1, 0, 1, 0]
y_true = [1, 1, 0, 0]
confusion_matrix(y_true=y_true, y_pred=y_pred)

Precision, Recall and the F1 value

Precision and recall are two measures widely used in information retrieval and statistical classification to evaluate the quality of results. Precision is the ratio of the number of relevant documents retrieved to the total number of documents retrieved, and measures how precise the retrieval system is; recall is the ratio of the number of relevant documents retrieved to the number of all relevant documents in the document library, and measures how complete the retrieval is.

Generally speaking, Precision measures how many of the retrieved items (such as documents, web pages, etc.) are correct, and Recall measures how many of the correct items were retrieved. The two are defined as follows:

Precision = number of correct items extracted / total number of items extracted

Recall = number of correct items extracted / total number of correct items in the sample

Comprehensive evaluation metric: the F-measure

Precision and Recall sometimes conflict, so they need to be considered together. The most common approach is the F1 value, built on Precision and Recall, which evaluates the two as a whole. F1 is defined as follows:

F1 value = precision * recall * 2 / (precision + recall)

F-Measure is the weighted harmonic mean of Precision and Recall:

F = (α² + 1) * P * R / (α² * P + R)

When the parameter α = 1 this reduces to the familiar F1. F1 therefore combines the results of P and R, and a higher F1 indicates a more effective method.
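
As a hedged illustration (the labels below are invented for the example), scikit-learn's f1_score and fbeta_score compute F1 and the general weighted F-measure; beta = 1 reduces to F1:

from sklearn.metrics import f1_score, fbeta_score

# Toy labels, invented for illustration
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

# F1: the harmonic mean of precision and recall (here P = R = 0.75, so F1 = 0.75)
print('F1: %.3f' % f1_score(y_true, y_pred))

# The general weighted F-measure; beta = 1 reduces to F1
print('F-beta: %.3f' % fbeta_score(y_true, y_pred, beta=1.0))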

Application scenarios:

Precision and recall influence each other. Ideally both are high, but in general a higher precision tends to come with a lower recall, and a higher recall with a lower precision; if both are low, something has gone wrong. When both precision and recall are high, F1 is also high, so F1 is a good single measure when both matter.

Earthquake prediction

For earthquake prediction we want recall to be as high as possible: every real earthquake should be predicted. We can sacrifice precision for this. It is better to raise 1000 alarms and catch all 10 real earthquakes than to be right eight times and miss two.

Suspect conviction

Based on the principle of not wrongly convicting an innocent person, we want convictions of suspects to be highly precise. Occasionally letting a criminal off (a lower recall) is a price worth paying.

Let me give you an example:

A pond contains 1400 carp, 300 shrimp and 300 soft-shelled turtles. The goal is to catch carp. We cast a wide net and catch 700 carp, 200 shrimp and 100 soft-shelled turtles. The metrics then come out as follows:

Precision = 700 / (700 + 200 + 100) = 70%

Recall rate = 700 / 1400 = 50%

F1 value = 70% * 50% * 2 / (70% + 50%) = 58.3%

Take a look at how these indicators will change if you catch all the carp, shrimp and soft-shelled turtles in the pond:

Precision = 1400 / (1400 + 300 + 300) = 70%

Recall rate = 1400 / 1400 = 100%

F1 value = 70% * 100% * 2 / (70% + 100%) = 82.35%

As you can see, precision is the proportion of target items among everything that was caught; recall, as the name implies, is the proportion of the target class that was recovered from the area of concern; and the F value combines the two into a single overall indicator.
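
The arithmetic of the pond example can be checked with a few lines of Python (numbers taken from the example above):

# First cast: 700 carp, 200 shrimp, 100 turtles caught; 1400 carp live in the pond
precision = 700 / (700 + 200 + 100)                  # 0.70
recall = 700 / 1400                                  # 0.50
f1 = 2 * precision * recall / (precision + recall)
print('%.1f%% %.1f%% %.1f%%' % (precision * 100, recall * 100, f1 * 100))  # 70.0% 50.0% 58.3%

# Second cast: everything is caught (1400 carp, 300 shrimp, 300 turtles)
precision = 1400 / (1400 + 300 + 300)                # 0.70
recall = 1400 / 1400                                 # 1.00
f1 = 2 * precision * recall / (precision + recall)
print('%.2f%%' % (f1 * 100))                         # 82.35%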

Of course, we would like both the Precision and the Recall of the retrieved results to be as high as possible, but in some cases the two are contradictory. In an extreme case, if we retrieve only a single result and it is correct, Precision is 100% but Recall is very low; if instead we return every result, Recall is 100% but Precision is very low. So in different settings you have to judge whether a higher Precision or a higher Recall is more desirable. For experimental research, a Precision-Recall curve can be drawn to help with the analysis.

Code example:

from sklearn.metrics import precision_score, recall_score, f1_score

# Precision (number of correct items extracted / number of items extracted)
print('Precision: %.3f' % precision_score(y_true=y_test, y_pred=y_pred))
# Recall (number of correct items extracted / number of correct items in the sample)
print('Recall: %.3f' % recall_score(y_true=y_test, y_pred=y_pred))
# F1-score (precision * recall * 2 / (precision + recall))
print('F1: %.3f' % f1_score(y_true=y_test, y_pred=y_pred))

ROC curve and AUC

AUC is a classification metric, and it applies only to binary classification models. AUC stands for Area Under Curve, where the curve in question is the ROC (Receiver Operating Characteristic) curve, translated as "receiver operating characteristic curve". In other words, ROC is a curve and AUC is the area under it.

The ROC curve should deviate from the diagonal reference line as much as possible and stay as close to the upper-left corner as possible.

AUC is the area under the ROC curve. The reference value is 0.5 (random guessing), so AUC should be greater than 0.5, and the larger it is, the better.
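
As a small illustration with invented labels and scores, scikit-learn's roc_auc_score computes the AUC directly; a value above 0.5 beats the random baseline:

from sklearn.metrics import roc_auc_score

# Invented labels and predicted probabilities, for illustration only
y_true = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]

print('AUC: %.3f' % roc_auc_score(y_true, y_score))  # 0.889: comfortably above the 0.5 baseline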

Why introduce ROC curve?

Motivation 1: in a binary classification model the raw output is often continuous. Suppose a threshold of 0.6 has been chosen: instances scoring above it are classified as positive and the rest as negative. If the threshold is lowered to 0.5, more true positives are identified, i.e. the ratio of identified positives to all positives (TPR) rises, but at the same time more negatives are treated as positives, i.e. FPR also rises. The ROC curve visualizes this trade-off and can be used to evaluate a classifier.
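
A rough sketch of this trade-off, with invented scores and a hypothetical tpr_fpr helper: lowering the threshold from 0.6 to 0.5 raises both TPR and FPR:

# Invented scores; tpr_fpr applies a threshold and measures the resulting TPR and FPR
y_true = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.65, 0.55, 0.52, 0.4, 0.7, 0.58, 0.3]

def tpr_fpr(threshold):
    pred = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, y_true))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, y_true))
    return tp / sum(y_true), fp / (len(y_true) - sum(y_true))

print(tpr_fpr(0.6))  # (0.75, 0.0): threshold 0.6
print(tpr_fpr(0.5))  # (1.0, 0.5):  threshold 0.5 finds more positives but raises more false alarms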

Motivation 2: when classes are imbalanced, say 90 positive samples and 10 negative samples, simply classifying every sample as positive already gives 90% accuracy, which is obviously meaningless. For such ill-posed cases, judging an algorithm by Precision and Recall alone is no longer adequate.
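
A quick sketch of this situation (90 positives and 10 negatives, as in the text): a classifier that predicts "positive" for everything scores 90% accuracy but an AUC of only 0.5:

from sklearn.metrics import accuracy_score, roc_auc_score

# 90 positive samples and 10 negative samples, as in the text
y_true = [1] * 90 + [0] * 10

# A degenerate classifier that calls everything positive (same score for every sample)
all_positive = [1] * 100

print('Accuracy: %.2f' % accuracy_score(y_true, all_positive))   # 0.90, which looks impressive
print('AUC: %.2f' % roc_auc_score(y_true, all_positive))         # 0.50, no better than random guessing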

Drawing the ROC curve

import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

# y_test: actual labels, dataset_pred: predicted probability values
fpr, tpr, thresholds = roc_curve(y_test, dataset_pred)
roc_auc = auc(fpr, tpr)  # auc() computes the area under the ROC curve

# Plotting only needs plt.plot(fpr, tpr); roc_auc just records the AUC value for the labels
plt.plot(fpr, tpr, lw=1, label='ROC (area = %0.2f)' % roc_auc)
plt.xlabel("FPR (False Positive Rate)")
plt.ylabel("TPR (True Positive Rate)")
plt.title("Receiver Operating Characteristic, ROC (AUC = %0.2f)" % roc_auc)
plt.show()

ROC (Receiver Operating Characteristic) translates as "receiver operating characteristic curve". The curve is drawn from two quantities: 1 - specificity and sensitivity. 1 - specificity = FPR, the false positive rate; sensitivity is the true positive rate, TPR, which reflects how well the positive class is covered. The curve thus plots 1 - specificity against sensitivity, i.e. cost against benefit: the higher the benefit and the lower the cost, the better the model performs.

In addition, the ROC curve can be used to calculate the "mean average precision", which is the average precision (PPV) obtained when you choose the best result by varying the threshold.

The x-axis is the false positive rate (FPR): the proportion of all negative samples that the classifier wrongly predicts as positive. The y-axis is the true positive rate (TPR): the proportion of all positive samples that the classifier correctly predicts as positive.

To better understand the ROC curve, we use specific examples to illustrate:

Take medical diagnosis as an example: the task is to identify diseased samples. The main goal is to find as many sick patients as possible, i.e. the first indicator, TPR, should be as high as possible. On the other hand, misdiagnosing healthy samples as sick, i.e. the second indicator, FPR, should be as low as possible.

It is easy to see that the two indicators constrain each other. If a doctor is so sensitive to symptoms that the slightest sign is judged as disease, the first indicator will be very high, but the second indicator will rise accordingly. In the most extreme case, he declares every sample sick, so the first indicator reaches 1 and the second indicator is also 1.

Using FPR as the horizontal axis and TPR as the vertical axis, we get the ROC space.

In this space, the point in the upper-left corner (TPR = 1, FPR = 0) is a perfect classifier: the doctor is skilled and every diagnosis is correct. A point A above the diagonal (TPR > FPR) means Doctor A's judgments are mostly correct. A point B on the diagonal (TPR = FPR) means Doctor B is guessing blindly, half right and half wrong. A point C in the lower half plane (TPR < FPR) is worse than random guessing; inverting its predictions would give a classifier better than random.
