
MIE1624: Introduction to Data Science and Analytics
Tutorial – Evaluation of Binary Classifiers

(U of T) MIE1624 – Tutorials


Evaluation of Binary Classifiers
• Binary Classifier: an algorithm that categorizes the elements of a given set into two disjoint pre-defined groups.
◦ The two categories are dichotomous, and each element of the given set is labeled “positive” or “negative”.
• Classification: the output of a classifier on a given set
◦ i.e. the number of “positives” and the number of “negatives”.
• Prevalence: how often a classification category occurs in the population.
• Example: In sentiment analysis, Twitter data is divided (classified) into “positive” and “negative” tweets.

Confusion Matrix
• A confusion matrix is a table that is often used to describe the performance of a classification model (i.e. classifier) on a set of test data for which the true values are known.
• Example confusion matrix for binary classification (predicting presence of disease)
                Actual: NO    Actual: YES
Predicted: NO       TN            FN
Predicted: YES      FP            TP
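As a sketch, a confusion matrix like the one above can be computed with scikit-learn; the labels below are hypothetical disease-screening data, not from the slides:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical labels: 1 = disease present ("YES"), 0 = absent ("NO")
y_true = [0, 0, 0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 0, 1, 0, 1, 0, 1, 0]

# For binary labels [0, 1], the matrix is [[TN, FP], [FN, TP]],
# so ravel() unpacks the four counts in that order.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # 5 1 1 3
```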

True/False Positives
• True positives (TP): the elements in the given set that are “yes” and are correctly identified by the classifier as “yes”.
• False negatives (FN): the elements that are “yes”, but are incorrectly classified as “no”. (i.e. Type II error)
• Condition Positive (CP): TP + FN
                Actual: NO    Actual: YES
Predicted: NO       TN            FN
Predicted: YES      FP            TP

True/False Negatives
• True negatives (TN): the items that are “no” and correctly identified as such by the algorithm.
• False positives (FP): the items that are “no” but incorrectly classified as “yes”. (i.e. Type I error)
• Condition Negative (CN): TN + FP.
                Actual: NO    Actual: YES
Predicted: NO       TN            FN
Predicted: YES      FP            TP

(1) Accuracy
                Actual: NO    Actual: YES
Predicted: NO       TN            FN
Predicted: YES      FP            TP
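The formula appears only as an image on the original slide; the standard definition, in terms of the confusion-matrix counts above, is:

```latex
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
```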

(2) Sensitivity / Recall / True Positive Rate (TPR)
                Actual: NO    Actual: YES
Predicted: NO       TN            FN
Predicted: YES      FP            TP
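The formula appears only as an image on the original slide; the standard definition is the fraction of actual positives (CP) that the classifier catches:

```latex
\text{Sensitivity (TPR)} = \frac{TP}{TP + FN} = \frac{TP}{CP}
```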

(3) Specificity / True Negative Rate (TNR)
                Actual: NO    Actual: YES
Predicted: NO       TN            FN
Predicted: YES      FP            TP
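The formula appears only as an image on the original slide; the standard definition is the fraction of actual negatives (CN) that the classifier correctly rejects:

```latex
\text{Specificity (TNR)} = \frac{TN}{TN + FP} = \frac{TN}{CN}
```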

Positive and Negative Predictive Values
                Actual: NO    Actual: YES
Predicted: NO       TN            FN
Predicted: YES      FP            TP
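The formulas appear only as images on the original slide; the standard definitions condition on the prediction rather than the actual class (PPV is also called precision):

```latex
\text{PPV (Precision)} = \frac{TP}{TP + FP}, \qquad \text{NPV} = \frac{TN}{TN + FN}
```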

Confusion Matrix

Receiver Operating Characteristic (ROC)
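The ROC figure on the original slide is not reproduced here. The ROC curve plots TPR against FPR as the decision threshold varies, and the area under it (AUC) summarizes the curve in one number. A minimal sketch with scikit-learn, using illustrative scores (the same toy data as the scikit-learn documentation example):

```python
from sklearn.metrics import roc_curve, roc_auc_score

# Illustrative true labels and classifier scores
y_true  = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]

# fpr/tpr trace the ROC curve; each point corresponds to one threshold
fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)
print(round(auc, 2))  # 0.75
```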

F-measure / F-score
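The formula on the original slide is not reproduced here. The F1 score is the harmonic mean of precision and recall, F1 = 2PR / (P + R); a small sketch with hypothetical counts:

```python
# Hypothetical counts: TP = 3, FP = 1, FN = 1
precision = 3 / (3 + 1)   # TP / (TP + FP)
recall    = 3 / (3 + 1)   # TP / (TP + FN)

# F1 is the harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # 0.75
```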


Choosing the right evaluation metric
• The choice depends on your model’s objective
• Example 1 – “Spam Filter”
• Positive class is “spam”
• We want to optimize for precision or specificity. Why?
• FN (spam goes to inbox) are more acceptable than FP (non-spam email is removed)
• Example 2 – “Fraudulent Transaction Detector”
• Positive class is “fraud”
• We optimize for sensitivity. Why?
• FP (normal transactions that are flagged as fraud) are more acceptable than FN (fraud transactions that are not caught)

• Accuracy, Precision/Recall, Sensitivity/Specificity, F-measure, etc. suffer from the following problems:
◦ The performance results are summarized into one or two numbers, so important information is lost.
◦ They do not always apply to multi-class domains.
◦ They do not aggregate well when the performance of the classifier is considered over multiple domains.

Precision and Recall for multi-class problems
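The formulas on the original slide are not reproduced here. In the multi-class setting, precision and recall are computed per class and then averaged; "macro" averaging takes the unweighted mean of per-class scores, while "micro" averaging pools the TP/FP/FN counts over all classes. A sketch with hypothetical 3-class labels:

```python
from sklearn.metrics import precision_score, recall_score

# Hypothetical 3-class labels (classes 0, 1, 2)
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

# macro: unweighted mean of per-class precisions/recalls
p_macro = precision_score(y_true, y_pred, average="macro")
r_macro = recall_score(y_true, y_pred, average="macro")

# micro: global counts pooled over all classes
p_micro = precision_score(y_true, y_pred, average="micro")

print(round(p_macro, 4), round(r_macro, 4), round(p_micro, 4))
```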

10-class confusion matrix?
• https://scikit-learn.org/stable/auto_examples/classification/plot_digi ts_classification.html
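A sketch in the spirit of the linked scikit-learn digits example (the train/test split and classifier settings below are illustrative, not necessarily the exact tutorial code):

```python
from sklearn import datasets, svm
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

digits = datasets.load_digits()
X = digits.images.reshape((len(digits.images), -1))  # flatten 8x8 images to 64 features

X_train, X_test, y_train, y_test = train_test_split(
    X, digits.target, test_size=0.5, shuffle=False)

clf = svm.SVC(gamma=0.001).fit(X_train, y_train)
cm = confusion_matrix(y_test, clf.predict(X_test))
print(cm.shape)  # (10, 10) -- one row/column per digit class
```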
