
Updated: Jul 20, 2020

When we get the data, after cleaning, pre-processing, and wrangling it, the first step is to feed it to an outstanding model and, of course, get output in probabilities. But hold on! How in the hell can we measure the effectiveness of our model? The better the effectiveness, the better the performance, and that is exactly what we want. This is where the confusion matrix comes into the limelight. The confusion matrix is a performance measurement for machine learning classification.

This blog aims to answer the following questions:

1. What is the confusion matrix, and why do you need it?

2. How do you calculate the confusion matrix for a 2-class classification problem?

Today, let’s understand the confusion matrix once and for all.

What is the Confusion Matrix, and why do you need it?

Well, it is a performance measurement for machine learning classification problems where the output can be two or more classes. For a 2-class problem, it is a table with 4 different combinations of predicted and actual values.
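One common way to lay out the table, with actual values as rows and predicted values as columns:

```
                  Predicted Positive     Predicted Negative
Actual Positive   True Positive (TP)     False Negative (FN)
Actual Negative   False Positive (FP)    True Negative (TN)
```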

It is extremely useful for measuring Recall, Precision, Specificity, Accuracy, and most importantly AUC-ROC Curve.

Let’s understand TP, FP, FN, and TN with a pregnancy analogy.

True Positive:

Interpretation: You predicted positive and it’s true.

You predicted that a woman is pregnant and she actually is.

True Negative:

Interpretation: You predicted negative and it’s true.

You predicted that a man is not pregnant and he actually is not.

False Positive: (Type 1 Error)

Interpretation: You predicted positive and it’s false.

You predicted that a man is pregnant but he actually is not.

False Negative: (Type 2 Error)

Interpretation: You predicted negative and it’s false.

You predicted that a woman is not pregnant but she actually is.

Just remember: Positive and Negative describe the predicted value, while True and False describe whether that prediction matches the actual value.
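To make the four outcomes concrete, here is a minimal Python sketch (not from the original post) that counts them for binary labels, assuming 1 means positive and 0 means negative:

```python
# Minimal sketch: count TP, TN, FP, FN for binary labels,
# assuming 1 = positive class and 0 = negative class.

def confusion_counts(y_true, y_pred):
    tp = tn = fp = fn = 0
    for actual, predicted in zip(y_true, y_pred):
        if predicted == 1 and actual == 1:
            tp += 1  # predicted positive, and it's true
        elif predicted == 0 and actual == 0:
            tn += 1  # predicted negative, and it's true
        elif predicted == 1 and actual == 0:
            fp += 1  # predicted positive, but it's false (Type 1 error)
        else:
            fn += 1  # predicted negative, but it's false (Type 2 error)
    return tp, tn, fp, fn

# Made-up labels for illustration:
y_true = [1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1]
print(confusion_counts(y_true, y_pred))  # (3, 1, 1, 2)
```

These made-up counts (TP = 3, TN = 1, FP = 1, FN = 2, over 7 predictions) are reused in the formulas below.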

How to Calculate the Confusion Matrix for a 2-class classification problem?

Let’s understand the confusion matrix through math.

Recall = TP/(TP+FN)

Out of all the actual positive classes, how many did we predict correctly? This should be as high as possible.

Precision = TP/(TP+FP)

Out of all the classes we predicted as positive, how many are actually positive?
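Plugging in the made-up counts from the sketch above (TP = 3, TN = 1, FP = 1, FN = 2):

Recall = 3/(3+2) = 0.6

Precision = 3/(3+1) = 0.75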

Accuracy = (TP+TN)/(TP+TN+FP+FN)

Out of all the classes, how many did we predict correctly? With the counts above, that is (3+1)/7 = 4/7. It should be as high as possible.

F-measure = 2*Recall*Precision/(Recall+Precision)

It is difficult to compare two models when one has low precision and high recall, or vice versa. So to make them comparable, we use the F-score. The F-score measures Recall and Precision at the same time, using the harmonic mean in place of the arithmetic mean because it punishes extreme values more.
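With the counts above, F-measure = 2*0.6*0.75/(0.6+0.75) ≈ 0.67. To see why the harmonic mean matters, take illustrative numbers (not from the original post): with Precision = 1.0 and Recall = 0.01, the arithmetic mean is about 0.5, which looks respectable, while the F-measure is 2*1.0*0.01/(1.0+0.01) ≈ 0.02, correctly flagging a nearly useless model.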

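As a hedged sketch (function names and example numbers are illustrative, not from the original post), the metrics above can be computed directly from the four counts:

```python
# Illustrative sketch: compute Recall, Precision, Accuracy, and
# F-measure from the four confusion-matrix counts.

def recall(tp, fn):
    return tp / (tp + fn)

def precision(tp, fp):
    return tp / (tp + fp)

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def f_measure(tp, fp, fn):
    r, p = recall(tp, fn), precision(tp, fp)
    return 2 * r * p / (r + p)

# Using the made-up counts from earlier (TP=3, TN=1, FP=1, FN=2):
tp, tn, fp, fn = 3, 1, 1, 2
print(f"Recall    = {recall(tp, fn):.2f}")            # 0.60
print(f"Precision = {precision(tp, fp):.2f}")         # 0.75
print(f"Accuracy  = {accuracy(tp, tn, fp, fn):.2f}")  # 0.57 (i.e. 4/7)
print(f"F-measure = {f_measure(tp, fp, fn):.2f}")     # 0.67
```

In practice, scikit-learn's sklearn.metrics module provides confusion_matrix, recall_score, precision_score, accuracy_score, and f1_score for the same computations.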