Neural Narrator
Jun 18, 2024
How can we evaluate our model after training it?
#ModelEvaluation #Series #1
Typically, in any classification task, your model can only achieve one of two results for each prediction: correct or incorrect.
Fortunately, this correct vs. incorrect framing extends to situations where you have multiple classes. It does not matter whether you are choosing between 2 classes or 8 different categories: for any single prediction, your model is fundamentally either right or wrong.
For example, in binary classification we might predict spam vs. ham (i.e., a legitimate message).
Since this is classification, it's supervised learning, so we will train the model with around 70% of the data set. Then we will test the model with our testing data (the remaining 30%).
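The 70/30 split above can be sketched in plain Python. This is a minimal illustration, not a specific library's API; the `messages` and `labels` here are a hypothetical toy data set, and the function name `train_test_split` is just a descriptive choice.

```python
import random

def train_test_split(data, labels, train_fraction=0.7, seed=0):
    """Shuffle indices, then split into ~70% training and ~30% testing."""
    indices = list(range(len(data)))
    random.Random(seed).shuffle(indices)  # fixed seed for reproducibility
    cutoff = int(len(indices) * train_fraction)
    train_idx, test_idx = indices[:cutoff], indices[cutoff:]
    return ([data[i] for i in train_idx], [labels[i] for i in train_idx],
            [data[i] for i in test_idx],  [labels[i] for i in test_idx])

# Hypothetical toy data set of 10 labeled messages
messages = [f"message {i}" for i in range(10)]
labels = ["spam", "ham"] * 5
X_train, y_train, X_test, y_test = train_test_split(messages, labels)
```

Shuffling before splitting matters: if the data set is ordered (e.g., all spam first), an unshuffled split would give the model an unrepresentative training set.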
Note: raw text must first be converted into numerical information, i.e., vectorization.
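One simple form of vectorization is a bag-of-words count vector. The sketch below is an assumption about the approach, not the post's actual pipeline: it builds a vocabulary from the messages and counts word occurrences per message.

```python
def vectorize(messages):
    """Convert raw text into numerical bag-of-words count vectors."""
    # Build a sorted vocabulary of all words seen across the messages
    vocab = sorted({word for msg in messages for word in msg.lower().split()})
    index = {word: i for i, word in enumerate(vocab)}
    vectors = []
    for msg in messages:
        counts = [0] * len(vocab)
        for word in msg.lower().split():
            counts[index[word]] += 1  # count occurrences of each word
        vectors.append(counts)
    return vocab, vectors

vocab, vectors = vectorize(["free cash now", "meeting now"])
# vocab is ['cash', 'free', 'meeting', 'now'];
# "free cash now" becomes [1, 1, 0, 1]
```

Real projects typically use a library vectorizer (with handling for punctuation, rare words, etc.), but the idea is the same: text in, numbers out.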
Once we train our model, let's see the output.
Model Prediction
At the end we have a count of correct matches and a count of incorrect matches.
Important: here is the most fundamental part. In the real world, not all correct or incorrect matches hold equal value!
This is why we have various classification metrics. It's not enough to know that you got a particular count of correct predictions vs. a particular count of incorrect ones; there are various ratios we need to take into account.
A single metric won't tell a complete story.
Key Classification Metrics:
Often you have a trade-off between Recall and Precision.
Recall expresses the model's ability to find all relevant instances in a data set.
Precision expresses the proportion of the data points our model flagged as relevant that actually were relevant.
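These two definitions translate directly into ratios of true positives (TP), false positives (FP), and false negatives (FN): precision = TP / (TP + FP), recall = TP / (TP + FN). A minimal sketch, treating "spam" as the positive class (the example labels are hypothetical):

```python
def precision_recall(y_true, y_pred, positive="spam"):
    """Compute precision and recall for the chosen positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of flagged, how many were right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of relevant, how many were found
    return precision, recall

precision, recall = precision_recall(
    ["spam", "spam", "ham", "ham", "spam"],
    ["spam", "ham", "spam", "ham", "spam"])
# precision = 2/3, recall = 2/3
```

The trade-off shows up when you tune the model: flagging more messages as spam tends to raise recall (you miss fewer spam messages) but lower precision (more legitimate messages get flagged).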
Precision and Recall typically make more sense in the context of a confusion matrix.
We can organize our predicted values compared to the real values in a confusion matrix, which we will discuss in a future part of this series.
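As a preview, a binary confusion matrix is just the four counts TP, FN, FP, TN arranged in a 2x2 grid (rows = actual class, columns = predicted class). A minimal sketch with hypothetical labels:

```python
def confusion_matrix(y_true, y_pred, positive="spam"):
    """Return [[TP, FN], [FP, TN]]: rows are actual, columns are predicted."""
    tp = fp = fn = tn = 0
    for t, p in zip(y_true, y_pred):
        if t == positive and p == positive:
            tp += 1  # actual positive, predicted positive
        elif t != positive and p == positive:
            fp += 1  # actual negative, predicted positive
        elif t == positive and p != positive:
            fn += 1  # actual positive, predicted negative
        else:
            tn += 1  # actual negative, predicted negative
    return [[tp, fn], [fp, tn]]

matrix = confusion_matrix(
    ["spam", "spam", "ham", "ham", "ham"],
    ["spam", "ham", "spam", "ham", "ham"])
# matrix == [[1, 1], [1, 2]]
```

Notice that precision and recall can both be read straight off this grid, which is why the confusion matrix is the natural home for these metrics.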