Grid Search for model tuning

Now that we have the baseline accuracy, let's build a Logistic Regression model with default parameters and evaluate it.

By fitting the Logistic Regression model with the default parameters, we have a much 'better' model. The accuracy is 94.7% and, at the same time, the precision is a staggering 98.3%. Now, let's take a look at the confusion matrix for this model:

Looking at the misclassified instances, we can observe that 8 malignant cases have been classified incorrectly as benign (false negatives). Also, just one benign case has been classified as malignant (false positive). A false negative is more serious, as a disease has been ignored, which can lead to the death of the patient. A false positive, on the other hand, would lead to unnecessary treatment, incurring additional cost.

Let's try to minimize the false negatives by using Grid Search to find the optimal parameters. Grid Search can be used to improve any specific evaluation metric. The metric we need to focus on to reduce false negatives is recall.

6. Grid Search to maximize Recall

The hyperparameters we tuned are:

Penalty: l1 or l2, which specifies the norm used in the penalization.
C: Inverse of regularization strength; smaller values of C specify stronger regularization.

Also, the Grid Search function has a scoring parameter where we can specify the metric to evaluate the model on (we have chosen recall as the metric). From the confusion matrix below, we can see that the number of false negatives has reduced; however, it comes at the cost of increased false positives. The recall after Grid Search has jumped from 88.2% to 91.1%, whereas the precision has dropped from 98.3% to 87.3%.

You can further tune the model to strike a balance between precision and recall by using the 'f1' score as the evaluation metric. Check out this article for a better understanding of the evaluation metrics.

Grid Search builds a model for every combination of the hyperparameters specified and evaluates each one. A more efficient technique for hyperparameter tuning is Randomized Search, where random combinations of the hyperparameters are used to find the best solution.

Connect on LinkedIn and check out GitHub (rohanjoseph93/Python-for-data-science) for the complete notebook.
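The steps above can be sketched in scikit-learn as follows. This is a minimal illustration, not the article's exact notebook: it assumes the scikit-learn breast cancer dataset, an arbitrary train/test split, and an example grid of penalty and C values, so the numbers it prints will differ from those quoted in the article.

```python
# Sketch of the workflow described above: baseline Logistic Regression,
# then Grid Search scored on recall, then Randomized Search.
# The dataset, split, and parameter grid are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, recall_score, precision_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Baseline: Logistic Regression with default parameters
baseline = LogisticRegression(max_iter=10000).fit(X_train, y_train)
print(confusion_matrix(y_test, baseline.predict(X_test)))

# Grid Search over penalty and C, scored on recall to reduce false negatives.
# (recall_score uses class 1 as the positive class by default.)
param_grid = {
    'penalty': ['l1', 'l2'],
    'C': [0.001, 0.01, 0.1, 1, 10, 100],
}
grid = GridSearchCV(
    LogisticRegression(solver='liblinear'),  # liblinear supports both l1 and l2
    param_grid,
    scoring='recall',
    cv=5,
)
grid.fit(X_train, y_train)
print(grid.best_params_)
print('recall:', recall_score(y_test, grid.predict(X_test)))
print('precision:', precision_score(y_test, grid.predict(X_test)))

# Randomized Search samples a fixed number of combinations instead of
# exhaustively building a model for every one.
rand = RandomizedSearchCV(
    LogisticRegression(solver='liblinear'),
    param_distributions=param_grid,
    n_iter=6,
    scoring='recall',
    cv=5,
    random_state=0,
)
rand.fit(X_train, y_train)
print(rand.best_params_)
```

Because the grid here has 2 × 6 = 12 combinations and 5 cross-validation folds, `GridSearchCV` fits 60 models plus one final refit, while `RandomizedSearchCV` with `n_iter=6` fits only half as many.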
