How to Develop a Cost-Sensitive Neural Network for Imbalanced Classification

Deep learning neural networks are a flexible class of machine learning algorithms that perform well on a wide range of problems.

Neural networks are trained using the backpropagation of error algorithm that involves calculating errors made by the model on the training dataset and updating the model weights in proportion to those errors.

The limitation of this method of training is that examples from each class are treated the same, which for imbalanced datasets means that the model is adapted a lot more for one class than another.

The backpropagation algorithm can be updated to weigh misclassification errors in proportion to the importance of the class, referred to as weighted neural networks or cost-sensitive neural networks.

This has the effect of allowing the model to pay more attention to examples from the minority class than the majority class in datasets with a severely skewed class distribution.

In this tutorial, you will discover weighted neural networks for imbalanced classification.

After completing this tutorial, you will know:

- How the standard neural network training algorithm does not account for a skewed class distribution.
- How the training algorithm can be modified to weight misclassification errors in proportion to class importance.
- How to configure class weights for a neural network in Keras and evaluate the effect on model performance.

Let’s get started.

How to Develop a Cost-Sensitive Neural Network for Imbalanced Classification. Photo by Bernard Spragg. NZ, some rights reserved.

This tutorial is divided into four parts; they are:

1. Imbalanced Classification Dataset
2. Neural Network Model in Keras
3. Deep Learning for Imbalanced Classification
4. Weighted Neural Network With Keras

Before we dive into the modification of neural networks for imbalanced classification, let's first define an imbalanced classification dataset.

We can use the make_classification() function to define a synthetic imbalanced two-class classification dataset.

We will generate 10,000 examples with an approximate 1:100 minority to majority class ratio.

Once generated, we can summarize the class distribution to confirm that the dataset was created as we expected.

Finally, we can create a scatter plot of the examples and color them by class label to help understand the challenge of classifying examples from this dataset.

Tying this together, the complete example of generating the synthetic dataset and plotting the examples is listed below.
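A minimal sketch of this complete example is below; the specific make_classification() arguments (for example, flip_y=0 and random_state=4) are assumptions chosen to produce the approximate 1:100 ratio described above.

```python
# generate and plot a synthetic imbalanced classification dataset
from collections import Counter
from numpy import where
from sklearn.datasets import make_classification
from matplotlib import pyplot

# define a two-class dataset with an approximate 1:100 class ratio
X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
	n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=4)
# summarize the class distribution
counter = Counter(y)
print(counter)
# scatter plot of examples, colored by class label
for label, _ in counter.items():
	row_ix = where(y == label)[0]
	pyplot.scatter(X[row_ix, 0], X[row_ix, 1], label=str(label))
pyplot.legend()
pyplot.show()
```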

Running the example first creates the dataset and summarizes the class distribution.

We can see that the dataset has an approximate 1:100 class distribution with a little less than 10,000 examples in the majority class and 100 in the minority class.

Next, a scatter plot of the dataset is created showing the large mass of examples for the majority class (blue) and a small number of examples for the minority class (orange), with some modest class overlap.

Scatter Plot of Binary Classification Dataset with 1 to 100 Class Imbalance

Next, we can fit a standard neural network model on the dataset.

First, we can define a function to create the synthetic dataset and split it into separate train and test datasets with 5,000 examples in each.
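A sketch of such a function is below, reusing the dataset arguments assumed earlier; the even 5,000/5,000 split follows the description above.

```python
# prepare train and test datasets (a sketch; dataset arguments are assumptions)
from sklearn.datasets import make_classification

def prepare_data():
	# generate the synthetic imbalanced dataset
	X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
		n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=4)
	# split into train and test sets of 5,000 examples each
	n_train = 5000
	trainX, testX = X[:n_train, :], X[n_train:, :]
	trainy, testy = y[:n_train], y[n_train:]
	return trainX, trainy, testX, testy
```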

A Multilayer Perceptron neural network can be defined using the Keras deep learning library.

We will define a neural network that expects two input variables, has one hidden layer with 10 nodes, then an output layer that predicts the class label.

We will use the popular ReLU activation function in the hidden layer and the sigmoid activation function in the output layer to ensure predictions are probabilities in the range [0,1].

The model will be fit using stochastic gradient descent with the default learning rate and optimized according to cross-entropy loss.

The network architecture and hyperparameters are not optimized to the problem; instead, the network provides a basis for comparison when the training algorithm is later modified to handle the skewed class distribution.

The define_model() function below defines and returns the model, taking the number of input variables to the network as an argument.
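A sketch of define_model() consistent with that description is below; the he_uniform weight initialization is an assumption, not something specified above.

```python
# define a Multilayer Perceptron for the two-class problem (a sketch)
from keras.models import Sequential
from keras.layers import Dense

def define_model(n_input):
	model = Sequential()
	# one hidden layer with 10 nodes and ReLU activation
	model.add(Dense(10, input_dim=n_input, activation='relu',
		kernel_initializer='he_uniform'))
	# sigmoid output layer so predictions are probabilities in [0,1]
	model.add(Dense(1, activation='sigmoid'))
	# stochastic gradient descent with the default learning rate,
	# minimizing cross-entropy loss
	model.compile(optimizer='sgd', loss='binary_crossentropy')
	return model
```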

Once the model is defined, it can be fit on the training dataset.

We will fit the model for 100 training epochs with the default batch size.

Once fit, we can use the model to make predictions on the test dataset, then evaluate the predictions using the ROC AUC score.

Tying this together, the complete example of fitting a standard neural network model on the imbalanced classification dataset is listed below.
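A sketch that ties the pieces together, assuming the prepare_data() and define_model() functions shown above:

```python
# fit and evaluate a standard MLP on the imbalanced dataset (a sketch)
from sklearn.metrics import roc_auc_score

# prepare the train and test datasets
trainX, trainy, testX, testy = prepare_data()
# define the model for the two input variables
model = define_model(2)
# fit the model for 100 epochs with the default batch size
model.fit(trainX, trainy, epochs=100, verbose=0)
# predict probabilities on the test set and evaluate with ROC AUC
yhat = model.predict(testX)
score = roc_auc_score(testy, yhat)
print('ROC AUC: %.3f' % score)
```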

Running the example evaluates the neural network model on the imbalanced dataset and reports the ROC AUC.

Your specific results may vary given the stochastic nature of the learning algorithm.

Try running the example a few times.

In this case, the model achieves a ROC AUC of about 0.949.

This suggests that the model has some skill as compared to the naive classifier that has a ROC AUC of 0.5.

This provides a baseline for comparison for any modifications performed to the standard neural network training algorithm.

Neural network models are commonly trained using the backpropagation of error algorithm.

This involves using the current state of the model to make predictions for training set examples, calculating the error of those predictions, then updating the model weights using the error, assigning credit for the error to different nodes and layers backward from the output layer through to the input layer.

Given the balanced focus on misclassification errors, most standard neural network algorithms are not well suited to datasets with a severely skewed class distribution.

Most of the existing deep learning algorithms do not take the data imbalance problem into consideration.

As a result, these algorithms can perform well on the balanced data sets while their performance cannot be guaranteed on imbalanced data sets.

— Training Deep Neural Networks on Imbalanced Data Sets, 2016.

This training procedure can be modified so that some examples have more or less error than others.

The misclassification costs can also be taken in account by changing the error function that is being minimized.

Instead of minimizing the squared error, the backpropagation learning procedure should minimize the misclassification costs.

— Cost-Sensitive Learning with Neural Networks, 1998.

The simplest way to implement this is to use a fixed weighting of error scores based on example class, where the prediction error is increased for examples from a more important class and decreased or left unchanged for examples from a less important class.

… cost sensitive learning methods solve data imbalance problem based on the consideration of the cost associated with misclassifying samples.

In particular, it assigns different cost values for the misclassification of the samples.

— Training Deep Neural Networks on Imbalanced Data Sets, 2016.

A large error weighting can be applied to those examples in the minority class as they are often more important in an imbalanced classification problem than examples from the majority class.

This modification to the neural network training algorithm is referred to as a Weighted Neural Network or Cost-Sensitive Neural Network.

Typically, careful attention is required when defining the costs or “weightings” to use for cost-sensitive learning.

However, for imbalanced classification where only misclassification is the focus, the weighting can use the inverse of the class distribution observed in the training dataset.
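As a sketch of this heuristic, the weights can be computed directly from the training labels; the counts in the comment below depend on the split and are illustrative only.

```python
# derive class weights as the inverse of the training class distribution
# (a sketch; assumes trainy from the prepare_data() function above)
from collections import Counter

counter = Counter(trainy)
# e.g. Counter({0: 4950, 1: 50}) -> {0: 1.0, 1: 99.0}
weights = {label: max(counter.values()) / count for label, count in counter.items()}
print(weights)
```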

The Keras Python deep learning library provides support for class weighting.

The fit() function that is used to train Keras neural network models takes an argument called class_weight.

This argument allows you to define a dictionary that maps class integer values to the importance to apply to each class.

This function is used to train every type of neural network, including Multilayer Perceptrons, Convolutional Neural Networks, and Recurrent Neural Networks; therefore, the class weighting capability is available to all of those network types.

The class weighting can be defined multiple ways; a best practice is to use the inverse of the class distribution present in the training dataset. To see the mechanics first, a 1-to-1 weighting for classes 0 and 1 can be defined as follows:
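A minimal sketch, assuming the model and training arrays from the earlier sections:

```python
# a 1:1 class weighting treats both classes the same (no cost sensitivity)
weights = {0: 1.0, 1: 1.0}
# the mapping is passed to fit() via the class_weight argument
model.fit(trainX, trainy, class_weight=weights, epochs=100, verbose=0)
```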

For example, the class distribution of the training dataset is a 1:100 ratio for the minority class to the majority class.

The inverse of this ratio could be used, with 1 for the majority class and 100 for the minority class.
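A minimal sketch of this mapping:

```python
# the inverse of the 1:100 class distribution:
# misclassifying a minority-class (1) example costs 100x more
weights = {0: 1.0, 1: 100.0}
```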

Fractions that represent the same ratio do not have the same effect. For example, using 0.01 and 0.99 for the majority and minority classes respectively may result in worse performance than using 1 and 100 (it does in this case).

The reason is that the error for examples drawn from both the majority class and the minority class is reduced.

Further, the error contribution from the majority class is dramatically scaled down to very small values that may have limited or only a very minor effect on model weights.

As such, integers are recommended for the class weightings, such as 1 for no change and 100 so that misclassification errors for class 1 have 100 times more impact or penalty than misclassification errors for class 0.

We can evaluate the neural network algorithm with a class weighting using the same evaluation procedure defined in the previous section.

We would expect the class-weighted version of the neural network to perform better than the version of the training algorithm without any class weighting.

The complete example is listed below.
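A sketch of the complete class-weighted example, again assuming the prepare_data() and define_model() functions from the earlier sections:

```python
# fit and evaluate a class-weighted MLP on the imbalanced dataset (a sketch)
from sklearn.metrics import roc_auc_score

# prepare the train and test datasets and define the model
trainX, trainy, testX, testy = prepare_data()
model = define_model(2)
# weight minority-class errors 100x more, the inverse of the 1:100 ratio
weights = {0: 1.0, 1: 100.0}
model.fit(trainX, trainy, class_weight=weights, epochs=100, verbose=0)
# evaluate predictions on the test set with ROC AUC
yhat = model.predict(testX)
print('ROC AUC: %.3f' % roc_auc_score(testy, yhat))
```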

Running the example prepares the synthetic imbalanced classification dataset, then evaluates the class-weighted version of the neural network training algorithm.

Your specific results may vary given the stochastic nature of the learning algorithm.

Try running the example a few times.

The ROC AUC score is reported, in this case showing a better score than the unweighted version of the training algorithm: about 0.973 as compared to about 0.949.

This section provides more resources on the topic if you are looking to go deeper.

- Cost-Sensitive Learning with Neural Networks, 1998.
- Training Deep Neural Networks on Imbalanced Data Sets, 2016.

In this tutorial, you discovered weighted neural networks for imbalanced classification.

Specifically, you learned:

- How the standard neural network training algorithm does not account for a skewed class distribution.
- How the training algorithm can be modified to weight misclassification errors in proportion to class importance.
- How to configure class weights for a neural network in Keras and evaluate the effect on model performance.

Do you have any questions? Ask your questions in the comments below and I will do my best to answer.
