Five Methods to Debug your Neural Network

Sahil Dhankhad · May 3

A lot of us are trying to understand machine learning algorithms, but sometimes we run into bugs in our implementations, and that is what we are going to explore here: how to debug your neural network.

This article is short, but it documents the debugging process well.

So, I expect all of you know what a neural network is and how it works.

But I’m still going to talk a little bit about neural networks, and you can skip this part if you are already familiar with the concepts.

What is a Neural Network?

Neural networks are a collection of algorithms, modelled loosely after the human brain, that are designed to recognize patterns.

They interpret sensory data through a kind of machine perception, labelling or clustering raw input.

The patterns they recognize are numerical, contained in vectors.

Neural networks help us cluster and classify.

You can consider them as a clustering and classification layer on top of the data you store and manage.

They help group unlabeled data according to relationships among the example inputs, and they classify data when they have a labelled dataset to train on.

In many cases, researchers face a problem: the neural network implemented in a machine learning framework may be far removed from the theoretical model.

The most direct way to examine whether a model is reliable is to continually test and adjust it.

For example, in August 2018, Ian Goodfellow and others at Google Brain introduced TensorFuzz, an open-source library that helps automatically debug neural networks through coverage-guided fuzzing (CGF).

It’s not simple to debug a machine learning model, because the cost of discovering a bug is high.

Even for relatively simple feedforward neural networks, researchers need to address issues such as network architecture, weight initialization, and network optimization.

Five methods for debugging neural networks:

1. Start simple
2. Confirm model loss
3. Check intermediate outputs and connections
4. Diagnose parameters
5. Track your work

1. Start simple

A network with a complex architecture, normalization, and a learning-rate scheduler is much harder to debug than a single simple network.

First, build a relatively simple model: create a small model with a single hidden layer and test it; then slowly add complexity, verifying that every added piece of the model structure (additional layers, parameters, etc.) actually works.

Second, train the model on a single data point: one or two training examples can be used to confirm whether the model can overfit.

The neural network should overfit quickly, reaching 100% training accuracy, which indicates that the model fits; if the model cannot overfit these data points, it is either too small or buggy.
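As a concrete illustration, here is a minimal PyTorch sketch of this overfit-a-tiny-batch check (the layer sizes, optimizer, and data are illustrative assumptions, not prescriptions from the article):

```python
import torch
import torch.nn as nn

# A deliberately tiny model: one hidden layer, nothing fancy.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

# One or two fixed data points; the model should drive the loss to ~0 on them.
x = torch.randn(2, 4)
y = torch.tensor([0, 2])

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# If this prints a loss far from zero, the model or training loop is buggy.
print(f"final loss on 2 points: {loss.item():.6f}")
```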

2. Confirm model loss

Model loss is the primary way to evaluate a model's performance, and it is also the basis for tuning the model's key parameters, so you need to ensure that:

- the loss matches the task (use cross-entropy loss for multi-class classification, or focal loss to address class imbalance);
- the weight of each loss term is set correctly. If you use multiple types of loss functions, such as MSE, adversarial, L1, or feature loss, make sure all the losses are scaled and combined in the right way.
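One quick sanity check, a common rule of thumb rather than something from the article: with C classes and random initialization, the initial cross-entropy loss should be close to ln(C). A minimal sketch, assuming a PyTorch model:

```python
import math
import torch
import torch.nn as nn

num_classes = 10
model = nn.Linear(32, num_classes)  # stand-in for your untrained network
x = torch.randn(64, 32)
y = torch.randint(0, num_classes, (64,))

initial = nn.CrossEntropyLoss()(model(x), y).item()
expected = math.log(num_classes)  # ~2.30 for 10 classes
print(f"initial loss {initial:.3f}, expected ~{expected:.3f}")

# When combining several losses, make the weights explicit so one term
# cannot silently dominate the total (the weights here are illustrative):
# total = 1.0 * ce_loss + 0.1 * l1_loss + 0.01 * feature_loss
```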

3. Check intermediate outputs and connections

To debug a neural network, you need to understand the dynamics inside the network, the role of the different intermediate layers, and how the layers are connected. You may encounter the following issues:

1. An incorrect gradient-update expression
2. Weight updates not being applied
3. Vanishing or exploding gradients

If a gradient value is 0, it may indicate that the learning rate in the optimizer is too small, or that the gradient-update expression is incorrect.

In addition to watching the absolute values of the gradients, make sure to monitor the activations and weights of every layer. For example, the magnitude of a parameter update (weights and biases) should be around 1e-3 relative to the parameter itself.
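Here is a minimal PyTorch sketch of that check, assuming a toy model and a single backward pass (the 1e-3 ratio is the rule of thumb quoted above):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
x, y = torch.randn(16, 8), torch.randint(0, 2, (16,))
nn.CrossEntropyLoss()(model(x), y).backward()

# Compare each parameter's update magnitude to the parameter's own
# magnitude; values wildly far from ~1e-3 suggest a mis-set learning rate.
lr = 1e-3
for name, p in model.named_parameters():
    ratio = (lr * p.grad).norm() / p.data.norm()
    print(f"{name}: update/weight ratio = {ratio.item():.2e}")
```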

It should be pointed out that in a phenomenon called “Dying ReLU” or “vanishing gradients,” a ReLU neuron outputs 0 after learning a large negative bias term for its weights: the neuron is never activated on any data point.

You can run a gradient check, approximating the gradient with numerical techniques, to test for these errors. If the numerical approximation is close to the calculated (analytic) gradient, backpropagation is implemented correctly.
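A minimal sketch of such a gradient check in PyTorch, using central differences (for real models, torch.autograd.gradcheck automates this):

```python
import torch

def numerical_grad(f, x, eps=1e-6):
    # Central-difference approximation of df/dx, one coordinate at a time.
    g = torch.zeros_like(x)
    flat = x.view(-1)
    for i in range(flat.numel()):
        orig = flat[i].item()
        flat[i] = orig + eps
        plus = f(x).item()
        flat[i] = orig - eps
        minus = f(x).item()
        flat[i] = orig
        g.view(-1)[i] = (plus - minus) / (2 * eps)
    return g

# Toy scalar function; swap in your own loss to check a layer's backward pass.
w = torch.randn(3, dtype=torch.float64, requires_grad=True)
f = lambda t: (t ** 3).sum()

f(w).backward()                                # analytic gradient via autograd
num = numerical_grad(f, w.detach().clone())    # numerical approximation
print(torch.allclose(w.grad, num, atol=1e-6))  # True => backprop is consistent
```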

There are three main approaches to visualizing neural networks:

- Preliminary method: present the overall structure of the trained model, including the shapes or filters of the various layers and the parameters in each segment;
- Activation-based method: decipher the activations of an individual neuron or a group of neurons;
- Gradient-based method: manipulate the gradients formed by the forward or backward pass while training the model.

There are also many tools available to visualize the activations and connections of the various layers, such as ConX and TensorBoard.
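For example, a minimal sketch of logging weight and activation histograms to TensorBoard with PyTorch's SummaryWriter (the model and tensors are illustrative):

```python
import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
writer = SummaryWriter("runs/debug")  # view with: tensorboard --logdir runs

x = torch.randn(16, 8)
step = 0
# Histograms of weights and activations make dead or saturated layers stand out.
for name, p in model.named_parameters():
    writer.add_histogram(f"weights/{name}", p, step)
writer.add_histogram("activations/hidden", model[1](model[0](x)), step)
writer.close()
```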

4. Diagnose parameters

Neural networks have a large number of parameters that interact with one another, which makes optimization tricky (a short code sketch after this list pulls several of these knobs together).

Batch size: You want the batch size to be large enough to give an accurate estimate of the error gradient, but small enough that stochastic gradient descent (SGD) can still regularize the network. Small batches make learning converge quickly at the cost of noise during training and may lead to optimization difficulties.

Learning rate: Too low a rate leads to slow convergence or the risk of getting stuck in a local minimum; too high a rate leads to divergence.

Gradient clipping: Clipping each parameter's gradient during backpropagation by a maximum value or a maximum norm.

Batch normalization: Applied to normalize the input of each layer to counter the internal covariate shift problem.

Stochastic Gradient Descent (SGD): SGD variants that use momentum, adaptive learning rates, or Nesterov updates.

Regularization: Essential for building a generalizable model, because it adds a penalty for model complexity or extreme parameter values. It significantly reduces the variance of the model without considerably increasing its bias.

Dropout: Another technique for regularizing the network to prevent overfitting. During training, dropout keeps each neuron active with some probability p (a hyperparameter) and otherwise sets its output to zero. As a result, the network must use a different subset of its parameters in every training batch, which keeps particular parameters from becoming dominant over the others.
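As promised above, here is a small PyTorch sketch that wires several of these knobs together (every value shown is an illustrative default, not a recommendation from the article):

```python
import torch
import torch.nn as nn

# Batch norm and dropout live in the architecture; momentum/Nesterov,
# weight decay, and gradient clipping live in the optimization step.
model = nn.Sequential(
    nn.Linear(32, 64),
    nn.BatchNorm1d(64),   # normalize the layer's input
    nn.ReLU(),
    nn.Dropout(p=0.5),    # keep each neuron active with probability p
    nn.Linear(64, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, nesterov=True,
                            weight_decay=1e-4)  # L2 regularization
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))  # batch size 64
optimizer.zero_grad()
loss_fn(model(x), y).backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping
optimizer.step()
```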

5. Track your work

By tracking your work better, you can easily review and reproduce previous experiments, reducing duplicated effort.

However, manually logging information across many experiments is difficult, and tools like comet.ml can help automate the tracking of datasets, code changes, experiment histories, and production models, including key information about the model such as hyperparameters, performance metrics, and environment details.

Neural networks are very sensitive to small changes in data, parameters, and even package versions, which can lead to degraded performance.

Tracking your work is the first step toward a standardized environment and modeling workflow.
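For illustration, a sketch of what such automated tracking looks like with Comet's Python SDK (the API key, project name, and logged values are placeholders, and the loop is a stand-in for your real training code):

```python
from comet_ml import Experiment

# Create a run; Comet also captures code, environment, and hardware details.
experiment = Experiment(api_key="YOUR_API_KEY", project_name="nn-debugging")

# Log hyperparameters once, up front.
experiment.log_parameters({"lr": 0.01, "batch_size": 64, "dropout": 0.5})

for epoch in range(10):
    train_loss = 1.0 / (epoch + 1)  # stand-in for your real training loss
    experiment.log_metric("train_loss", train_loss, step=epoch)
```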

You can check out some of the other topics I have written about, but first finish this one.

New in Hadoop: You should know the Various File Formats in Hadoop. A Beginners' Guide to Hadoop File Formats (towardsdatascience.com)

A Brief Summary of Apache Hadoop: A Solution of Big Data Problem and Hint comes from Google. Welcome to the introduction of big data and Hadoop, where we are going to talk about Apache Hadoop and problems that big… (towardsdatascience.com)

Now you know what is happening behind the scenes and how you can fix problems when you start facing them in your neural network.

If you have any suggestions to improve this article, please feel free to contact me on my LinkedIn.

Forget APIs: Do Python Scraping Using Beautiful Soup, Import Data Files from the Web, Part 2. APIs are not always there for you, but Beautiful Soup is going to stay with you forever. (towardsdatascience.com)

