Build your own neural network classifier in R

Jun M.

Apr 28

Introduction

Image classification is an important field in Computer Vision, not only because so many applications are associated with it, but also because a lot of Computer Vision problems can be effectively reduced to image classification.

The state-of-the-art tool in image classification is the Convolutional Neural Network (CNN).

In this article, I am going to write a simple Neural Network with 2 layers (fully connected).

I will first train it to classify a set of 4-class 2D data and visualize the decision boundary.

Second, I am going to train my NN with the famous MNIST data (you can download it here: https://www.kaggle.com/c/digit-recognizer/download/train.csv) and see its performance.

The first part is inspired by the CS231n course offered by Stanford (http://cs231n.github.io/), which is taught in Python.

Data set generation

First, let’s create a spiral dataset with 4 classes and 200 examples each.

X and y are 800 by 2 and 800 by 1 data frames, respectively, and they are created so that a linear classifier cannot separate them.

Since the data is 2D, we can easily visualize it on a plot.

They are roughly evenly spaced and indeed a line is not a good decision boundary.
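Here is a sketch of the data generation, ported from the CS231n example; the seed, the noise level, and the angle constants are my own illustrative choices.

set.seed(308)
N <- 200  # number of points per class
K <- 4    # number of classes
d <- matrix(0, N * K, 2)   # coordinates
labels <- integer(N * K)
for (j in 1:K) {
  ix <- ((j - 1) * N + 1):(j * N)
  r  <- seq(0.05, 1, length.out = N)                                      # radius
  t  <- seq((j - 1) * 4.7, j * 4.7, length.out = N) + rnorm(N, sd = 0.3)  # angle
  d[ix, ] <- cbind(r * sin(t), r * cos(t))
  labels[ix] <- j
}
X <- data.frame(x1 = d[, 1], x2 = d[, 2])  # 800 x 2
y <- data.frame(label = labels)            # 800 x 1
plot(d, col = labels, pch = 19, asp = 1)   # visualize the four spiral arms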

Neural network construction

Now, let’s construct an NN with 2 layers.

But before that, we need to convert X into a matrix (for the matrix operations later on).

For labels in y, a new matrix Y (800 by 4) is created such that for each example (each row in Y), the entry with index==label is 1 (and 0 otherwise).
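In code, that conversion and the one-hot encoding might look like this (a sketch; K is the number of classes from the data generation step):

X <- as.matrix(X)
Y <- matrix(0, nrow(X), K)                 # 800 x 4 indicator matrix
for (i in 1:nrow(X)) Y[i, y$label[i]] <- 1 # entry with index == label gets 1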

Next, let’s build a function nnet that takes two matrices X and Y and returns a list of 4 with W, b and W2, b2 (weight and bias for each layer).

I can specify step_size (the learning rate) and the regularization strength (reg, sometimes symbolized as λ).

For the activation and the loss (cost) function, ReLU and softmax are selected, respectively.

If you have taken the ML class by Andrew Ng (strongly recommended), you will recall that the sigmoid activation and the logistic cost function are chosen in the course notes and assignments.

They look slightly different, but can be implemented fairly easily just by modifying the following code.

Also note that the implementation below uses vectorized operations that may seem hard to follow.

If so, you can write down the dimensions of each matrix and check the multiplications and so on.

By doing so, you also know what’s under the hood for a neural network.
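A sketch of such an nnet function follows; the hidden-layer size h, the default hyperparameter values, and the iteration count are illustrative choices of mine, not the original gist.

nnet <- function(X, Y, step_size = 0.5, reg = 1e-3, h = 50, niteration = 6000) {
  N <- nrow(X)  # number of examples
  D <- ncol(X)  # input dimensionality
  K <- ncol(Y)  # number of classes

  # small random weights, zero biases
  W  <- matrix(rnorm(D * h), D, h) * 0.01
  b  <- matrix(0, 1, h)
  W2 <- matrix(rnorm(h * K), h, K) * 0.01
  b2 <- matrix(0, 1, K)

  for (i in 1:niteration) {
    # forward pass: ReLU hidden layer, then class scores
    hidden <- pmax(0, X %*% W + matrix(b, N, h, byrow = TRUE))
    scores <- hidden %*% W2 + matrix(b2, N, K, byrow = TRUE)

    # softmax probabilities (shift by the row max for numerical stability)
    exp_scores <- exp(scores - apply(scores, 1, max))
    probs <- exp_scores / rowSums(exp_scores)

    # cross-entropy data loss plus L2 regularization
    data_loss <- -sum(log(probs[Y == 1])) / N
    reg_loss  <- 0.5 * reg * (sum(W * W) + sum(W2 * W2))
    if (i %% 1000 == 0) cat("iteration", i, "loss", data_loss + reg_loss, "\n")

    # backward pass: gradient on the scores, then backprop through both layers
    dscores <- (probs - Y) / N
    dW2 <- t(hidden) %*% dscores + reg * W2
    db2 <- colSums(dscores)
    dhidden <- dscores %*% t(W2)
    dhidden[hidden <= 0] <- 0   # ReLU kills the gradient where the unit was off
    dW <- t(X) %*% dhidden + reg * W
    db <- colSums(dhidden)

    # vanilla gradient-descent update
    W  <- W  - step_size * dW
    b  <- b  - step_size * db
    W2 <- W2 - step_size * dW2
    b2 <- b2 - step_size * db2
  }
  list(W = W, b = b, W2 = W2, b2 = b2)
}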

Prediction function and model training

Next, create a prediction function, which takes X (with the same columns as the training X, but possibly a different number of rows) and the layer parameters as input.

The output is the column index of the maximum score in each row.

In this example, that index is simply the label of each class.

Now we can print out the training accuracy.
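A sketch of the prediction function and the training call; the step_size and reg values below are illustrative:

nnetPred <- function(X, para) {
  N <- nrow(X)
  hidden <- pmax(0, X %*% para$W + matrix(para$b, N, ncol(para$W), byrow = TRUE))
  scores <- hidden %*% para$W2 + matrix(para$b2, N, ncol(para$W2), byrow = TRUE)
  max.col(scores)   # column index of the max score in each row
}

nnet.model <- nnet(X, Y, step_size = 0.4, reg = 5e-4)
predicted_class <- nnetPred(X, nnet.model)
cat("training accuracy:", mean(predicted_class == y$label), "\n")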

Decision boundary

Next, let’s plot the decision boundary.
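One way to do this is to predict on a fine grid over the plane and color each grid point by its predicted class (the grid resolution below is arbitrary):

grid <- expand.grid(
  x1 = seq(min(X[, 1]) - 0.2, max(X[, 1]) + 0.2, length.out = 200),
  x2 = seq(min(X[, 2]) - 0.2, max(X[, 2]) + 0.2, length.out = 200))
Z <- nnetPred(as.matrix(grid), nnet.model)
plot(grid, col = Z, pch = 15, cex = 0.3)   # colored regions show the boundary
points(X, col = y$label, pch = 19)         # overlay the training data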

We can also use the caret package and train different classifiers with the data and visualize the decision boundaries.

It is very interesting to see how different algorithms make decisions.

This is going to be another post.

MNIST data and preprocessing

The famous MNIST (“Modified National Institute of Standards and Technology”) dataset is a classic within the Machine Learning community that has been extensively studied.

It is a collection of handwritten digits, flattened into a CSV file: each row represents one example, and the column values are the grayscale intensities (0–255) of each pixel.

First, let’s display an image.
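Assuming train.csv from the Kaggle link above sits in the working directory, a small helper (my own, hypothetical displayDigit) reshapes the 784 pixel columns into a 28 by 28 image; row 18 below is an arbitrary pick:

train <- read.csv("train.csv", header = TRUE)
displayDigit <- function(row) {
  m <- matrix(unlist(row), nrow = 28, byrow = TRUE)  # 784 pixels -> 28 x 28
  image(t(m[28:1, ]), col = grey.colors(255), axes = FALSE)
}
displayDigit(train[18, -1])  # column 1 is the label; the rest are pixels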

Now, let’s preprocess the data by removing near-zero-variance columns and scaling by max(X).

The data is also split into two sets for cross-validation.
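A sketch of this preprocessing with the caret package (the 70/30 split ratio and the seed are my own choices):

library(caret)
nzv.cols <- nearZeroVar(train[, -1])      # indices of near-zero-variance pixels
X <- as.matrix(train[, -1][, -nzv.cols])
X <- X / max(X)                           # scale grayscale values to [0, 1]
y <- train[, 1]
set.seed(1234)
inTrain <- createDataPartition(y, p = 0.7, list = FALSE)
X.train <- X[inTrain, ];  y.train <- y[inTrain]
X.cv    <- X[-inTrain, ]; y.cv    <- y[-inTrain]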

Once again, we need to create a Y matrix with dimension N by K.

This time the non-zero index in each row is offset by 1: label 0 will have entry 1 at index 1, label 1 will have entry 1 at index 2, and so on.

In the end, we need to convert the predicted index back to the original label.

(Another way is to put label 0 at index 10 and use no offset for the rest of the labels.)
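The offset encoding looks like this:

K <- 10
Y.train <- matrix(0, length(y.train), K)
for (i in seq_along(y.train)) Y.train[i, y.train[i] + 1] <- 1  # digit d -> column d + 1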

Model training and CV accuracy

Now we can train the model with the training set.

Note that even after removing near-zero-variance columns, the data is still large, so it may take a while for the result to converge.

Here I am only training the model for 3500 iterations.

You can vary the number of iterations, the learning rate, and the regularization strength, and plot the learning curve to find the optimal fit.
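A sketch of the training call and the cross-validation check; the hidden size, learning rate, and regularization strength are illustrative, not tuned values:

mnist.model <- nnet(X.train, Y.train, step_size = 0.3, reg = 1e-4,
                    h = 64, niteration = 3500)
pred.cv <- nnetPred(X.cv, mnist.model) - 1   # convert the index back to the digit
cat("CV accuracy:", mean(pred.cv == y.cv), "\n")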

Prediction of a random image

Finally, let’s randomly select an image and predict the label.
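For example, reusing the displayDigit helper from above:

i <- sample(nrow(X.cv), 1)                   # random validation example
displayDigit(train[-inTrain, -1][i, ])       # show the original 28 x 28 pixels
cat("predicted:", nnetPred(X.cv[i, , drop = FALSE], mnist.model) - 1,
    "actual:", y.cv[i], "\n")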

Conclusion

It is rare nowadays for us to write our own machine learning algorithms from the ground up.

There are tons of packages available and they most likely outperform this one.

However, by doing so, I really gained a deep understanding of how a neural network works.

And at the end of the day, seeing your own model produce pretty good accuracy is hugely satisfying.

Originally published at http://junma5.weebly.com.
