# Implement your first Neural Network in less than an hour

As the title suggests, in this article I will guide you through writing your first neural network real quick, so buckle up and stay with me till the end.

## Prerequisites

In this session, we will be implementing our network using Python’s Chainer library. For those who don’t know, Chainer is one of many Python libraries that provide the necessary functions to implement neural networks.

## Let’s Start

For our first network, we will be using the Pima Indians Onset of Diabetes dataset.

This is a very popular dataset for machine learning; you can download it from Kaggle.

This dataset contains medical records of Pima Indians and is used to predict the onset of diabetes.

Here is the list of variables in the dataset:

1. Number of times pregnant.
2. Plasma glucose concentration at 2 hours in an oral glucose tolerance test.
3. Diastolic blood pressure (mm Hg).
4. Triceps skinfold thickness (mm).
5. 2-hour serum insulin (mu U/ml).
6. Body mass index.
7. Diabetes pedigree function.
8. Age (years).
9. Class: onset of diabetes within five years.

In the dataset, variables 1 to 8 are inputs and the 9th variable is the output class telling whether that individual has diabetes or not.

We will train our network on this data; then, given input data for new people, the network will predict whether an individual has diabetic symptoms or not by responding 0 or 1.

Problems whose response takes this form (0/1 or yes/no) are known as binary classification problems.

Here are a few samples from the training data:

```
7,107,74,0,0,29.6,0.254,31,1
1,103,30,38,83,43.3,0.183,33,0
1,115,70,30,96,34.6,0.529,32,1
3,126,88,41,235,39.3,0.704,27,0
```

Now we will write a neural network for this problem using Python.

We will understand this by dividing this project into multiple parts, which are:

1. Preparing the input data
2. Writing the network architecture
3. Training the neural network
4. Testing the neural network

## Preparing the Input Data

We will use Python’s NumPy library to load and prepare our training data.

We will split our data into train data and test data with a 67:33 ratio.

We will train our network using training data and we will test the accuracy of the network using test data.

Our data-preparation code will live in a file of its own: create an empty file named predict_pima.py and put the code in that file.
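The author’s original listing is not reproduced here, so below is a minimal sketch of what a prep_data() helper could look like. The file name, random seed, and exact split logic are assumptions; only the 67:33 ratio and the input/label column layout come from the article.

```python
import numpy as np

def prep_data(path="pima-indians-diabetes.csv", split_ratio=0.67, seed=0):
    """Load the Pima CSV and split it 67:33 into train and test sets.

    Hypothetical helper: the article's actual prep_data() is not
    shown, so argument names and defaults here are assumptions.
    """
    data = np.loadtxt(path, delimiter=",", dtype=np.float32)
    rng = np.random.RandomState(seed)
    rng.shuffle(data)                       # shuffle rows before splitting
    split = int(len(data) * split_ratio)    # 67% of rows for training
    train, test = data[:split], data[split:]
    # First 8 columns are inputs, the 9th column is the 0/1 label.
    x_train, y_train = train[:, :8], train[:, 8].astype(np.int32)
    x_test, y_test = test[:, :8], test[:, 8].astype(np.int32)
    return x_train, y_train, x_test, y_test
```

The labels are cast to int32 because Chainer’s classification loss functions expect integer class labels.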

Now we will write the architecture of our neural network.

## Writing the Network Architecture

We will be using Python’s Chainer library to write our network.

We will create a Build_network class and define Linear layers in its __init__() function. In the code below we have created 3 layers; we will pass the number of neurons (units) as parameters while instantiating this class. We also need to write a __call__() function, in which we define the flow of data through these layers and the activation function to be applied to the data passing through the neurons of each layer.

Here we have created 3 layers; neuron_units is the count of neurons in that particular layer.

If you notice, in the __call__() function we pass the input x, sending it through L1 and applying the ReLU function to this data. Here ReLU is the activation function. The output of the L1 layer is h1, which will be passed through layer L2, and so on.

This is how we define our neural network in Chainer.

Create a new empty file with the name pima_mlp.py and put this code in that file.

## Training the Neural Network

Our network file pima_mlp.py is ready and our data is also ready; now we will pass this data through the network. This process is called training the model.

Remember, we created the predict_pima.py file, which has the code for data preparation; now we will add the training code to the same file.

Update the old file with the code below. Now let’s understand how we train the model by going through this code line by line.

In lines 1 to 7 we import the Python library functions that we will use later in our code.

We already know the prep_data() function; it generates and partitions the training and test data. Now comes the main part of the training code: the train_network() function at line 20. This function expects four arguments: the training and test data generated by the prep_data() function. In lines 23 to 30 we define the number of neurons in the layers, the number of epochs to train the model for, and the batch size. The best values for these parameters can only be found by experimentation.

In line 33 we instantiate the Build_network class that we defined in the pima_mlp.py file. While instantiating this class we pass the neuron_units and neuron_units_out parameters, which are the neuron counts for the layers of our network.

In line 34 we set the optimizer for our network. Optimizers are the components that update the weights and biases of neurons after each epoch. The purpose of multiple epochs is to find, for each neuron, the weights and biases for which the combined loss of the network is minimal. For now, don’t worry too much about this; just remember that optimizers update the weights and biases of neurons after each epoch.

Many different optimizer algorithms are available, such as Gradient Descent, Stochastic Gradient Descent, and Adam. For this project we will use the Adam optimizer. Line 34 shows the syntax for initializing the Adam optimizer, and in line 35 we attach this optimizer to our model.

Moving on, line 38 contains the epoch loop; the steps inside this loop together make up one epoch. As we know, we train our model for multiple epochs, and that is controlled by this loop. Inside the epoch loop, we initialize several counters in lines 39 to 44. Lines 45 and 46 shuffle the training and test data.

In each epoch, all our data passes through the network.

To reduce training time we pass this data to the network in batches. We already defined the batch size in line 26, so we will feed our data to the network 50 records at a time, since 50 is our batch size. Line 48 is the loop that passes the data batch by batch. Next we convert our data into Chainer variables; that is what happens in lines 51 and 52.

Now we are all set to pass this data to the network.

In line 54 we pass the data x to the model, which calls the __call__() function of the Build_network class defined in the pima_mlp.py file. It returns x1, the data processed by the network.

## Computing the Training Loss and Backpropagation

Once data is passed through the network, we compute the loss of the network.

In simple words, the loss is the difference between the predicted value and the actual value.

Based on the loss value, the network changes the weights and biases of its neurons.

There are different loss functions for different types of problems. How to choose an appropriate loss function is a vast topic, and we won’t be going in that direction for now.

To compute the loss of the network we use the softmax_cross_entropy loss function. This function takes two parameters: the model output x1 and the labels of the input data, i.e. t. The syntax of the loss function is shown in line 56.

Notice that we compute the loss at the batch level; to get the loss value at the epoch level we add up all the batch losses. The same goes for the training accuracy. Lines 58 and 59 define the accuracy computation.

So far we have computed the loss of the network; now we will do backpropagation, which updates the weights of the neurons to minimize the loss. Lines 61 to 63 contain the steps that perform backpropagation and use the optimizer to update the neurons. The epoch loop continues for the number of epochs you specified. The statements in lines 66 and 67 print the training loss and accuracy for each epoch.

Once the epoch loop is over, we need to save our trained network so we can use it to make predictions on unseen data. To save the model file we use Chainer serializers; the syntax for saving the model is given in lines 69 and 70.

## Testing the Neural Network

So far we have covered data preparation, training the model with an epoch loop that processes batches, computing the training loss, backpropagation, and so on. Although we split our data into training data and test data, we haven’t used the test data yet.

Now we will add code to evaluate our model on test data and compute the network loss on unseen data.

The computation of the test loss is quite similar to the training process: after the training batch loop we add another batch loop with the same steps. The only difference in this new batch loop is that after computing the loss we do not perform the backpropagation and optimizer update steps. We pass the test data through this loop and get the test loss in each epoch.

Here is the complete training code, now including the test-loss code; update your existing predict_pima.py file with this new code.

In the above code, lines 75 to 88 compute the test loss for each epoch. As you can see, the only difference between the train batch loop and the test batch loop is the backpropagation step.

We don’t have to do backpropagation on test data.

This completes the code of your first neural network.

Now you are all set to run your code and create your first neural network.

To make it easy for you guys, I have kept the complete code and data file in my git repository.

Here is the GitHub link for this project: https://github.com/abhishek-mascon/neural-networks/tree/master/pima_diabetes_prediction

To run this code on your machine, make sure you have Python 2.7, Chainer, NumPy, and Pandas installed. If you don’t have these libraries, you can install them with pip.

## Start Training the Network

By now you should have fetched the latest source from Git and installed the required libraries.

Now we will train our network and save the model. To start training, run:

```
python predict_pima.py
```

This will start training your network and print the train loss, test loss, train accuracy, and test accuracy for each epoch.

Here is a screenshot of the training process (figure: training the model). If our network architecture and other components are correct, the loss will keep decreasing and the accuracy will keep increasing with each epoch.

Once training is done, it will generate the pima_mlp.model file, which contains our trained neural network.

In a future article, we will learn how to load this trained model and make predictions on unseen data; this process is called inference.