
Let’s calculate it.

Note that there are 784 connections to each neuron in the hidden layer, each connection having an associated weight (a real number).

By multiplying the 784 input values by the 784 weights connected to the selected node in the hidden layer, and then adding the products together, we get one number.

This value is then passed through a non-linear filter (an activation function), so that at the end we receive a number informing us how much the input resembles the digit at the selected position.

We do a similar thing between the nodes of the first and second hidden layers, and between the second hidden layer and the output layer.
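To make that arithmetic concrete, here is a minimal NumPy sketch of the forward pass just described. The layer sizes (128 and 64) and the random weights are placeholders for illustration, not the values used by the actual model:

import numpy as np

def relu(z):
    return np.maximum(0, z)  # a simple non-linear "filter" (activation function)

x = np.random.rand(784)  # one flattened 28x28 image

# Placeholder weights and biases; in a real network these are learned during training.
W1, b1 = np.random.randn(128, 784) * 0.01, np.zeros(128)  # input -> first hidden layer
W2, b2 = np.random.randn(64, 128) * 0.01, np.zeros(64)    # first -> second hidden layer
W3, b3 = np.random.randn(10, 64) * 0.01, np.zeros(10)     # second hidden -> output layer

h1 = relu(W1 @ x + b1)   # each hidden neuron: 784 values times 784 weights, summed, then activated
h2 = relu(W2 @ h1 + b2)  # the same operation between the two hidden layers
scores = W3 @ h2 + b3    # 10 numbers, one per digit position
print(scores.argmax())   # the digit this (untrained) network currently resembles most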

Keras has a nice method to help you calculate the parameters, called summary().
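For example, a model with the shape described above could be defined and inspected like this. This is only a sketch: the hidden layer sizes (128 and 64) are assumptions, the actual model was built earlier in the article, and the import may differ if you use standalone Keras:

from tensorflow import keras

# Illustrative architecture: 784 inputs, two hidden layers, 10 output classes.
model = keras.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

model.summary()  # prints every layer with its output shape and parameter count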

Step 3. Train and evaluate the model

In the last step we need to train and evaluate the model.

Train? In other words, feed the data to the algorithm.

When training a neural network, the training data is put into the first layer of the network, and individual neurons assign a weighting to the input based on how correct or incorrect it is.

And if the algorithm informs the neural network that it was wrong, the network isn't told what the right answer is.

The error is propagated back through the network's layers, and the network has to guess something else.

And again.

And again.

Until it results in the correct prediction.
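Conceptually, every wrong guess nudges the weights a little in a better direction. Below is a toy, single-neuron illustration of that adjust-and-try-again loop; it is not the article's training code, Keras performs the real version of this for every weight in every layer inside model.fit():

import numpy as np

x = np.random.rand(784)          # one flattened 28x28 image
w = np.random.randn(784) * 0.01  # weights of a single neuron
target = 1.0                     # what the neuron should have produced

for step in range(100):
    prediction = 1 / (1 + np.exp(-np.dot(w, x)))          # weighted sum passed through a sigmoid
    error = prediction - target                           # how wrong the guess was
    gradient = error * prediction * (1 - prediction) * x  # the error propagated back to the weights
    w -= 0.5 * gradient                                   # adjust the weights and guess again

print(1 / (1 + np.exp(-np.dot(w, x))))  # after enough repetitions the guess approaches the target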

Evaluate? In our case, we use the validation dataset to estimate how good or bad the model is.

We repeat training using the dataset for a predetermined number of epochs.

Check out how you can handle this in Keras:

batch_size = 500
epochs = 20

history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=.1)
test_loss, test_acc = model.evaluate(x_test, y_test)

To visualise the accuracy of the created model you can use the following code:

plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()

As you can see, the obtained accuracy is around 95–98%, which is an incredible result! Almost ten times better than guessing randomly.
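One caveat: the key names in history.history depend on your Keras version. Newer TensorFlow/Keras releases record 'accuracy' and 'val_accuracy' instead of 'acc' and 'val_acc', so if the plotting code raises a KeyError, check which keys are available first:

print(history.history.keys())            # e.g. dict_keys(['loss', 'acc', 'val_loss', 'val_acc'])
print(f'Test accuracy: {test_acc:.4f}')  # the score returned by model.evaluate above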

Great! Here you can see the expected and received values:

def display_output(num, x_train, y_train):
    x_train_ex = x_train[num, :].reshape(1, 784)  # one training example image
    y_train_ex = y_train[num, :]
    label = y_train_ex.argmax()  # get the label as an integer
    prediction = int(model.predict_classes(x_train_ex))  # get the prediction as an integer
    plt.title(f'Predicted: {prediction} Label: {label}')
    plt.imshow(x_train_ex.reshape([28, 28]), cmap=plt.get_cmap('gray_r'))

figure = plt.figure()
for x in range(1, 10):
    plt.subplot(3, 3, x)
    plt.axis('off')
    display_output(randint(0, 55000), x_train, y_train)
plt.show()

Step 4. Be proud of yourself and enjoy your work!

If you would like to challenge yourself to create a neural network, we encourage you to recreate the experiment presented in this article using another dataset bundled with the Keras library, like the CIFAR10 small image classification dataset or the Fashion-MNIST database of fashion articles.
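For instance, Fashion-MNIST ships with Keras and its images have the same 28x28 shape as the MNIST digits, so the rest of the pipeline from this article can stay the same. A minimal loading sketch, assuming the same flattening and one-hot encoding used for the digits:

from tensorflow import keras

# Load the Fashion-MNIST dataset bundled with Keras.
(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()

# Flatten the 28x28 images, scale pixel values to [0, 1] and one-hot encode the 10 classes.
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)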

If you have any comments or questions, feel free to comment below! On our GitHub repository you can find the whole code from the story.

Fingers crossed.

