A quick recap of what we did:

Introduced neurons, the building blocks of neural networks.

Used the sigmoid activation function in our neurons.

Saw that neural networks are just neurons connected together.

Created a dataset with Weight and Height as inputs (or features) and Gender as the output (or label).

Learned about loss functions and the mean squared error (MSE) loss.

Realized that training a network is just minimizing its loss.

Used backpropagation to calculate partial derivatives.

Used stochastic gradient descent (SGD) to train our network.
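The steps above can be condensed into a minimal sketch. This trains a single sigmoid neuron (rather than the full network from the post) with SGD on the MSE loss; the shifted weight/height features and the learning rate here are illustrative stand-ins:

```python
import numpy as np

def sigmoid(x):
    # Sigmoid activation: squashes any real input into (0, 1)
    return 1 / (1 + np.exp(-x))

def mse_loss(y_true, y_pred):
    # Mean squared error over all predictions
    return ((y_true - y_pred) ** 2).mean()

# Toy dataset: weight/height offsets as features, gender as a 0/1 label.
X = np.array([[-2.0, -1.0], [25.0, 6.0], [17.0, 4.0], [-15.0, -6.0]])
y = np.array([1, 0, 0, 1])

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # one weight per feature
b = 0.0
lr = 0.1

for epoch in range(1000):
    # Stochastic gradient descent: update after each individual example.
    for xi, yi in zip(X, y):
        z = np.dot(w, xi) + b
        pred = sigmoid(z)
        # Backpropagation for this one neuron (chain rule):
        # dL/dpred = -2 * (y - pred);  dpred/dz = pred * (1 - pred)
        grad = -2 * (yi - pred) * pred * (1 - pred)
        w -= lr * grad * xi  # dz/dw = x
        b -= lr * grad       # dz/db = 1

preds = sigmoid(X @ w + b)
print(preds.round(2))
```

Because a single neuron is just a linear model passed through sigmoid, this only works here because the toy data is linearly separable; the post's hidden layer is what lets a network fit more complex boundaries.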

There’s still much more to do:

Experiment with bigger / better neural networks using proper machine learning libraries like TensorFlow, Keras, and PyTorch.

Tinker with a neural network in your browser.

Discover other activation functions besides sigmoid.

Discover other optimizers besides SGD.

Learn about Convolutional Neural Networks, which revolutionized the field of Computer Vision.

Learn about Recurrent Neural Networks, often used for Natural Language Processing (NLP).
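As a small taste of the alternative activation functions mentioned above, here are two common ones (my examples, not covered in the post itself):

```python
import numpy as np

def relu(x):
    # ReLU: zero for negative inputs, identity for positive ones.
    # Cheap to compute, and its gradient doesn't vanish for positive inputs.
    return np.maximum(0, x)

def tanh(x):
    # tanh: shaped like sigmoid but zero-centered, with outputs in (-1, 1).
    return np.tanh(x)

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))  # negative inputs clipped to 0
print(tanh(x))  # squashed into (-1, 1)
```

ReLU in particular is the default choice in most modern deep networks, largely because it trains faster than sigmoid in deep stacks.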

I may write about these topics or similar ones in the future, so subscribe if you want to get notified about new posts.

Thanks for reading!

Originally posted on victorzhou.com.
