Contemporary Classification of Machine Learning Techniques (Part 1)

Donny Soh · Apr 23

In this second article of the series Learning Machine Learning, we will classify machine learning techniques into various types, or classes.

The goal of this article is to provide you with a simple framework to aid in your understanding of machine learning techniques as you encounter them.

Table of the Animal Kingdom (Regnum Animale) from Carolus Linnaeus's first edition (1735) of Systema Naturae

As the field of machine learning grows exponentially, a new practitioner can often feel overwhelmed and confused by the vast number of machine learning techniques.

This article aims to simplify your learning process by considering machine learning techniques as belonging to one of three groups: Supervised Learning, Unsupervised Learning and Reinforcement Learning.

This categorization provides a framework to help you organize your thought process as you encounter new algorithms.

Hopefully this will prove to be useful as we go deeper into the technical details.

Contemporary Classification of Machine Learning Algorithms

As mentioned, today's machine learning algorithms can be segregated into one of three classes: supervised learning, unsupervised learning, and reinforcement learning.

This segregation reflects the way these algorithms learn their models.

In particular:

- Supervised learning: learns the model by example
- Unsupervised learning: learns the model by data
- Reinforcement learning: learns the model by experience

Supervised Learning: Learning by Example

Let's explain supervised learning with a real-life example.

Every morning as we rush to go about our lives, most of us struggle with this question: "How should I go to work today?"

Assume that I have only two transportation options for getting to work: the bus or the train.

I will probably consider a range of factors which might affect my choice.

Examples of these factors might be whether it is raining, whether there is a train breakdown, whether I overslept that morning, or whether I have an early morning meeting with my boss.

These factors are frequently referred to as features or input variables (You might also hear statisticians refer to these factors as x-variables).

Naturally, the outcome of these factors affects my choice of either taking the bus or the train.

For instance, if it is not raining and the train is working well, I will always choose to take the train.

If there is a train fault, regardless of the other factors, I will always choose to take the bus.

Note that this decision of taking the bus or the train is frequently referred to as the output variable, the label or the y-variable.

Hence I am trying to learn a model that, when provided with the input variables of a certain day (e.g. not raining, train working fine, no meeting with the boss, and woke up on time), predicts whether I will take the bus or the train to work.

To learn this model in a supervised manner (learning from example), we need to train the model from a list of examples.

This list of examples can possibly be obtained by doing the following.

For the next few days I would log the various input variables (rain, overslept, train breakdown, meeting with boss), followed by my final decision of whether I took the bus or the train to work.

A simple five-day log would therefore form a table like this:

Table 1: Training observations

For supervised learning, we will use this table to train the model.
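The original Table 1 is an image, but a five-day log like it can be represented directly in code. The sketch below is my own illustration: the exact values are invented, chosen only to be consistent with the rows described later in the text (rows 1 and 4 have a train breakdown, and among the remaining rows the decision follows the rain):

```python
# Hypothetical training log: each element is one observation (one row of Table 1).
# Features (x-variables): rain, overslept, train_breakdown, meeting
# Label (y-variable): "bus" or "train"
observations = [
    {"rain": False, "overslept": False, "train_breakdown": False, "meeting": False, "y": "train"},  # row 0
    {"rain": False, "overslept": True,  "train_breakdown": True,  "meeting": False, "y": "bus"},    # row 1
    {"rain": True,  "overslept": False, "train_breakdown": False, "meeting": True,  "y": "bus"},    # row 2
    {"rain": False, "overslept": True,  "train_breakdown": False, "meeting": True,  "y": "train"},  # row 3
    {"rain": True,  "overslept": False, "train_breakdown": True,  "meeting": False, "y": "bus"},    # row 4
]

# Split each observation into its features and its label.
features = [{k: v for k, v in row.items() if k != "y"} for row in observations]
labels = [row["y"] for row in observations]
```

A supervised learning algorithm would be trained on `features` (the inputs) and `labels` (the outputs).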

Every single row in this table is termed an observation.

The data that is used to train the model is also known as training data.

After the model is completed, every time a new observation is obtained, we will be able to predict whether we should take the bus or the train to work.

This prediction is obtained by passing the observation's features into the model and having the model automatically predict the output for that observation.

We will illustrate this slightly later as we talk about tree based models and linear models in supervised learning.

Tree-based models versus Linear models

Contemporary supervised learning comprises two main families of machine learning algorithms.

They are tree-based methods (decision trees, random forests, and gradient-boosted trees such as XGBoost) and linear models (logistic regression, neural networks).

The easiest form of a tree based algorithm is that of a decision stump.

It consists of just a decision condition, known as a decision node.

This node comes with branches on the left and the right side.

These left and right branches represent the possible outcomes of the condition tested at the node (Figure 1).

Tree-based methods are normally more intuitive, and their models are relatively easy to explain. Let's try to create a simple decision tree now.

In our example above, you will notice from rows 1 and 4 of Table 1 that as long as the train is not working, we will always take the bus.

Hence it is possible to construct a decision tree with the top (root node) of the tree looking like this.

Figure 1: A decision stump and the root node of the decision tree

We next fill in the node <Next Condition>.

This is depicted as the left hand branch from the root node.

Returning to Table 1, we only consider the observations where the train is working which are rows 0, 2 and 3.

From these three rows you find that as long as it rains I will take the bus.

If it does not rain I will be taking the train.

Hence the next iteration of the tree would look something like this.

Figure 2: Final decision tree

At this point, we note that this tree accounts for every single observation in our training dataset.

Hence we can stop adding nodes or branches to the tree and consider this decision tree final.
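Because the finished tree in Figure 2 is just two nested decisions, it can be written directly as a function. This is a minimal sketch in plain Python (the function and parameter names are my own shorthand for the features in the text):

```python
def predict_transport(train_breakdown: bool, rain: bool) -> str:
    """The decision tree of Figure 2: the root node tests whether
    the train is working, and the next condition tests the rain."""
    if train_breakdown:   # root node: train not working
        return "bus"      # regardless of other factors, take the bus
    if rain:              # next condition: it is raining
        return "bus"
    return "train"        # train working and no rain: take the train

# The tree reproduces every observation in the training log:
print(predict_transport(train_breakdown=True, rain=False))   # bus (rows 1 and 4)
print(predict_transport(train_breakdown=False, rain=True))   # bus (row 2)
print(predict_transport(train_breakdown=False, rain=False))  # train (rows 0 and 3)
```

In practice the tree structure is learned automatically from the training data (e.g. by scikit-learn's `DecisionTreeClassifier`) rather than written by hand, but the learned model has exactly this if/else form.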

We will describe tree-based methods in greater technical detail in a later article.

We next look at linear based models and in particular Neural Networks.

Note that although models such as neural networks and logistic regression are nonlinear functions, Jeremy Howard from fast.ai refers to them as linear models because the model is specified by a linear combination of its features.

For now, you can think of neural networks as a technique of organizing the input variables (red layer) to obtain the output variables (green layer).

Mathematically this is achieved through a series of linear functions encapsulated by a non-linear activation function (such as ReLU or softmax).

In order to create a more representative relationship between the input and output layers, we insert additional layers between them.

These additional layers are known as hidden layers; pictorially, they look like this:

Figure 3: Pictorial representation of a neural network. Source: http://cs231n.github.io/neural-networks-1/

Neural networks are considered "deep" if there is more than one hidden layer between the input and the output layer.

In fact deep learning simply refers to a neural network with multiple hidden layers between the input and the output variables.
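To make "a series of linear functions encapsulated by a non-linear activation" concrete, here is a minimal forward pass for a network with one hidden layer. The weights below are random placeholders, not a trained model, and the layer sizes are my own choice (four inputs mirroring the commute features, two outputs for bus/train scores):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)              # non-linear activation

# 4 input features -> 3 hidden units -> 2 output classes
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)   # input layer  -> hidden layer
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)   # hidden layer -> output layer

def forward(x):
    h = relu(W1 @ x + b1)                  # linear function, then ReLU
    scores = W2 @ h + b2                   # linear function on the hidden layer
    e = np.exp(scores - scores.max())      # softmax turns scores into probabilities
    return e / e.sum()

x = np.array([0.0, 0.0, 1.0, 0.0])         # one observation: train breakdown only
probs = forward(x)
print(probs)                                # two probabilities summing to 1
```

Training would adjust `W1`, `b1`, `W2`, and `b2` so that the output probabilities match the labels in the training data; a "deep" network simply stacks more hidden layers between `W1` and `W2`.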

Regression and Classification

The example we introduced above is a case of supervised classification.

In supervised classification, our aim is to create a model from the labelled dataset to predict if a new observation belongs to a certain class.

In the example above, our aim is to create a model that predicts, for a new observation, whether the label of the output variable belongs to the class Bus or the class Train.

This is in contrast to another case of supervised learning known as regression.

When you are carrying out regression analysis, you are trying to predict a continuous variable for the observation.

Returning to our illustration above, an example of a regression prediction would be how much it would cost me to get to work.

In summary, when you are doing classification, you are trying to predict a class for the output variable.

When you are doing regression, you are trying to predict a continuous number for the output variable.
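The contrast can be stated in code: the same features can feed two different models, one returning a class label and one returning a continuous number. This is a sketch under my own assumptions (the decision rule follows the tree in the text, while the fare values are invented purely for illustration):

```python
def classify_transport(train_breakdown: bool, rain: bool) -> str:
    """Classification: the prediction is a class label."""
    return "bus" if (train_breakdown or rain) else "train"

def predict_fare(train_breakdown: bool, rain: bool) -> float:
    """Regression: the prediction is a continuous number.
    Hypothetical fares: bus $1.70, train $1.50, plus a $0.50
    surcharge on rainy days (invented for illustration)."""
    fare = 1.70 if (train_breakdown or rain) else 1.50
    if rain:
        fare += 0.50
    return fare

print(classify_transport(False, True))  # a class label: "bus"
print(predict_fare(False, True))        # a continuous number: the predicted fare
```

Both functions consume the same input variables; only the type of output variable differs, which is exactly the classification/regression distinction.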

Summary

There are three classes of machine learning algorithms:

- Supervised learning: learns the model by example
- Unsupervised learning: learns the model by data
- Reinforcement learning: learns the model by experience

For supervised learning, there are two types of machine learning algorithms: tree-based models and linear models.

For both of these models, there are two types of analysis that we can carry out: classification and regression.

When you are doing classification analysis, you are trying to predict a class for the output variable.

When you are doing regression analysis, you are trying to predict a continuous number for the output variable.

Deep learning simply refers to a neural network with multiple hidden layers between the input and the output variables.

In the next article of the series, we will continue this discussion with the two other forms of machine learning, Unsupervised Learning and Reinforcement Learning.

Stay tuned!

The author is an adjunct professor at the Singapore Institute of Technology (SIT).

He holds a PhD in Computer Science from Imperial College.

He also has a Master's in Computer Science from NUS under the Singapore-MIT Alliance (SMA) programme.

The views in this article are that of the author’s and do not necessarily reflect the official policies or positions of any organizations that the author is associated with.

The author also holds no affiliations nor earns any fees from any products, courses or books mentioned in this article.
