Train-Test Split for Evaluating Machine Learning Algorithms

The train-test split procedure is used to estimate the performance of machine learning algorithms when they are used to make predictions on data not used to train the model.

It is a fast and easy procedure to perform, the results of which allow you to compare the performance of machine learning algorithms for your predictive modeling problem.

Although the procedure is simple to use and interpret, there are times when it should not be used, such as when you have a small dataset, and situations where additional configuration is required, such as when it is used for classification and the dataset is not balanced.

In this tutorial, you will discover how to evaluate machine learning models using the train-test split.

After completing this tutorial, you will know when the train-test split procedure is appropriate, how to configure it, and how to use it to evaluate machine learning models for classification and regression in Python.

Let’s get started.


This tutorial is divided into three parts: the train-test split evaluation procedure, the train-test split procedure in scikit-learn, and using the train-test split to evaluate machine learning models.

The train-test split is a technique for evaluating the performance of a machine learning algorithm.

It can be used for classification or regression problems and can be used for any supervised learning algorithm.

The procedure involves taking a dataset and dividing it into two subsets.

The first subset is used to fit the model and is referred to as the training dataset.

The second subset is not used to train the model; instead, the input element of the dataset is provided to the model, then predictions are made and compared to the expected values.

This second dataset is referred to as the test dataset.

The objective is to estimate the performance of the machine learning model on new data: data not used to train the model.

This is how we expect to use the model in practice.

Namely, to fit it on available data with known inputs and outputs, then make predictions on new examples in the future where we do not have the expected output or target values.

The train-test procedure is appropriate when there is a sufficiently large dataset available.

The idea of “sufficiently large” is specific to each predictive modeling problem.

It means that there is enough data to split the dataset into train and test datasets and that each of the train and test datasets is a suitable representation of the problem domain.

This requires that the original dataset is also a suitable representation of the problem domain.

A suitable representation of the problem domain means that there are enough records to cover all common cases and most uncommon cases in the domain.

This might mean combinations of input variables observed in practice.

It might require thousands, hundreds of thousands, or millions of examples.

Conversely, the train-test procedure is not appropriate when the dataset available is small.

The reason is that when the dataset is split into train and test sets, there will not be enough data in the training dataset for the model to learn an effective mapping of inputs to outputs.

There will also not be enough data in the test set to effectively evaluate the model performance.

The estimated performance could be overly optimistic (good) or overly pessimistic (bad).

If you have insufficient data, then a suitable alternate model evaluation procedure would be the k-fold cross-validation procedure.

In addition to dataset size, another reason to use the train-test split evaluation procedure is computational efficiency.

Some models are very costly to train, and in that case, repeated evaluation used in other procedures is intractable.

An example might be deep neural network models.

In this case, the train-test procedure is commonly used.

Alternately, a project may have an efficient model and a vast dataset, but may require an estimate of model performance quickly.

Again, the train-test split procedure is used in this situation.

Samples from the original training dataset are split into the two subsets using random selection.

This is to ensure that the train and test datasets are representative of the original dataset.

The procedure has one main configuration parameter, which is the size of the train and test sets.

This is most commonly expressed as a percentage between 0 and 1 for either the train or test datasets.

For example, a training set with a size of 0.67 (67 percent) means that the remaining 0.33 (33 percent) is assigned to the test set.

There is no optimal split percentage.

You must choose a split percentage that meets your project’s objectives, with considerations that include the computational cost of training the model, the computational cost of evaluating the model, and how representative the train and test sets need to be of the problem domain.

Nevertheless, common split percentages include 80 percent for training and 20 percent for testing, 67 percent for training and 33 percent for testing, and a 50/50 split.

Now that we are familiar with the train-test split model evaluation procedure, let’s look at how we can use this procedure in Python.

The scikit-learn Python machine learning library provides an implementation of the train-test split evaluation procedure via the train_test_split() function.

The function takes a loaded dataset as input and returns the dataset split into two subsets.

Ideally, you can split your original dataset into input (X) and output (y) columns, then call the function passing both arrays and have them split appropriately into train and test subsets.

The size of the split can be specified via the “test_size” argument that takes a number of rows (integer) or a percentage (float) of the size of the dataset between 0 and 1.

The latter is the most common, with values such as 0.33, where 33 percent of the dataset will be allocated to the test set and 67 percent will be allocated to the training set.

We can demonstrate this using a synthetic classification dataset with 1,000 examples.

The complete example is listed below.
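The original code listing is not reproduced here; the sketch below shows one way such an example might look, using the make_classification() and train_test_split() functions from scikit-learn (a minimal sketch, not the original listing).

```python
# split a synthetic classification dataset into train and test sets
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# create a synthetic dataset with 1,000 examples
X, y = make_classification(n_samples=1000)
# split so that 33 percent of rows are held out for the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
# summarize the size of each subset
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
```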

Running the example splits the dataset into train and test sets, then prints the size of the new dataset.

We can see that 670 examples (67 percent) were allocated to the training set and 330 examples (33 percent) were allocated to the test set, as we specified.

Alternatively, the dataset can be split by specifying the “train_size” argument, which can be either a number of rows (integer) or a percentage of the original dataset between 0 and 1, such as 0.67 for 67 percent.
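A minimal sketch of the equivalent call using the “train_size” argument (an illustration, not the original listing):

```python
# split the dataset by specifying the training set size instead
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000)
# request 67 percent of rows for training; the remainder goes to the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.67)
print(X_train.shape, X_test.shape)
```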

Another important consideration is that rows are assigned to the train and test sets randomly.

This is done to ensure that each dataset is a representative sample (e.g. a random sample) of the original dataset, which in turn should be a representative sample of observations from the problem domain.

When comparing machine learning algorithms, it is desirable (perhaps required) that they are fit and evaluated on the same subsets of the dataset.

This can be achieved by fixing the seed for the pseudo-random number generator used when splitting the dataset.

If you are new to pseudo-random number generators, see the separate tutorial on the topic.

Fixing the seed can be achieved by setting the “random_state” argument to an integer value.

Any value will do; it is not a tunable hyperparameter.

The example below demonstrates this and shows that two separate splits of the data result in the same result.
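The original listing is not preserved; the sketch below illustrates the idea by performing the same split twice with a fixed “random_state” and printing the first five training rows each time.

```python
# demonstrate that fixing random_state produces an identical split
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# create a synthetic classification dataset
X, y = make_classification(n_samples=100, random_state=1)
# first split with a fixed seed
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# print the first five rows of the training set
print(X_train[:5])
# second split with the same seed
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# the first five rows are identical to those printed above
print(X_train[:5])
```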

Running the example splits the dataset and prints the first five rows of the training dataset.

The dataset is split again and the first five rows of the training dataset are printed showing identical values, confirming that when we fix the seed for the pseudorandom number generator, we get an identical split of the original dataset.

One final consideration is for classification problems only.

Some classification problems do not have a balanced number of examples for each class label.

As such, it is desirable to split the dataset into train and test sets in a way that preserves the same proportions of examples in each class as observed in the original dataset.

This is called a stratified train-test split.

We can achieve this by setting the “stratify” argument to the y component of the original dataset.

This will be used by the train_test_split() function to ensure that both the train and test sets have the proportion of examples in each class that is present in the provided “y” array.

We can demonstrate this with an example of a classification dataset with 94 examples in one class and six examples in a second class.

First, we can split the dataset into train and test sets without the “stratify” argument.

The complete example is listed below.
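The original listing is not preserved; the sketch below shows one way to construct and split such an imbalanced dataset (the use of the “weights” and “flip_y” arguments to force the 94/6 composition is an assumption, and the exact train/test counts reported next depend on the seed used in the original listing).

```python
# split an imbalanced dataset into train and test sets without stratification
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# create a dataset with 94 examples in one class and 6 in the other (assumed configuration)
X, y = make_classification(n_samples=100, weights=[0.94], flip_y=0, random_state=1)
print(Counter(y))
# split into half train and half test sets without stratification
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1)
# report the class composition of each subset
print(Counter(y_train))
print(Counter(y_test))
```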

Running the example first reports the composition of the dataset by class label, showing the expected 94 percent vs. 6 percent.

Then the dataset is split and the composition of the train and test sets is reported.

We can see that the train set has 45/5 examples and the test set has 49/1 examples.

The compositions of the train and test sets differ, and this is not desirable.

Next, we can stratify the train-test split and compare the results.

Given that we have used a 50 percent split for the train and test sets, we would expect both the train and test sets to have 47/3 examples of the majority/minority classes respectively.
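A sketch of the stratified version, identical to the previous sketch except for the added “stratify” argument:

```python
# split an imbalanced dataset into train and test sets with stratification
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# create a dataset with 94 examples in one class and 6 in the other (assumed configuration)
X, y = make_classification(n_samples=100, weights=[0.94], flip_y=0, random_state=1)
print(Counter(y))
# split into half train and half test sets, preserving the class proportions
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1, stratify=y)
# report the class composition of each subset
print(Counter(y_train))
print(Counter(y_test))
```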

Running the example, we can see that in this case, the stratified version of the train-test split has created both the train and test datasets with the expected 47/3 composition.

Now that we are familiar with the train_test_split() function, let’s look at how we can use it to evaluate a machine learning model.

In this section, we will explore using the train-test split procedure to evaluate machine learning models on standard classification and regression predictive modeling datasets.

We will demonstrate how to use the train-test split to evaluate a random forest algorithm on the sonar dataset.

The sonar dataset is a standard machine learning dataset composed of 208 rows of data with 60 numerical input variables and a target variable with two class values, e.g. binary classification.

The dataset involves predicting whether sonar returns indicate a rock or simulated mine.

No need to download the dataset; we will download it automatically as part of our worked examples.

The example below downloads the dataset and summarizes its shape.
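The original listing is not preserved; the sketch below assumes a publicly hosted copy of the dataset at the URL shown (an assumption) and loads it with Pandas.

```python
# load and summarize the sonar dataset
from pandas import read_csv

# location of the dataset (an assumed publicly hosted copy)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
# load the dataset as a DataFrame with no header row
dataframe = read_csv(url, header=None)
# split into input (X) and output (y) elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# summarize the shape of the input and output elements
print(X.shape, y.shape)
```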

Running the example downloads the dataset and splits it into input and output elements.

As expected, we can see that there are 208 rows of data with 60 input variables.

We can now evaluate a model using a train-test split.

First, the loaded dataset must be split into input and output components.

Next, we can split the dataset so that 67 percent is used to train the model and 33 percent is used to evaluate it.

This split was chosen arbitrarily.

We can then define and fit the model on the training dataset.

Then use the fit model to make predictions and evaluate the predictions using the classification accuracy performance metric.

Tying this together, the complete example is listed below.
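A minimal sketch of such a complete example is shown below (the dataset URL is an assumption, and the exact accuracy may differ slightly from the figure quoted next depending on the library version and random seed).

```python
# evaluate a random forest on the sonar dataset using a train-test split
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# load the dataset (assumed hosted copy)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
# split into input (X) and output (y) elements
X, y = data[:, :-1].astype('float32'), data[:, -1]
print(X.shape, y.shape)
# split into train (67 percent) and test (33 percent) sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# define and fit the model on the training set
model = RandomForestClassifier(random_state=1)
model.fit(X_train, y_train)
# make predictions on the test set and evaluate them
yhat = model.predict(X_test)
acc = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % acc)
```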

Running the example first loads the dataset and confirms the number of rows in the input and output elements.

The dataset is split into train and test sets and we can see that there are 139 rows for training and 69 rows for the test set.

Finally, the model is evaluated on the test set, and the performance of the model when making predictions on new data is an accuracy of about 78.3 percent.

We will demonstrate how to use the train-test split to evaluate a random forest algorithm on the housing dataset.

The housing dataset is a standard machine learning dataset composed of 506 rows of data with 13 numerical input variables and a numerical target variable.

The dataset involves predicting the house price given details of the house’s suburb in the American city of Boston.

No need to download the dataset; we will download it automatically as part of our worked examples.

The example below downloads and loads the dataset as a Pandas DataFrame and summarizes the shape of the dataset.
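The original listing is not preserved; a minimal sketch, again assuming a publicly hosted copy of the dataset at the URL shown:

```python
# load and summarize the housing dataset
from pandas import read_csv

# location of the dataset (an assumed publicly hosted copy)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
# load the dataset as a DataFrame with no header row
dataframe = read_csv(url, header=None)
# summarize the shape of the dataset
print(dataframe.shape)
```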

Running the example confirms the 506 rows of data with 13 input variables and a single numeric target variable (14 columns in total).

We can now evaluate a model using a train-test split.

First, the loaded dataset must be split into input and output components.

Next, we can split the dataset so that 67 percent is used to train the model and 33 percent is used to evaluate it.

This split was chosen arbitrarily.

We can then define and fit the model on the training dataset.

Then use the fit model to make predictions and evaluate the predictions using the mean absolute error (MAE) performance metric.

Tying this together, the complete example is listed below.
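A minimal sketch of such a complete example is shown below (the dataset URL is an assumption, and the exact error may differ slightly from the figure quoted next).

```python
# evaluate a random forest on the housing dataset using a train-test split
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# load the dataset (assumed hosted copy)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
# split into input (X) and output (y) elements
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
# split into train (67 percent) and test (33 percent) sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# define and fit the model on the training set
model = RandomForestRegressor(random_state=1)
model.fit(X_train, y_train)
# make predictions on the test set and evaluate them
yhat = model.predict(X_test)
mae = mean_absolute_error(y_test, yhat)
print('MAE: %.3f' % mae)
```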

Running the example first loads the dataset and confirms the number of rows in the input and output elements.

The dataset is split into train and test sets and we can see that there are 339 rows for training and 167 rows for the test set.

Finally, the model is evaluated on the test set, and the performance of the model when making predictions on new data is a mean absolute error of about 2.211 (thousands of dollars).


In this tutorial, you discovered how to evaluate machine learning models using the train-test split.

Specifically, you learned how the train-test split procedure works, how to configure it in scikit-learn, and how to use it to evaluate machine learning models for classification and regression.

Do you have any questions? Ask your questions in the comments below and I will do my best to answer.
