TPOT for Automated Machine Learning in Python

Automated Machine Learning (AutoML) refers to techniques for automatically discovering well-performing models for predictive modeling tasks with very little user involvement.

TPOT is an open-source library for performing AutoML in Python.

It makes use of the popular Scikit-Learn machine learning library for data transforms and machine learning algorithms and uses a Genetic Programming stochastic global search procedure to efficiently discover a top-performing model pipeline for a given dataset.

In this tutorial, you will discover how to use TPOT for AutoML with Scikit-Learn machine learning algorithms in Python.

After completing this tutorial, you will know:

- TPOT is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
- How to use TPOT to automatically discover top-performing models for classification tasks.
- How to use TPOT to automatically discover top-performing models for regression tasks.

Let’s get started.


This tutorial is divided into four parts; they are:

1. TPOT for Automated Machine Learning
2. Install and Use TPOT
3. TPOT for Classification
4. TPOT for Regression

TPOT for Automated Machine Learning

Tree-based Pipeline Optimization Tool, or TPOT for short, is a Python library for automated machine learning.

TPOT uses a tree-based structure to represent a model pipeline for a predictive modeling problem, including data preparation and modeling algorithms and model hyperparameters.

… an evolutionary algorithm called the Tree-based Pipeline Optimization Tool (TPOT) that automatically designs and optimizes machine learning pipelines.

— Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

An optimization procedure is then performed to find a tree structure that performs best for a given dataset.

Specifically, it uses a genetic programming algorithm, designed to perform a stochastic global optimization on programs represented as trees.

TPOT uses a version of genetic programming to automatically design and optimize a series of data transformations and machine learning models that attempt to maximize the classification accuracy for a given supervised learning data set.

— Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

The figure below taken from the TPOT paper shows the elements involved in the pipeline search, including data cleaning, feature selection, feature processing, feature construction, model selection, and hyperparameter optimization.

Overview of the TPOT Pipeline Search. Taken from: Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

Now that we are familiar with what TPOT is, let’s look at how we can install and use TPOT to find an effective model pipeline.

Install and Use TPOT

The first step is to install the TPOT library, which can be achieved using pip, as follows:
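pip install tpot

Once installed, we can import the library and print the version number to confirm it was installed successfully:

# check the tpot version
import tpot
print('tpot: %s' % tpot.__version__)

Running the example prints the version number.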

Your version number should be the same or higher.

Using TPOT is straightforward.

It involves creating an instance of the TPOTRegressor or TPOTClassifier class, configuring it for the search, running the search on your dataset, and then exporting the model pipeline that was found to achieve the best performance.

Configuring the class involves two main elements.

The first is how models will be evaluated, e.g. the cross-validation scheme and performance metric.

I recommend explicitly specifying a cross-validation class with your chosen configuration and the performance metric to use.

For example, RepeatedKFold with the ‘neg_mean_absolute_error’ metric for regression, or RepeatedStratifiedKFold with the ‘accuracy’ metric for classification, as in the sketches below.
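A minimal sketch of the regression configuration (the constructor arguments shown are illustrative):

# define the evaluation scheme and metric for a TPOT regression search
from sklearn.model_selection import RepeatedKFold
from tpot import TPOTRegressor
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
model = TPOTRegressor(generations=5, population_size=50, scoring='neg_mean_absolute_error', cv=cv, verbosity=2, random_state=1)

And the classification equivalent:

# define the evaluation scheme and metric for a TPOT classification search
from sklearn.model_selection import RepeatedStratifiedKFold
from tpot import TPOTClassifier
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
model = TPOTClassifier(generations=5, population_size=50, scoring='accuracy', cv=cv, verbosity=2, random_state=1)

The other element to configure is the nature of the stochastic global search procedure.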

As an evolutionary algorithm, this involves setting configuration values such as the size of the population, the number of generations to run, and potentially crossover and mutation rates.

The population size and number of generations importantly control the extent of the search; the crossover and mutation rates can be left at their default values if evolutionary search is new to you.

For example, a modest population size of 100 and 5 or 10 generations is a good starting point.

At the end of a search, the Pipeline that performs best is identified.

This Pipeline can be exported as code into a Python file that you can later copy-and-paste into your own project.
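For example, assuming a fitted TPOT model named model, exporting is a single call (the file name here is hypothetical):

# save the best-discovered pipeline as a Python script file
model.export('tpot_best_model.py')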

Now that we are familiar with how to use TPOT, let’s look at some worked examples with real data.

TPOT for Classification

In this section, we will use TPOT to discover a model for the sonar dataset.

The sonar dataset is a standard machine learning dataset comprised of 208 rows of data with 60 numerical input variables and a target variable with two class values, e.g. binary classification.

Using a test harness of repeated stratified 10-fold cross-validation with three repeats, a naive model can achieve an accuracy of about 53 percent.

A top-performing model can achieve accuracy on this same test harness of about 88 percent.

This provides the bounds of expected performance on this dataset.

The dataset involves predicting whether sonar returns indicate a rock or simulated mine.

No need to download the dataset; we will download it automatically as part of our worked examples.

The example below downloads the dataset and summarizes its shape.
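A minimal sketch, assuming the dataset is available for direct download from the jbrownlee/Datasets repository:

# load and summarize the sonar dataset
from pandas import read_csv
# location of the dataset (assumed URL)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)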

Running the example downloads the dataset and splits it into input and output elements.

As expected, we can see that there are 208 rows of data with 60 input variables.

Next, let’s use TPOT to find a good model for the sonar dataset.

First, we can define the method for evaluating models.

We will use a good practice of repeated stratified k-fold cross-validation with three repeats and 10 folds.

We will use a population size of 50 for five generations for the search and use all cores on the system by setting “n_jobs” to -1.

Finally, we can start the search and ensure that the best-performing model is saved at the end of the run.

Tying this together, the complete example is listed below.
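A sketch of the complete example, under the same dataset assumption as before:

# example of TPOT applied to the sonar classification dataset
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import RepeatedStratifiedKFold
from tpot import TPOTClassifier
# load the dataset (assumed URL)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
# minimally prepare the dataset
X = data[:, :-1].astype('float32')
y = LabelEncoder().fit_transform(data[:, -1].astype('str'))
# define the model evaluation scheme
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the search
model = TPOTClassifier(generations=5, population_size=50, cv=cv, scoring='accuracy', verbosity=2, random_state=1, n_jobs=-1)
# perform the search
model.fit(X, y)
# export the best model
model.export('tpot_sonar_best_model.py')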

Running the example may take a few minutes, and you will see a progress bar on the command line.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision.

Consider running the example a few times and comparing the average outcome.

The accuracy of top-performing models will be reported along the way.

In this case, we can see that the top-performing pipeline achieved a mean accuracy of about 86.6 percent.

This is a skillful model, and close to a top-performing model on this dataset.

The top-performing pipeline is then saved to a file named “tpot_sonar_best_model.py”.

Opening this file, you can see that there is some generic code for loading a dataset and fitting the pipeline.

An example is listed below.
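The specific pipeline and hyperparameter values below are illustrative only; each run of the search can produce a different result:

import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from tpot.builtins import StackingEstimator

# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = \
    train_test_split(features, tpot_data['target'], random_state=1)

# best pipeline found: a Naive Bayes model stacked into a Gradient Boosting ensemble
exported_pipeline = make_pipeline(
    StackingEstimator(estimator=GaussianNB()),
    GradientBoostingClassifier()
)

exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)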

Note: as-is, this code does not execute, by design.

It is a template that you can copy-and-paste into your project.

In this case, we can see that the best-performing model is a pipeline comprised of a Naive Bayes model and a Gradient Boosting model.

We can adapt this code to fit a final model on all available data and make a prediction for new data.

The complete example is listed below.
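A sketch of the adapted example; the recreated pipeline is illustrative, and the row of new data is simply taken from the training data for demonstration:

# fit the best-found pipeline on all data and make a prediction
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from tpot.builtins import StackingEstimator
# load the dataset (assumed URL)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
X = data[:, :-1].astype('float32')
y = LabelEncoder().fit_transform(data[:, -1].astype('str'))
# recreate the exported pipeline (hyperparameters illustrative)
model = make_pipeline(StackingEstimator(estimator=GaussianNB()), GradientBoostingClassifier())
# fit the model on all available data
model.fit(X, y)
# make a prediction for one row of data
row = X[0].reshape(1, -1)
yhat = model.predict(row)
print('Predicted: %d' % yhat[0])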

Running the example fits the best-performing model on the dataset and makes a prediction for a single row of new data.

TPOT for Regression

In this section, we will use TPOT to discover a model for the auto insurance dataset.

The auto insurance dataset is a standard machine learning dataset comprised of 63 rows of data with one numerical input variable and a numerical target variable.

Using a test harness of repeated 10-fold cross-validation with three repeats, a naive model can achieve a mean absolute error (MAE) of about 66.

A top-performing model can achieve a MAE on this same test harness of about 28.

This provides the bounds of expected performance on this dataset.

The dataset involves predicting the total amount in claims (thousands of Swedish Kronor) given the number of claims for different geographical regions.

No need to download the dataset; we will download it automatically as part of our worked examples.

The example below downloads the dataset and summarizes its shape.
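A minimal sketch, again assuming the dataset is available from the jbrownlee/Datasets repository:

# load and summarize the auto insurance dataset
from pandas import read_csv
# location of the dataset (assumed URL)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)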

Running the example downloads the dataset and splits it into input and output elements.

As expected, we can see that there are 63 rows of data with one input variable.

Next, we can use TPOT to find a good model for the auto insurance dataset.

First, we can define the method for evaluating models.

We will use a good practice of repeated k-fold cross-validation with three repeats and 10 folds.

We will use a population size of 50 for five generations for the search and use all cores on the system by setting “n_jobs” to -1.

Finally, we can start the search and ensure that the best-performing model is saved at the end of the run.

Tying this together, the complete example is listed below.
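A sketch of the complete example, under the same dataset assumption:

# example of TPOT applied to the auto insurance regression dataset
from pandas import read_csv
from sklearn.model_selection import RepeatedKFold
from tpot import TPOTRegressor
# load the dataset (assumed URL)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
X, y = data[:, :-1].astype('float32'), data[:, -1].astype('float32')
# define the model evaluation scheme (no stratification for regression)
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the search
model = TPOTRegressor(generations=5, population_size=50, cv=cv, scoring='neg_mean_absolute_error', verbosity=2, random_state=1, n_jobs=-1)
# perform the search
model.fit(X, y)
# export the best model
model.export('tpot_insurance_best_model.py')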

Running the example may take a few minutes, and you will see a progress bar on the command line.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision.

Consider running the example a few times and comparing the average outcome.

The MAE of top-performing models will be reported along the way.

In this case, we can see that the top-performing pipeline achieved a mean MAE of about 29.14.

This is a skillful model, and close to a top-performing model on this dataset.

The top-performing pipeline is then saved to a file named “tpot_insurance_best_model.py”.

Opening this file, you can see that there is some generic code for loading a dataset and fitting the pipeline.

An example is listed below.
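The specific model and hyperparameter values below are illustrative only; each run of the search can produce a different result:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVR

# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = \
    train_test_split(features, tpot_data['target'], random_state=1)

# best pipeline found: a linear support vector regression model
exported_pipeline = LinearSVR()

exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)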

Note: as-is, this code does not execute, by design.

It is a template that you can copy-and-paste into your project.

In this case, we can see that the best-performing model is a pipeline comprised of a linear support vector machine model.

We can adapt this code to fit a final model on all available data and make a prediction for new data.

The complete example is listed below.
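A sketch of the adapted example; the recreated model is illustrative and the input value for the new row is hypothetical:

# fit the best-found model on all data and make a prediction
from pandas import read_csv
from sklearn.svm import LinearSVR
# load the dataset (assumed URL)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
X, y = data[:, :-1].astype('float32'), data[:, -1].astype('float32')
# recreate the exported model (hyperparameters illustrative)
model = LinearSVR()
# fit the model on all available data
model.fit(X, y)
# make a prediction for a new number of claims (hypothetical value)
row = [[13]]
yhat = model.predict(row)
print('Predicted: %.3f' % yhat[0])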

Running the example fits the best-performing model on the dataset and makes a prediction for a single row of new data.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.
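- Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016. https://arxiv.org/abs/1601.07925
- TPOT Documentation. https://epistasislab.github.io/tpot/
- TPOT on GitHub. https://github.com/EpistasisLab/tpot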

Summary

In this tutorial, you discovered how to use TPOT for AutoML with Scikit-Learn machine learning algorithms in Python.

Specifically, you learned:

- TPOT is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
- How to use TPOT to automatically discover top-performing models for classification tasks.
- How to use TPOT to automatically discover top-performing models for regression tasks.

Do you have any questions? Ask your questions in the comments below and I will do my best to answer.
