
TPOT for Automated Machine Learning in Python

Automated Machine Learning (AutoML) refers to techniques for automatically discovering well-performing models for predictive modeling tasks with very little user involvement.

TPOT is an open-source library for performing AutoML in Python. It makes use of the popular Scikit-Learn machine learning library for data transforms and machine learning algorithms and uses a Genetic Programming stochastic global search procedure to efficiently discover a top-performing model pipeline for a given dataset.

In this tutorial, you will discover how to use TPOT for AutoML with Scikit-Learn machine learning algorithms in Python.

After completing this tutorial, you will know:

  • TPOT is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
  • How to use TPOT to automatically discover top-performing models for classification tasks.
  • How to use TPOT to automatically discover top-performing models for regression tasks.

Let’s get started.

TPOT for Automated Machine Learning in Python
Photo by Gwen, some rights reserved.

Tutorial Overview

This tutorial is divided into four parts; they are:

  • TPOT for Automated Machine Learning
  • Install and Use TPOT
  • TPOT for Classification
  • TPOT for Regression

    TPOT for Automated Machine Learning

    Tree-based Pipeline Optimization Tool, or TPOT for short, is a Python library for automated machine learning.

    TPOT uses a tree-based structure to represent a model pipeline for a predictive modeling problem, including data preparation and modeling algorithms and model hyperparameters.

    … an evolutionary algorithm called the Tree-based Pipeline Optimization Tool (TPOT) that automatically designs and optimizes machine learning pipelines.

    — Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

    An optimization procedure is then performed to find a tree structure that performs best for a given dataset. Specifically, a genetic programming algorithm is used, one designed to perform stochastic global optimization on programs represented as trees.

    TPOT uses a version of genetic programming to automatically design and optimize a series of data transformations and machine learning models that attempt to maximize the classification accuracy for a given supervised learning data set.

    — Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

    The figure below taken from the TPOT paper shows the elements involved in the pipeline search, including data cleaning, feature selection, feature processing, feature construction, model selection, and hyperparameter optimization.

    Overview of the TPOT Pipeline Search
    Taken from: Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

    Now that we are familiar with what TPOT is, let’s look at how we can install and use TPOT to find an effective model pipeline.

    Install and Use TPOT

    The first step is to install the TPOT library, which can be achieved using pip, as follows:
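
    pip install tpot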


    Once installed, we can import the library and print the version number to confirm it was installed successfully:
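
    # check the tpot version
    import tpot
    print('tpot: %s' % tpot.__version__)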


    Running the example prints the version number.

    Your version number should be the same or higher.


    Using TPOT is straightforward.

    It involves creating an instance of the TPOTRegressor or TPOTClassifier class, configuring it for the search, and then exporting the model pipeline that was found to achieve the best performance on your dataset.

    Configuring the class involves two main elements.

    The first is how models will be evaluated, e.g. the cross-validation scheme and performance metric. I recommend explicitly specifying a cross-validation class with your chosen configuration and the performance metric to use.

    For example, RepeatedKFold with the ‘neg_mean_absolute_error‘ metric for regression:
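
    # evaluation procedure and metric for a regression task (a sketch; search settings are illustrative)
    from sklearn.model_selection import RepeatedKFold
    from tpot import TPOTRegressor
    cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
    model = TPOTRegressor(generations=5, population_size=50, scoring='neg_mean_absolute_error', cv=cv, verbosity=2, random_state=1)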


    Or RepeatedStratifiedKFold with the ‘accuracy‘ metric for classification:
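
    # evaluation procedure and metric for a classification task (a sketch; search settings are illustrative)
    from sklearn.model_selection import RepeatedStratifiedKFold
    from tpot import TPOTClassifier
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    model = TPOTClassifier(generations=5, population_size=50, scoring='accuracy', cv=cv, verbosity=2, random_state=1)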


    The other element is the nature of the stochastic global search procedure.

    As an evolutionary algorithm, this involves setting configuration properties, such as the size of the population, the number of generations to run, and, potentially, crossover and mutation rates. The former importantly control the extent of the search; the latter can be left on default values if evolutionary search is new to you.

    For example, a modest population size of 100 and 5 or 10 generations is a good starting point.
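
    # e.g. a population of 100 evolved for 5 generations (illustrative values; cv defined as above)
    model = TPOTClassifier(generations=5, population_size=100, cv=cv, scoring='accuracy', verbosity=2, random_state=1)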


    At the end of a search, a Pipeline is found that performs the best.

    This Pipeline can be exported as code into a Python file that you can later copy-and-paste into your own project.
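
    For example:

    # export the best pipeline as Python code (the filename is your choice)
    model.export('tpot_best_model.py')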


    Now that we are familiar with how to use TPOT, let’s look at some worked examples with real data.

    TPOT for Classification

    In this section, we will use TPOT to discover a model for the sonar dataset.

    The sonar dataset is a standard machine learning dataset comprised of 208 rows of data with 60 numerical input variables and a target variable with two class values, e.g. binary classification.

    Using a test harness of repeated stratified 10-fold cross-validation with three repeats, a naive model can achieve an accuracy of about 53 percent. A top-performing model can achieve accuracy on this same test harness of about 88 percent. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting whether sonar returns indicate a rock or simulated mine.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.
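
    # load and summarize the sonar dataset (the URL assumes the copy hosted in the jbrownlee/Datasets repository)
    from pandas import read_csv
    # load the dataset
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
    dataframe = read_csv(url, header=None)
    # split into input and output elements
    data = dataframe.values
    X, y = data[:, :-1], data[:, -1]
    print(X.shape, y.shape)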


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 208 rows of data with 60 input variables.
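
    (208, 60) (208,)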


    Next, let’s use TPOT to find a good model for the sonar dataset.

    First, we can define the method for evaluating models. We will use a good practice of repeated stratified k-fold cross-validation with three repeats and 10 folds.
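
    # define the model evaluation procedure
    from sklearn.model_selection import RepeatedStratifiedKFold
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)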


    We will use a population size of 50 for five generations for the search and use all cores on the system by setting “n_jobs” to -1.
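
    # define the search
    model = TPOTClassifier(generations=5, population_size=50, cv=cv, scoring='accuracy', verbosity=2, random_state=1, n_jobs=-1)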


    Finally, we can start the search and ensure that the best-performing model is saved at the end of the run.
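
    # perform the search
    model.fit(X, y)
    # export the best model
    model.export('tpot_sonar_best_model.py')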


    Tying this together, the complete example is listed below.
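
    # tpot search on the sonar dataset (a sketch; the URL assumes the jbrownlee/Datasets copy)
    from pandas import read_csv
    from sklearn.preprocessing import LabelEncoder
    from sklearn.model_selection import RepeatedStratifiedKFold
    from tpot import TPOTClassifier
    # load the dataset
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
    dataframe = read_csv(url, header=None)
    # split into input and output elements
    data = dataframe.values
    X, y = data[:, :-1], data[:, -1]
    X = X.astype('float32')
    y = LabelEncoder().fit_transform(y.astype('str'))
    # define the model evaluation procedure
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    # define the search
    model = TPOTClassifier(generations=5, population_size=50, cv=cv, scoring='accuracy', verbosity=2, random_state=1, n_jobs=-1)
    # perform the search
    model.fit(X, y)
    # export the best model
    model.export('tpot_sonar_best_model.py')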


    Running the example may take a few minutes, and you will see a progress bar on the command line.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    The accuracy of top-performing models will be reported along the way.


    In this case, we can see that the top-performing pipeline achieved a mean accuracy of about 86.6 percent. This is a skillful model, and close to a top-performing model on this dataset.

    The top-performing pipeline is then saved to a file named “tpot_sonar_best_model.py“.

    Opening this file, you can see that there is some generic code for loading a dataset and fitting the pipeline. An example is listed below.
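
    The exact pipeline and hyperparameter values will differ from run to run; the exported file will look something like the sketch below (the values shown are illustrative).

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline
    from tpot.builtins import StackingEstimator

    # NOTE: Make sure that the outcome column is labeled 'target' in the data file
    tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
    features = tpot_data.drop('target', axis=1)
    training_features, testing_features, training_target, testing_target = \
        train_test_split(features, tpot_data['target'], random_state=1)

    # hyperparameter values are illustrative; your exported file will contain its own
    exported_pipeline = make_pipeline(
        StackingEstimator(estimator=GaussianNB()),
        GradientBoostingClassifier(learning_rate=0.1, max_depth=7, max_features=0.7, min_samples_leaf=15, min_samples_split=10, n_estimators=100, subsample=0.9)
    )

    exported_pipeline.fit(training_features, training_target)
    results = exported_pipeline.predict(testing_features)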


    Note: as-is, this code does not execute, by design. It is a template that you can copy-and-paste into your project.

    In this case, we can see that the best-performing model is a pipeline comprised of a Naive Bayes model and a Gradient Boosting model.

    We can adapt this code to fit a final model on all available data and make a prediction for new data.

    The complete example is listed below.
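
    The listing below is a sketch: the pipeline definition should match whatever your exported file contains, and the first row of the dataset is used as a stand-in for new data.

    from pandas import read_csv
    from sklearn.preprocessing import LabelEncoder
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline
    from tpot.builtins import StackingEstimator
    # load the dataset
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
    dataframe = read_csv(url, header=None)
    data = dataframe.values
    X, y = data[:, :-1], data[:, -1]
    X = X.astype('float32')
    y = LabelEncoder().fit_transform(y.astype('str'))
    # define the best-found pipeline (illustrative values; use those from your exported file)
    exported_pipeline = make_pipeline(
        StackingEstimator(estimator=GaussianNB()),
        GradientBoostingClassifier(learning_rate=0.1, max_depth=7, max_features=0.7, min_samples_leaf=15, min_samples_split=10, n_estimators=100, subsample=0.9)
    )
    # fit the pipeline on all available data
    exported_pipeline.fit(X, y)
    # make a prediction on a new row of data (here, the first row of the dataset as a stand-in)
    row = X[0].reshape((1, -1))
    yhat = exported_pipeline.predict(row)
    print('Predicted: %d' % yhat[0])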


    Running the example fits the best-performing model on the dataset and makes a prediction for a single row of new data.


    TPOT for Regression

    In this section, we will use TPOT to discover a model for the auto insurance dataset.

    The auto insurance dataset is a standard machine learning dataset comprised of 63 rows of data with one numerical input variable and a numerical target variable.

    Using a test harness of repeated 10-fold cross-validation with three repeats, a naive model can achieve a mean absolute error (MAE) of about 66. A top-performing model can achieve a MAE on this same test harness of about 28. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting the total amount in claims (thousands of Swedish Kronor) given the number of claims for different geographical regions.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.
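
    # load and summarize the auto insurance dataset (the URL assumes the jbrownlee/Datasets copy)
    from pandas import read_csv
    # load the dataset
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
    dataframe = read_csv(url, header=None)
    # split into input and output elements
    data = dataframe.values
    X, y = data[:, :-1], data[:, -1]
    print(X.shape, y.shape)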


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 63 rows of data with one input variable.
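
    (63, 1) (63,)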


    Next, we can use TPOT to find a good model for the auto insurance dataset.

    First, we can define the method for evaluating models. We will use a good practice of repeated k-fold cross-validation with three repeats and 10 folds.
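
    # define the model evaluation procedure
    from sklearn.model_selection import RepeatedKFold
    cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)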


    We will use a population size of 50 for 5 generations for the search and use all cores on the system by setting “n_jobs” to -1.
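
    # define the search
    model = TPOTRegressor(generations=5, population_size=50, scoring='neg_mean_absolute_error', cv=cv, verbosity=2, random_state=1, n_jobs=-1)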


    Finally, we can start the search and ensure that the best-performing model is saved at the end of the run.
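
    # perform the search
    model.fit(X, y)
    # export the best model
    model.export('tpot_insurance_best_model.py')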


    Tying this together, the complete example is listed below.
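
    # tpot search on the auto insurance dataset (a sketch; the URL assumes the jbrownlee/Datasets copy)
    from pandas import read_csv
    from sklearn.model_selection import RepeatedKFold
    from tpot import TPOTRegressor
    # load the dataset
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
    dataframe = read_csv(url, header=None)
    # split into input and output elements
    data = dataframe.values
    X, y = data[:, :-1], data[:, -1]
    X, y = X.astype('float32'), y.astype('float32')
    # define the model evaluation procedure
    cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
    # define the search
    model = TPOTRegressor(generations=5, population_size=50, scoring='neg_mean_absolute_error', cv=cv, verbosity=2, random_state=1, n_jobs=-1)
    # perform the search
    model.fit(X, y)
    # export the best model
    model.export('tpot_insurance_best_model.py')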


    Running the example may take a few minutes, and you will see a progress bar on the command line.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    The MAE of top-performing models will be reported along the way.


    In this case, we can see that the top-performing pipeline achieved a mean MAE of about 29.14. This is a skillful model, and close to a top-performing model on this dataset.

    The top-performing pipeline is then saved to a file named “tpot_insurance_best_model.py“.

    Opening this file, you can see that there is some generic code for loading a dataset and fitting the pipeline. An example is listed below.
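
    As before, the exact model and hyperparameter values will differ from run to run; a sketch of what the exported file may look like is below (the values shown are illustrative).

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.svm import LinearSVR

    # NOTE: Make sure that the outcome column is labeled 'target' in the data file
    tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
    features = tpot_data.drop('target', axis=1)
    training_features, testing_features, training_target, testing_target = \
        train_test_split(features, tpot_data['target'], random_state=1)

    # hyperparameter values are illustrative; your exported file will contain its own
    exported_pipeline = LinearSVR(C=1.0, dual=True, epsilon=0.0001, loss='epsilon_insensitive', tol=0.001)

    exported_pipeline.fit(training_features, training_target)
    results = exported_pipeline.predict(testing_features)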


    Note: as-is, this code does not execute, by design. It is a template that you can copy-paste into your project.

    In this case, we can see that the best-performing model is a pipeline comprised of a linear support vector machine model.

    We can adapt this code to fit a final model on all available data and make a prediction for new data.

    The complete example is listed below.
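
    Again, a sketch: the model definition should match your exported file, and the number of claims used for the prediction is a hypothetical value.

    from pandas import read_csv
    from sklearn.svm import LinearSVR
    # load the dataset
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
    dataframe = read_csv(url, header=None)
    data = dataframe.values
    X, y = data[:, :-1], data[:, -1]
    X, y = X.astype('float32'), y.astype('float32')
    # define the best-found model (illustrative values; use those from your exported file)
    exported_pipeline = LinearSVR(C=1.0, dual=True, epsilon=0.0001, loss='epsilon_insensitive', tol=0.001)
    # fit the model on all available data
    exported_pipeline.fit(X, y)
    # make a prediction for a new number of claims (a hypothetical value)
    row = [[13]]
    yhat = exported_pipeline.predict(row)
    print('Predicted: %.3f' % yhat[0])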


    Running the example fits the best-performing model on the dataset and makes a prediction for a single row of new data.


    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Summary

    In this tutorial, you discovered how to use TPOT for AutoML with Scikit-Learn machine learning algorithms in Python.

    Specifically, you learned:

    • TPOT is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
    • How to use TPOT to automatically discover top-performing models for classification tasks.
    • How to use TPOT to automatically discover top-performing models for regression tasks.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.


    Time Series Forecasting With Prophet in Python

    Time series forecasting can be challenging as there are many different methods you could use and many different hyperparameters for each method.

    The Prophet library is an open-source library designed for making forecasts for univariate time series datasets. It is easy to use and designed to automatically find a good set of hyperparameters for the model in an effort to make skillful forecasts for data with trends and seasonal structure by default.

    In this tutorial, you will discover how to use the Facebook Prophet library for time series forecasting.

    After completing this tutorial, you will know:

    • Prophet is an open-source library developed by Facebook and designed for automatic forecasting of univariate time series data.
    • How to fit Prophet models and use them to make in-sample and out-of-sample forecasts.
    • How to evaluate a Prophet model on a hold-out dataset.

    Let’s get started.

    Time Series Forecasting With Prophet in Python
    Photo by Rinaldo Wurglitsch, some rights reserved.

    Tutorial Overview

    This tutorial is divided into three parts; they are:

  • Prophet Forecasting Library
  • Car Sales Dataset
      • Load and Summarize Dataset
      • Load and Plot Dataset
  • Forecast Car Sales With Prophet
      • Fit Prophet Model
      • Make an In-Sample Forecast
      • Make an Out-of-Sample Forecast
      • Manually Evaluate Forecast Model

    Prophet Forecasting Library

    Prophet, or “Facebook Prophet,” is an open-source library for univariate (one variable) time series forecasting developed by Facebook.

    Prophet implements what they refer to as an additive time series forecasting model, and the implementation supports trends, seasonality, and holidays.

    Implements a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects

    — Package ‘prophet’, 2019.

    It is designed to be easy and completely automatic, e.g. point it at a time series and get a forecast. As such, it is intended for internal company use, such as forecasting sales, capacity, etc.

    For a great overview of Prophet and its capabilities, see the post:

    The library provides two interfaces, including R and Python. We will focus on the Python interface in this tutorial.

    The first step is to install the Prophet library using Pip, as follows:
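
    # the package is published as 'prophet' in newer releases and 'fbprophet' in older ones
    pip install prophet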


    Next, we can confirm that the library was installed correctly.

    To do this, we can import the library and print the version number in Python. The complete example is listed below.
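
    # check the installed prophet version (use 'fbprophet' as the module name for older versions)
    import prophet
    print('prophet %s' % prophet.__version__)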


    Running the example prints the installed version of Prophet.

    You should have the same version or higher.


    Now that we have Prophet installed, let’s select a dataset we can use to explore using the library.

    Car Sales Dataset

    We will use the monthly car sales dataset.

    It is a standard univariate time series dataset that contains both a trend and seasonality. The dataset has 108 months of data and a naive persistence forecast can achieve a mean absolute error of about 3,235 sales, providing a lower error limit.

    No need to download the dataset as we will download it automatically as part of each example.

    Load and Summarize Dataset

    First, let’s load and summarize the dataset.

    Prophet requires data to be in Pandas DataFrames. Therefore, we will load and summarize the data using Pandas.

    We can load the data directly from the URL by calling the read_csv() Pandas function, then summarize the shape (number of rows and columns) of the data and view the first few rows of data.

    The complete example is listed below.
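
    # load and summarize the car sales dataset (the URL assumes the jbrownlee/Datasets copy)
    from pandas import read_csv
    # load the data
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
    df = read_csv(url, header=0)
    # summarize shape
    print(df.shape)
    # show the first few rows
    print(df.head())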


    Running the example first reports the number of rows and columns, then lists the first five rows of data.

    We can see that as we expected, there are 108 months worth of data and two columns. The first column is the date and the second is the number of sales.

    Note that the first column in the output is a row index and is not a part of the dataset, just a helpful tool that Pandas uses to order rows.


    Load and Plot Dataset

    A time-series dataset does not make sense to us until we plot it.

    Plotting a time series helps us actually see if there is a trend, a seasonal cycle, outliers, and more. It gives us a feel for the data.

    We can plot the data easily in Pandas by calling the plot() function on the DataFrame.

    The complete example is listed below.
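
    # load and plot the car sales dataset (a sketch)
    from pandas import read_csv
    from matplotlib import pyplot
    # load the data
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
    df = read_csv(url, header=0)
    # plot the time series
    df.plot()
    pyplot.show()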


    Running the example creates a plot of the time series.

    We can clearly see the trend in sales over time and a monthly seasonal pattern to the sales. These are patterns we expect the forecast model to take into account.

    Line Plot of Car Sales Dataset

    Now that we are familiar with the dataset, let’s explore how we can use the Prophet library to make forecasts.

    Forecast Car Sales With Prophet

    In this section, we will explore using the Prophet to forecast the car sales dataset.

    Let’s start by fitting a model on the dataset.

    Fit Prophet Model

    To use Prophet for forecasting, first, a Prophet() object is defined and configured, then it is fit on the dataset by calling the fit() function and passing the data.

    The Prophet() object takes arguments to configure the type of model you want, such as the type of growth, the type of seasonality, and more. By default, the model will work hard to figure out almost everything automatically.

    The fit() function takes a DataFrame of time series data. The DataFrame must have a specific format. The first column must have the name ‘ds‘ and contain the date-times. The second column must have the name ‘y‘ and contain the observations.

    This means we change the column names in the dataset. It also requires that the first column be converted to date-time objects, if it is not already (e.g. this can be done as part of loading the dataset with the right arguments to read_csv).

    For example, we can modify our loaded car sales dataset to have this expected structure, as follows:
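
    # prepare the expected column names and types
    from pandas import to_datetime
    df.columns = ['ds', 'y']
    df['ds'] = to_datetime(df['ds'])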


    The complete example of fitting a Prophet model on the car sales dataset is listed below.
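
    # fit a prophet model on the car sales dataset (a sketch; import from 'fbprophet' in older versions)
    from pandas import read_csv
    from pandas import to_datetime
    from prophet import Prophet
    # load the data
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
    df = read_csv(url, header=0)
    # prepare the expected column names
    df.columns = ['ds', 'y']
    df['ds'] = to_datetime(df['ds'])
    # define the model
    model = Prophet()
    # fit the model
    model.fit(df)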


    Running the example loads the dataset, prepares the DataFrame in the expected format, and fits a Prophet model.

    By default, the library provides a lot of verbose output during the fit process. I think it’s a bad idea in general as it trains developers to ignore output.

    Nevertheless, the output summarizes what happened during the model fitting process, specifically the optimization processes that ran.


    I will not reproduce this output in subsequent sections when we fit the model.

    Next, let’s make a forecast.

    Make an In-Sample Forecast

    It can be useful to make a forecast on historical data.

    That is, we can make a forecast on data used as input to train the model. Ideally, the model has seen the data before and would make a perfect prediction.

    Nevertheless, this is not the case as the model tries to generalize across all cases in the data.

    This is called making an in-sample (in training set sample) forecast and reviewing the results can give insight into how good the model is. That is, how well it learned the training data.

    A forecast is made by calling the predict() function and passing a DataFrame that contains one column named ‘ds‘ and rows with date-times for all the intervals to be predicted.

    There are many ways to create this “forecast” DataFrame. In this case, we will loop over one year of dates, e.g. the last 12 months in the dataset, and create a string for each month. We will then convert the list of dates into a DataFrame and convert the string values into date-time objects.
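
    # define the period for an in-sample forecast: the last 12 months of the dataset (1968)
    from pandas import DataFrame, to_datetime
    future = list()
    for i in range(1, 13):
        date = '1968-%02d' % i
        future.append([date])
    future = DataFrame(future)
    future.columns = ['ds']
    future['ds'] = to_datetime(future['ds'])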


    This DataFrame can then be provided to the predict() function to calculate a forecast.

    The result of the predict() function is a DataFrame that contains many columns. Perhaps the most important columns are the forecast date time (‘ds‘), the forecasted value (‘yhat‘), and the lower and upper bounds on the predicted value (‘yhat_lower‘ and ‘yhat_upper‘) that provide uncertainty of the forecast.

    For example, we can print the first few predictions as follows:
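
    # make an in-sample forecast and summarize the first few predictions
    forecast = model.predict(future)
    print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].head())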


    Prophet also provides a built-in tool for visualizing the prediction in the context of the training dataset.

    This can be achieved by calling the plot() function on the model and passing it a result DataFrame. It will create a plot of the training dataset and overlay the prediction with the upper and lower bounds for the forecast dates.
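
    # plot the forecast in the context of the training data
    from matplotlib import pyplot
    model.plot(forecast)
    pyplot.show()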


    Tying this all together, a complete example of making an in-sample forecast is listed below.
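
    # make an in-sample forecast with prophet on the car sales dataset (a sketch)
    from pandas import read_csv, to_datetime, DataFrame
    from matplotlib import pyplot
    from prophet import Prophet
    # load and prepare the data
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
    df = read_csv(url, header=0)
    df.columns = ['ds', 'y']
    df['ds'] = to_datetime(df['ds'])
    # define and fit the model
    model = Prophet()
    model.fit(df)
    # define the period for the in-sample forecast: the last 12 months (1968)
    future = list()
    for i in range(1, 13):
        future.append(['1968-%02d' % i])
    future = DataFrame(future)
    future.columns = ['ds']
    future['ds'] = to_datetime(future['ds'])
    # use the model to make a forecast
    forecast = model.predict(future)
    # summarize the forecast
    print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].head())
    # plot the forecast
    model.plot(forecast)
    pyplot.show()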


    Running the example forecasts the last 12 months of the dataset.

    The first five months of the prediction are reported and we can see that values are not too different from the actual sales values in the dataset.


    Next, a plot is created. We can see the training data are represented as black dots and the forecast is a blue line with upper and lower bounds in a blue shaded area.

    We can see that the forecasted 12 months is a good match for the real observations, especially when the bounds are taken into account.

    Plot of Time Series and In-Sample Forecast With Prophet

    Make an Out-of-Sample Forecast

    In practice, we really want a forecast model to make a prediction beyond the training data.

    This is called an out-of-sample forecast.

    We can achieve this in the same way as an in-sample forecast and simply specify a different forecast period.

    In this case, we will use a period beyond the end of the training dataset, starting in January 1969 (‘1969-01‘).
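
    # define a period of 12 months beyond the end of the dataset (1969)
    future = list()
    for i in range(1, 13):
        date = '1969-%02d' % i
        future.append([date])
    future = DataFrame(future)
    future.columns = ['ds']
    future['ds'] = to_datetime(future['ds'])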


    Tying this together, the complete example is listed below.
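
    # make an out-of-sample forecast with prophet (a sketch)
    from pandas import read_csv, to_datetime, DataFrame
    from matplotlib import pyplot
    from prophet import Prophet
    # load and prepare the data
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
    df = read_csv(url, header=0)
    df.columns = ['ds', 'y']
    df['ds'] = to_datetime(df['ds'])
    # define and fit the model
    model = Prophet()
    model.fit(df)
    # define the period to forecast: 12 months beyond the end of the dataset (1969)
    future = list()
    for i in range(1, 13):
        future.append(['1969-%02d' % i])
    future = DataFrame(future)
    future.columns = ['ds']
    future['ds'] = to_datetime(future['ds'])
    # use the model to make a forecast
    forecast = model.predict(future)
    # summarize the forecast
    print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].head())
    # plot the forecast
    model.plot(forecast)
    pyplot.show()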


    Running the example makes an out-of-sample forecast for the car sales data.

    The first five rows of the forecast are printed, although it is hard to get an idea of whether they are sensible or not.


    A plot is created to help us evaluate the prediction in the context of the training data.

    The new one-year forecast does look sensible, at least by eye.

    Plot of Time Series and Out-of-Sample Forecast With Prophet

    Manually Evaluate Forecast Model

    It is critical to develop an objective estimate of a forecast model’s performance.

    This can be achieved by holding some data back from the model, such as the last 12 months, fitting the model on the first portion of the data, using it to make predictions on the held-back portion, and then calculating an error measure, such as the mean absolute error across the forecasts: a simulated out-of-sample forecast.

    The score gives an estimate of how well we might expect the model to perform on average when making an out-of-sample forecast.

    We can do this with the car sales data by creating a new DataFrame for training with the last 12 months removed.
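
    # create a training dataset with the last 12 months removed
    train = df.drop(df.index[-12:])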


    A forecast can then be made on the last 12 months of date-times.

    We can then retrieve the forecast values and the expected values from the original dataset and calculate a mean absolute error metric using the scikit-learn library.
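
    # calculate MAE between the expected and the predicted values for the hold-out period
    from sklearn.metrics import mean_absolute_error
    y_true = df['y'][-12:].values
    y_pred = forecast['yhat'].values
    mae = mean_absolute_error(y_true, y_pred)
    print('MAE: %.3f' % mae)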


    It can also be helpful to plot the expected vs. predicted values to see how well the out-of-sample prediction matches the known values.
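
    # plot expected vs. predicted values
    pyplot.plot(y_true, label='Actual')
    pyplot.plot(y_pred, label='Predicted')
    pyplot.legend()
    pyplot.show()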


    Tying this together, the example below demonstrates how to evaluate a Prophet model on a hold-out dataset.
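
    # evaluate a prophet model on a 12-month hold-out set (a sketch)
    from pandas import read_csv, to_datetime, DataFrame
    from matplotlib import pyplot
    from prophet import Prophet
    from sklearn.metrics import mean_absolute_error
    # load and prepare the data
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
    df = read_csv(url, header=0)
    df.columns = ['ds', 'y']
    df['ds'] = to_datetime(df['ds'])
    # create a training dataset with the last 12 months removed
    train = df.drop(df.index[-12:])
    print(train.tail())
    # define and fit the model on the training data
    model = Prophet()
    model.fit(train)
    # define the hold-out period: the last 12 months of the dataset (1968)
    future = list()
    for i in range(1, 13):
        future.append(['1968-%02d' % i])
    future = DataFrame(future)
    future.columns = ['ds']
    future['ds'] = to_datetime(future['ds'])
    # make a forecast for the hold-out period
    forecast = model.predict(future)
    # calculate MAE between expected and predicted values
    y_true = df['y'][-12:].values
    y_pred = forecast['yhat'].values
    mae = mean_absolute_error(y_true, y_pred)
    print('MAE: %.3f' % mae)
    # plot expected vs. predicted values
    pyplot.plot(y_true, label='Actual')
    pyplot.plot(y_pred, label='Predicted')
    pyplot.legend()
    pyplot.show()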


    Running the example first reports the last few rows of the training dataset.

    It confirms the training ends in the last month of 1967 and 1968 will be used as the hold-out dataset.


    Next, a mean absolute error is calculated for the forecast period.

    In this case we can see that the error is approximately 1,336 sales, which is much lower (better) than a naive persistence model that achieves an error of 3,235 sales over the same period.


    Finally, a plot is created comparing the actual vs. predicted values. In this case, we can see that the forecast is a good fit. The model has skill and produces a forecast that looks sensible.

    Plot of Actual vs. Predicted Values for Last 12 Months of Car Sales

    The Prophet library also provides tools to automatically evaluate models and plot results, although those tools don’t appear to work well with data above one day in resolution.

    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Summary

    In this tutorial, you discovered how to use the Facebook Prophet library for time series forecasting.

    Specifically, you learned:

    • Prophet is an open-source library developed by Facebook and designed for automatic forecasting of univariate time series data.
    • How to fit Prophet models and use them to make in-sample and out-of-sample forecasts.
    • How to evaluate a Prophet model on a hold-out dataset.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.


    Auto-Sklearn for Automated Machine Learning in Python

    Automated Machine Learning (AutoML) refers to techniques for automatically discovering well-performing models for predictive modeling tasks with very little user involvement.

    Auto-Sklearn is an open-source library for performing AutoML in Python. It makes use of the popular Scikit-Learn machine learning library for data transforms and machine learning algorithms and uses a Bayesian Optimization search procedure to efficiently discover a top-performing model pipeline for a given dataset.

    In this tutorial, you will discover how to use Auto-Sklearn for AutoML with Scikit-Learn machine learning algorithms in Python.

    After completing this tutorial, you will know:

    • Auto-Sklearn is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
    • How to use Auto-Sklearn to automatically discover top-performing models for classification tasks.
    • How to use Auto-Sklearn to automatically discover top-performing models for regression tasks.

    Let’s get started.

    Auto-Sklearn for Automated Machine Learning in Python
    Photo by Richard, some rights reserved.

    Tutorial Overview

    This tutorial is divided into four parts; they are:

  • AutoML With Auto-Sklearn
  • Install and Using Auto-Sklearn
  • Auto-Sklearn for Classification
  • Auto-Sklearn for Regression
  • AutoML With Auto-Sklearn

    Automated Machine Learning, or AutoML for short, is a process of discovering the best-performing pipeline of data transforms, model, and model configuration for a dataset.

    AutoML often involves the use of sophisticated optimization algorithms, such as Bayesian Optimization, to efficiently navigate the space of possible models and model configurations and quickly discover what works well for a given predictive modeling task. It allows non-expert machine learning practitioners to quickly and easily discover what works well or even best for a given dataset with very little technical background or direct input.

    Auto-Sklearn is an open-source Python library for AutoML using machine learning models from the scikit-learn machine learning library.

    It was developed by Matthias Feurer, et al. and described in their 2015 paper titled “Efficient and Robust Automated Machine Learning.”

    … we introduce a robust new AutoML system based on scikit-learn (using 15 classifiers, 14 feature preprocessing methods, and 4 data preprocessing methods, giving rise to a structured hypothesis space with 110 hyperparameters).

    — Efficient and Robust Automated Machine Learning, 2015.

    The benefit of Auto-Sklearn is that, in addition to discovering the data preparation and model that perform well for a dataset, it is also able to learn from models that performed well on similar datasets and can automatically create an ensemble of top-performing models discovered as part of the optimization process.

    This system, which we dub AUTO-SKLEARN, improves on existing AutoML methods by automatically taking into account past performance on similar datasets, and by constructing ensembles from the models evaluated during the optimization.

    — Efficient and Robust Automated Machine Learning, 2015.

    The authors provide a useful depiction of their system in the paper, provided below.

    Overview of the Auto-Sklearn System.
    Taken from: Efficient and Robust Automated Machine Learning, 2015.

    Install and Use Auto-Sklearn

    The first step is to install the Auto-Sklearn library, which can be achieved using pip, as follows:
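
    # note: the PyPI package name is hyphenated; the module you import is 'autosklearn'
    pip install auto-sklearn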


    Once installed, we can import the library and print the version number to confirm it was installed successfully:
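
    # check the autosklearn version
    import autosklearn
    print('autosklearn: %s' % autosklearn.__version__)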


    Running the example prints the version number.

    Your version number should be the same or higher.


    Using Auto-Sklearn is straightforward.

    Depending on whether your prediction task is classification or regression, you create and configure an instance of the AutoSklearnClassifier or AutoSklearnRegressor class, fit it on your dataset, and that’s it. The resulting model can then be used to make predictions directly or saved to file (using pickle) for later use.
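
    For example, a minimal sketch for a classification task, using a small synthetic dataset as a stand-in for your data:

    from sklearn.datasets import make_classification
    from autosklearn.classification import AutoSklearnClassifier
    # a small synthetic task as a stand-in for real data
    X, y = make_classification(n_samples=100, n_features=10, random_state=1)
    # define and perform the search (one minute, for illustration only)
    model = AutoSklearnClassifier(time_left_for_this_task=60)
    model.fit(X, y)
    # the resulting model can then be used directly to make predictions
    yhat = model.predict(X)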


    There are a ton of configuration options provided as arguments to the AutoSklearnClassifier and AutoSklearnRegressor classes.

    By default, the search will use a train-test split of your dataset during the search, and this default is recommended both for speed and simplicity.

    Importantly, you should set the “n_jobs” argument to the number of cores in your system, e.g. 8 if you have 8 cores.

    The optimization process will run for as long as you allow, measured in minutes. By default, it will run for one hour.

    I recommend setting the “time_left_for_this_task” argument to the number of seconds you want the process to run. E.g. less than 5-10 minutes is probably plenty for many small predictive modeling tasks (fewer than 1,000 rows).

    We will use 5 minutes (300 seconds) for the examples in this tutorial. We will also limit the time allocated to each model evaluation to 30 seconds via the “per_run_time_limit” argument. For example:
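
    # limit the search to 5 minutes overall and 30 seconds per model evaluation
    model = AutoSklearnClassifier(time_left_for_this_task=5*60, per_run_time_limit=30)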


    You can limit the algorithms considered in the search, as well as the data transforms.

    By default, the search will create an ensemble of top-performing models discovered as part of the search. Sometimes, this can lead to overfitting and can be disabled by setting the “ensemble_size” argument to 1 and “initial_configurations_via_metalearning” to 0.
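
    For example:

    # disable the ensemble and the meta-learning warm start (argument names per the version used here)
    model = AutoSklearnClassifier(ensemble_size=1, initial_configurations_via_metalearning=0)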


    At the end of a run, the list of models can be accessed, as well as other details.

    Perhaps the most useful feature is the sprint_statistics() function that summarizes the search and the performance of the final model.
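
    For example:

    # summarize the search and the performance of the final model
    print(model.sprint_statistics())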


    Now that we are familiar with the Auto-Sklearn library, let’s look at some worked examples.

    Auto-Sklearn for Classification

    In this section, we will use Auto-Sklearn to discover a model for the sonar dataset.

    The sonar dataset is a standard machine learning dataset comprised of 208 rows of data with 60 numerical input variables and a target variable with two class values, e.g. binary classification.

    Using a test harness of repeated stratified 10-fold cross-validation with three repeats, a naive model can achieve an accuracy of about 53 percent. A top-performing model can achieve accuracy on this same test harness of about 88 percent. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting whether sonar returns indicate a rock or simulated mine.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.
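
    # load and summarize the sonar dataset (the URL assumes the jbrownlee/Datasets copy)
    from pandas import read_csv
    # load the dataset
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
    dataframe = read_csv(url, header=None)
    # split into input and output elements
    data = dataframe.values
    X, y = data[:, :-1], data[:, -1]
    print(X.shape, y.shape)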


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 208 rows of data with 60 input variables.
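
    (208, 60) (208,)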


    We will use Auto-Sklearn to find a good model for the sonar dataset.

    First, we will split the dataset into train and test sets and allow the process to find a good model on the training set, then later evaluate the performance of what was found on the holdout test set.
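
    # split into train and test sets
    from sklearn.model_selection import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)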


    The AutoSklearnClassifier is configured to run for 5 minutes with 8 cores and limit each model evaluation to 30 seconds.
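
    # define the search
    model = AutoSklearnClassifier(time_left_for_this_task=5*60, per_run_time_limit=30, n_jobs=8)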


    The search is then performed on the training dataset.
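
    # perform the search
    model.fit(X_train, y_train)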


    Afterward, a summary of the search and best-performing model is reported.
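
    # summarize the search
    print(model.sprint_statistics())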


    Finally, we evaluate the performance of the model that was prepared on the holdout test dataset.
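
    # evaluate the best model on the holdout set
    from sklearn.metrics import accuracy_score
    yhat = model.predict(X_test)
    acc = accuracy_score(y_test, yhat)
    print('Accuracy: %.3f' % acc)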


    Tying this together, the complete example is listed below.
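
    # auto-sklearn search on the sonar dataset (a sketch; the URL assumes the jbrownlee/Datasets copy)
    from pandas import read_csv
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import LabelEncoder
    from sklearn.metrics import accuracy_score
    from autosklearn.classification import AutoSklearnClassifier
    # load the dataset
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
    dataframe = read_csv(url, header=None)
    # split into input and output elements
    data = dataframe.values
    X, y = data[:, :-1], data[:, -1]
    X = X.astype('float32')
    y = LabelEncoder().fit_transform(y.astype('str'))
    # split into train and test sets
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
    # define the search
    model = AutoSklearnClassifier(time_left_for_this_task=5*60, per_run_time_limit=30, n_jobs=8)
    # perform the search
    model.fit(X_train, y_train)
    # summarize the search
    print(model.sprint_statistics())
    # evaluate the best model on the holdout set
    yhat = model.predict(X_test)
    acc = accuracy_score(y_test, yhat)
    print('Accuracy: %.3f' % acc)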


    Running the example will take about five minutes, given the hard limit we imposed on the run.

    At the end of the run, a summary is printed showing that 1,054 models were evaluated and the estimated performance of the final model was 91 percent.

    Your specific results may vary given the stochastic nature of the optimization algorithm.


    We then evaluate the model on the holdout dataset and see that a classification accuracy of 81.2 percent was achieved, which is reasonably skillful.


    Auto-Sklearn for Regression

    In this section, we will use Auto-Sklearn to discover a model for the auto insurance dataset.

    The auto insurance dataset is a standard machine learning dataset comprised of 63 rows of data with one numerical input variable and a numerical target variable.

    Using a test harness of repeated 10-fold cross-validation with three repeats, a naive model can achieve a mean absolute error (MAE) of about 66. A top-performing model can achieve a MAE on this same test harness of about 28. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting the total amount in claims (thousands of Swedish Kronor) given the number of claims for different geographical regions.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 63 rows of data with one input variable.


    We will use Auto-Sklearn to find a good model for the auto insurance dataset.

    We can use the same process as was used in the previous section, although we will use the AutoSklearnRegressor class instead of the AutoSklearnClassifier.
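
    # define the search for a regression task
    from autosklearn.regression import AutoSklearnRegressor
    model = AutoSklearnRegressor(time_left_for_this_task=5*60, per_run_time_limit=30, n_jobs=8)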


    By default, the regressor will optimize the R^2 metric.

    In this case, we are interested in the mean absolute error, or MAE, which we can specify via the “metric” argument when calling the fit() function.
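
    # optimize for MAE instead of the default R^2 (the metric is passed to fit() in the version used here)
    from autosklearn.metrics import mean_absolute_error as auto_mean_absolute_error
    model.fit(X_train, y_train, metric=auto_mean_absolute_error)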


    The complete example is listed below.
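
    # auto-sklearn search on the auto insurance dataset (a sketch; the URL assumes the jbrownlee/Datasets copy)
    from pandas import read_csv
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error
    from autosklearn.regression import AutoSklearnRegressor
    from autosklearn.metrics import mean_absolute_error as auto_mean_absolute_error
    # load the dataset
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
    dataframe = read_csv(url, header=None)
    # split into input and output elements
    data = dataframe.values
    X, y = data[:, :-1], data[:, -1]
    X, y = X.astype('float32'), y.astype('float32')
    # split into train and test sets
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
    # define the search
    model = AutoSklearnRegressor(time_left_for_this_task=5*60, per_run_time_limit=30, n_jobs=8)
    # perform the search, optimizing MAE
    model.fit(X_train, y_train, metric=auto_mean_absolute_error)
    # summarize the search
    print(model.sprint_statistics())
    # evaluate the best model on the holdout set
    yhat = model.predict(X_test)
    mae = mean_absolute_error(y_test, yhat)
    print('MAE: %.3f' % mae)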


    Running the example will take about five minutes, given the hard limit we imposed on the run.

    You might see some warning messages during the run and you can safely ignore them, such as:


    At the end of the run, a summary is printed showing that 1,759 models were evaluated and the estimated performance of the final model was a MAE of 29.


    We then evaluate the model on the holdout dataset and see that a MAE of 26 was achieved, which is a great result.


    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Summary

    In this tutorial, you discovered how to use Auto-Sklearn for AutoML with Scikit-Learn machine learning algorithms in Python.

    Specifically, you learned:

    • Auto-Sklearn is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
    • How to use Auto-Sklearn to automatically discover top-performing models for classification tasks.
    • How to use Auto-Sklearn to automatically discover top-performing models for regression tasks.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.


    Plot a Decision Surface for Machine Learning Algorithms in Python

    Last Updated on August 26, 2020

    Classification algorithms learn how to assign class labels to examples, although their decisions can appear opaque.

    A popular diagnostic for understanding the decisions made by a classification algorithm is the decision surface. This is a plot that shows how a fit machine learning algorithm predicts a coarse grid across the input feature space.

    A decision surface plot is a powerful tool for understanding how a given model “sees” the prediction task and how it has decided to divide the input feature space by class label.

    In this tutorial, you will discover how to plot a decision surface for a classification machine learning algorithm.

    After completing this tutorial, you will know:

    • Decision surface is a diagnostic tool for understanding how a classification algorithm divides up the feature space.
    • How to plot a decision surface using crisp class labels for a machine learning algorithm.
    • How to plot and interpret a decision surface using predicted probabilities.

    Kick-start your project with my new book Machine Learning Mastery With Python, including step-by-step tutorials and the Python source code files for all examples.

    Let’s get started.

    Plot a Decision Surface for Machine Learning Algorithms in Python
    Photo by Tony Webster, some rights reserved.

    Tutorial Overview

    This tutorial is divided into three parts; they are:

  • Decision Surface
  • Dataset and Model
  • Plot a Decision Surface
  • Decision Surface

    Classification machine learning algorithms learn to assign labels to input examples.

    Consider numeric input features for the classification task defining a continuous input feature space.

    We can think of each input feature defining an axis or dimension on a feature space. Two input features would define a feature space that is a plane, with dots representing input coordinates in the input space. If there were three input variables, the feature space would be a three-dimensional volume.

    Each point in the space can be assigned a class label. In terms of a two-dimensional feature space, we can think of each point on the plane having a different color according to its assigned class.

    The goal of a classification algorithm is to learn how to divide up the feature space such that labels are assigned correctly to points in the feature space, or at least, as correctly as is possible.

    This is a useful geometric understanding of classification predictive modeling. We can take it one step further.

    Once a classification machine learning algorithm divides a feature space, we can then classify each point in the feature space, on some arbitrary grid, to get an idea of how exactly the algorithm chose to divide up the feature space.

    This is called a decision surface or decision boundary, and it provides a diagnostic tool for understanding a model on a classification predictive modeling task.

    Although the notion of a “surface” suggests a two-dimensional feature space, the method can be used with feature spaces with more than two dimensions, where a surface is created for each pair of input features.

    Now that we are familiar with what a decision surface is, next, let’s define a dataset and model for which we later explore the decision surface.

    Dataset and Model

    In this section, we will define a classification task and predictive model to learn the task.

    Synthetic Classification Dataset

    We can use the make_blobs() scikit-learn function to define a classification task with a two-dimensional numerical feature space, with each point assigned one of two class labels, e.g. a binary classification task.
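
    # generate a binary classification dataset with a two-dimensional feature space
    from sklearn.datasets import make_blobs
    # cluster_std=3 adds some overlap between the classes (an illustrative choice)
    X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)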


    Once defined, we can then create a scatter plot of the feature space with the first feature defining the x-axis, the second feature defining the y axis, and each sample represented as a point in the feature space.

    We can then color points in the scatter plot according to their class label as either 0 or 1.
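
    # create a scatter plot for samples from each class
    from numpy import where
    from matplotlib import pyplot
    for class_value in range(2):
        # get row indexes for samples with this class
        row_ix = where(y == class_value)
        # create a scatter of these samples
        pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
    # show the plot
    pyplot.show()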


    Tying this together, the complete example of defining and plotting a synthetic classification dataset is listed below.
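
    # define and plot a synthetic classification dataset (a sketch)
    from numpy import where
    from matplotlib import pyplot
    from sklearn.datasets import make_blobs
    # generate the dataset
    X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
    # create a scatter plot, with dots colored by class value
    for class_value in range(2):
        row_ix = where(y == class_value)
        pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
    pyplot.show()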


    Running the example creates the dataset, then plots the dataset as a scatter plot with points colored by class label.

    We can see a clear separation between examples from the two classes and we can imagine how a machine learning model might draw a line to separate the two classes, e.g. perhaps a diagonal line right through the middle of the two groups.

    Scatter Plot of Binary Classification Dataset With 2D Feature Space

    Fit Classification Predictive Model

    We can now fit a model on our dataset.

    In this case, we will fit a logistic regression algorithm because we can predict both crisp class labels and probabilities, both of which we can use in our decision surface.

    We can define the model, then fit it on the training dataset.
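
    # define and fit the model
    from sklearn.linear_model import LogisticRegression
    model = LogisticRegression()
    model.fit(X, y)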


    Once defined, we can use the model to make a prediction for the training dataset to get an idea of how well it learned to divide the feature space of the training dataset and assign labels.
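
    # make predictions for the training dataset
    yhat = model.predict(X)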


    The predictions can be evaluated using classification accuracy.
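
    # evaluate the predictions
    from sklearn.metrics import accuracy_score
    acc = accuracy_score(y, yhat)
    print('Accuracy: %.3f' % acc)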


    Tying this together, the complete example of fitting and evaluating a model on the synthetic binary classification dataset is listed below.
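
    # fit and evaluate logistic regression on the synthetic dataset (a sketch)
    from sklearn.datasets import make_blobs
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    # generate the dataset
    X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
    # define and fit the model
    model = LogisticRegression()
    model.fit(X, y)
    # make predictions and evaluate them
    yhat = model.predict(X)
    acc = accuracy_score(y, yhat)
    print('Accuracy: %.3f' % acc)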


    Running the example fits the model and makes a prediction for each example.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    In this case, we can see that the model achieved a performance of about 97.2 percent.


    Now that we have a dataset and model, let’s explore how we can develop a decision surface.

    Plot a Decision Surface

    We can create a decision surface by fitting a model on the training dataset, then using the model to make predictions for a grid of values across the input domain.

    Once we have the grid of predictions, we can plot the values and their class label.

    A scatter plot could be used if a fine enough grid was taken. A better approach is to use a contour plot that can interpolate the colors between the points.

    The contourf() Matplotlib function can be used.

    This requires a few steps.

    First, we need to define a grid of points across the feature space.

    To do this, we can find the minimum and maximum values for each feature and expand the grid one step beyond that to ensure the whole feature space is covered.
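
    # define the bounds of the domain, one step beyond the data
    min1, max1 = X[:, 0].min()-1, X[:, 0].max()+1
    min2, max2 = X[:, 1].min()-1, X[:, 1].max()+1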


    We can then create a uniform sample across each dimension using the arange() function at a chosen resolution. We will use a resolution of 0.1 in this case.
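
    # define the x and y scale at a resolution of 0.1
    from numpy import arange
    x1grid = arange(min1, max1, 0.1)
    x2grid = arange(min2, max2, 0.1)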


    Now we need to turn this into a grid.

    We can use the meshgrid() NumPy function to create a grid from these two vectors.

    If the first feature x1 is our x-axis of the feature space, then we need one row of x1 values of the grid for each point on the y-axis.

    Similarly, if we take x2 as our y-axis of the feature space, then we need one column of x2 values of the grid for each point on the x-axis.

    The meshgrid() function will do this for us, duplicating the rows and columns as needed. It returns two grids for the two input vectors: the first a grid of x-values, the second a grid of y-values, each organized in an appropriately sized grid of rows and columns across the feature space.
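
    # create all of the rows and columns of the grid
    from numpy import meshgrid
    xx, yy = meshgrid(x1grid, x2grid)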


    We then need to flatten out the grid to create samples that we can feed into the model and make a prediction.

    To do this, first, we flatten each grid into a vector.
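
    # flatten each grid to a vector
    r1, r2 = xx.flatten(), yy.flatten()
    r1, r2 = r1.reshape((len(r1), 1)), r2.reshape((len(r2), 1))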


    Then we stack the vectors side by side as columns in an input dataset, e.g. like our original training dataset, but at a much higher resolution.
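
    # horizontally stack the vectors to create x1,x2 input for the model
    from numpy import hstack
    grid = hstack((r1, r2))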


    We can then feed this into our model and get a prediction for each point in the grid.
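
    # make predictions for the grid
    yhat = model.predict(grid)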


    So far, so good.

    We have a grid of values across the feature space and the class labels as predicted by our model.

    Next, we need to plot the grid of values as a contour plot.

    The contourf() function takes separate grids for each axis, just like what was returned from our prior call to meshgrid(). Great!

    So we can use xx and yy that we prepared earlier and simply reshape the predictions (yhat) from the model to have the same shape.
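
    # reshape the predictions back into a grid
    zz = yhat.reshape(xx.shape)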


    We then plot the decision surface with a two-color colormap.
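
    # plot the grid of x, y and z values as a filled contour with a two-color colormap
    pyplot.contourf(xx, yy, zz, cmap='Paired')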


    We can then plot the actual points of the dataset over the top to see how well they were separated by the logistic regression decision surface.

    The complete example of plotting a decision surface for a logistic regression model on our synthetic binary classification dataset is listed below.
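
    # decision surface for logistic regression on a binary classification dataset (a sketch)
    from numpy import arange, hstack, meshgrid, where
    from matplotlib import pyplot
    from sklearn.datasets import make_blobs
    from sklearn.linear_model import LogisticRegression
    # generate the dataset
    X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
    # define the bounds of the domain
    min1, max1 = X[:, 0].min()-1, X[:, 0].max()+1
    min2, max2 = X[:, 1].min()-1, X[:, 1].max()+1
    # define the x and y scale
    x1grid = arange(min1, max1, 0.1)
    x2grid = arange(min2, max2, 0.1)
    # create all of the rows and columns of the grid
    xx, yy = meshgrid(x1grid, x2grid)
    # flatten each grid to a vector
    r1, r2 = xx.flatten(), yy.flatten()
    r1, r2 = r1.reshape((len(r1), 1)), r2.reshape((len(r2), 1))
    # horizontally stack vectors to create x1,x2 input for the model
    grid = hstack((r1, r2))
    # define and fit the model
    model = LogisticRegression()
    model.fit(X, y)
    # make predictions for the grid
    yhat = model.predict(grid)
    # reshape the predictions back into a grid
    zz = yhat.reshape(xx.shape)
    # plot the grid of x, y and z values as a surface
    pyplot.contourf(xx, yy, zz, cmap='Paired')
    # create a scatter plot for samples from each class
    for class_value in range(2):
        row_ix = where(y == class_value)
        pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
    pyplot.show()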


    Running the example fits the model and uses it to predict outcomes for the grid of values across the feature space and plots the result as a contour plot.

    We can see, as we might have suspected, logistic regression divides the feature space using a straight line. It is a linear model, after all; this is all it can do.

    Creating a decision surface is almost like magic. It gives immediate and meaningful insight into how the model has learned the task.

    Try it with different algorithms, like an SVM or decision tree.
    Post your resulting maps as links in the comments below!

    Decision Surface for Logistic Regression on a Binary Classification Task

    We can add more depth to the decision surface by using the model to predict probabilities instead of class labels.
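
    # predict probabilities for the grid and keep just the probabilities for class 0
    yhat = model.predict_proba(grid)
    yhat = yhat[:, 0]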


    When plotted, we can see how confident or likely it is that each point in the feature space belongs to each of the class labels, as seen by the model.

    We can use a different color map that has gradations, and show a legend so we can interpret the colors.
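
    # plot with a graded colormap and add a colorbar as a legend
    c = pyplot.contourf(xx, yy, zz, cmap='RdBu')
    pyplot.colorbar(c)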


    The complete example of creating a decision surface using probabilities is listed below.
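
    # probability decision surface for logistic regression (a sketch)
    from numpy import arange, hstack, meshgrid, where
    from matplotlib import pyplot
    from sklearn.datasets import make_blobs
    from sklearn.linear_model import LogisticRegression
    # generate the dataset
    X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
    # define the bounds of the domain
    min1, max1 = X[:, 0].min()-1, X[:, 0].max()+1
    min2, max2 = X[:, 1].min()-1, X[:, 1].max()+1
    # define the x and y scale
    x1grid = arange(min1, max1, 0.1)
    x2grid = arange(min2, max2, 0.1)
    # create all of the rows and columns of the grid
    xx, yy = meshgrid(x1grid, x2grid)
    # flatten each grid to a vector and stack as model input
    r1, r2 = xx.flatten(), yy.flatten()
    r1, r2 = r1.reshape((len(r1), 1)), r2.reshape((len(r2), 1))
    grid = hstack((r1, r2))
    # define and fit the model
    model = LogisticRegression()
    model.fit(X, y)
    # predict probabilities for the grid, keeping the probabilities for class 0
    yhat = model.predict_proba(grid)
    yhat = yhat[:, 0]
    # reshape the predictions back into a grid
    zz = yhat.reshape(xx.shape)
    # plot the grid of x, y and z values as a surface with a graded colormap
    c = pyplot.contourf(xx, yy, zz, cmap='RdBu')
    # add a legend (colorbar)
    pyplot.colorbar(c)
    # create a scatter plot for samples from each class
    for class_value in range(2):
        row_ix = where(y == class_value)
        pyplot.scatter(X[row_ix, 0], X[row_ix, 1], cmap='Paired')
    pyplot.show()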


    Running the example predicts the probability of class membership for each point on the grid across the feature space and plots the result.

    Here, we can see that the model is unsure (lighter colors) around the middle of the domain, given the sampling noise in that area of the feature space. We can also see that the model is very confident (full colors) in the bottom-left and top-right halves of the domain.

    Together, the crisp class and probability decision surfaces are powerful diagnostic tools for understanding your model and how it divides the feature space for your predictive modeling task.

    Probability Decision Surface for Logistic Regression on a Binary Classification Task

    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Summary

    In this tutorial, you discovered how to plot a decision surface for a classification machine learning algorithm.

    Specifically, you learned:

    • Decision surface is a diagnostic tool for understanding how a classification algorithm divides up the feature space.
    • How to plot a decision surface using crisp class labels for a machine learning algorithm.
    • How to plot and interpret a decision surface using predicted probabilities.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.


    How to Calculate the Bias-Variance Trade-off with Python

    Last Updated on August 26, 2020

    The performance of a machine learning model can be characterized in terms of the bias and the variance of the model.

    A model with high bias makes strong assumptions about the form of the unknown underlying function that maps inputs to outputs in the dataset, such as linear regression. A model with high variance is highly dependent upon the specifics of the training dataset, such as unpruned decision trees. We desire models with low bias and low variance, although there is often a trade-off between these two concerns.

    The bias-variance trade-off is a useful conceptualization for selecting and configuring models, although generally cannot be computed directly as it requires full knowledge of the problem domain, which we do not have. Nevertheless, in some cases, we can estimate the error of a model and divide the error down into bias and variance components, which may provide insight into a given model’s behavior.

    In this tutorial, you will discover how to calculate the bias and variance for a machine learning model.

    After completing this tutorial, you will know:

    • Model error consists of model variance, model bias, and irreducible error.
    • We seek models with low bias and variance, although typically reducing one results in a rise in the other.
    • How to decompose mean squared error into model bias and variance terms.

    Kick-start your project with my new book Machine Learning Mastery With Python, including step-by-step tutorials and the Python source code files for all examples.

    Let’s get started.

    How to Calculate the Bias-Variance Trade-off in Python
    Photo by Nathalie, some rights reserved.

    Tutorial Overview

    This tutorial is divided into three parts; they are:

  • Bias, Variance, and Irreducible Error
  • Bias-Variance Trade-off
  • Calculate the Bias and Variance
  • Bias, Variance, and Irreducible Error

    Consider a machine learning model that makes predictions for a predictive modeling task, such as regression or classification.

    The performance of the model on the task can be described in terms of the prediction error on all examples not used to train the model. We will refer to this as the model error.

    The model error can be decomposed into three sources of error: the variance of the model, the bias of the model, and the variance of the irreducible error in the data.

    • Error(Model) = Variance(Model) + Bias(Model) + Variance(Irreducible Error)
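
    In precise terms, for a squared-error loss the decomposition can be written as follows, where the bias contributes as its square:

    \[
    \mathbb{E}\big[(y - \hat{f}(x))^2\big] = \mathrm{Var}\big(\hat{f}(x)\big) + \big[\mathrm{Bias}\big(\hat{f}(x)\big)\big]^2 + \sigma_\epsilon^2
    \]

    Here \(\hat{f}\) is the fitted model and \(\sigma_\epsilon^2\) is the variance of the irreducible error. This is consistent with the numbers reported later in this tutorial, where the bias figure is the squared-bias term, so that bias plus variance equals the error estimate.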

    Let’s take a closer look at each of these three terms.

    Model Bias

The bias is a measure of how closely the model can capture the mapping function between inputs and outputs.

    It captures the rigidity of the model: the strength of the assumption the model has about the functional form of the mapping between inputs and outputs.

    This reflects how close the functional form of the model can get to the true relationship between the predictors and the outcome.

    — Page 97, Applied Predictive Modeling, 2013.

A model with high bias can be helpful when its assumptions match the true but unknown underlying mapping function for the predictive modeling problem. Yet, a model with a large bias will be completely useless when the functional form of the problem is mismatched with the assumptions of the model, e.g. assuming a linear relationship for data with a highly non-linear relationship.

    • Low Bias: Weak assumptions regarding the functional form of the mapping of inputs to outputs.
    • High Bias: Strong assumptions regarding the functional form of the mapping of inputs to outputs.

The bias term is never negative.

    Model Variance

    The variance of the model is the amount the performance of the model changes when it is fit on different training data.

It captures the impact that the specifics of the training data have on the model.

    Variance refers to the amount by which [the model] would change if we estimated it using a different training data set.

    — Page 34, An Introduction to Statistical Learning with Applications in R, 2014.

    A model with high variance will change a lot with small changes to the training dataset. Conversely, a model with low variance will change little with small or even large changes to the training dataset.

    • Low Variance: Small changes to the model with changes to the training dataset.
    • High Variance: Large changes to the model with changes to the training dataset.

The variance is never negative.

    Irreducible Error

    On the whole, the error of a model consists of reducible error and irreducible error.

    • Model Error = Reducible Error + Irreducible Error

The reducible error is the element that we can improve. It is the quantity that we reduce while the model is learning on a training dataset, and we try to drive it as close to zero as possible.

The irreducible error is the error that we cannot remove with our model, or with any model.

    The error is caused by elements outside our control, such as statistical noise in the observations.

    … usually called “irreducible noise” and cannot be eliminated by modeling.

    — Page 97, Applied Predictive Modeling, 2013.

As such, although we may be able to squash the reducible error to a very small value close to zero, or even zero in some cases, we will always be left with some irreducible error. It defines a lower bound on the error we can achieve on a problem.

    It is important to keep in mind that the irreducible error will always provide an upper bound on the accuracy of our prediction for Y. This bound is almost always unknown in practice.

    — Page 19, An Introduction to Statistical Learning with Applications in R, 2014.

    It is a reminder that no model is perfect.

    Bias-Variance Trade-off

    The bias and the variance of a model’s performance are connected.

Ideally, we would prefer a model with low bias and low variance, although in practice this is very challenging. In fact, achieving it could be described as the goal of applied machine learning for a given predictive modeling problem.

    Reducing the bias can easily be achieved by increasing the variance. Conversely, reducing the variance can easily be achieved by increasing the bias.

    This is referred to as a trade-off because it is easy to obtain a method with extremely low bias but high variance […] or a method with very low variance but high bias …

    — Page 36, An Introduction to Statistical Learning with Applications in R, 2014.

    This relationship is generally referred to as the bias-variance trade-off. It is a conceptual framework for thinking about how to choose models and model configuration.

    We can choose a model based on its bias or variance. Simple models, such as linear regression and logistic regression, generally have a high bias and a low variance. Complex models, such as random forest, generally have a low bias but a high variance.

    We may also choose model configurations based on their effect on the bias and variance of the model. The k hyperparameter in k-nearest neighbors controls the bias-variance trade-off. Small values, such as k=1, result in a low bias and a high variance, whereas large k values, such as k=21, result in a high bias and a low variance.
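
    As a rough illustration of this effect, the sketch below empirically estimates the squared bias and variance of k-nearest neighbors regression for k=1 and k=21. It uses synthetic data where the true function is known (a sine wave), so the bias can actually be computed; the data-generating setup and the estimate_bias_variance() helper are hypothetical, chosen only for illustration.

    # illustrative sketch: how the k hyperparameter in k-nearest neighbors
    # shifts the balance between bias and variance (hypothetical example)
    from numpy import array, linspace, mean, sin
    from numpy.random import default_rng
    from sklearn.neighbors import KNeighborsRegressor

    rng = default_rng(1)
    X_test = linspace(0, 6, 50).reshape(-1, 1)
    true_f = sin(X_test).ravel()  # the true function is known here, so bias is computable

    def estimate_bias_variance(k, n_repeats=200, n_train=40, noise=0.3):
        # refit the model on many freshly drawn training sets and collect predictions
        preds = []
        for _ in range(n_repeats):
            X_train = rng.uniform(0, 6, n_train).reshape(-1, 1)
            y_train = sin(X_train).ravel() + rng.normal(0, noise, n_train)
            model = KNeighborsRegressor(n_neighbors=k).fit(X_train, y_train)
            preds.append(model.predict(X_test))
        preds = array(preds)
        bias2 = mean((preds.mean(axis=0) - true_f) ** 2)  # squared bias of the average prediction
        variance = mean(preds.var(axis=0))  # spread of predictions across training sets
        return bias2, variance

    for k in [1, 21]:
        b, v = estimate_bias_variance(k)
        print('k=%d bias^2=%.3f variance=%.3f' % (k, b, v))

    On a typical run, k=1 should show the lower squared bias and the higher variance, and k=21 the reverse, matching the intuition above.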

    High bias is not always bad, nor is high variance, but they can lead to poor results.

We often must test a suite of different models and model configurations in order to discover what works best for a given dataset. A model with a large bias may be too rigid and underfit the problem. Conversely, a model with a large variance may be too flexible and overfit the problem.

    We may decide to increase the bias or the variance as long as it decreases the overall estimate of model error.

    Calculate the Bias and Variance

    I get this question all the time:

    How can I calculate the bias-variance trade-off for my algorithm on my dataset?

    Technically, we cannot perform this calculation.

    We cannot calculate the actual bias and variance for a predictive modeling problem.

    This is because we do not know the true mapping function for a predictive modeling problem.

    Instead, we use the bias, variance, irreducible error, and the bias-variance trade-off as tools to help select models, configure models, and interpret results.

    In a real-life situation in which f is unobserved, it is generally not possible to explicitly compute the test MSE, bias, or variance for a statistical learning method. Nevertheless, one should always keep the bias-variance trade-off in mind.

    — Page 36, An Introduction to Statistical Learning with Applications in R, 2014.

    Even though the bias-variance trade-off is a conceptual tool, we can estimate it in some cases.

    The mlxtend library by Sebastian Raschka provides the bias_variance_decomp() function that can estimate the bias and variance for a model over multiple bootstrap samples.

    First, you must install the mlxtend library; for example:
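
    pip install mlxtend

    Depending on your setup, you may need to use pip3 or add the --user flag.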


    The example below loads the Boston housing dataset directly via URL, splits it into train and test sets, then estimates the mean squared error (MSE) for a linear regression as well as the bias and variance for the model error over 200 bootstrap samples.
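
    A sketch of the complete example is listed below. The dataset URL shown here (a copy of the housing data hosted on GitHub) is an assumption, not confirmed by the text; substitute your own copy of the Boston housing CSV if needed.

    # estimate the bias and variance of a linear regression model
    from pandas import read_csv
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression
    from mlxtend.evaluate import bias_variance_decomp
    # load the dataset directly via URL (assumed location of the housing data)
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
    dataframe = read_csv(url, header=None)
    data = dataframe.values
    X, y = data[:, :-1], data[:, -1]
    # split the data into train and test sets
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
    # define the model
    model = LinearRegression()
    # estimate the bias and variance of the model error over 200 bootstrap samples
    mse, bias, var = bias_variance_decomp(model, X_train, y_train, X_test, y_test, loss='mse', num_rounds=200, random_seed=1)
    # summarize the results
    print('MSE: %.3f' % mse)
    print('Bias: %.3f' % bias)
    print('Variance: %.3f' % var)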


    Running the example reports the estimated error as well as the estimated bias and variance for the model error.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the model has a high bias and a low variance. This is to be expected given that we are using a linear regression model. We can also see that the sum of the estimated bias and variance equals the estimated error of the model, i.e. 20.726 + 1.761 = 22.487.
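
    Given the figures quoted above, the printed output would take a form like the following (your exact numbers may differ):

    MSE: 22.487
    Bias: 20.726
    Variance: 1.761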


    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

Books

    • Applied Predictive Modeling, 2013.
    • An Introduction to Statistical Learning with Applications in R, 2014.

    Summary

    In this tutorial, you discovered how to calculate the bias and variance for a machine learning model.

    Specifically, you learned:

    • Model error consists of model variance, model bias, and irreducible error.
    • We seek models with low bias and variance, although typically reducing one results in a rise in the other.
    • How to decompose mean squared error into model bias and variance terms.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.

