Automated Machine Learning (AutoML) Libraries for Python

AutoML provides tools to automatically discover good machine learning model pipelines for a dataset with very little user intervention.

It is ideal for domain experts new to machine learning or machine learning practitioners looking to get good results quickly for a predictive modeling task.

Open-source libraries are available for using AutoML methods with popular machine learning libraries in Python, such as the scikit-learn machine learning library.

In this tutorial, you will discover how to use top open-source AutoML libraries for scikit-learn in Python.

After completing this tutorial, you will know:

  • AutoML refers to techniques for automatically and quickly discovering a well-performing machine learning model pipeline for a predictive modeling task.
  • The three most popular AutoML libraries for Scikit-Learn are Hyperopt-Sklearn, Auto-Sklearn, and TPOT.
  • How to use AutoML libraries to discover well-performing models for predictive modeling tasks in Python.

Let’s get started.

Automated Machine Learning (AutoML) Libraries for Python
Photo by Michael Coghlan, some rights reserved.

Tutorial Overview

This tutorial is divided into four parts; they are:

  • Automated Machine Learning
  • Auto-Sklearn
  • Tree-based Pipeline Optimization Tool (TPOT)
  • Hyperopt-Sklearn

Automated Machine Learning

    Automated Machine Learning, or AutoML for short, involves the automatic selection of data preparation, machine learning model, and model hyperparameters for a predictive modeling task.

    It refers to techniques that allow semi-sophisticated machine learning practitioners and non-experts to discover a good predictive model pipeline for their machine learning task quickly, with very little intervention other than providing a dataset.

    … the user simply provides data, and the AutoML system automatically determines the approach that performs best for this particular application. Thereby, AutoML makes state-of-the-art machine learning approaches accessible to domain scientists who are interested in applying machine learning but do not have the resources to learn about the technologies behind it in detail.

    — Page ix, Automated Machine Learning: Methods, Systems, Challenges, 2019.

    Central to the approach is defining a large hierarchical optimization problem that involves identifying data transforms and the machine learning models themselves, in addition to the hyperparameters for the models.

    Many companies now offer AutoML as a service, where a dataset is uploaded and a model pipeline can be downloaded or hosted and used via web service (i.e. MLaaS). Popular examples include service offerings from Google, Microsoft, and Amazon.

Additionally, open-source libraries are available that implement AutoML techniques. These libraries differ in the data transforms, models, and hyperparameters they include in the search space, and in the algorithms used to navigate or optimize the space of possibilities, with versions of Bayesian Optimization being the most common.

    There are many open-source AutoML libraries, although, in this tutorial, we will focus on the best-of-breed libraries that can be used in conjunction with the popular scikit-learn Python machine learning library.

    They are: Hyperopt-Sklearn, Auto-Sklearn, and TPOT.

    Did I miss your favorite AutoML library for scikit-learn?
    Let me know in the comments below.

    We will take a closer look at each, providing the basis for you to evaluate and consider which library might be appropriate for your project.

    Auto-Sklearn

    Auto-Sklearn is an open-source Python library for AutoML using machine learning models from the scikit-learn machine learning library.

    It was developed by Matthias Feurer, et al. and described in their 2015 paper titled “Efficient and Robust Automated Machine Learning.”

    … we introduce a robust new AutoML system based on scikit-learn (using 15 classifiers, 14 feature preprocessing methods, and 4 data preprocessing methods, giving rise to a structured hypothesis space with 110 hyperparameters).

    — Efficient and Robust Automated Machine Learning, 2015.

    The first step is to install the Auto-Sklearn library, which can be achieved using pip, as follows:

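pip install auto-sklearn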

    Once installed, we can import the library and print the version number to confirm it was installed successfully:

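# check the installed version of auto-sklearn
import autosklearn
print('autosklearn: %s' % autosklearn.__version__)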

    Running the example prints the version number. Your version number should be the same or higher.


    Next, we can demonstrate using Auto-Sklearn on a synthetic classification task.

We can define an AutoSklearnClassifier instance that controls the search and configure it to run for two minutes (120 seconds), killing any single model that takes more than 30 seconds to evaluate. At the end of the run, we can report the statistics of the search and evaluate the best-performing model on a holdout dataset.

    The complete example is listed below.

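The listing below is a sketch of such a script; the synthetic make_classification() dataset parameters and the train/test split ratio are illustrative choices.

# example of auto-sklearn on a synthetic classification dataset
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from autosklearn.classification import AutoSklearnClassifier
# define a synthetic dataset
X, y = make_classification(n_samples=100, n_features=10, n_informative=5, n_redundant=5, random_state=1)
# split into train and holdout test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# define the search: two minutes total, 30 seconds per model evaluation
model = AutoSklearnClassifier(time_left_for_this_task=120, per_run_time_limit=30)
# perform the search on the training dataset
model.fit(X_train, y_train)
# report statistics of the search
print(model.sprint_statistics())
# evaluate the best model on the holdout dataset
y_hat = model.predict(X_test)
print('Accuracy: %.3f' % accuracy_score(y_test, y_hat))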

    Running the example will take about two minutes, given the hard limit we imposed on the run.

    At the end of the run, a summary is printed showing that 599 models were evaluated and the estimated performance of the final model was 95.6 percent.


    We then evaluate the model on the holdout dataset and see that a classification accuracy of 97 percent was achieved, which is reasonably skillful.


For more on the Auto-Sklearn library, see the project on GitHub (https://github.com/automl/auto-sklearn) and its documentation (https://automl.github.io/auto-sklearn/).

    Tree-based Pipeline Optimization Tool (TPOT)

    Tree-based Pipeline Optimization Tool, or TPOT for short, is a Python library for automated machine learning.

TPOT uses a tree-based structure to represent a model pipeline for a predictive modeling problem, including data preparation, modeling algorithms, and model hyperparameters.

    … an evolutionary algorithm called the Tree-based Pipeline Optimization Tool (TPOT) that automatically designs and optimizes machine learning pipelines.

    — Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

    The first step is to install the TPOT library, which can be achieved using pip, as follows:

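pip install tpot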

    Once installed, we can import the library and print the version number to confirm it was installed successfully:

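# check the installed version of tpot
import tpot
print('tpot: %s' % tpot.__version__)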

    Running the example prints the version number. Your version number should be the same or higher.


    Next, we can demonstrate using TPOT on a synthetic classification task.

    This involves configuring a TPOTClassifier instance with the population size and number of generations for the evolutionary search, as well as the cross-validation procedure and metric used to evaluate models. The algorithm will then run the search procedure and save the best discovered model pipeline to file.

    The complete example is listed below.

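The listing below sketches that example; the synthetic dataset and the exact search settings (a population of 50 evolved for 5 generations) are illustrative choices.

# example of tpot on a synthetic classification dataset
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold
from tpot import TPOTClassifier
# define a synthetic dataset
X, y = make_classification(n_samples=100, n_features=10, n_informative=5, n_redundant=5, random_state=1)
# define the model evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the evolutionary search
model = TPOTClassifier(generations=5, population_size=50, cv=cv, scoring='accuracy', verbosity=2, random_state=1, n_jobs=-1)
# perform the search
model.fit(X, y)
# export the best discovered pipeline as code
model.export('tpot_best_model.py')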

    Running the example may take a few minutes, and you will see a progress bar on the command line.

    The accuracy of top-performing models will be reported along the way.

    Your specific results will vary given the stochastic nature of the search procedure.


In this case, we can see that the top-performing pipeline achieved a mean accuracy of about 92.6 percent.

    The top-performing pipeline is then saved to a file named “tpot_best_model.py“.

    Opening this file, you can see that there is some generic code for loading a dataset and fitting the pipeline. An example is listed below.

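The discovered pipeline differs from run to run; the exported file generally follows the template below, where the final estimator shown is purely a placeholder for whatever the search found.

# template of a TPOT export file (the LogisticRegression is a placeholder; your pipeline will differ)
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = \
    train_test_split(features, tpot_data['target'], random_state=1)

exported_pipeline = LogisticRegression()
exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)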

    You can then retrieve the code for creating the model pipeline and integrate it into your project.

For more on TPOT, see the project on GitHub (https://github.com/EpistasisLab/tpot) and its documentation (http://epistasislab.github.io/tpot/).

    Hyperopt-Sklearn

    HyperOpt is an open-source Python library for Bayesian optimization developed by James Bergstra.

    It is designed for large-scale optimization for models with hundreds of parameters and allows the optimization procedure to be scaled across multiple cores and multiple machines.

    HyperOpt-Sklearn wraps the HyperOpt library and allows for the automatic search of data preparation methods, machine learning algorithms, and model hyperparameters for classification and regression tasks.

    … we introduce Hyperopt-Sklearn: a project that brings the benefits of automatic algorithm configuration to users of Python and scikit-learn. Hyperopt-Sklearn uses Hyperopt to describe a search space over possible configurations of Scikit-Learn components, including preprocessing and classification modules.

    — Hyperopt-Sklearn: Automatic Hyperparameter Configuration for Scikit-Learn, 2014.

    Now that we are familiar with HyperOpt and HyperOpt-Sklearn, let’s look at how to use HyperOpt-Sklearn.

    The first step is to install the HyperOpt library.

    This can be achieved using the pip package manager as follows:

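pip install hyperopt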

    Next, we must install the HyperOpt-Sklearn library.

    This too can be installed using pip, although we must perform this operation manually by cloning the repository and running the installation from the local files, as follows:

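# clone the repository and install from the local files
git clone https://github.com/hyperopt/hyperopt-sklearn.git
cd hyperopt-sklearn
pip install .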

    We can confirm that the installation was successful by checking the version number with the following command:

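# the library is distributed under the name hpsklearn
pip show hpsklearn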

    This will summarize the installed version of HyperOpt-Sklearn, confirming that a modern version is being used.


    Next, we can demonstrate using Hyperopt-Sklearn on a synthetic classification task.

    We can configure a HyperoptEstimator instance that runs the search, including the classifiers to consider in the search space, the pre-processing steps, and the search algorithm to use. In this case, we will use TPE, or Tree of Parzen Estimators, and perform 50 evaluations.

    At the end of the search, the best performing model pipeline is evaluated and summarized.

    The complete example is listed below.

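The listing below sketches that example; the synthetic dataset and the 30-second trial timeout are illustrative choices.

# example of hyperopt-sklearn on a synthetic classification dataset
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from hpsklearn import HyperoptEstimator, any_classifier, any_preprocessing
from hyperopt import tpe
# define a synthetic dataset
X, y = make_classification(n_samples=100, n_features=10, n_informative=5, n_redundant=5, random_state=1)
# split into train and holdout test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# define the search: any classifier, any preprocessing, TPE search, 50 evaluations
model = HyperoptEstimator(classifier=any_classifier('cla'), preprocessing=any_preprocessing('pre'), algo=tpe.suggest, max_evals=50, trial_timeout=30)
# perform the search
model.fit(X_train, y_train)
# evaluate the best pipeline on the holdout dataset
print('Accuracy: %.3f' % model.score(X_test, y_test))
# summarize the best pipeline
print(model.best_model())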

    Running the example may take a few minutes.

    The progress of the search will be reported and you will see some warnings that you can safely ignore.

    At the end of the run, the best-performing model is evaluated on the holdout dataset and the Pipeline discovered is printed for later use.

    Your specific results may differ given the stochastic nature of the learning algorithm and search process. Try running the example a few times.

In this case, we can see that the chosen model achieved an accuracy of about 84.8 percent on the holdout test set. The Pipeline involves an SGDClassifier model with no pre-processing.


    The printed model can then be used directly, e.g. the code copy-pasted into another project.

For more on Hyperopt-Sklearn, see the project on GitHub (https://github.com/hyperopt/hyperopt-sklearn).

    Summary

    In this tutorial, you discovered how to use top open-source AutoML libraries for scikit-learn in Python.

    Specifically, you learned:

    • AutoML refers to techniques for automatically and quickly discovering a well-performing machine learning model pipeline for a predictive modeling task.
    • The three most popular AutoML libraries for Scikit-Learn are Hyperopt-Sklearn, Auto-Sklearn, and TPOT.
    • How to use AutoML libraries to discover well-performing models for predictive modeling tasks in Python.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.


    TPOT for Automated Machine Learning in Python

    Automated Machine Learning (AutoML) refers to techniques for automatically discovering well-performing models for predictive modeling tasks with very little user involvement.

    TPOT is an open-source library for performing AutoML in Python. It makes use of the popular Scikit-Learn machine learning library for data transforms and machine learning algorithms and uses a Genetic Programming stochastic global search procedure to efficiently discover a top-performing model pipeline for a given dataset.

    In this tutorial, you will discover how to use TPOT for AutoML with Scikit-Learn machine learning algorithms in Python.

    After completing this tutorial, you will know:

    • TPOT is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
    • How to use TPOT to automatically discover top-performing models for classification tasks.
    • How to use TPOT to automatically discover top-performing models for regression tasks.

    Let’s get started.

    TPOT for Automated Machine Learning in Python
    Photo by Gwen, some rights reserved.

    Tutorial Overview

    This tutorial is divided into four parts; they are:

  • TPOT for Automated Machine Learning
  • Install and Use TPOT
  • TPOT for Classification
  • TPOT for Regression

TPOT for Automated Machine Learning

    Tree-based Pipeline Optimization Tool, or TPOT for short, is a Python library for automated machine learning.

TPOT uses a tree-based structure to represent a model pipeline for a predictive modeling problem, including data preparation, modeling algorithms, and model hyperparameters.

    … an evolutionary algorithm called the Tree-based Pipeline Optimization Tool (TPOT) that automatically designs and optimizes machine learning pipelines.

    — Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

An optimization procedure is then performed to find a tree structure that performs best for a given dataset. Specifically, a genetic programming algorithm is used, designed to perform a stochastic global optimization on programs represented as trees.

    TPOT uses a version of genetic programming to automatically design and optimize a series of data transformations and machine learning models that attempt to maximize the classification accuracy for a given supervised learning data set.

    — Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

    The figure below taken from the TPOT paper shows the elements involved in the pipeline search, including data cleaning, feature selection, feature processing, feature construction, model selection, and hyperparameter optimization.

    Overview of the TPOT Pipeline Search
    Taken from: Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

    Now that we are familiar with what TPOT is, let’s look at how we can install and use TPOT to find an effective model pipeline.

    Install and Use TPOT

    The first step is to install the TPOT library, which can be achieved using pip, as follows:

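pip install tpot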

    Once installed, we can import the library and print the version number to confirm it was installed successfully:

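# check the installed version of tpot
import tpot
print('tpot: %s' % tpot.__version__)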

    Running the example prints the version number.

    Your version number should be the same or higher.


    Using TPOT is straightforward.

    It involves creating an instance of the TPOTRegressor or TPOTClassifier class, configuring it for the search, and then exporting the model pipeline that was found to achieve the best performance on your dataset.

    Configuring the class involves two main elements.

    The first is how models will be evaluated, e.g. the cross-validation scheme and performance metric. I recommend explicitly specifying a cross-validation class with your chosen configuration and the performance metric to use.

For example, RepeatedKFold with the ‘neg_mean_absolute_error’ metric for regression:

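# evaluation procedure and search for regression (argument values are illustrative)
from sklearn.model_selection import RepeatedKFold
from tpot import TPOTRegressor
# define the evaluation procedure
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the search
model = TPOTRegressor(generations=5, population_size=50, cv=cv, scoring='neg_mean_absolute_error', n_jobs=-1)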

Or RepeatedStratifiedKFold with the ‘accuracy’ metric for classification:

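# evaluation procedure and search for classification (argument values are illustrative)
from sklearn.model_selection import RepeatedStratifiedKFold
from tpot import TPOTClassifier
# define the evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the search
model = TPOTClassifier(generations=5, population_size=50, cv=cv, scoring='accuracy', n_jobs=-1)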

    The other element is the nature of the stochastic global search procedure.

As an evolutionary algorithm, this involves setting configuration options, such as the size of the population, the number of generations to run, and potentially crossover and mutation rates. The former importantly control the extent of the search; the latter can be left on default values if evolutionary search is new to you.

    For example, a modest population size of 100 and 5 or 10 generations is a good starting point.

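# a modest evolutionary search: population of 100 evolved for 5 generations
from sklearn.model_selection import RepeatedStratifiedKFold
from tpot import TPOTClassifier
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
model = TPOTClassifier(generations=5, population_size=100, cv=cv, scoring='accuracy', verbosity=2, random_state=1)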

    At the end of a search, a Pipeline is found that performs the best.

    This Pipeline can be exported as code into a Python file that you can later copy-and-paste into your own project.

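# export the best discovered pipeline as a Python file (after model.fit() has completed)
model.export('tpot_best_model.py')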

    Now that we are familiar with how to use TPOT, let’s look at some worked examples with real data.

    TPOT for Classification

    In this section, we will use TPOT to discover a model for the sonar dataset.

The sonar dataset is a standard machine learning dataset comprised of 208 rows of data with 60 numerical input variables and a target variable with two class values, i.e. binary classification.

    Using a test harness of repeated stratified 10-fold cross-validation with three repeats, a naive model can achieve an accuracy of about 53 percent. A top-performing model can achieve accuracy on this same test harness of about 88 percent. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting whether sonar returns indicate a rock or simulated mine.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.

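The listing below is a sketch; the URL points to a copy of the sonar data hosted in a public GitHub repository.

# load and summarize the sonar dataset
from pandas import read_csv
# location of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
# load the dataset
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)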

    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 208 rows of data with 60 input variables.


    Next, let’s use TPOT to find a good model for the sonar dataset.

    First, we can define the method for evaluating models. We will use a good practice of repeated stratified k-fold cross-validation with three repeats and 10 folds.

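# define the model evaluation procedure
from sklearn.model_selection import RepeatedStratifiedKFold
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)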

    We will use a population size of 50 for five generations for the search and use all cores on the system by setting “n_jobs” to -1.

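# define the evolutionary search
from tpot import TPOTClassifier
model = TPOTClassifier(generations=5, population_size=50, cv=cv, scoring='accuracy', verbosity=2, random_state=1, n_jobs=-1)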

    Finally, we can start the search and ensure that the best-performing model is saved at the end of the run.

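# perform the search and export the best discovered pipeline
model.fit(X, y)
model.export('tpot_sonar_best_model.py')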

    Tying this together, the complete example is listed below.

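The listing below ties these steps together as a sketch; the label encoding is a minimal preparation step for the string class labels.

# example of tpot on the sonar classification dataset
from pandas import read_csv
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.preprocessing import LabelEncoder
from tpot import TPOTClassifier
# load the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# minimally prepare the dataset
X = X.astype('float32')
y = LabelEncoder().fit_transform(y.astype('str'))
# define the model evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the search
model = TPOTClassifier(generations=5, population_size=50, cv=cv, scoring='accuracy', verbosity=2, random_state=1, n_jobs=-1)
# perform the search
model.fit(X, y)
# export the best discovered pipeline
model.export('tpot_sonar_best_model.py')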

    Running the example may take a few minutes, and you will see a progress bar on the command line.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    The accuracy of top-performing models will be reported along the way.


In this case, we can see that the top-performing pipeline achieved a mean accuracy of about 86.6 percent. This is a skillful model, and close to a top-performing model on this dataset.

    The top-performing pipeline is then saved to a file named “tpot_sonar_best_model.py“.

    Opening this file, you can see that there is some generic code for loading a dataset and fitting the pipeline. An example is listed below.

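A representative export, consistent with the Naive Bayes plus Gradient Boosting pipeline described below, might look like this; the hyperparameter values are illustrative and will differ from run to run.

# template exported by TPOT for the sonar dataset (hyperparameter values are illustrative)
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from tpot.builtins import StackingEstimator

# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = \
    train_test_split(features, tpot_data['target'], random_state=1)

exported_pipeline = make_pipeline(
    StackingEstimator(estimator=GaussianNB()),
    GradientBoostingClassifier(learning_rate=0.1, max_depth=7, n_estimators=100, subsample=1.0)
)
exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)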

    Note: as-is, this code does not execute, by design. It is a template that you can copy-and-paste into your project.

    In this case, we can see that the best-performing model is a pipeline comprised of a Naive Bayes model and a Gradient Boosting model.

    We can adapt this code to fit a final model on all available data and make a prediction for new data.

    The complete example is listed below.

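The listing below sketches that adaptation; the pipeline definition mirrors the export above, and the first row of the dataset stands in for new data.

# fit the discovered pipeline on all data and make a prediction
from pandas import read_csv
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder
from tpot.builtins import StackingEstimator
# load and minimally prepare the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
X, y = data[:, :-1].astype('float32'), LabelEncoder().fit_transform(data[:, -1].astype('str'))
# define and fit the discovered pipeline on all available data
exported_pipeline = make_pipeline(
    StackingEstimator(estimator=GaussianNB()),
    GradientBoostingClassifier(learning_rate=0.1, max_depth=7, n_estimators=100, subsample=1.0)
)
exported_pipeline.fit(X, y)
# make a prediction for one row of data (the first row stands in for new data)
row = X[0, :]
yhat = exported_pipeline.predict([row])
print('Predicted: %s' % yhat[0])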

    Running the example fits the best-performing model on the dataset and makes a prediction for a single row of new data.


    TPOT for Regression

    In this section, we will use TPOT to discover a model for the auto insurance dataset.

    The auto insurance dataset is a standard machine learning dataset comprised of 63 rows of data with one numerical input variable and a numerical target variable.

Using a test harness of repeated 10-fold cross-validation with three repeats, a naive model can achieve a mean absolute error (MAE) of about 66. A top-performing model can achieve a MAE on this same test harness of about 28. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting the total amount in claims (thousands of Swedish Kronor) given the number of claims for different geographical regions.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.

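# load and summarize the auto insurance dataset
from pandas import read_csv
# location of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
# load the dataset
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)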

    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 63 rows of data with one input variable.


    Next, we can use TPOT to find a good model for the auto insurance dataset.

    First, we can define the method for evaluating models. We will use a good practice of repeated k-fold cross-validation with three repeats and 10 folds.

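# define the model evaluation procedure
from sklearn.model_selection import RepeatedKFold
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)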

    We will use a population size of 50 for 5 generations for the search and use all cores on the system by setting “n_jobs” to -1.

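# define the evolutionary search
from tpot import TPOTRegressor
model = TPOTRegressor(generations=5, population_size=50, cv=cv, scoring='neg_mean_absolute_error', verbosity=2, random_state=1, n_jobs=-1)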

    Finally, we can start the search and ensure that the best-performing model is saved at the end of the run.

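# perform the search and export the best discovered pipeline
model.fit(X, y)
model.export('tpot_insurance_best_model.py')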

    Tying this together, the complete example is listed below.

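The listing below ties these steps together as a sketch.

# example of tpot on the auto insurance regression dataset
from pandas import read_csv
from sklearn.model_selection import RepeatedKFold
from tpot import TPOTRegressor
# load the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1].astype('float32'), data[:, -1].astype('float32')
# define the model evaluation procedure
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the search
model = TPOTRegressor(generations=5, population_size=50, cv=cv, scoring='neg_mean_absolute_error', verbosity=2, random_state=1, n_jobs=-1)
# perform the search
model.fit(X, y)
# export the best discovered pipeline
model.export('tpot_insurance_best_model.py')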

    Running the example may take a few minutes, and you will see a progress bar on the command line.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    The MAE of top-performing models will be reported along the way.


In this case, we can see that the top-performing pipeline achieved a mean MAE of about 29.14. This is a skillful model, and close to a top-performing model on this dataset.

    The top-performing pipeline is then saved to a file named “tpot_insurance_best_model.py“.

    Opening this file, you can see that there is some generic code for loading a dataset and fitting the pipeline. An example is listed below.

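A representative export, consistent with the linear support vector machine described below, might look like this; the hyperparameter values are illustrative and will differ from run to run.

# template exported by TPOT for the insurance dataset (hyperparameter values are illustrative)
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVR

# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = \
    train_test_split(features, tpot_data['target'], random_state=1)

exported_pipeline = LinearSVR(C=1.0, dual=True, epsilon=0.0001, loss='epsilon_insensitive', tol=0.001)
exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)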

    Note: as-is, this code does not execute, by design. It is a template that you can copy-paste into your project.

    In this case, we can see that the best-performing model is a pipeline comprised of a linear support vector machine model.

    We can adapt this code to fit a final model on all available data and make a prediction for new data.

    The complete example is listed below.

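The listing below sketches that adaptation; the model definition mirrors the export above, and the first row of the dataset stands in for new data.

# fit the discovered model on all data and make a prediction
from pandas import read_csv
from sklearn.svm import LinearSVR
# load the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
X, y = data[:, :-1].astype('float32'), data[:, -1].astype('float32')
# define and fit the discovered model on all available data
exported_pipeline = LinearSVR(C=1.0, dual=True, epsilon=0.0001, loss='epsilon_insensitive', tol=0.001)
exported_pipeline.fit(X, y)
# make a prediction for one row of data (the first row stands in for new data)
row = X[0, :]
yhat = exported_pipeline.predict([row])
print('Predicted: %.3f' % yhat[0])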

    Running the example fits the best-performing model on the dataset and makes a prediction for a single row of new data.


    Further Reading

This section provides more resources on the topic if you are looking to go deeper.

  • TPOT Documentation: http://epistasislab.github.io/tpot/
  • TPOT on GitHub: https://github.com/EpistasisLab/tpot
  • Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016: https://arxiv.org/abs/1601.07925

    Summary

    In this tutorial, you discovered how to use TPOT for AutoML with Scikit-Learn machine learning algorithms in Python.

    Specifically, you learned:

    • TPOT is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
    • How to use TPOT to automatically discover top-performing models for classification tasks.
    • How to use TPOT to automatically discover top-performing models for regression tasks.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.


    Auto-Sklearn for Automated Machine Learning in Python

    Automated Machine Learning (AutoML) refers to techniques for automatically discovering well-performing models for predictive modeling tasks with very little user involvement.

    Auto-Sklearn is an open-source library for performing AutoML in Python. It makes use of the popular Scikit-Learn machine learning library for data transforms and machine learning algorithms and uses a Bayesian Optimization search procedure to efficiently discover a top-performing model pipeline for a given dataset.

    In this tutorial, you will discover how to use Auto-Sklearn for AutoML with Scikit-Learn machine learning algorithms in Python.

    After completing this tutorial, you will know:

    • Auto-Sklearn is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
    • How to use Auto-Sklearn to automatically discover top-performing models for classification tasks.
    • How to use Auto-Sklearn to automatically discover top-performing models for regression tasks.

    Let’s get started.

    Auto-Sklearn for Automated Machine Learning in Python
    Photo by Richard, some rights reserved.

    Tutorial Overview

    This tutorial is divided into four parts; they are:

  • AutoML With Auto-Sklearn
  • Installing and Using Auto-Sklearn
  • Auto-Sklearn for Classification
  • Auto-Sklearn for Regression

AutoML With Auto-Sklearn

    Automated Machine Learning, or AutoML for short, is a process of discovering the best-performing pipeline of data transforms, model, and model configuration for a dataset.

    AutoML often involves the use of sophisticated optimization algorithms, such as Bayesian Optimization, to efficiently navigate the space of possible models and model configurations and quickly discover what works well for a given predictive modeling task. It allows non-expert machine learning practitioners to quickly and easily discover what works well or even best for a given dataset with very little technical background or direct input.

    Auto-Sklearn is an open-source Python library for AutoML using machine learning models from the scikit-learn machine learning library.

    It was developed by Matthias Feurer, et al. and described in their 2015 paper titled “Efficient and Robust Automated Machine Learning.”

    … we introduce a robust new AutoML system based on scikit-learn (using 15 classifiers, 14 feature preprocessing methods, and 4 data preprocessing methods, giving rise to a structured hypothesis space with 110 hyperparameters).

    — Efficient and Robust Automated Machine Learning, 2015.

The benefit of Auto-Sklearn is that, in addition to discovering the data preparation and model that perform well for a dataset, it is also able to learn from models that performed well on similar datasets and to automatically create an ensemble of top-performing models discovered as part of the optimization process.

    This system, which we dub AUTO-SKLEARN, improves on existing AutoML methods by automatically taking into account past performance on similar datasets, and by constructing ensembles from the models evaluated during the optimization.

    — Efficient and Robust Automated Machine Learning, 2015.

    The authors provide a useful depiction of their system in the paper, provided below.

    Overview of the Auto-Sklearn System.
    Taken from: Efficient and Robust Automated Machine Learning, 2015.

Installing and Using Auto-Sklearn

    The first step is to install the Auto-Sklearn library, which can be achieved using pip, as follows:

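pip install auto-sklearn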

    Once installed, we can import the library and print the version number to confirm it was installed successfully:

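# check the installed version of auto-sklearn
import autosklearn
print('autosklearn: %s' % autosklearn.__version__)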

    Running the example prints the version number.

    Your version number should be the same or higher.


    Using Auto-Sklearn is straightforward.

    Depending on whether your prediction task is classification or regression, you create and configure an instance of the AutoSklearnClassifier or AutoSklearnRegressor class, fit it on your dataset, and that’s it. The resulting model can then be used to make predictions directly or saved to file (using pickle) for later use.

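A minimal sketch for classification, assuming X_train, y_train, and X_test have already been prepared:

# minimal usage of auto-sklearn for classification
from autosklearn.classification import AutoSklearnClassifier
# define the search with default settings
model = AutoSklearnClassifier()
# perform the search (X_train and y_train are assumed to be defined)
model.fit(X_train, y_train)
# make predictions with the resulting model
y_hat = model.predict(X_test)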

There are a ton of configuration options provided as arguments to the AutoSklearnClassifier and AutoSklearnRegressor classes.

By default, the search will evaluate models using a train-test split of your dataset, and this default is recommended for both speed and simplicity.

    Importantly, you should set the “n_jobs” argument to the number of cores in your system, e.g. 8 if you have 8 cores.

The optimization process will run for as long as you allow, measured in minutes. By default, it will run for one hour.

I recommend setting the “time_left_for_this_task” argument to the number of seconds you want the process to run, e.g. 5 to 10 minutes is probably plenty for many small predictive modeling tasks (fewer than 1,000 rows).

    We will use 5 minutes (300 seconds) for the examples in this tutorial. We will also limit the time allocated to each model evaluation to 30 seconds via the “per_run_time_limit” argument. For example:

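# define the search: 5 minutes total, 30 seconds per model evaluation (core count is illustrative)
from autosklearn.classification import AutoSklearnClassifier
model = AutoSklearnClassifier(time_left_for_this_task=300, per_run_time_limit=30, n_jobs=8)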

    You can limit the algorithms considered in the search, as well as the data transforms.

By default, the search will create an ensemble of top-performing models discovered as part of the search. Sometimes this can lead to overfitting; it can be disabled by setting the “ensemble_size” argument to 1 and “initial_configurations_via_metalearning” to 0.

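# keep only the single best model and disable the meta-learning warm start
from autosklearn.classification import AutoSklearnClassifier
model = AutoSklearnClassifier(ensemble_size=1, initial_configurations_via_metalearning=0)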

    At the end of a run, the list of models can be accessed, as well as other details.

    Perhaps the most useful feature is the sprint_statistics() function that summarizes the search and the performance of the final model.

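# summarize the search and the performance of the final model
print(model.sprint_statistics())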

    Now that we are familiar with the Auto-Sklearn library, let’s look at some worked examples.

    Auto-Sklearn for Classification

    In this section, we will use Auto-Sklearn to discover a model for the sonar dataset.

The sonar dataset is a standard machine learning dataset comprised of 208 rows of data with 60 numerical input variables and a target variable with two class values, i.e. binary classification.

    Using a test harness of repeated stratified 10-fold cross-validation with three repeats, a naive model can achieve an accuracy of about 53 percent. A top-performing model can achieve accuracy on this same test harness of about 88 percent. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting whether sonar returns indicate a rock or simulated mine.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.

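# load and summarize the sonar dataset (hosted in a public GitHub repository)
from pandas import read_csv
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)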

    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 208 rows of data with 60 input variables.


    We will use Auto-Sklearn to find a good model for the sonar dataset.

    First, we will split the dataset into train and test sets and allow the process to find a good model on the training set, then later evaluate the performance of what was found on the holdout test set.

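# split into train and holdout test sets (split ratio is illustrative)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)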

    The AutoSklearnClassifier is configured to run for 5 minutes with 8 cores and limit each model evaluation to 30 seconds.

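# define the search: 5 minutes total, 30 seconds per model, 8 cores
from autosklearn.classification import AutoSklearnClassifier
model = AutoSklearnClassifier(time_left_for_this_task=300, per_run_time_limit=30, n_jobs=8)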

    The search is then performed on the training dataset.

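# perform the search on the training dataset
model.fit(X_train, y_train)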

    Afterward, a summary of the search and best-performing model is reported.

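# summarize the search and the best-performing model
print(model.sprint_statistics())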

    Finally, we evaluate the performance of the model that was prepared on the holdout test dataset.

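# evaluate the best model on the holdout dataset
from sklearn.metrics import accuracy_score
y_hat = model.predict(X_test)
print('Accuracy: %.3f' % accuracy_score(y_test, y_hat))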

    Tying this together, the complete example is listed below.

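The listing below ties these steps together as a sketch; the label encoding is a minimal preparation step for the string class labels.

# example of auto-sklearn on the sonar classification dataset
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
from autosklearn.classification import AutoSklearnClassifier
# load the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
# minimally prepare the dataset
X, y = data[:, :-1].astype('float32'), LabelEncoder().fit_transform(data[:, -1].astype('str'))
# split into train and holdout test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# define the search
model = AutoSklearnClassifier(time_left_for_this_task=300, per_run_time_limit=30, n_jobs=8)
# perform the search
model.fit(X_train, y_train)
# summarize the search
print(model.sprint_statistics())
# evaluate the best model on the holdout dataset
y_hat = model.predict(X_test)
print('Accuracy: %.3f' % accuracy_score(y_test, y_hat))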

    Running the example will take about five minutes, given the hard limit we imposed on the run.

    At the end of the run, a summary is printed showing that 1,054 models were evaluated and the estimated performance of the final model was 91 percent.

    Your specific results may vary given the stochastic nature of the optimization algorithm.


We then evaluate the model on the holdout dataset and see that a classification accuracy of 81.2 percent was achieved, which is reasonably skillful.


    Auto-Sklearn for Regression

    In this section, we will use Auto-Sklearn to discover a model for the auto insurance dataset.

    The auto insurance dataset is a standard machine learning dataset comprised of 63 rows of data with one numerical input variable and a numerical target variable.

Using a test harness of repeated 10-fold cross-validation with three repeats, a naive model can achieve a mean absolute error (MAE) of about 66. A top-performing model can achieve a MAE on this same test harness of about 28. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting the total amount in claims (thousands of Swedish Kronor) given the number of claims for different geographical regions.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.

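# load and summarize the auto insurance dataset (hosted in a public GitHub repository)
from pandas import read_csv
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)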

    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 63 rows of data with one input variable.


    We will use Auto-Sklearn to find a good model for the auto insurance dataset.

    We can use the same process as was used in the previous section, although we will use the AutoSklearnRegressor class instead of the AutoSklearnClassifier.

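# define the regression search (same time limits as the classification example)
from autosklearn.regression import AutoSklearnRegressor
model = AutoSklearnRegressor(time_left_for_this_task=300, per_run_time_limit=30, n_jobs=8)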

    By default, the regressor will optimize the R^2 metric.

    In this case, we are interested in the mean absolute error, or MAE, which we can specify via the “metric” argument when calling the fit() function.

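For example (note that in recent versions of the library, the metric is passed to the constructor instead of fit()):

# optimize for mean absolute error during the search
from autosklearn.metrics import mean_absolute_error as auto_mean_absolute_error
model.fit(X_train, y_train, metric=auto_mean_absolute_error)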

    The complete example is listed below.

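The listing below ties these steps together as a sketch.

# example of auto-sklearn on the auto insurance regression dataset
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from autosklearn.regression import AutoSklearnRegressor
from autosklearn.metrics import mean_absolute_error as auto_mean_absolute_error
# load the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
X, y = data[:, :-1].astype('float32'), data[:, -1].astype('float32')
# split into train and holdout test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# define the search
model = AutoSklearnRegressor(time_left_for_this_task=300, per_run_time_limit=30, n_jobs=8)
# perform the search, optimizing MAE (newer versions take the metric in the constructor)
model.fit(X_train, y_train, metric=auto_mean_absolute_error)
# summarize the search
print(model.sprint_statistics())
# evaluate the best model on the holdout dataset
y_hat = model.predict(X_test)
print('MAE: %.3f' % mean_absolute_error(y_test, y_hat))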

    Running the example will take about five minutes, given the hard limit we imposed on the run.

    You might see some warning messages during the run and you can safely ignore them, such as:


    At the end of the run, a summary is printed showing that 1,759 models were evaluated and the estimated performance of the final model was a MAE of 29.


    We then evaluate the model on the holdout dataset and see that a MAE of 26 was achieved, which is a great result.


    Further Reading

This section provides more resources on the topic if you are looking to go deeper.

  • Auto-Sklearn Documentation: https://automl.github.io/auto-sklearn/
  • Auto-Sklearn on GitHub: https://github.com/automl/auto-sklearn
  • Efficient and Robust Automated Machine Learning, 2015: https://papers.nips.cc/paper/5872-efficient-and-robust-automated-machine-learning

    Summary

    In this tutorial, you discovered how to use Auto-Sklearn for AutoML with Scikit-Learn machine learning algorithms in Python.

    Specifically, you learned:

    • Auto-Sklearn is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
    • How to use Auto-Sklearn to automatically discover top-performing models for classification tasks.
    • How to use Auto-Sklearn to automatically discover top-performing models for regression tasks.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.
