Hyperparameter Optimization With Random Search and Grid Search

Machine learning models have hyperparameters that you must set in order to customize the model to your dataset.

Often the general effects of hyperparameters on a model are known, but how to best set a hyperparameter and combinations of interacting hyperparameters for a given dataset is challenging. There are often general heuristics or rules of thumb for configuring hyperparameters.

A better approach is to objectively search different values for model hyperparameters and choose a subset that results in a model that achieves the best performance on a given dataset. This is called hyperparameter optimization or hyperparameter tuning and is available in the scikit-learn Python machine learning library. The result of a hyperparameter optimization is a single set of well-performing hyperparameters that you can use to configure your model.

In this tutorial, you will discover hyperparameter optimization for machine learning in Python.

After completing this tutorial, you will know:

  • Hyperparameter optimization is required to get the most out of your machine learning models.
  • How to configure random and grid search hyperparameter optimization for classification tasks.
  • How to configure random and grid search hyperparameter optimization for regression tasks.

Let’s get started.

Hyperparameter Optimization With Random Search and Grid Search
Photo by James St. John, some rights reserved.

Tutorial Overview

This tutorial is divided into five parts; they are:

  • Model Hyperparameter Optimization
  • Hyperparameter Optimization Scikit-Learn API
  • Hyperparameter Optimization for Classification
      • Random Search for Classification
      • Grid Search for Classification
  • Hyperparameter Optimization for Regression
      • Random Search for Regression
      • Grid Search for Regression
  • Common Questions About Hyperparameter Optimization

    Model Hyperparameter Optimization

    Machine learning models have hyperparameters.

    Hyperparameters are points of choice or configuration that allow a machine learning model to be customized for a specific task or dataset.

    • Hyperparameter: Model configuration argument specified by the developer to guide the learning process for a specific dataset.

    Machine learning models also have parameters, which are the internal coefficients set by training or optimizing the model on a training dataset.

    Parameters are different from hyperparameters. Parameters are learned automatically; hyperparameters are set manually to help guide the learning process.

    For more on the difference between parameters and hyperparameters, see the tutorial:

    Typically a hyperparameter has a known effect on a model in the general sense, but it is not clear how to best set a hyperparameter for a given dataset. Further, many machine learning models have a range of hyperparameters and they may interact in nonlinear ways.

    As such, it is often required to search for a set of hyperparameters that result in the best performance of a model on a dataset. This is called hyperparameter optimization, hyperparameter tuning, or hyperparameter search.

    An optimization procedure involves defining a search space. This can be thought of geometrically as an n-dimensional volume, where each hyperparameter represents a different dimension and the scale of the dimension is the values that the hyperparameter may take on, such as real-valued, integer-valued, or categorical.

    • Search Space: Volume to be searched where each dimension represents a hyperparameter and each point represents one model configuration.

    A point in the search space is a vector with a specific value for each hyperparameter. The goal of the optimization procedure is to find a vector that results in the best performance of the model after learning, such as maximum accuracy or minimum error.

    A range of different optimization algorithms may be used, although two of the simplest and most common methods are random search and grid search.

    • Random Search. Define a search space as a bounded domain of hyperparameter values and randomly sample points in that domain.
    • Grid Search. Define a search space as a grid of hyperparameter values and evaluate every position in the grid.

    Grid search is great for spot-checking combinations that are known to perform well generally. Random search is great for discovery and getting hyperparameter combinations that you would not have guessed intuitively, although it often requires more time to execute.

    More advanced methods are sometimes used, such as Bayesian Optimization and Evolutionary Optimization.

    Now that we are familiar with hyperparameter optimization, let’s look at how we can use this method in Python.

    Hyperparameter Optimization Scikit-Learn API

    The scikit-learn Python open-source machine learning library provides techniques to tune model hyperparameters.

    Specifically, it provides the RandomizedSearchCV for random search and GridSearchCV for grid search. Both techniques evaluate models for a given hyperparameter vector using cross-validation, hence the “CV” suffix of each class name.

    Both classes require two arguments. The first is the model that you are optimizing. This is an instance of the model with values set for the hyperparameters that you do not want to optimize. The second is the search space. This is defined as a dictionary where the names are the hyperparameter arguments to the model and the values are discrete values or a distribution of values to sample in the case of a random search.


    Both classes provide a “cv” argument that allows either an integer number of folds to be specified, e.g. 5, or a configured cross-validation object. I recommend defining and specifying a cross-validation object to gain more control over model evaluation and make the evaluation procedure obvious and explicit.

    In the case of classification tasks, I recommend using the RepeatedStratifiedKFold class, and for regression tasks, I recommend using the RepeatedKFold with an appropriate number of folds and repeats, such as 10 folds and three repeats.
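    For example, a minimal sketch of defining these two cross-validation objects with the suggested 10 folds and three repeats might look as follows:

    # define evaluation procedures with 10 folds and 3 repeats
    from sklearn.model_selection import RepeatedStratifiedKFold, RepeatedKFold
    cv_classification = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    cv_regression = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)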


    Both hyperparameter optimization classes also provide a “scoring” argument that takes a string indicating the metric to optimize.

    The metric must be maximizing, meaning better models result in larger scores. For classification, this may be ‘accuracy‘. For regression, this is a negative error measure, such as ‘neg_mean_absolute_error‘ for a negative version of the mean absolute error, where values closer to zero represent less prediction error by the model.


    You can see a list of built-in scoring metrics here:

    Finally, the search can be made parallel, e.g. use all of the CPU cores, by specifying the “n_jobs” argument as an integer with the number of cores in your system, e.g. 8. Or you can set it to -1 to automatically use all of the cores in your system.


    Once defined, the search is performed by calling the fit() function and providing a dataset used to train and evaluate model hyperparameter combinations using cross-validation.


    Running the search may take minutes or hours, depending on the size of the search space and the speed of your hardware. You’ll often want to tailor the search to how much time you have rather than the possibility of what could be searched.

    At the end of the search, you can access all of the results via attributes on the class. Perhaps the most important attributes are the best score observed and the hyperparameters that achieved the best score.
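    Putting these pieces together, a generic sketch of the workflow might look like the following; the model, dataset, and search space here are placeholders for illustration only:

    # generic sketch: define a model, search space, and cv procedure, run the search, report the best result
    from scipy.stats import loguniform
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RandomizedSearchCV, RepeatedStratifiedKFold
    # placeholder dataset and model for illustration
    X, y = make_classification(n_samples=200, n_features=10, random_state=1)
    model = LogisticRegression(max_iter=1000)
    space = {'C': loguniform(1e-5, 100)}
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    # scoring and n_jobs as described above; fit() runs the search
    search = RandomizedSearchCV(model, space, n_iter=100, scoring='accuracy', n_jobs=-1, cv=cv, random_state=1)
    result = search.fit(X, y)
    # the best score observed and the hyperparameters that achieved it
    print('Best Score: %s' % result.best_score_)
    print('Best Hyperparameters: %s' % result.best_params_)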


    Once you know the set of hyperparameters that achieve the best result, you can then define a new model, set the values of each hyperparameter, then fit the model on all available data. This model can then be used to make predictions on new data.

    Now that we are familiar with the hyperparameter optimization API in scikit-learn, let’s look at some worked examples.

    Hyperparameter Optimization for Classification

    In this section, we will use hyperparameter optimization to discover a well-performing model configuration for the sonar dataset.

    The sonar dataset is a standard machine learning dataset comprising 208 rows of data with 60 numerical input variables and a target variable with two class values, i.e. binary classification.

    Using a test harness of repeated stratified 10-fold cross-validation with three repeats, a naive model can achieve an accuracy of about 53 percent. A top-performing model can achieve accuracy on this same test harness of about 88 percent. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting whether sonar returns indicate a rock or simulated mine.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.
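    A sketch of this step is shown below; it assumes the dataset is available at the URL used here (a copy hosted on GitHub):

    # summarize the shape of the sonar dataset
    from pandas import read_csv
    # location of the dataset (assumed URL)
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
    dataframe = read_csv(url, header=None)
    # split into input and output elements
    data = dataframe.values
    X, y = data[:, :-1], data[:, -1]
    print(X.shape, y.shape)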


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 208 rows of data with 60 input variables.


    Next, let’s use random search to find a good model configuration for the sonar dataset.

    To keep things simple, we will focus on a linear model, the logistic regression model, and the common hyperparameters tuned for this model.

    Random Search for Classification

    In this section, we will explore hyperparameter optimization of the logistic regression model on the sonar dataset.

    First, we will define the model that will be optimized and use default values for the hyperparameters that will not be optimized.


    We will evaluate model configurations using repeated stratified k-fold cross-validation with three repeats and 10 folds.


    Next, we can define the search space.

    This is a dictionary where names are arguments to the model and values are distributions from which to draw samples. We will optimize the solver, the penalty, and the C hyperparameters of the model with discrete distributions for the solver and penalty type and a log-uniform distribution from 1e-5 to 100 for the C value.

    Log-uniform is useful for searching penalty values as we often explore values at different orders of magnitude, at least as a first step.
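    A sketch of this search space is shown below; the specific solver and penalty candidates are typical choices, included here for illustration:

    # define the search space for logistic regression
    from scipy.stats import loguniform
    space = dict()
    space['solver'] = ['newton-cg', 'lbfgs', 'liblinear']
    space['penalty'] = ['none', 'l1', 'l2', 'elasticnet']  # newer scikit-learn expects None instead of 'none'
    space['C'] = loguniform(1e-5, 100)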


    Next, we can define the search procedure with all of these elements.

    Importantly, we must set the number of iterations or samples to draw from the search space via the “n_iter” argument. In this case, we will set it to 500.


    Finally, we can perform the optimization and report the results.


    Tying this together, the complete example is listed below.
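    A sketch of the complete example might look like the following; the dataset URL and the specific solver and penalty candidates are assumptions:

    # random search of logistic regression hyperparameters for the sonar dataset
    from scipy.stats import loguniform
    from pandas import read_csv
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedStratifiedKFold, RandomizedSearchCV
    # load dataset (assumed URL)
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
    data = read_csv(url, header=None).values
    X, y = data[:, :-1], data[:, -1]
    # define model
    model = LogisticRegression()
    # define evaluation procedure
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    # define search space
    space = dict()
    space['solver'] = ['newton-cg', 'lbfgs', 'liblinear']
    space['penalty'] = ['none', 'l1', 'l2', 'elasticnet']
    space['C'] = loguniform(1e-5, 100)
    # define and run the search with 500 samples from the space
    search = RandomizedSearchCV(model, space, n_iter=500, scoring='accuracy', n_jobs=-1, cv=cv, random_state=1)
    result = search.fit(X, y)
    # summarize the best result
    print('Best Score: %s' % result.best_score_)
    print('Best Hyperparameters: %s' % result.best_params_)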


    Running the example may take a minute. It is fast because we are using a small search space and a fast model to fit and evaluate. You may see some warnings during the optimization for invalid configuration combinations. These can be safely ignored.

    At the end of the run, the best score and hyperparameter configuration that achieved the best performance are reported.

    Your specific results will vary given the stochastic nature of the optimization procedure. Try running the example a few times.

    In this case, we can see that the best configuration achieved an accuracy of about 78.9 percent, which is fair, and the specific values for the solver, penalty, and C hyperparameters used to achieve that score.


    Next, let’s use grid search to find a good model configuration for the sonar dataset.

    Grid Search for Classification

    Using the grid search is much like using the random search for classification.

    The main difference is that the search space must be a discrete grid to be searched. This means that instead of using a log-uniform distribution for C, we can specify discrete values on a log scale.


    Additionally, the GridSearchCV class does not take a number of iterations, as we are only evaluating combinations of hyperparameters in the grid.


    Tying this together, the complete example of grid searching logistic regression configurations for the sonar dataset is listed below.
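    A sketch of this complete example is shown below; again, the dataset URL and the candidate hyperparameter values are assumptions:

    # grid search of logistic regression hyperparameters for the sonar dataset
    from pandas import read_csv
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedStratifiedKFold, GridSearchCV
    # load dataset (assumed URL)
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
    data = read_csv(url, header=None).values
    X, y = data[:, :-1], data[:, -1]
    # define model and evaluation procedure
    model = LogisticRegression()
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    # define a discrete grid: C on a log scale instead of a log-uniform distribution
    space = dict()
    space['solver'] = ['newton-cg', 'lbfgs', 'liblinear']
    space['penalty'] = ['none', 'l1', 'l2', 'elasticnet']
    space['C'] = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 10, 100]
    # define and run the search over every position in the grid
    search = GridSearchCV(model, space, scoring='accuracy', n_jobs=-1, cv=cv)
    result = search.fit(X, y)
    # summarize the best result
    print('Best Score: %s' % result.best_score_)
    print('Best Hyperparameters: %s' % result.best_params_)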


    Running the example may take a moment. It is fast because we are using a small search space and a fast model to fit and evaluate. Again, you may see some warnings during the optimization for invalid configuration combinations. These can be safely ignored.

    At the end of the run, the best score and hyperparameter configuration that achieved the best performance are reported.

    Your specific results will vary given the stochastic nature of the optimization procedure. Try running the example a few times.

    In this case, we can see that the best configuration achieved an accuracy of about 78.2 percent, which is also fair, and the specific values for the solver, penalty, and C hyperparameters used to achieve that score. Interestingly, the results are very similar to those found via the random search.


    Hyperparameter Optimization for Regression

    In this section, we will use hyperparameter optimization to discover a top-performing model configuration for the auto insurance dataset.

    The auto insurance dataset is a standard machine learning dataset comprising 63 rows of data with 1 numerical input variable and a numerical target variable.

    Using a test harness of repeated 10-fold cross-validation with 3 repeats, a naive model can achieve a mean absolute error (MAE) of about 66. A top-performing model can achieve a MAE on this same test harness of about 28. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting the total amount in claims (thousands of Swedish Kronor) given the number of claims for different geographical regions.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.
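    A sketch of this step is shown below, assuming the dataset is available at the URL used here:

    # summarize the shape of the auto insurance dataset
    from pandas import read_csv
    # location of the dataset (assumed URL)
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
    dataframe = read_csv(url, header=None)
    data = dataframe.values
    X, y = data[:, :-1], data[:, -1]
    print(X.shape, y.shape)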


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 63 rows of data with 1 input variable.


    Next, we can use hyperparameter optimization to find a good model configuration for the auto insurance dataset.

    To keep things simple, we will focus on a linear model, the linear regression model and the common hyperparameters tuned for this model.

    Random Search for Regression

    Configuring and using the random search hyperparameter optimization procedure for regression is much like using it for classification.

    In this case, we will configure the important hyperparameters of the linear regression implementation (the penalized Ridge regression in scikit-learn), including the solver, alpha, fit_intercept, and normalize.

    We will use a discrete distribution of values in the search space for all except the “alpha” argument, which is a penalty term, in which case we will use a log-uniform distribution as we did in the previous section for the “C” argument of logistic regression.
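    A sketch of this search space is shown below. It uses the Ridge class, which exposes the solver, alpha, and fit_intercept arguments; the normalize argument has been removed from recent scikit-learn versions, so it is omitted here. The specific solver candidates are illustrative choices:

    # define the search space for Ridge regression
    from scipy.stats import loguniform
    space = dict()
    space['solver'] = ['svd', 'cholesky', 'lsqr', 'sag']
    space['alpha'] = loguniform(1e-5, 100)
    space['fit_intercept'] = [True, False]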


    The main difference in regression compared to classification is the choice of the scoring method.

    For regression, performance is often measured using an error, which is minimized, with zero representing a model with perfect skill. The hyperparameter optimization procedures in scikit-learn assume a maximizing score. Therefore a version of each error metric is provided that is made negative.

    This means that large positive errors become large negative errors, good performance is indicated by small negative values close to zero, and perfect skill is zero.

    The sign of the negative MAE can be ignored when interpreting the result.

    In this case, we will use mean absolute error (MAE), and a maximizing version of this error is available by setting the “scoring” argument to “neg_mean_absolute_error”.


    Tying this together, the complete example is listed below.
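    A sketch of the complete example might look like the following; the dataset URL and the candidate solver values are assumptions:

    # random search of Ridge regression hyperparameters for the auto insurance dataset
    from scipy.stats import loguniform
    from pandas import read_csv
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import RepeatedKFold, RandomizedSearchCV
    # load dataset (assumed URL)
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
    data = read_csv(url, header=None).values
    X, y = data[:, :-1], data[:, -1]
    # define model and evaluation procedure
    model = Ridge()
    cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
    # define search space
    space = dict()
    space['solver'] = ['svd', 'cholesky', 'lsqr', 'sag']
    space['alpha'] = loguniform(1e-5, 100)
    space['fit_intercept'] = [True, False]
    # define and run the search, maximizing the negative MAE
    search = RandomizedSearchCV(model, space, n_iter=500, scoring='neg_mean_absolute_error', n_jobs=-1, cv=cv, random_state=1)
    result = search.fit(X, y)
    print('Best Score: %s' % result.best_score_)
    print('Best Hyperparameters: %s' % result.best_params_)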


    Running the example may take a moment. It is fast because we are using a small search space and a fast model to fit and evaluate. You may see some warnings during the optimization for invalid configuration combinations. These can be safely ignored.

    At the end of the run, the best score and hyperparameter configuration that achieved the best performance are reported.

    Your specific results will vary given the stochastic nature of the optimization procedure. Try running the example a few times.

    In this case, we can see that the best configuration achieved a MAE of about 29.2, which is very close to the best expected performance on this dataset. We can then see the specific hyperparameter values that achieved this result.


    Next, let’s use grid search to find a good model configuration for the auto insurance dataset.

    Grid Search for Regression

    As this is a grid search, we cannot define a distribution to sample from and instead must define a discrete grid of hyperparameter values. As such, we will specify the “alpha” argument as a range of values on a log-10 scale.


    Grid search for regression requires that the “scoring” be specified, much as we did for random search.

    In this case, we will again use the negative MAE scoring function.


    Tying this together, the complete example of grid searching linear regression configurations for the auto insurance dataset is listed below.
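    A sketch of this complete example is shown below, under the same assumptions as the random search example:

    # grid search of Ridge regression hyperparameters for the auto insurance dataset
    from pandas import read_csv
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import RepeatedKFold, GridSearchCV
    # load dataset (assumed URL)
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
    data = read_csv(url, header=None).values
    X, y = data[:, :-1], data[:, -1]
    # define model and evaluation procedure
    model = Ridge()
    cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
    # define a discrete grid: alpha on a log-10 scale
    space = dict()
    space['solver'] = ['svd', 'cholesky', 'lsqr', 'sag']
    space['alpha'] = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 10, 100]
    space['fit_intercept'] = [True, False]
    # define and run the search, maximizing the negative MAE
    search = GridSearchCV(model, space, scoring='neg_mean_absolute_error', n_jobs=-1, cv=cv)
    result = search.fit(X, y)
    print('Best Score: %s' % result.best_score_)
    print('Best Hyperparameters: %s' % result.best_params_)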


    Running the example may take a minute. It is fast because we are using a small search space and a fast model to fit and evaluate. Again, you may see some warnings during the optimization for invalid configuration combinations. These can be safely ignored.

    At the end of the run, the best score and hyperparameter configuration that achieved the best performance are reported.

    Your specific results will vary given the stochastic nature of the optimization procedure. Try running the example a few times.

    In this case, we can see that the best configuration achieved a MAE of about 29.2, which is nearly identical to what we achieved with the random search in the previous section. Interestingly, the hyperparameters are also nearly identical, which is good confirmation.


    Common Questions About Hyperparameter Optimization

    This section addresses some common questions about hyperparameter optimization.

    How to Choose Between Random and Grid Search?

    Choose the method based on your needs. I recommend starting with grid and doing a random search if you have the time.

    Grid search is appropriate for small and quick searches of hyperparameter values that are known to perform well generally.

    Random search is appropriate for discovering new hyperparameter values or new combinations of hyperparameters, often resulting in better performance, although it may take more time to complete.

    How to Speed-Up Hyperparameter Optimization?

    Ensure that you set the “n_jobs” argument to the number of cores on your machine.

    After that, more suggestions include:

    • Evaluate on a smaller sample of your dataset.
    • Explore a smaller search space.
    • Use fewer repeats and/or folds for cross-validation.
    • Execute the search on a faster machine, such as AWS EC2.
    • Use an alternate model that is faster to evaluate.

    How to Choose Hyperparameters to Search?

    Most algorithms have a subset of hyperparameters that have the most influence over the learning procedure and model performance.

    These are listed in most descriptions of the algorithm. For example, here are some algorithms and their most important hyperparameters:

    If you are unsure:

    • Review papers that use the algorithm to get ideas.
    • Review the API and algorithm documentation to get ideas.
    • Search all hyperparameters.

    How to Use Best-Performing Hyperparameters?

    Define a new model and set the hyperparameter values of the model to the values found by the search.

    Then fit the model on all available data and use the model to start making predictions on new data.

    This is called preparing a final model. See more here:

    How to Make a Prediction?

    First, fit a final model (previous question).

    Then call the predict() function to make a prediction.

    For examples of making a prediction with a final model, see the tutorial:

    Do you have another question about hyperparameter optimization?
    Let me know in the comments below.

    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Tutorials
    APIs
    Articles

    Summary

    In this tutorial, you discovered hyperparameter optimization for machine learning in Python.

    Specifically, you learned:

    • Hyperparameter optimization is required to get the most out of your machine learning models.
    • How to configure random and grid search hyperparameter optimization for classification tasks.
    • How to configure random and grid search hyperparameter optimization for regression tasks.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.

    Discover Fast Machine Learning in Python!

    Master Machine Learning With Python
    Develop Your Own Models in Minutes

    …with just a few lines of scikit-learn code

    Learn how in my new Ebook:
    Machine Learning Mastery With Python

    Covers self-study tutorials and end-to-end projects like:
    Loading data, visualization, modeling, tuning, and much more…

    Finally Bring Machine Learning To

    Your Own Projects

    Skip the Academics. Just Results.

    See What’s Inside



    How to Train to the Test Set in Machine Learning

    Training to the test set is a type of overfitting where a model is prepared that intentionally achieves good performance on a given test set at the expense of increased generalization error.

    It is a type of overfitting that is common in machine learning competitions where a complete training dataset is provided and where only the input portion of a test set is provided. One approach to training to the test set involves constructing a training set that most resembles the test set and then using it as the basis for training a model. The model is expected to have better performance on the test set, but most likely worse performance on the training dataset and on any new data in the future.

    Although overfitting the test set is not desirable, it can be interesting to explore as a thought experiment and provide more insight into both machine learning competitions and avoiding overfitting generally.

    In this tutorial, you will discover how to intentionally train to the test set for classification and regression problems.

    After completing this tutorial, you will know:

    • Training to the test set is a type of data leakage that may occur in machine learning competitions.
    • One approach to training to the test set involves creating a training dataset that is most similar to a provided test set.
    • How to use a KNN model to construct a training dataset and train to the test set with a real dataset.

    Kick-start your project with my new book Data Preparation for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

    Let’s get started.

    How to Train to the Test Set in Machine Learning
    Photo by ND Strupler, some rights reserved.

    Tutorial Overview

    This tutorial is divided into three parts; they are:

  • Train to the Test Set
  • Train to Test Set for Classification
  • Train to Test Set for Regression

    Train to the Test Set

    In applied machine learning, we seek a model that learns the relationship between the input and output variables using the training dataset.

    The hope and goal is that we learn a relationship that generalizes to new examples beyond the training dataset. This goal motivates why we use resampling techniques like k-fold cross-validation to estimate the performance of the model when making predictions on data not used during training.

    In the case of machine learning competitions, like those on Kaggle, we are given access to the complete training dataset and the inputs of the test dataset and are required to make predictions for the test dataset.

    This leads to a possible situation where we may, accidentally or by choice, train a model to the test set. That is, tune the model behavior to achieve the best performance on the test dataset rather than develop a model that performs well in general, using a technique like k-fold cross-validation.

    Another, more overt path to information leakage, can sometimes be seen in machine learning competitions where the training and test set data are given at the same time.

    — Page 56, Feature Engineering and Selection: A Practical Approach for Predictive Models, 2019.

    Training to the test set is often a bad idea.

    It is an explicit type of data leakage. Nevertheless, it is an interesting thought experiment.

    One approach to training to the test set is to contrive a training dataset that is most similar to the test set. For example, we could discard all rows in the training set that are too different from the test set and only train on those rows in the training set that are maximally similar to rows in the test set.

    While the test set data often have the outcome data blinded, it is possible to “train to the test” by only using the training set samples that are most similar to the test set data. This may very well improve the model’s performance scores for this particular test set but might ruin the model for predicting on a broader data set.

    — Page 56, Feature Engineering and Selection: A Practical Approach for Predictive Models, 2019.

    We would expect the model to overfit the test set, but this is the whole point of this thought experiment.

    Let’s explore this approach to training to the test set in this tutorial.

    We can use a k-nearest neighbor model to select those instances of the training set that are most similar to the test set. The KNeighborsRegressor and KNeighborsClassifier both provide the kneighbors() function that will return indexes into the training dataset for the rows that are most similar to given data, such as a test set.
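    For example, a minimal sketch of looking up the single most similar training row for each test row might look as follows, using a small synthetic dataset for illustration:

    # find the index of the most similar training row for each test row
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    X, y = make_classification(n_samples=100, n_features=5, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(X_train, y_train)
    # kneighbors() returns distances and row indexes into the training data
    _, neighbor_ix = knn.kneighbors(X_test, n_neighbors=1)
    print(neighbor_ix.shape)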


    We might want to try removing duplicates from the selected row indexes.


    We can then use those row indexes to construct a custom training dataset and fit a model.


    Given that we are using a KNN model to construct the training set from the test set, we will also use the same type of model to make predictions on the test set. This is not required, but it makes the examples simpler.

    Using this approach, we can now experiment with training to the test set for both classification and regression datasets.





    Train to Test Set for Classification

    We will use the diabetes dataset as the basis for exploring training for the test set for classification problems.

    Each record describes the medical details of a female and the prediction is the onset of diabetes within the next five years.

    The dataset has eight input variables and 768 rows of data; the input variables are all numeric and the target has two class labels, i.e. it is a binary classification task.

    A sample of the first five rows of the dataset is provided below.


    First, we can load the dataset directly from the URL, split it into input and output elements, then split the dataset into train and test sets, holding thirty percent back for the test set. We can then evaluate a KNN model with default model hyperparameters by training it on the training set and making predictions on the test set.

    The complete example is listed below.
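    A sketch of this complete example is shown below; it assumes the dataset is available at the URL used here:

    # evaluate a KNN model on the diabetes dataset using a train/test split
    from pandas import read_csv
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score
    # load the dataset (assumed URL)
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv'
    data = read_csv(url, header=None).values
    X, y = data[:, :-1], data[:, -1]
    print(X.shape, y.shape)
    # hold back 30 percent of the data for the test set
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
    print(X_train.shape, X_test.shape)
    # fit and evaluate a KNN model with default hyperparameters
    model = KNeighborsClassifier()
    model.fit(X_train, y_train)
    yhat = model.predict(X_test)
    print('Accuracy: %.3f' % (accuracy_score(y_test, yhat) * 100))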


    Running the example first loads the dataset and summarizes the number of rows and columns, matching our expectations. The shape of the train and test sets are then reported, showing we have about 230 rows in the test set.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Finally, the classification accuracy of the model is reported to be about 77.056 percent.


    Now, let’s see if we can achieve better performance on the test set by preparing a model that is trained directly for it.

    First, we will construct a training dataset using the most similar example in the training set for each row in the test set.


    Next, we will train the model on this new dataset and evaluate it on the test set as we did before.


    The complete example is listed below.
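    A sketch of this complete example is shown below, under the same assumptions as the previous listing:

    # train to the test set for the diabetes dataset
    from pandas import read_csv
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score
    # load the dataset and split into train and test sets (assumed URL)
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv'
    data = read_csv(url, header=None).values
    X, y = data[:, :-1], data[:, -1]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
    # select the most similar training row for each test row
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(X_train, y_train)
    _, neighbor_ix = knn.kneighbors(X_test, n_neighbors=1)
    ix = neighbor_ix[:, 0]
    X_train_sel, y_train_sel = X_train[ix], y_train[ix]
    print(X_train_sel.shape, y_train_sel.shape)
    # fit a model on the contrived training set and evaluate on the test set
    model = KNeighborsClassifier()
    model.fit(X_train_sel, y_train_sel)
    yhat = model.predict(X_test)
    print('Accuracy: %.3f' % (accuracy_score(y_test, yhat) * 100))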


    Running the example, we can see that the reported size of the new training dataset is the same size as the test set, as we expected.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    We can see that we have achieved a lift in performance by training to the test set over training the model on the entire training dataset. In this case, we achieved a classification accuracy of about 79.654 percent compared to 77.056 percent when the entire training dataset is used.


    You might want to try selecting different numbers of neighbors from the training set for each example in the test set to see if you can achieve better performance.

    Also, you might want to try keeping unique row indexes in the training set and see if that makes a difference.

    Finally, it might be interesting to hold back a final validation dataset and compare how different “train-to-the-test-set” techniques affect performance on the holdout dataset. E.g. see how training to the test set impacts generalization error.

    Report your findings in the comments below.

    Now that we know how to train to the test set for classification, let’s look at an example for regression.

    Train to Test Set for Regression

    We will use the housing dataset as the basis for exploring training for the test set for regression problems.

    The housing dataset involves the prediction of a house price in thousands of dollars given details of the house and its neighborhood.

    It is a regression problem, meaning we are predicting a numerical value. There are 506 observations with 13 input variables and one output variable.

    A sample of the first five rows is listed below.


    First, we can load the dataset, split it, and evaluate a KNN model on it directly using the entire training dataset. We will report performance on this regression task using mean absolute error (MAE).

    The complete example is listed below.
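    A sketch of this complete example is shown below; it assumes the dataset is available at the URL used here:

    # evaluate a KNN model on the housing dataset using a train/test split
    from pandas import read_csv
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.metrics import mean_absolute_error
    # load the dataset (assumed URL)
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
    data = read_csv(url, header=None).values
    X, y = data[:, :-1], data[:, -1]
    print(X.shape, y.shape)
    # hold back 30 percent of the data for the test set
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
    print(X_train.shape, X_test.shape)
    # fit and evaluate a KNN model with default hyperparameters
    model = KNeighborsRegressor()
    model.fit(X_train, y_train)
    yhat = model.predict(X_test)
    print('MAE: %.3f' % mean_absolute_error(y_test, yhat))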


    Running the example first loads the dataset and summarizes the number of rows and columns, matching our expectations. The shape of the train and test sets are then reported, showing we have about 150 rows in the test set.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Finally, the MAE of the model is reported to be about 4.488.


    Now, let’s see if we can achieve better performance on the test set by preparing a model that is trained to it.

    First, we will construct a training dataset using the most similar example in the training set for each row in the test set.


    Next, we will train the model on this new dataset and evaluate it on the test set as we did before.


    The complete example is listed below.
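    A sketch of this complete example is shown below, under the same assumptions as the previous listing:

    # train to the test set for the housing dataset
    from pandas import read_csv
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.metrics import mean_absolute_error
    # load the dataset and split into train and test sets (assumed URL)
    url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
    data = read_csv(url, header=None).values
    X, y = data[:, :-1], data[:, -1]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
    # select the most similar training row for each test row
    knn = KNeighborsRegressor(n_neighbors=1)
    knn.fit(X_train, y_train)
    _, neighbor_ix = knn.kneighbors(X_test, n_neighbors=1)
    ix = neighbor_ix[:, 0]
    X_train_sel, y_train_sel = X_train[ix], y_train[ix]
    print(X_train_sel.shape, y_train_sel.shape)
    # fit on the contrived training set and evaluate on the test set
    model = KNeighborsRegressor()
    model.fit(X_train_sel, y_train_sel)
    yhat = model.predict(X_test)
    print('MAE: %.3f' % mean_absolute_error(y_test, yhat))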


    Running the example, we can see that the reported size of the new training dataset is the same size as the test set, as we expected.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    We can see that we have achieved a lift in performance by training to the test set over training the model on the entire training dataset. In this case, we achieved a MAE of about 4.433 compared to 4.488 when the entire training dataset is used.

    Again, you might want to explore using a different number of neighbors when constructing the new training set and see if keeping unique rows in the training dataset makes a difference. Report your findings in the comments below.


    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Books
    APIs

    Summary

    In this tutorial, you discovered how to intentionally train to the test set for classification and regression problems.

    Specifically, you learned:

    • Training to the test set is a type of data leakage that may occur in machine learning competitions.
    • One approach to training to the test set involves creating a training dataset that is most similar to a provided test set.
    • How to use a KNN model to construct a training dataset and train to the test set with a real dataset.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.

    Get a Handle on Modern Data Preparation!

    Data Preparation for Machine Learning

    Prepare Your Machine Learning Data in Minutes

    …with just a few lines of python code

    Discover how in my new Ebook:
    Data Preparation for Machine Learning

    It provides self-study tutorials with full working code on:
    Feature Selection, RFE, Data Cleaning, Data Transforms, Scaling, Dimensionality Reduction,
    and much more…

    Bring Modern Data Preparation Techniques to
    Your Machine Learning Projects

    See What’s Inside



    Automated Machine Learning (AutoML) Libraries for Python

    AutoML provides tools to automatically discover good machine learning model pipelines for a dataset with very little user intervention.

    It is ideal for domain experts new to machine learning or machine learning practitioners looking to get good results quickly for a predictive modeling task.

    Open-source libraries are available for using AutoML methods with popular machine learning libraries in Python, such as the scikit-learn machine learning library.

    In this tutorial, you will discover how to use top open-source AutoML libraries for scikit-learn in Python.

    After completing this tutorial, you will know:

    • AutoML refers to techniques for automatically and quickly discovering a well-performing machine learning model pipeline for a predictive modeling task.
    • The three most popular AutoML libraries for Scikit-Learn are Hyperopt-Sklearn, Auto-Sklearn, and TPOT.
    • How to use AutoML libraries to discover well-performing models for predictive modeling tasks in Python.

    Let’s get started.

    Automated Machine Learning (AutoML) Libraries for Python
    Photo by Michael Coghlan, some rights reserved.

    Tutorial Overview

    This tutorial is divided into four parts; they are:

  • Automated Machine Learning
  • Auto-Sklearn
  • Tree-based Pipeline Optimization Tool (TPOT)
  • Hyperopt-Sklearn

    Automated Machine Learning

    Automated Machine Learning, or AutoML for short, involves the automatic selection of data preparation, machine learning model, and model hyperparameters for a predictive modeling task.

    It refers to techniques that allow semi-sophisticated machine learning practitioners and non-experts to discover a good predictive model pipeline for their machine learning task quickly, with very little intervention other than providing a dataset.

    … the user simply provides data, and the AutoML system automatically determines the approach that performs best for this particular application. Thereby, AutoML makes state-of-the-art machine learning approaches accessible to domain scientists who are interested in applying machine learning but do not have the resources to learn about the technologies behind it in detail.

    — Page ix, Automated Machine Learning: Methods, Systems, Challenges, 2019.

    Central to the approach is defining a large hierarchical optimization problem that involves identifying data transforms and the machine learning models themselves, in addition to the hyperparameters for the models.

    Many companies now offer AutoML as a service, where a dataset is uploaded and a model pipeline can be downloaded or hosted and used via web service (i.e. MLaaS). Popular examples include service offerings from Google, Microsoft, and Amazon.

    Additionally, open-source libraries are available that implement AutoML techniques, focusing on the specific data transforms, models, and hyperparameters used in the search space and the types of algorithms used to navigate or optimize the search space of possibilities, with versions of Bayesian Optimization being the most common.

    There are many open-source AutoML libraries, although, in this tutorial, we will focus on the best-of-breed libraries that can be used in conjunction with the popular scikit-learn Python machine learning library.

    They are: Hyperopt-Sklearn, Auto-Sklearn, and TPOT.

    Did I miss your favorite AutoML library for scikit-learn?
    Let me know in the comments below.

    We will take a closer look at each, providing the basis for you to evaluate and consider which library might be appropriate for your project.

    Auto-Sklearn

    Auto-Sklearn is an open-source Python library for AutoML using machine learning models from the scikit-learn machine learning library.

    It was developed by Matthias Feurer, et al. and described in their 2015 paper titled “Efficient and Robust Automated Machine Learning.”

    … we introduce a robust new AutoML system based on scikit-learn (using 15 classifiers, 14 feature preprocessing methods, and 4 data preprocessing methods, giving rise to a structured hypothesis space with 110 hyperparameters).

    — Efficient and Robust Automated Machine Learning, 2015.

    The first step is to install the Auto-Sklearn library, which can be achieved using pip, as follows:
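    For example, assuming the package is published on PyPI under the name auto-sklearn:

    pip install auto-sklearn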


    Once installed, we can import the library and print the version number to confirm it was installed successfully:
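    For example:

    # check the installed auto-sklearn version
    import autosklearn
    print('autosklearn: %s' % autosklearn.__version__)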


    Running the example prints the version number. Your version number should be the same or higher.


    Next, we can demonstrate using Auto-Sklearn on a synthetic classification task.

    We can define an AutoSklearnClassifier class that controls the search and configure it to run for two minutes (120 seconds) and kill any single model that takes more than 30 seconds to evaluate. At the end of the run, we can report the statistics of the search and evaluate the best performing model on a holdout dataset.

    The complete example is listed below.
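    A sketch of this example is shown below; the synthetic dataset and holdout split are illustrative choices:

    # Auto-Sklearn on a synthetic classification task
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score
    from autosklearn.classification import AutoSklearnClassifier
    # define a synthetic binary classification dataset and holdout split
    X, y = make_classification(n_samples=100, n_features=10, n_informative=10, n_redundant=0, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
    # run the search for 2 minutes, limiting each model evaluation to 30 seconds
    model = AutoSklearnClassifier(time_left_for_this_task=120, per_run_time_limit=30)
    model.fit(X_train, y_train)
    # summarize the search and evaluate the best model on the holdout set
    print(model.sprint_statistics())
    yhat = model.predict(X_test)
    print('Accuracy: %.3f' % accuracy_score(y_test, yhat))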


    Running the example will take about two minutes, given the hard limit we imposed on the run.

    At the end of the run, a summary is printed showing that 599 models were evaluated and the estimated performance of the final model was 95.6 percent.


    We then evaluate the model on the holdout dataset and see that a classification accuracy of 97 percent was achieved, which is reasonably skillful.


    For more on the Auto-Sklearn library, see:

    Tree-based Pipeline Optimization Tool (TPOT)

    Tree-based Pipeline Optimization Tool, or TPOT for short, is a Python library for automated machine learning.

    TPOT uses a tree-based structure to represent a model pipeline for a predictive modeling problem, including data preparation, modeling algorithms, and model hyperparameters.

    … an evolutionary algorithm called the Tree-based Pipeline Optimization Tool (TPOT) that automatically designs and optimizes machine learning pipelines.

    — Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

    The first step is to install the TPOT library, which can be achieved using pip, as follows:
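    For example:

    pip install tpot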


    Once installed, we can import the library and print the version number to confirm it was installed successfully:
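    For example:

    # check the installed TPOT version
    import tpot
    print('tpot: %s' % tpot.__version__)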


    Running the example prints the version number. Your version number should be the same or higher.


    Next, we can demonstrate using TPOT on a synthetic classification task.

    This involves configuring a TPOTClassifier instance with the population size and number of generations for the evolutionary search, as well as the cross-validation procedure and metric used to evaluate models. The algorithm will then run the search procedure and save the best discovered model pipeline to file.

    The complete example is listed below.
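    A sketch of this example is shown below; the population size, number of generations, and cross-validation settings are illustrative choices:

    # TPOT on a synthetic classification task
    from sklearn.datasets import make_classification
    from sklearn.model_selection import RepeatedStratifiedKFold
    from tpot import TPOTClassifier
    # define a synthetic binary classification dataset
    X, y = make_classification(n_samples=100, n_features=10, n_informative=10, n_redundant=0, random_state=1)
    # evolutionary search: 5 generations with a population of 50 pipelines
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    model = TPOTClassifier(generations=5, population_size=50, cv=cv, scoring='accuracy', verbosity=2, random_state=1, n_jobs=-1)
    model.fit(X, y)
    # export the best discovered pipeline as Python code
    model.export('tpot_best_model.py')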


    Running the example may take a few minutes, and you will see a progress bar on the command line.

    The accuracy of top-performing models will be reported along the way.

    Your specific results will vary given the stochastic nature of the search procedure.


    In this case, we can see that the top-performing pipeline achieved the mean accuracy of about 92.6 percent.

    The top-performing pipeline is then saved to a file named “tpot_best_model.py“.

    Opening this file, you can see that there is some generic code for loading a dataset and fitting the pipeline. An example is listed below.


    You can then retrieve the code for creating the model pipeline and integrate it into your project.

    For more on TPOT, see the following resources:

    Hyperopt-Sklearn

    HyperOpt is an open-source Python library for Bayesian optimization developed by James Bergstra.

    It is designed for large-scale optimization for models with hundreds of parameters and allows the optimization procedure to be scaled across multiple cores and multiple machines.

    HyperOpt-Sklearn wraps the HyperOpt library and allows for the automatic search of data preparation methods, machine learning algorithms, and model hyperparameters for classification and regression tasks.

    … we introduce Hyperopt-Sklearn: a project that brings the benefits of automatic algorithm configuration to users of Python and scikit-learn. Hyperopt-Sklearn uses Hyperopt to describe a search space over possible configurations of Scikit-Learn components, including preprocessing and classification modules.

    — Hyperopt-Sklearn: Automatic Hyperparameter Configuration for Scikit-Learn, 2014.

    Now that we are familiar with HyperOpt and HyperOpt-Sklearn, let’s look at how to use HyperOpt-Sklearn.

    The first step is to install the HyperOpt library.

    This can be achieved using the pip package manager as follows:
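    For example:

    pip install hyperopt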


    Next, we must install the HyperOpt-Sklearn library.

    This too can be installed using pip, although we must perform this operation manually by cloning the repository and running the installation from the local files, as follows:
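    For example, assuming the project is hosted at the hyperopt/hyperopt-sklearn repository on GitHub:

    git clone https://github.com/hyperopt/hyperopt-sklearn.git
    cd hyperopt-sklearn
    pip install .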


    We can confirm that the installation was successful by checking the version number with the following command:
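    For example, assuming the distribution is registered under the name hpsklearn:

    pip show hpsklearn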


    This will summarize the installed version of HyperOpt-Sklearn, confirming that a modern version is being used.


    Next, we can demonstrate using Hyperopt-Sklearn on a synthetic classification task.

    We can configure a HyperoptEstimator instance that runs the search, including the classifiers to consider in the search space, the pre-processing steps, and the search algorithm to use. In this case, we will use TPE, or Tree of Parzen Estimators, and perform 50 evaluations.

    At the end of the search, the best performing model pipeline is evaluated and summarized.

    The complete example is listed below.
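    A sketch of this example is shown below; the any_classifier and any_preprocessing helpers come from the hpsklearn package, and the synthetic dataset is an illustrative choice:

    # Hyperopt-Sklearn on a synthetic classification task
    from hpsklearn import HyperoptEstimator, any_classifier, any_preprocessing
    from hyperopt import tpe
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score
    # define a synthetic binary classification dataset and holdout split
    X, y = make_classification(n_samples=100, n_features=10, n_informative=10, n_redundant=0, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
    # search over any classifier and any pre-processing using Tree of Parzen Estimators, 50 evaluations
    model = HyperoptEstimator(classifier=any_classifier('cla'), preprocessing=any_preprocessing('pre'), algo=tpe.suggest, max_evals=50, trial_timeout=30)
    model.fit(X_train, y_train)
    # evaluate the best pipeline on the holdout set and summarize it
    acc = model.score(X_test, y_test)
    print('Accuracy: %.3f' % acc)
    print(model.best_model())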


    Running the example may take a few minutes.

    The progress of the search will be reported and you will see some warnings that you can safely ignore.

    At the end of the run, the best-performing model is evaluated on the holdout dataset and the Pipeline discovered is printed for later use.

    Your specific results may differ given the stochastic nature of the learning algorithm and search process. Try running the example a few times.

    In this case, we can see that the chosen model achieved an accuracy of about 84.8 percent on the holdout test set. The Pipeline involves a SGDClassifier model with no pre-processing.


    The printed model can then be used directly, e.g. the code copy-pasted into another project.

    For more on Hyperopt-Sklearn, see:

    Summary

    In this tutorial, you discovered how to use top open-source AutoML libraries for scikit-learn in Python.

    Specifically, you learned:

    • AutoML refers to techniques for automatically and quickly discovering a well-performing machine learning model pipeline for a predictive modeling task.
    • The three most popular AutoML libraries for Scikit-Learn are Hyperopt-Sklearn, Auto-Sklearn, and TPOT.
    • How to use AutoML libraries to discover well-performing models for predictive modeling tasks in Python.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.


