
Combined Algorithm Selection and Hyperparameter Optimization (CASH Optimization)

Machine learning model selection and configuration may be the biggest challenge in applied machine learning.

Controlled experiments must be performed in order to discover what works best for a given classification or regression predictive modeling task. This can feel overwhelming given the large number of data preparation schemes, learning algorithms, and model hyperparameters that could be considered.

The common approach is to use a shortcut, such as using a popular algorithm or testing a small number of algorithms with default hyperparameters.

A modern alternative is to consider the selection of data preparation, learning algorithm, and algorithm hyperparameters one large global optimization problem. This characterization is generally referred to as Combined Algorithm Selection and Hyperparameter Optimization, or “CASH Optimization” for short.

In this post, you will discover the challenge of machine learning model selection and the modern solution referred to as CASH Optimization.

After reading this post, you will know:

  • The challenge of machine learning model and hyperparameter selection.
  • The shortcuts of using popular models or making a series of sequential decisions.
  • The characterization of Combined Algorithm Selection and Hyperparameter Optimization that underlies modern AutoML.

Let’s get started.

Combined Algorithm Selection and Hyperparameter Optimization (CASH Optimization)
Photo by Bernard Spragg. NZ, some rights reserved.

Overview

This tutorial is divided into three parts; they are:

  • Challenge of Model and Hyperparameter Selection
  • Solutions to Model and Hyperparameter Selection
  • Combined Algorithm Selection and Hyperparameter Optimization

    Challenge of Model and Hyperparameter Selection

    There is no definitive mapping of machine learning algorithms to predictive modeling tasks.

    We cannot look at a dataset and know the best algorithm to use, let alone the best data transforms to use to prepare the data or the best configuration for a given model.

    Instead, we must use controlled experiments to discover what works best for a given dataset.

    As such, applied machine learning is an empirical discipline. It is engineering and art more than science.

    The problem is that there are tens, if not hundreds, of machine learning algorithms to choose from. Each algorithm may have up to tens of hyperparameters to be configured.

    To a beginner, the scope of the problem is overwhelming.

    • Where do you start?
    • What do you start with?
    • When do you discard a model?
    • When do you double down on a model?

    There are a few standard solutions to this problem adopted by most practitioners, experienced and otherwise.

    Solutions to Model and Hyperparameter Selection

    Let’s look at two of the most common short-cuts to this problem of selecting data transforms, machine learning models, and model hyperparameters.

    Use a Popular Algorithm

    One approach is to use a popular machine learning algorithm.

    It can be challenging to make the right choice when faced with these degrees of freedom, leaving many users to select algorithms based on reputation or intuitive appeal, and/or to leave hyperparameters set to default values. Of course, this approach can yield performance far worse than that of the best method and hyperparameter settings.

    — Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms, 2012.

    For example, if it seems like everyone is talking about “random forest,” then random forest becomes the right algorithm for all classification and regression problems you encounter, and you limit the experimentation to the hyperparameters of the random forest algorithm.

    • Short-Cut #1: Use a popular algorithm like “random forest” or “xgboost“.

    Random forest indeed performs well on a wide range of prediction tasks. But we cannot know if it will be good or even best for a given dataset. The risk is that we may be able to achieve better results with a much simpler linear model.

    A workaround might be to test a range of popular algorithms, leading into the next shortcut.

    Sequentially Test Transforms, Models, and Hyperparameters

    Another approach is to treat the problem as a series of sequential decisions.

    For example, review the data and select data transforms that make data more Gaussian, remove outliers, etc. Then test a suite of algorithms with default hyperparameters and select one or a few that perform well. Then tune the hyperparameters of those top-performing models.

    • Short-Cut #2: Sequentially select data transforms, models, and model hyperparameters.

    This is the approach that I recommend for getting good results quickly.

    This short-cut too can be effective and reduces the likelihood of missing an algorithm that performs well on your dataset. The downside here is more subtle and impacts you if you are seeking great or excellent results rather than merely good results quickly.

    The risk is that selecting data transforms prior to selecting models might mean that you miss the data preparation sequence that gets the most out of an algorithm.

    Similarly, selecting a model or subset of models prior to selecting model hyperparameters means that you might be missing a model with hyperparameters other than the default values that performs better than any of the subset of models selected and their subsequent configurations.

    Two important problems in AutoML are that (1) no single machine learning method performs best on all datasets and (2) some machine learning methods (e.g., non-linear SVMs) crucially rely on hyperparameter optimization.

    — Page 115, Automated Machine Learning: Methods, Systems, Challenges, 2019.

    A workaround might be to include a few good or well-performing configurations of each algorithm when spot-checking algorithms. This is only a partial solution.

    There is a better approach.

    Combined Algorithm Selection and Hyperparameter Optimization

    Selecting a data preparation pipeline, machine learning model, and model hyperparameters is a search problem.

    The possible choices at each step define a search space, and a single combination represents a point in that space that can be evaluated with a dataset.

    Navigating the search space efficiently is referred to as global optimization.

    This has been well understood for a long time in the field of machine learning, although perhaps tacitly, with focus typically on one element of the problem, such as hyperparameter optimization.

    The important insight is that there are dependencies between the steps, which influence the size and structure of the search space.

    … [the problem] can be viewed as a single hierarchical hyperparameter optimization problem, in which even the choice of algorithm itself is considered a hyperparameter.

    — Page 82, Automated Machine Learning: Methods, Systems, Challenges, 2019.

    This requires that the data preparation and machine learning model, along with the model hyperparameters, form the scope of the optimization problem, and that the optimization algorithm is aware of the dependencies between them.
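    To make this framing concrete, the sketch below (a simplified illustration, not the Auto-WEKA implementation) uses a scikit-learn pipeline in which the choice of learning algorithm is itself a searchable parameter, so data preparation, algorithm selection, and hyperparameter configuration are covered by one joint search. The estimators, grid values, and synthetic data are arbitrary choices for illustration.

```python
# sketch: algorithm selection treated as a hyperparameter of a pipeline
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, n_features=10, random_state=1)
pipeline = Pipeline([('prep', StandardScaler()), ('model', LogisticRegression())])
# each dict fixes the 'model' step to one algorithm and lists its hyperparameters,
# so a single search covers both algorithm choice and configuration
param_grid = [
    {'model': [LogisticRegression(max_iter=1000)],
     'model__C': [0.01, 0.1, 1.0, 10.0]},
    {'model': [RandomForestClassifier(random_state=1)],
     'model__n_estimators': [10, 100],
     'model__max_depth': [None, 5]},
]
search = GridSearchCV(pipeline, param_grid, scoring='accuracy', cv=5)
search.fit(X, y)
print(search.best_score_)
print(search.best_params_)
```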

    This is a challenging global optimization problem, notably because of the dependencies, but also because estimating the performance of a machine learning model on a dataset is stochastic, resulting in a noisy distribution of performance scores (e.g. via repeated k-fold cross-validation).

    … the combined space of learning algorithms and their hyperparameters is very challenging to search: the response function is noisy and the space is high dimensional, involves both categorical and continuous choices, and contains hierarchical dependencies (e.g., the hyperparameters of a learning algorithm are only meaningful if that algorithm is chosen; the algorithm choices in an ensemble method are only meaningful if that ensemble method is chosen; etc).

    — Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms, 2012.

    This challenge was perhaps best characterized by Chris Thornton, et al. in their 2013 paper titled “Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms.” In the paper, they refer to this problem as “Combined Algorithm Selection And Hyperparameter Optimization,” or “CASH Optimization” for short.

    … a natural challenge for machine learning: given a dataset, to automatically and simultaneously choose a learning algorithm and set its hyperparameters to optimize empirical performance. We dub this the combined algorithm selection and hyperparameter optimization problem (short: CASH).

    — Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms, 2012.

    This characterization is also sometimes referred to as “Full Model Selection,” or FMS for short.

    The FMS problem consists of the following: given a pool of preprocessing methods, feature selection and learning algorithms, select the combination of these that obtains the lowest classification error for a given data set. This task also includes the selection of hyperparameters for the considered methods, resulting in a vast search space that is well suited for stochastic optimization techniques.

    — Particle Swarm Model Selection, 2009.

    Thornton, et al. used global optimization algorithms that are aware of these dependencies, so-called sequential model-based optimization algorithms, such as specific versions of Bayesian Optimization. They implemented their approach for the WEKA machine learning workbench in a project called Auto-WEKA.

    A promising approach is Bayesian Optimization, and in particular Sequential Model-Based Optimization (SMBO), a versatile stochastic optimization framework that can work with both categorical and continuous hyperparameters, and that can exploit hierarchical structure stemming from conditional parameters.

    — Page 85, Automated Machine Learning: Methods, Systems, Challenges, 2019.

    This now provides the dominant paradigm for a field of study referred to as “Automated Machine Learning,” or AutoML for short. AutoML is concerned with providing tools that allow practitioners with modest technical skill to quickly find effective solutions to machine learning tasks, such as classification and regression predictive modeling.

    AutoML aims to provide effective off-the-shelf learning systems to free experts and non-experts alike from the tedious and time-consuming tasks of selecting the right algorithm for a dataset at hand, along with the right preprocessing method and the various hyperparameters of all involved components.

    — Page 136, Automated Machine Learning: Methods, Systems, Challenges, 2019.

    AutoML techniques are provided by machine learning libraries and increasingly as services, so-called machine learning as a service, or MLaaS for short.

    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Papers
    Books
    Articles

    Summary

    In this post, you discovered the challenge of machine learning model selection and the modern solution referred to as CASH Optimization.

    Specifically, you learned:

    • The challenge of machine learning model and hyperparameter selection.
    • The shortcuts of using popular models or making a series of sequential decisions.
    • The characterization of Combined Algorithm Selection and Hyperparameter Optimization that underlies modern AutoML.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.


    Hyperparameter Optimization With Random Search and Grid Search

    Machine learning models have hyperparameters that you must set in order to customize the model to your dataset.

    Often the general effects of hyperparameters on a model are known, but how to best set a hyperparameter and combinations of interacting hyperparameters for a given dataset is challenging. There are often general heuristics or rules of thumb for configuring hyperparameters.

    A better approach is to objectively search different values for model hyperparameters and choose a subset that results in a model that achieves the best performance on a given dataset. This is called hyperparameter optimization or hyperparameter tuning and is available in the scikit-learn Python machine learning library. The result of a hyperparameter optimization is a single set of well-performing hyperparameters that you can use to configure your model.

    In this tutorial, you will discover hyperparameter optimization for machine learning in Python.

    After completing this tutorial, you will know:

    • Hyperparameter optimization is required to get the most out of your machine learning models.
    • How to configure random and grid search hyperparameter optimization for classification tasks.
    • How to configure random and grid search hyperparameter optimization for regression tasks.

    Let’s get started.

    Hyperparameter Optimization With Random Search and Grid Search
    Photo by James St. John, some rights reserved.

    Tutorial Overview

    This tutorial is divided into five parts; they are:

  • Model Hyperparameter Optimization
  • Hyperparameter Optimization Scikit-Learn API
  • Hyperparameter Optimization for Classification (Random Search and Grid Search)
  • Hyperparameter Optimization for Regression (Random Search and Grid Search)
  • Common Questions About Hyperparameter Optimization

    Model Hyperparameter Optimization

    Machine learning models have hyperparameters.

    Hyperparameters are points of choice or configuration that allow a machine learning model to be customized for a specific task or dataset.

    • Hyperparameter: Model configuration argument specified by the developer to guide the learning process for a specific dataset.

    Machine learning models also have parameters, which are the internal coefficients set by training or optimizing the model on a training dataset.

    Parameters are different from hyperparameters. Parameters are learned automatically; hyperparameters are set manually to help guide the learning process.

    For more on the difference between parameters and hyperparameters, see the tutorial:

    Typically a hyperparameter has a known effect on a model in the general sense, but it is not clear how to best set a hyperparameter for a given dataset. Further, many machine learning models have a range of hyperparameters and they may interact in nonlinear ways.

    As such, it is often required to search for a set of hyperparameters that result in the best performance of a model on a dataset. This is called hyperparameter optimization, hyperparameter tuning, or hyperparameter search.

    An optimization procedure involves defining a search space. This can be thought of geometrically as an n-dimensional volume, where each hyperparameter represents a different dimension and the scale of each dimension is the set of values that the hyperparameter may take on, such as real-valued, integer-valued, or categorical.

    • Search Space: Volume to be searched where each dimension represents a hyperparameter and each point represents one model configuration.

    A point in the search space is a vector with a specific value for each hyperparameter. The goal of the optimization procedure is to find a vector that results in the best performance of the model after learning, such as maximum accuracy or minimum error.

    A range of different optimization algorithms may be used, although two of the simplest and most common methods are random search and grid search.

    • Random Search. Define a search space as a bounded domain of hyperparameter values and randomly sample points in that domain.
    • Grid Search. Define a search space as a grid of hyperparameter values and evaluate every position in the grid.

    Grid search is great for spot-checking combinations that are known to perform well generally. Random search is great for discovery and getting hyperparameter combinations that you would not have guessed intuitively, although it often requires more time to execute.

    More advanced methods are sometimes used, such as Bayesian Optimization and Evolutionary Optimization.

    Now that we are familiar with hyperparameter optimization, let’s look at how we can use this method in Python.

    Hyperparameter Optimization Scikit-Learn API

    The scikit-learn Python open-source machine learning library provides techniques to tune model hyperparameters.

    Specifically, it provides the RandomizedSearchCV for random search and GridSearchCV for grid search. Both techniques evaluate models for a given hyperparameter vector using cross-validation, hence the “CV” suffix of each class name.

    Both classes require two arguments. The first is the model that you are optimizing. This is an instance of the model with values of hyperparameters set that you do not want to optimize. The second is the search space. This is defined as a dictionary where the names are the hyperparameter arguments to the model and the values are discrete values or a distribution of values to sample in the case of a random search.


    Both classes provide a “cv” argument that allows either an integer number of folds to be specified, e.g. 5, or a configured cross-validation object. I recommend defining and specifying a cross-validation object to gain more control over model evaluation and make the evaluation procedure obvious and explicit.

    In the case of classification tasks, I recommend using the RepeatedStratifiedKFold class, and for regression tasks, I recommend using the RepeatedKFold with an appropriate number of folds and repeats, such as 10 folds and three repeats.
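    For example, a minimal sketch of the two cross-validation objects with the suggested folds and repeats:

```python
# explicit cross-validation objects (10 folds, 3 repeats)
from sklearn.model_selection import RepeatedStratifiedKFold, RepeatedKFold

cv_classification = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
cv_regression = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# either object can then be passed via the "cv" argument of the search class
```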


    Both hyperparameter optimization classes also provide a “scoring” argument that takes a string indicating the metric to optimize.

    The metric must be maximizing, meaning better models result in larger scores. For classification, this may be ‘accuracy‘. For regression, this is a negative error measure, such as ‘neg_mean_absolute_error‘ for a negative version of the mean absolute error, where values closer to zero represent less prediction error by the model.


    You can see a list of built-in scoring metrics here:

    Finally, the search can be made parallel, e.g. use all of the CPU cores by specifying the “n_jobs” argument as an integer with the number of cores in your system, e.g. 8. Or you can set it to be -1 to automatically use all of the cores in your system.


    Once defined, the search is performed by calling the fit() function and providing a dataset used to train and evaluate model hyperparameter combinations using cross-validation.


    Running the search may take minutes or hours, depending on the size of the search space and the speed of your hardware. You will often want to tailor the search to how much time you have rather than to everything that could possibly be searched.

    At the end of the search, you can access all of the results via attributes on the class. Perhaps the most important attributes are the best score observed and the hyperparameters that achieved the best score.
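    As a minimal, self-contained sketch (using a small synthetic dataset and an arbitrary grid purely for illustration), the fitted search object exposes these results as attributes:

```python
# access the best score and best hyperparameters from a fitted search
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=100, random_state=1)
search = GridSearchCV(LogisticRegression(max_iter=1000), {'C': [0.1, 1.0, 10.0]}, cv=3)
search.fit(X, y)
print('Best Score: %s' % search.best_score_)
print('Best Hyperparameters: %s' % search.best_params_)
```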


    Once you know the set of hyperparameters that achieve the best result, you can then define a new model, set the values of each hyperparameter, then fit the model on all available data. This model can then be used to make predictions on new data.

    Now that we are familiar with the hyperparameter optimization API in scikit-learn, let’s look at some worked examples.

    Hyperparameter Optimization for Classification

    In this section, we will use hyperparameter optimization to discover a well-performing model configuration for the sonar dataset.

    The sonar dataset is a standard machine learning dataset comprising 208 rows of data with 60 numerical input variables and a target variable with two class values, e.g. binary classification.

    Using a test harness of repeated stratified 10-fold cross-validation with three repeats, a naive model can achieve an accuracy of about 53 percent. A top-performing model can achieve accuracy on this same test harness of about 88 percent. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting whether sonar returns indicate a rock or simulated mine.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.
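    A minimal sketch of such an example; the dataset URL is an assumed public mirror of the sonar CSV file:

```python
# summarize the shape of the sonar dataset
from pandas import read_csv
# assumed location of a public mirror of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
# split into input and output elements
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
```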


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 208 rows of data with 60 input variables.


    Next, let’s use random search to find a good model configuration for the sonar dataset.

    To keep things simple, we will focus on a linear model, the logistic regression model, and the common hyperparameters tuned for this model.

    Random Search for Classification

    In this section, we will explore hyperparameter optimization of the logistic regression model on the sonar dataset.

    First, we will define the model that will be optimized and use default values for the hyperparameters that will not be optimized.


    We will evaluate model configurations using repeated stratified k-fold cross-validation with three repeats and 10 folds.


    Next, we can define the search space.

    This is a dictionary where names are arguments to the model and values are distributions from which to draw samples. We will optimize the solver, the penalty, and the C hyperparameters of the model with discrete distributions for the solver and penalty type and a log-uniform distribution from 1e-5 to 100 for the C value.

    Log-uniform is useful for searching penalty values as we often explore values at different orders of magnitude, at least as a first step.
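    A sketch of this search space; the argument names match scikit-learn's LogisticRegression and the ranges follow the description above:

```python
# define the search space for logistic regression
from scipy.stats import loguniform

space = dict()
space['solver'] = ['newton-cg', 'lbfgs', 'liblinear']
space['penalty'] = ['none', 'l1', 'l2', 'elasticnet']
space['C'] = loguniform(1e-5, 100)
```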


    Next, we can define the search procedure with all of these elements.

    Importantly, we must set the number of iterations or samples to draw from the search space via the “n_iter” argument. In this case, we will set it to 500.


    Finally, we can perform the optimization and report the results.


    Tying this together, the complete example is listed below.
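    A sketch of what the complete example might look like, assuming the dataset mirror and the search space described above:

```python
# random search of logistic regression hyperparameters for the sonar dataset
from scipy.stats import loguniform
from pandas import read_csv
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.model_selection import RandomizedSearchCV

# load dataset (assumed mirror)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], data[:, -1]
# define model
model = LogisticRegression()
# define evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# define search space
space = dict()
space['solver'] = ['newton-cg', 'lbfgs', 'liblinear']
space['penalty'] = ['none', 'l1', 'l2', 'elasticnet']
space['C'] = loguniform(1e-5, 100)
# define and execute the search
search = RandomizedSearchCV(model, space, n_iter=500, scoring='accuracy', n_jobs=-1, cv=cv, random_state=1)
result = search.fit(X, y)
# summarize the result
print('Best Score: %s' % result.best_score_)
print('Best Hyperparameters: %s' % result.best_params_)
```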


    Running the example may take a minute. It is fast because we are using a small search space and a fast model to fit and evaluate. You may see some warnings during the optimization for invalid configuration combinations. These can be safely ignored.

    At the end of the run, the best score and hyperparameter configuration that achieved the best performance are reported.

    Your specific results will vary given the stochastic nature of the optimization procedure. Try running the example a few times.

    In this case, we can see that the best configuration achieved an accuracy of about 78.9 percent, which is fair, and the specific values for the solver, penalty, and C hyperparameters used to achieve that score.


    Next, let’s use grid search to find a good model configuration for the sonar dataset.

    Grid Search for Classification

    Using the grid search is much like using the random search for classification.

    The main difference is that the search space must be a discrete grid to be searched. This means that instead of using a log-uniform distribution for C, we can specify discrete values on a log scale.


    Additionally, the GridSearchCV class does not take a number of iterations, as we are only evaluating combinations of hyperparameters in the grid.


    Tying this together, the complete example of grid searching logistic regression configurations for the sonar dataset is listed below.
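    A sketch of what this complete example might look like, assuming the same dataset mirror and discrete C values on a log scale:

```python
# grid search of logistic regression hyperparameters for the sonar dataset
from pandas import read_csv
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.model_selection import GridSearchCV

# load dataset (assumed mirror)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], data[:, -1]
# define model
model = LogisticRegression()
# define evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# define search space with discrete values for C on a log scale
space = dict()
space['solver'] = ['newton-cg', 'lbfgs', 'liblinear']
space['penalty'] = ['none', 'l1', 'l2', 'elasticnet']
space['C'] = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 10, 100]
# define and execute the search
search = GridSearchCV(model, space, scoring='accuracy', n_jobs=-1, cv=cv)
result = search.fit(X, y)
# summarize the result
print('Best Score: %s' % result.best_score_)
print('Best Hyperparameters: %s' % result.best_params_)
```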


    Running the example may take a moment. It is fast because we are using a small search space and a fast model to fit and evaluate. Again, you may see some warnings during the optimization for invalid configuration combinations. These can be safely ignored.

    At the end of the run, the best score and hyperparameter configuration that achieved the best performance are reported.

    Your specific results will vary given the stochastic nature of the optimization procedure. Try running the example a few times.

    In this case, we can see that the best configuration achieved an accuracy of about 78.2 percent, which is also fair, and the specific values for the solver, penalty, and C hyperparameters used to achieve that score. Interestingly, the results are very similar to those found via the random search.


    Hyperparameter Optimization for Regression

    In this section, we will use hyperparameter optimization to discover a top-performing model configuration for the auto insurance dataset.

    The auto insurance dataset is a standard machine learning dataset comprising 63 rows of data with 1 numerical input variable and a numerical target variable.

    Using a test harness of repeated 10-fold cross-validation with three repeats, a naive model can achieve a mean absolute error (MAE) of about 66. A top-performing model can achieve a MAE on this same test harness of about 28. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting the total amount in claims (thousands of Swedish Kronor) given the number of claims for different geographical regions.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.
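    A minimal sketch of such an example; the dataset URL is an assumed public mirror of the auto insurance CSV file:

```python
# summarize the shape of the auto insurance dataset
from pandas import read_csv
# assumed location of a public mirror of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
# split into input and output elements
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
```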


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 63 rows of data with 1 input variable.


    Next, we can use hyperparameter optimization to find a good model configuration for the auto insurance dataset.

    To keep things simple, we will focus on a linear model, the linear regression model and the common hyperparameters tuned for this model.

    Random Search for Regression

    Configuring and using the random search hyperparameter optimization procedure for regression is much like using it for classification.

    In this case, we will configure the important hyperparameters of the penalized linear regression (Ridge) implementation, including the solver, alpha, fit_intercept, and normalize.

    We will use a discrete distribution of values in the search space for all except the “alpha” argument which is a penalty term, in which case we will use a log-uniform distribution as we did in the previous section for the “C” argument of logistic regression.


    The main difference in regression compared to classification is the choice of the scoring method.

    For regression, performance is often measured using an error, which is minimized, with zero representing a model with perfect skill. The hyperparameter optimization procedures in scikit-learn assume a maximizing score. Therefore a version of each error metric is provided that is made negative.

    This means that large positive errors become large negative errors, good performance is indicated by small negative values close to zero, and perfect skill is zero.

    The sign of the negative MAE can be ignored when interpreting the result.

    In this case, we will use mean absolute error (MAE), and a maximizing version of this error is available by setting the “scoring” argument to “neg_mean_absolute_error“.


    Tying this together, the complete example is listed below.
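    A sketch of what the complete example might look like. The penalized linear regression (Ridge) class is assumed here because it exposes the solver, alpha, and fit_intercept arguments described above; the deprecated normalize argument is omitted so the sketch runs on recent scikit-learn versions:

```python
# random search of ridge regression hyperparameters for the auto insurance dataset
from scipy.stats import loguniform
from pandas import read_csv
from sklearn.linear_model import Ridge
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import RandomizedSearchCV

# load dataset (assumed mirror)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], data[:, -1]
# define model and evaluation procedure
model = Ridge()
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# define search space
space = dict()
space['solver'] = ['svd', 'cholesky', 'lsqr', 'sag']
space['alpha'] = loguniform(1e-5, 100)
space['fit_intercept'] = [True, False]
# note: the 'normalize' argument mentioned above was removed in recent scikit-learn versions
# define and execute the search
search = RandomizedSearchCV(model, space, n_iter=500, scoring='neg_mean_absolute_error', n_jobs=-1, cv=cv, random_state=1)
result = search.fit(X, y)
# summarize the result
print('Best Score: %s' % result.best_score_)
print('Best Hyperparameters: %s' % result.best_params_)
```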


    Running the example may take a moment. It is fast because we are using a small search space and a fast model to fit and evaluate. You may see some warnings during the optimization for invalid configuration combinations. These can be safely ignored.

    At the end of the run, the best score and hyperparameter configuration that achieved the best performance are reported.

    Your specific results will vary given the stochastic nature of the optimization procedure. Try running the example a few times.

    In this case, we can see that the best configuration achieved a MAE of about 29.2, which is very close to the best performance reported for this dataset (about 28). We can then see the specific hyperparameter values that achieved this result.


    Next, let’s use grid search to find a good model configuration for the auto insurance dataset.

    Grid Search for Regression

    As this is a grid search, we cannot define a distribution to sample from; instead, we must define a discrete grid of hyperparameter values. As such, we will specify the “alpha” argument as a range of values on a log-10 scale.


    Grid search for regression requires that the “scoring” be specified, much as we did for random search.

    In this case, we will again use the negative MAE scoring function.


    Tying this together, the complete example of grid searching linear regression configurations for the auto insurance dataset is listed below.
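    A sketch of what this complete example might look like, again assuming the Ridge implementation and the dataset mirror used above:

```python
# grid search of ridge regression hyperparameters for the auto insurance dataset
from pandas import read_csv
from sklearn.linear_model import Ridge
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import GridSearchCV

# load dataset (assumed mirror)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], data[:, -1]
# define model and evaluation procedure
model = Ridge()
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# define search space with alpha on a log-10 grid
space = dict()
space['solver'] = ['svd', 'cholesky', 'lsqr', 'sag']
space['alpha'] = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 10, 100]
space['fit_intercept'] = [True, False]
# define and execute the search
search = GridSearchCV(model, space, scoring='neg_mean_absolute_error', n_jobs=-1, cv=cv)
result = search.fit(X, y)
# summarize the result
print('Best Score: %s' % result.best_score_)
print('Best Hyperparameters: %s' % result.best_params_)
```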


    Running the example may take a minute. It is fast because we are using a small search space and a fast model to fit and evaluate. Again, you may see some warnings during the optimization for invalid configuration combinations. These can be safely ignored.

    At the end of the run, the best score and hyperparameter configuration that achieved the best performance are reported.

    Your specific results will vary given the stochastic nature of the optimization procedure. Try running the example a few times.

    In this case, we can see that the best configuration achieved a MAE of about 29.2, which is nearly identical to what we achieved with the random search in the previous section. Interestingly, the hyperparameters are also nearly identical, which is good confirmation.


    Common Questions About Hyperparameter Optimization

    This section addresses some common questions about hyperparameter optimization.

    How to Choose Between Random and Grid Search?

    Choose the method based on your needs. I recommend starting with a grid search and performing a random search afterward if you have the time.

    Grid search is appropriate for small and quick searches of hyperparameter values that are known to perform well generally.

    Random search is appropriate for discovering new hyperparameter values or new combinations of hyperparameters, often resulting in better performance, although it may take more time to complete.

    How to Speed-Up Hyperparameter Optimization?

    Ensure that you set the “n_jobs” argument to the number of cores on your machine.

    After that, more suggestions include:

    • Evaluate on a smaller sample of your dataset.
    • Explore a smaller search space.
    • Use fewer repeats and/or folds for cross-validation.
    • Execute the search on a faster machine, such as AWS EC2.
    • Use an alternate model that is faster to evaluate.

    How to Choose Hyperparameters to Search?

    Most algorithms have a subset of hyperparameters that have the most influence over the learning procedure and, in turn, model performance.

    These are listed in most descriptions of the algorithm. For example, here are some algorithms and their most important hyperparameters:

    If you are unsure:

    • Review papers that use the algorithm to get ideas.
    • Review the API and algorithm documentation to get ideas.
    • Search all hyperparameters.

    How to Use Best-Performing Hyperparameters?

    Define a new model and set the hyperparameter values of the model to the values found by the search.

    Then fit the model on all available data and use the model to start making predictions on new data.

    This is called preparing a final model. See more here:

    How to Make a Prediction?

    First, fit a final model (previous question).

    Then call the predict() function to make a prediction.

    For examples of making a prediction with a final model, see the tutorial:

    Do you have another question about hyperparameter optimization?
    Let me know in the comments below.

    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Tutorials
    APIs
    Articles

    Summary

    In this tutorial, you discovered hyperparameter optimization for machine learning in Python.

    Specifically, you learned:

    • Hyperparameter optimization is required to get the most out of your machine learning models.
    • How to configure random and grid search hyperparameter optimization for classification tasks.
    • How to configure random and grid search hyperparameter optimization for regression tasks.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.


    Scikit-Optimize for Hyperparameter Tuning in Machine Learning

    Hyperparameter optimization refers to performing a search in order to discover the set of specific model configuration arguments that result in the best performance of the model on a specific dataset.

    There are many ways to perform hyperparameter optimization, although modern methods, such as Bayesian Optimization, are fast and effective. The Scikit-Optimize library is an open-source Python library that provides an implementation of Bayesian Optimization that can be used to tune the hyperparameters of machine learning models from the scikit-learn Python library.

    You can easily use the Scikit-Optimize library to tune the models on your next machine learning project.

    In this tutorial, you will discover how to use the Scikit-Optimize library to use Bayesian Optimization for hyperparameter tuning.

    After completing this tutorial, you will know:

    • Scikit-Optimize provides a general toolkit for Bayesian Optimization that can be used for hyperparameter tuning.
    • How to manually use the Scikit-Optimize library to tune the hyperparameters of a machine learning model.
    • How to use the built-in BayesSearchCV class to perform model hyperparameter tuning.

    Let’s get started.

    Scikit-Optimize for Hyperparameter Tuning in Machine Learning
    Photo by Dan Nevill, some rights reserved.

    Tutorial Overview

    This tutorial is divided into four parts; they are:

  • Scikit-Optimize
  • Machine Learning Dataset and Model
  • Manually Tune Algorithm Hyperparameters
  • Automatically Tune Algorithm Hyperparameters

    Scikit-Optimize

    Scikit-Optimize, or skopt for short, is an open-source Python library for performing optimization tasks.

    It offers efficient optimization algorithms, such as Bayesian Optimization, and can be used to find the minimum or maximum of arbitrary cost functions.

    Bayesian Optimization provides a principled technique based on Bayes Theorem to direct a search of a global optimization problem that is efficient and effective. It works by building a probabilistic model of the objective function, called the surrogate function, that is then searched efficiently with an acquisition function before candidate samples are chosen for evaluation on the real objective function.

    For more on the topic of Bayesian Optimization, see the tutorial:

    Importantly, the library provides support for tuning the hyperparameters of machine learning algorithms offered by the scikit-learn library, so-called hyperparameter optimization. As such, it offers an efficient alternative to less efficient hyperparameter optimization procedures such as grid search and random search.

    The scikit-optimize library can be installed using pip, as follows:
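    For example:

```
pip install scikit-optimize
```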


    Once installed, we can import the library and print the version number to confirm the library was installed successfully and can be accessed.

    The complete example is listed below.
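    A minimal sketch of this check:

```python
# report the installed scikit-optimize version
import skopt
print('skopt %s' % skopt.__version__)
```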


    Running the example reports the currently installed version number of scikit-optimize.

    Your version number should be the same or higher.


    For more installation instructions, see the documentation:

    Now that we are familiar with what Scikit-Optimize is and how to install it, let’s explore how we can use it to tune the hyperparameters of a machine learning model.

    Machine Learning Dataset and Model

    First, let’s select a standard dataset and a model to address it.

    We will use the ionosphere machine learning dataset. This is a standard machine learning dataset comprising 351 rows of data with 34 numerical input variables and a target variable with two class values, e.g. binary classification.

    Using a test harness of repeated stratified 10-fold cross-validation with three repeats, a naive model can achieve an accuracy of about 64 percent. A top performing model can achieve accuracy on this same test harness of about 94 percent. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting whether measurements of the ionosphere indicate a specific structure or not.

    You can learn more about the dataset here:

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.
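    A minimal sketch of such an example; the dataset URL is an assumed public mirror of the ionosphere CSV file:

```python
# summarize the shape of the ionosphere dataset
from pandas import read_csv
# assumed location of a public mirror of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
# split into input and output elements
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
```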


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 351 rows of data with 34 input variables.


    We can evaluate a support vector machine (SVM) model on this dataset using repeated stratified cross-validation.

    We can report the mean model performance on the dataset averaged over all folds and repeats, which will provide a reference for model hyperparameter tuning performed in later sections.

    The complete example is listed below.
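    A sketch of what the complete baseline example might look like, assuming the dataset mirror used above:

```python
# evaluate an SVM with default hyperparameters on the ionosphere dataset
from numpy import mean
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold

# load and prepare the dataset (assumed mirror)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
data = read_csv(url, header=None).values
X = data[:, :-1].astype('float32')
y = LabelEncoder().fit_transform(data[:, -1])
# define the model and the evaluation procedure
model = SVC()
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate the model and report the mean accuracy
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Accuracy: %.3f' % mean(scores))
```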


    Running the example first loads and prepares the dataset, then evaluates the SVM model on the dataset.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    In this case, we can see that the SVM with default hyperparameters achieved a mean classification accuracy of about 93.7 percent, which is skillful and close to the top performance on the problem of 94 percent.


    Next, let’s see if we can improve performance by tuning the model hyperparameters using the scikit-optimize library.

    Manually Tune Algorithm Hyperparameters

    The Scikit-Optimize library can be used to tune the hyperparameters of a machine learning model.

    We can achieve this manually by using the Bayesian Optimization capabilities of the library.

    This requires that we first define a search space. In this case, this will be the hyperparameters of the model that we wish to tune, and the scope or range of each hyperparameter.

    We will tune the following hyperparameters of the SVM model:

    • C, the regularization parameter.
    • kernel, the type of kernel used in the model.
    • degree, used for the polynomial kernel.
    • gamma, used in most other kernels.

    For the numeric hyperparameters C and gamma, we will define a log scale to search between a small value of 1e-6 and 100. Degree is an integer and we will search values between 1 and 5. Finally, the kernel is a categorical variable with specific named values.

    We can define the search space for these four hyperparameters as a list of dimension types from the skopt library, as follows:
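    A sketch of this search space; the dimension names match the SVC arguments and the ranges follow the description above:

```python
# define the search space using skopt dimension types
from skopt.space import Real, Integer, Categorical

search_space = list()
search_space.append(Real(1e-6, 100.0, 'log-uniform', name='C'))
search_space.append(Categorical(['linear', 'poly', 'rbf', 'sigmoid'], name='kernel'))
search_space.append(Integer(1, 5, name='degree'))
search_space.append(Real(1e-6, 100.0, 'log-uniform', name='gamma'))
```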


    Note the data type, the range, and the name of the hyperparameter specified for each.

    We can then define a function that will be called by the search procedure. This is a function expected by the optimization procedure later; it takes a specific set of hyperparameters for the model, evaluates the model with them, and returns a score for that set of hyperparameters.

    In our case, we want to evaluate the model using repeated stratified 10-fold cross-validation on our ionosphere dataset. We want to maximize classification accuracy, e.g. find the set of model hyperparameters that give the best accuracy. By default, the process minimizes the score returned from this function, therefore, we will return one minus the accuracy, e.g. perfect skill will be (1 – accuracy) or 0.0, and the worst skill will be 1.0.

    The evaluate_model() function below implements this and takes a specific set of hyperparameters.


    Next, we can execute the search by calling the gp_minimize() function and passing the name of the function to call to evaluate each model and the search space to optimize.


    The procedure will run until it converges and returns a result.

    The result object contains lots of details, but importantly, we can access the score of the best-performing configuration and the hyperparameters used by the best-performing model.


    Tying this together, the complete example of manually tuning the hyperparameters of an SVM on the ionosphere dataset is listed below.
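    A sketch of what the complete example might look like, assuming the dataset mirror used above and the default number of gp_minimize iterations:

```python
# manually tune SVM hyperparameters on the ionosphere dataset with gp_minimize
from numpy import mean
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from skopt import gp_minimize
from skopt.space import Real, Integer, Categorical
from skopt.utils import use_named_args

# load and prepare the dataset (assumed mirror)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
data = read_csv(url, header=None).values
X = data[:, :-1].astype('float32')
y = LabelEncoder().fit_transform(data[:, -1])

# define the search space
search_space = [
    Real(1e-6, 100.0, 'log-uniform', name='C'),
    Categorical(['linear', 'poly', 'rbf', 'sigmoid'], name='kernel'),
    Integer(1, 5, name='degree'),
    Real(1e-6, 100.0, 'log-uniform', name='gamma'),
]

# objective: minimize (1 - mean accuracy) for a given hyperparameter configuration
@use_named_args(search_space)
def evaluate_model(**params):
    model = SVC()
    model.set_params(**params)
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    acc = mean(cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1))
    return 1.0 - acc

# run the Bayesian optimization and report the best result
result = gp_minimize(evaluate_model, search_space)
print('Best Accuracy: %.3f' % (1.0 - result.fun))
print('Best Parameters: %s' % result.x)
```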


    Running the example may take a few moments, depending on the speed of your machine.

    You may see some warning messages that you can safely ignore, such as:


    At the end of the run, the best-performing configuration is reported.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    In this case, we can see that the configuration, reported in order of the search space list, was a modest C value, an RBF kernel, a degree of 2 (ignored by the RBF kernel), and a modest gamma value.

    Importantly, we can see that the skill of this model was approximately 94.7 percent, which is a top-performing model.


    This is not the only way to use the Scikit-Optimize library for hyperparameter tuning. In the next section, we can see a more automated approach.

    Automatically Tune Algorithm Hyperparameters

    The Scikit-Learn machine learning library provides tools for tuning model hyperparameters.

    Specifically, it provides the GridSearchCV and RandomizedSearchCV classes that take a model, a search space, and a cross-validation configuration.

    The benefit of these classes is that the search procedure is performed automatically, requiring minimal configuration.

    The Scikit-Optimize library provides a similar interface for performing Bayesian Optimization of model hyperparameters via the BayesSearchCV class.

    This class can be used in the same way as the Scikit-Learn equivalents.

    First, the search space must be defined as a dictionary with hyperparameter names used as the key and the scope of the variable as the value.


    We can then define the BayesSearchCV configuration taking the model we wish to evaluate, the hyperparameter search space, and the cross-validation configuration.


    We can then execute the search and report the best result and configuration at the end.


    Tying this together, the complete example of automatically tuning SVM hyperparameters using the BayesSearchCV class on the ionosphere dataset is listed below.
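    A sketch of what the complete example might look like, assuming the same dataset mirror, search ranges, and evaluation procedure as above:

```python
# automatically tune SVM hyperparameters on the ionosphere dataset with BayesSearchCV
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from sklearn.model_selection import RepeatedStratifiedKFold
from skopt import BayesSearchCV
from skopt.space import Real, Integer, Categorical

# load and prepare the dataset (assumed mirror)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
data = read_csv(url, header=None).values
X = data[:, :-1].astype('float32')
y = LabelEncoder().fit_transform(data[:, -1])

# define the search space as a dict of hyperparameter name -> dimension
params = dict()
params['C'] = Real(1e-6, 100.0, 'log-uniform')
params['kernel'] = Categorical(['linear', 'poly', 'rbf', 'sigmoid'])
params['degree'] = Integer(1, 5)
params['gamma'] = Real(1e-6, 100.0, 'log-uniform')

# define the evaluation procedure and the search
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
search = BayesSearchCV(estimator=SVC(), search_spaces=params, n_jobs=-1, cv=cv)

# execute the search and report the best result
search.fit(X, y)
print(search.best_score_)
print(search.best_params_)
```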


    Running the example may take a few moments, depending on the speed of your machine.

    You may see some warning messages that you can safely ignore, such as:


    At the end of the run, the best-performing configuration is reported.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    In this case, we can see that the model performed better than the top-performing bound reported earlier, achieving a mean classification accuracy of about 95.2 percent.

    The search discovered a large C value, an RBF kernel, and a small gamma value.


    This provides a template that you can use to tune the hyperparameters on your machine learning project.

    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Related Tutorials
    APIs

    Summary

    In this tutorial, you discovered how to use the Scikit-Optimize library to use Bayesian Optimization for hyperparameter tuning.

    Specifically, you learned:

    • Scikit-Optimize provides a general toolkit for Bayesian Optimization that can be used for hyperparameter tuning.
    • How to manually use the Scikit-Optimize library to tune the hyperparameters of a machine learning model.
    • How to use the built-in BayesSearchCV class to perform model hyperparameter tuning.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.
