Hypothesis Test for Comparing Machine Learning Algorithms

Machine learning models are chosen based on their mean performance, often calculated using k-fold cross-validation.

The algorithm with the best mean performance is expected to be better than those algorithms with worse mean performance. But what if the difference in the mean performance is caused by a statistical fluke?

The solution is to use a statistical hypothesis test to evaluate whether the difference in the mean performance between any two algorithms is real or not.

In this tutorial, you will discover how to use statistical hypothesis tests for comparing machine learning algorithms.

After completing this tutorial, you will know:

  • Performing model selection based on the mean model performance can be misleading.
  • Five repeats of two-fold cross-validation with a modified Student’s t-Test is a good practice for comparing machine learning algorithms.
  • How to use the MLxtend machine learning library to compare algorithms using a statistical hypothesis test.

Kick-start your project with my new book Statistics for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Hypothesis Test for Comparing Machine Learning Algorithms
Photo by Frank Shepherd, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  • Hypothesis Test for Comparing Algorithms
  • 5×2 Procedure With MLxtend
  • Comparing Classifier Algorithms

Hypothesis Test for Comparing Algorithms

    Model selection involves evaluating a suite of different machine learning algorithms or modeling pipelines and comparing them based on their performance.

The model or modeling pipeline that achieves the best performance according to your performance metric is then selected as your final model, which you can use to start making predictions on new data.

    This applies to regression and classification predictive modeling tasks with classical machine learning algorithms and deep learning. It’s always the same process.

    The problem is, how do you know the difference between two models is real and not just a statistical fluke?

    This problem can be addressed using a statistical hypothesis test.

    One approach is to evaluate each model on the same k-fold cross-validation split of the data (e.g. using the same random number seed to split the data in each case) and calculate a score for each split. This would give a sample of 10 scores for 10-fold cross-validation. The scores can then be compared using a paired statistical hypothesis test because the same treatment (rows of data) was used for each algorithm to come up with each score. The Paired Student’s t-Test could be used.

A problem with using the Paired Student’s t-Test in this case is that each evaluation of the model is not independent. This is because the same rows of data are used to train the model multiple times; in fact, each row is used for training in every repetition except when it falls in the hold-out test fold. This lack of independence in the evaluation means that the Paired Student’s t-Test is optimistically biased.

This statistical test can be adjusted to take the lack of independence into account. Additionally, the number of folds and repeats of the procedure can be configured to achieve a good sampling of model performance that generalizes well to a wide range of problems and algorithms: specifically, two-fold cross-validation with five repeats, the so-called 5×2-fold cross-validation.

    This approach was proposed by Thomas Dietterich in his 1998 paper titled “Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms.”

    For more on this topic, see the tutorial:

    Thankfully, we don’t need to implement this procedure ourselves.

    5×2 Procedure With MLxtend

    The MLxtend library by Sebastian Raschka provides an implementation via the paired_ttest_5x2cv() function.

    First, you must install the mlxtend library, for example:

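For example, assuming pip is available in your environment (depending on your setup you may need pip3 or the --user flag):

pip install mlxtend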

    To use the evaluation, you must first load your dataset, then define the two models that you wish to compare.

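A minimal sketch of this step, using a synthetic scikit-learn dataset and two placeholder classifiers; the specific dataset settings and models below are assumptions for illustration:

# define a synthetic binary classification dataset
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=1)
# define the two models to compare
model1 = LogisticRegression()
model2 = LinearDiscriminantAnalysis()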

    You can then call the paired_ttest_5x2cv() function and pass in your data and models and it will report the t-statistic value and the p-value as to whether the difference in the performance of the two algorithms is significant or not.

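A sketch of the call, reusing the models and data defined above; the argument names follow the MLxtend API as I understand it:

from mlxtend.evaluate import paired_ttest_5x2cv
# run the 5x2 procedure and report the test statistic and p-value
t, p = paired_ttest_5x2cv(estimator1=model1, estimator2=model2, X=X, y=y, scoring='accuracy', random_seed=1)
print('t=%.3f, p=%.3f' % (t, p))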

    The p-value must be interpreted using an alpha value, which is the significance level that you are willing to accept.

If the p-value is less than or equal to the chosen alpha, we reject the null hypothesis that the models have the same mean performance, which means the difference is probably real. If the p-value is greater than alpha, we fail to reject the null hypothesis that the models have the same mean performance, and any observed difference in the mean accuracies is probably a statistical fluke.

    The smaller the alpha value, the better, and a common value is 5 percent (0.05).

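As a sketch, interpreting the p-value at an alpha of 0.05 might look like this, with variable names carried over from the snippet above:

# interpret the result of the test
alpha = 0.05
if p <= alpha:
    print('Difference between mean performance is probably real')
else:
    print('Algorithms probably have the same performance')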

    Now that we are familiar with the way to use a hypothesis test to compare algorithms, let’s look at some examples.

    Comparing Classifier Algorithms

    In this section, let’s compare the performance of two machine learning algorithms on a binary classification task, then check if the observed difference is statistically significant or not.

    First, we can use the make_classification() function to create a synthetic dataset with 1,000 samples and 20 input variables.

    The example below creates the dataset and summarizes its shape.

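A sketch of such an example; the split between informative and redundant features is my own choice:

# create and summarize a synthetic classification dataset
from sklearn.datasets import make_classification
# define a dataset with 1,000 rows and 20 input variables
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=1)
# summarize the shape of the dataset
print(X.shape, y.shape)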

    Running the example creates the dataset and summarizes the number of rows and columns, confirming our expectations.

    We can use this data as the basis for comparing two algorithms.


    We will compare the performance of two linear algorithms on this dataset. Specifically, a logistic regression algorithm and a linear discriminant analysis (LDA) algorithm.

The procedure I like is to use repeated stratified k-fold cross-validation with 10 folds and three repeats. We will use this procedure to evaluate each algorithm and report the mean classification accuracy.

    The complete example is listed below.

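A sketch of the complete example, assuming scikit-learn for the models and cross-validation and matplotlib for the box plot; the plotting details are my own choices:

# compare logistic regression and lda using repeated stratified k-fold cross-validation
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from matplotlib import pyplot
# define the dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=1)
# define the evaluation procedure: 10 folds, three repeats
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate logistic regression
model1 = LogisticRegression()
scores1 = cross_val_score(model1, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('LogisticRegression Mean Accuracy: %.3f (%.3f)' % (scores1.mean(), scores1.std()))
# evaluate linear discriminant analysis
model2 = LinearDiscriminantAnalysis()
scores2 = cross_val_score(model2, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('LinearDiscriminantAnalysis Mean Accuracy: %.3f (%.3f)' % (scores2.mean(), scores2.std()))
# summarize the score distributions with a box and whisker plot
pyplot.boxplot([scores1, scores2], labels=['LR', 'LDA'], showmeans=True)
pyplot.show()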

    Running the example first reports the mean classification accuracy for each algorithm.

    Your specific results may differ given the stochastic nature of the learning algorithms and evaluation procedure. Try running the example a few times.

    In this case, the results suggest that LDA has better performance if we just look at the mean scores: 89.2 percent for logistic regression and 89.3 percent for LDA.


    A box and whisker plot is also created summarizing the distribution of accuracy scores.

This plot would support my decision to choose LDA over LR.

Box and Whisker Plot of Classification Accuracy Scores for Two Algorithms

    Now we can use a hypothesis test to see if the observed results are statistically significant.

    First, we will use the 5×2 procedure to evaluate the algorithms and calculate a p-value and test statistic value.

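A sketch of this step, reusing the dataset and the two model definitions from the snippets above:

from mlxtend.evaluate import paired_ttest_5x2cv
# check whether the difference between the algorithms is real
t, p = paired_ttest_5x2cv(estimator1=model1, estimator2=model2, X=X, y=y, scoring='accuracy', random_seed=1)
print('P-value: %.3f, t-Statistic: %.3f' % (p, t))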

    We can then interpret the p-value using an alpha of 5 percent.


    Tying this together, the complete example is listed below.

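A sketch of the complete example, combining the 3×10 CV evaluation with the 5×2 hypothesis test under the same assumptions as above:

# compare algorithms and check whether the difference is statistically significant
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from mlxtend.evaluate import paired_ttest_5x2cv
# define the dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=1)
# evaluate each model using 10-fold cv with three repeats
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
model1 = LogisticRegression()
scores1 = cross_val_score(model1, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('LogisticRegression Mean Accuracy: %.3f (%.3f)' % (scores1.mean(), scores1.std()))
model2 = LinearDiscriminantAnalysis()
scores2 = cross_val_score(model2, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('LinearDiscriminantAnalysis Mean Accuracy: %.3f (%.3f)' % (scores2.mean(), scores2.std()))
# check whether the difference between the algorithms is real
t, p = paired_ttest_5x2cv(estimator1=model1, estimator2=model2, X=X, y=y, scoring='accuracy', random_seed=1)
print('P-value: %.3f, t-Statistic: %.3f' % (p, t))
# interpret the result
if p <= 0.05:
    print('Difference between mean performance is probably real')
else:
    print('Algorithms probably have the same performance')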

Running the example, we first evaluate the algorithms as before, then report on the result of the statistical hypothesis test.

    Your specific results may differ given the stochastic nature of the learning algorithms and evaluation procedure. Try running the example a few times.

    In this case, we can see that the p-value is about 0.3, which is much larger than 0.05. This leads us to fail to reject the null hypothesis, suggesting that any observed difference between the algorithms is probably not real.

    We could just as easily choose logistic regression or LDA and both would perform about the same on average.

    This highlights that performing model selection based only on the mean performance may not be sufficient.


    Recall that we are reporting performance using a different procedure (3×10 CV) than the procedure used to estimate the performance in the statistical test (5×2 CV). Perhaps results would be different if we looked at scores using five repeats of two-fold cross-validation?

    The example below is updated to report classification accuracy for each algorithm using 5×2 CV.

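A sketch of the updated evaluation; here I assume the reporting simply swaps in a resampling configuration of two folds and five repeats, reusing the imports and dataset from the previous example:

# evaluate each algorithm with five repeats of two-fold cross-validation (5x2 cv)
cv = RepeatedStratifiedKFold(n_splits=2, n_repeats=5, random_state=1)
scores1 = cross_val_score(LogisticRegression(), X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('LogisticRegression Mean Accuracy: %.3f (%.3f)' % (scores1.mean(), scores1.std()))
scores2 = cross_val_score(LinearDiscriminantAnalysis(), X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('LinearDiscriminantAnalysis Mean Accuracy: %.3f (%.3f)' % (scores2.mean(), scores2.std()))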

    Running the example reports the mean accuracy for both algorithms and the results of the statistical test.

    Your specific results may differ given the stochastic nature of the learning algorithms and evaluation procedure. Try running the example a few times.

In this case, we can see that the difference in the mean performance for the two algorithms is even larger, 89.4 percent vs. 89.0 percent, this time in favor of logistic regression rather than LDA as we saw with 3×10 CV.


    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Tutorials
    Papers
    APIs

    Summary

    In this tutorial, you discovered how to use statistical hypothesis tests for comparing machine learning algorithms.

    Specifically, you learned:

    • Performing model selection based on the mean model performance can be misleading.
    • Five repeats of two-fold cross-validation with a modified Student’s t-Test is a good practice for comparing machine learning algorithms.
    • How to use the MLxtend machine learning library to compare algorithms using a statistical hypothesis test.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.

    Get a Handle on Statistics for Machine Learning!

    Statistical Methods for Machine Learning
    Develop a working understanding of statistics

…by writing lines of code in Python

    Discover how in my new Ebook:
    Statistical Methods for Machine Learning

    It provides self-study tutorials on topics like:
    Hypothesis Tests, Correlation, Nonparametric Stats, Resampling, and much more…

    Discover how to Transform Data into Knowledge

    Skip the Academics. Just Results.

    See What’s Inside


    Plot a Decision Surface for Machine Learning Algorithms in Python

    Last Updated on August 26, 2020

    Classification algorithms learn how to assign class labels to examples, although their decisions can appear opaque.

    A popular diagnostic for understanding the decisions made by a classification algorithm is the decision surface. This is a plot that shows how a fit machine learning algorithm predicts a coarse grid across the input feature space.

    A decision surface plot is a powerful tool for understanding how a given model “sees” the prediction task and how it has decided to divide the input feature space by class label.

    In this tutorial, you will discover how to plot a decision surface for a classification machine learning algorithm.

    After completing this tutorial, you will know:

    • Decision surface is a diagnostic tool for understanding how a classification algorithm divides up the feature space.
    • How to plot a decision surface using crisp class labels for a machine learning algorithm.
    • How to plot and interpret a decision surface using predicted probabilities.

    Kick-start your project with my new book Machine Learning Mastery With Python, including step-by-step tutorials and the Python source code files for all examples.

    Let’s get started.

Plot a Decision Surface for Machine Learning Algorithms in Python
Photo by Tony Webster, some rights reserved.

    Tutorial Overview

    This tutorial is divided into three parts; they are:

  • Decision Surface
  • Dataset and Model
  • Plot a Decision Surface

Decision Surface

    Classification machine learning algorithms learn to assign labels to input examples.

    Consider numeric input features for the classification task defining a continuous input feature space.

    We can think of each input feature defining an axis or dimension on a feature space. Two input features would define a feature space that is a plane, with dots representing input coordinates in the input space. If there were three input variables, the feature space would be a three-dimensional volume.

Each point in the space can be assigned a class label. In terms of a two-dimensional feature space, we can think of each point on the plane having a different color according to its assigned class.

    The goal of a classification algorithm is to learn how to divide up the feature space such that labels are assigned correctly to points in the feature space, or at least, as correctly as is possible.

    This is a useful geometric understanding of classification predictive modeling. We can take it one step further.

    Once a classification machine learning algorithm divides a feature space, we can then classify each point in the feature space, on some arbitrary grid, to get an idea of how exactly the algorithm chose to divide up the feature space.

    This is called a decision surface or decision boundary, and it provides a diagnostic tool for understanding a model on a classification predictive modeling task.

    Although the notion of a “surface” suggests a two-dimensional feature space, the method can be used with feature spaces with more than two dimensions, where a surface is created for each pair of input features.

    Now that we are familiar with what a decision surface is, next, let’s define a dataset and model for which we later explore the decision surface.

    Dataset and Model

    In this section, we will define a classification task and predictive model to learn the task.

    Synthetic Classification Dataset

We can use the make_blobs() scikit-learn function to define a classification task with a two-dimensional numerical feature space, where each point is assigned one of two class labels, e.g. a binary classification task.

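A minimal sketch, assuming 1,000 samples, two centers, and a fixed random seed; these specific values are my own choices:

# generate a binary classification dataset with a two-dimensional feature space
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)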

    Once defined, we can then create a scatter plot of the feature space with the first feature defining the x-axis, the second feature defining the y axis, and each sample represented as a point in the feature space.

    We can then color points in the scatter plot according to their class label as either 0 or 1.

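One way to do this, assuming matplotlib and the X and y arrays from above:

# scatter plot of points colored by class label
from numpy import where
from matplotlib import pyplot
for class_value in range(2):
    # select the indices of rows with this class label
    row_ix = where(y == class_value)
    # plot the points, first feature on the x-axis and second feature on the y-axis
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
pyplot.show()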

    Tying this together, the complete example of defining and plotting a synthetic classification dataset is listed below.

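A sketch of the complete example under the same assumptions:

# scatter plot of a synthetic classification dataset, colored by class label
from numpy import where
from sklearn.datasets import make_blobs
from matplotlib import pyplot
# generate the dataset
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
# plot the samples for each class with a different color
for class_value in range(2):
    row_ix = where(y == class_value)
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
pyplot.show()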

    Running the example creates the dataset, then plots the dataset as a scatter plot with points colored by class label.

    We can see a clear separation between examples from the two classes and we can imagine how a machine learning model might draw a line to separate the two classes, e.g. perhaps a diagonal line right through the middle of the two groups.

Scatter Plot of Binary Classification Dataset With 2D Feature Space

    Fit Classification Predictive Model

    We can now fit a model on our dataset.

    In this case, we will fit a logistic regression algorithm because we can predict both crisp class labels and probabilities, both of which we can use in our decision surface.

    We can define the model, then fit it on the training dataset.

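For example, assuming the X and y arrays created above:

# define and fit the model on the whole dataset
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X, y)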

    Once defined, we can use the model to make a prediction for the training dataset to get an idea of how well it learned to divide the feature space of the training dataset and assign labels.

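For example, continuing the snippet above:

# make predictions for the training dataset
yhat = model.predict(X)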

    The predictions can be evaluated using classification accuracy.

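For example, using scikit-learn's accuracy_score() function:

# evaluate the predictions against the known labels
from sklearn.metrics import accuracy_score
acc = accuracy_score(y, yhat)
print('Accuracy: %.3f' % acc)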

    Tying this together, the complete example of fitting and evaluating a model on the synthetic binary classification dataset is listed below.

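A sketch of the complete example, under the same assumptions as the snippets above:

# fit and evaluate a logistic regression model on the synthetic dataset
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
# generate the dataset
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
# define and fit the model
model = LogisticRegression()
model.fit(X, y)
# make predictions on the training dataset and evaluate them
yhat = model.predict(X)
acc = accuracy_score(y, yhat)
print('Accuracy: %.3f' % acc)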

    Running the example fits the model and makes a prediction for each example.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    In this case, we can see that the model achieved a performance of about 97.2 percent.


    Now that we have a dataset and model, let’s explore how we can develop a decision surface.

    Plot a Decision Surface

    We can create a decision surface by fitting a model on the training dataset, then using the model to make predictions for a grid of values across the input domain.

    Once we have the grid of predictions, we can plot the values and their class label.

    A scatter plot could be used if a fine enough grid was taken. A better approach is to use a contour plot that can interpolate the colors between the points.

    The contourf() Matplotlib function can be used.

    This requires a few steps.

    First, we need to define a grid of points across the feature space.

    To do this, we can find the minimum and maximum values for each feature and expand the grid one step beyond that to ensure the whole feature space is covered.

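For example, using a margin of one unit around the observed data (the size of the margin is my own choice):

# define the bounds of the domain, one step beyond the observed min and max
min1, max1 = X[:, 0].min()-1, X[:, 0].max()+1
min2, max2 = X[:, 1].min()-1, X[:, 1].max()+1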

    We can then create a uniform sample across each dimension using the arange() function at a chosen resolution. We will use a resolution of 0.1 in this case.

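For example:

# define the x and y scale at a resolution of 0.1
from numpy import arange
x1grid = arange(min1, max1, 0.1)
x2grid = arange(min2, max2, 0.1)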

    Now we need to turn this into a grid.

    We can use the meshgrid() NumPy function to create a grid from these two vectors.

    If the first feature x1 is our x-axis of the feature space, then we need one row of x1 values of the grid for each point on the y-axis.

    Similarly, if we take x2 as our y-axis of the feature space, then we need one column of x2 values of the grid for each point on the x-axis.

The meshgrid() function will do this for us, duplicating the rows and columns as needed. It returns two grids for the two input vectors: the first grid holds the x-values and the second holds the y-values, organized in an appropriately sized grid of rows and columns across the feature space.

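For example:

# create a rectangular grid from the two coordinate vectors
from numpy import meshgrid
xx, yy = meshgrid(x1grid, x2grid)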

    We then need to flatten out the grid to create samples that we can feed into the model and make a prediction.

    To do this, first, we flatten each grid into a vector.

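For example, flattening each grid and reshaping it into a column vector:

# flatten each grid into a column vector
r1, r2 = xx.flatten(), yy.flatten()
r1, r2 = r1.reshape((len(r1), 1)), r2.reshape((len(r2), 1))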

    Then we stack the vectors side by side as columns in an input dataset, e.g. like our original training dataset, but at a much higher resolution.

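For example:

# stack the column vectors side by side to create grid input samples
from numpy import hstack
grid = hstack((r1, r2))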

    We can then feed this into our model and get a prediction for each point in the grid.

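For example:

# make predictions for every point on the grid
yhat = model.predict(grid)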

    So far, so good.

    We have a grid of values across the feature space and the class labels as predicted by our model.

    Next, we need to plot the grid of values as a contour plot.

    The contourf() function takes separate grids for each axis, just like what was returned from our prior call to meshgrid(). Great!

    So we can use xx and yy that we prepared earlier and simply reshape the predictions (yhat) from the model to have the same shape.

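For example:

# reshape the predictions back into the shape of the grid
zz = yhat.reshape(xx.shape)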

    We then plot the decision surface with a two-color colormap.

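For example, with a two-color colormap (the specific colormap name is an assumption):

# plot the grid of predicted class labels as a filled contour plot
pyplot.contourf(xx, yy, zz, cmap='Paired')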

    We can then plot the actual points of the dataset over the top to see how well they were separated by the logistic regression decision surface.

    The complete example of plotting a decision surface for a logistic regression model on our synthetic binary classification dataset is listed below.

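A sketch of the complete example, pulling the steps above together under the same assumptions:

# plot a decision surface for logistic regression on the synthetic dataset
from numpy import where, arange, meshgrid, hstack
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from matplotlib import pyplot
# generate the dataset and fit the model
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
model = LogisticRegression()
model.fit(X, y)
# define the bounds of the domain
min1, max1 = X[:, 0].min()-1, X[:, 0].max()+1
min2, max2 = X[:, 1].min()-1, X[:, 1].max()+1
# define the grid of points at 0.1 resolution
x1grid = arange(min1, max1, 0.1)
x2grid = arange(min2, max2, 0.1)
xx, yy = meshgrid(x1grid, x2grid)
# flatten the grids and stack them into model inputs
r1, r2 = xx.flatten(), yy.flatten()
r1, r2 = r1.reshape((len(r1), 1)), r2.reshape((len(r2), 1))
grid = hstack((r1, r2))
# predict a class label for each grid point and reshape into a grid
yhat = model.predict(grid)
zz = yhat.reshape(xx.shape)
# plot the decision surface, then overlay the training points colored by class
pyplot.contourf(xx, yy, zz, cmap='Paired')
for class_value in range(2):
    row_ix = where(y == class_value)
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
pyplot.show()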

    Running the example fits the model and uses it to predict outcomes for the grid of values across the feature space and plots the result as a contour plot.

    We can see, as we might have suspected, logistic regression divides the feature space using a straight line. It is a linear model, after all; this is all it can do.

    Creating a decision surface is almost like magic. It gives immediate and meaningful insight into how the model has learned the task.

    Try it with different algorithms, like an SVM or decision tree.
    Post your resulting maps as links in the comments below!

Decision Surface for Logistic Regression on a Binary Classification Task

    We can add more depth to the decision surface by using the model to predict probabilities instead of class labels.

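For example, keeping only the probability of membership in class 1 so there is a single value to plot per grid point:

# predict the probability of class membership for each grid point
yhat = model.predict_proba(grid)
# keep just the probabilities for class 1
yhat = yhat[:, 1]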

    When plotted, we can see how confident or likely it is that each point in the feature space belongs to each of the class labels, as seen by the model.

    We can use a different color map that has gradations, and show a legend so we can interpret the colors.

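For example, a diverging colormap with a colorbar as the legend (these specific choices are assumptions):

# plot the grid of probabilities as a filled contour plot
c = pyplot.contourf(xx, yy, zz, cmap='RdBu')
# add a colorbar so the color gradations can be interpreted
pyplot.colorbar(c)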

    The complete example of creating a decision surface using probabilities is listed below.

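A sketch of the complete probability-surface example under the same assumptions:

# probability decision surface for logistic regression on the synthetic dataset
from numpy import where, arange, meshgrid, hstack
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from matplotlib import pyplot
# generate the dataset and fit the model
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
model = LogisticRegression()
model.fit(X, y)
# build a grid of points across the feature space
min1, max1 = X[:, 0].min()-1, X[:, 0].max()+1
min2, max2 = X[:, 1].min()-1, X[:, 1].max()+1
x1grid = arange(min1, max1, 0.1)
x2grid = arange(min2, max2, 0.1)
xx, yy = meshgrid(x1grid, x2grid)
r1, r2 = xx.flatten(), yy.flatten()
r1, r2 = r1.reshape((len(r1), 1)), r2.reshape((len(r2), 1))
grid = hstack((r1, r2))
# predict probabilities for class 1 and reshape into the grid
yhat = model.predict_proba(grid)[:, 1]
zz = yhat.reshape(xx.shape)
# plot the probability surface with a diverging colormap and a colorbar
c = pyplot.contourf(xx, yy, zz, cmap='RdBu')
pyplot.colorbar(c)
# overlay the training points colored by class
for class_value in range(2):
    row_ix = where(y == class_value)
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
pyplot.show()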

    Running the example predicts the probability of class membership for each point on the grid across the feature space and plots the result.

    Here, we can see that the model is unsure (lighter colors) around the middle of the domain, given the sampling noise in that area of the feature space. We can also see that the model is very confident (full colors) in the bottom-left and top-right halves of the domain.

    Together, the crisp class and probability decision surfaces are powerful diagnostic tools for understanding your model and how it divides the feature space for your predictive modeling task.

Probability Decision Surface for Logistic Regression on a Binary Classification Task

    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Summary

    In this tutorial, you discovered how to plot a decision surface for a classification machine learning algorithm.

    Specifically, you learned:

    • Decision surface is a diagnostic tool for understanding how a classification algorithm divides up the feature space.
    • How to plot a decision surface using crisp class labels for a machine learning algorithm.
    • How to plot and interpret a decision surface using predicted probabilities.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.

    Discover Fast Machine Learning in Python!

    Master Machine Learning With Python
    Develop Your Own Models in Minutes

    …with just a few lines of scikit-learn code

    Learn how in my new Ebook:
    Machine Learning Mastery With Python

    Covers self-study tutorials and end-to-end projects like:
    Loading data, visualization, modeling, tuning, and much more…

    Finally Bring Machine Learning To

    Your Own Projects

    Skip the Academics. Just Results.

    See What’s Inside
