
Multi-Label Classification with Deep Learning

Last Updated on August 31, 2020

Multi-label classification involves predicting zero or more class labels.

Unlike normal classification tasks where class labels are mutually exclusive, multi-label classification requires specialized machine learning algorithms that support predicting multiple mutually non-exclusive classes or “labels.”

Deep learning neural networks are an example of an algorithm that natively supports multi-label classification problems. Neural network models for multi-label classification tasks can be easily defined and evaluated using the Keras deep learning library.

In this tutorial, you will discover how to develop deep learning models for multi-label classification.

After completing this tutorial, you will know:

  • Multi-label classification is a predictive modeling task that involves predicting zero or more mutually non-exclusive class labels.
  • Neural network models can be configured for multi-label classification tasks.
  • How to evaluate a neural network for multi-label classification and make a prediction for new data.

Let’s get started.

Multi-Label Classification with Deep Learning
Photo by Trevor Marron, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  • Multi-Label Classification
  • Neural Networks for Multiple Labels
  • Neural Network for Multi-Label Classification

Multi-Label Classification

Classification is a predictive modeling problem that involves outputting a class label given some input.

It is different from regression tasks that involve predicting a numeric value.

Typically, a classification task involves predicting a single label. Alternately, it might involve predicting the likelihood across two or more class labels. In these cases, the classes are mutually exclusive, meaning the classification task assumes that the input belongs to one class only.

Some classification tasks require predicting more than one class label. This means that the class labels are not mutually exclusive. These tasks are referred to as multiple label classification, or multi-label classification for short.

In multi-label classification, zero or more labels are required as output for each input sample, and the outputs are required simultaneously. The assumption is that the output labels are a function of the inputs.

We can create a synthetic multi-label classification dataset using the make_multilabel_classification() function in the scikit-learn library.

Our dataset will have 1,000 samples with 10 input features. The dataset will have three class label outputs for each sample, and each class label will take one of two values (0 or 1, e.g. present or not present).

The complete example of creating and summarizing the synthetic multi-label classification dataset is listed below.
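
A minimal sketch of such an example is given below; the n_labels=2 setting (the average number of labels per sample) and the fixed random_state are illustrative assumptions.

```python
# create and summarize a synthetic multi-label classification dataset
from sklearn.datasets import make_multilabel_classification

# define dataset: 1,000 samples, 10 input features, 3 class labels
X, y = make_multilabel_classification(n_samples=1000, n_features=10, n_classes=3, n_labels=2, random_state=1)
# summarize the shape of the input and output elements
print(X.shape, y.shape)
# summarize the first few examples
for i in range(10):
    print(X[i], y[i])
```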


Running the example creates the dataset and summarizes the shape of the input and output elements.

We can see that, as expected, there are 1,000 samples, each with 10 input features and three output features.

The first 10 rows of inputs and outputs are summarized, and we can see that all inputs for this dataset are numeric and that each of the three output class labels takes a value of 0 or 1.
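
The first line of output reports the array shapes, which for this configuration will be:

```
(1000, 10) (1000, 3)
```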


Next, let’s look at how we can develop neural network models for multi-label classification tasks.

Neural Networks for Multiple Labels

Some machine learning algorithms support multi-label classification natively.

Neural network models can be configured to support multi-label classification and can perform well, depending on the specifics of the classification task.

Multi-label classification can be supported directly by neural networks simply by specifying the number of target labels in the problem as the number of nodes in the output layer. For example, a task with three output labels (classes) will require an output layer with three nodes.

Each node in the output layer must use the sigmoid activation. This will predict a probability of class membership for the label, a value between 0 and 1. Finally, the model must be fit with the binary cross-entropy loss function.

In summary, to configure a neural network model for multi-label classification, the specifics are:

  • Number of nodes in the output layer matches the number of labels.
  • Sigmoid activation for each node in the output layer.
  • Binary cross-entropy loss function.

We can demonstrate this using the Keras deep learning library.

We will define a Multilayer Perceptron (MLP) model for the multi-label classification task defined in the previous section.

Each sample has 10 inputs and three outputs; therefore, the network requires an input layer that expects 10 inputs (specified via the “input_dim” argument on the first hidden layer) and an output layer with three nodes.

We will use the popular ReLU activation function in the hidden layer. The hidden layer has 20 nodes that were chosen after some trial and error. We will fit the model using binary cross-entropy loss and the Adam version of stochastic gradient descent.

The definition of the network for the multi-label classification task is listed below.
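
A sketch of the definition is below, assuming the TensorFlow-bundled Keras (swap the imports for standalone Keras if needed); the he_uniform weight initialization is an assumption.

```python
# sketch: MLP for multi-label classification
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
# hidden layer: 20 ReLU nodes, expecting 10 inputs
model.add(Dense(20, input_dim=10, kernel_initializer='he_uniform', activation='relu'))
# output layer: one sigmoid node per class label
model.add(Dense(3, activation='sigmoid'))
# binary cross-entropy loss with the Adam optimizer
model.compile(loss='binary_crossentropy', optimizer='adam')
```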


You may want to adapt this model for your own multi-label classification task; therefore, we can create a function to define and return the model where the number of input and output variables is provided as arguments.
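
One way to write such a helper is sketched below; the function name get_model() is just a convention used here.

```python
# sketch: build an MLP given the number of input and output variables
def get_model(n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(20, input_dim=n_inputs, kernel_initializer='he_uniform', activation='relu'))
    model.add(Dense(n_outputs, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model
```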


Now that we are familiar with how to define an MLP for multi-label classification, let’s explore how this model can be evaluated.

Neural Network for Multi-Label Classification

If the dataset is small, it is good practice to evaluate neural network models repeatedly on the same dataset and report the mean performance across the repeats.

This is because of the stochastic nature of the learning algorithm.

Additionally, it is good practice to use k-fold cross-validation instead of train/test splits of a dataset to get an unbiased estimate of model performance when making predictions on new data. Again, this is only feasible if there is not so much data that the process cannot be completed in a reasonable time.

Taking this into account, we will evaluate the MLP model on the multi-label classification task using repeated k-fold cross-validation with 10 folds and three repeats.

The MLP model will predict the probability for each class label by default. This means it will predict three probabilities for each sample. These can be converted to crisp class labels by rounding the values to either 0 or 1. We can then calculate the classification accuracy for the crisp class labels.
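
For example, rounding works directly on an array of predicted probabilities:

```python
from numpy import array

# e.g. predicted probabilities for three labels on one sample
probs = array([0.9, 0.2, 0.6])
# round to crisp 0/1 class labels
print(probs.round())  # -> [1. 0. 1.]
```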


The scores are collected and can be summarized by reporting the mean and standard deviation across all repeats and cross-validation folds.

The evaluate_model() function below takes the dataset, evaluates the model, and returns a list of evaluation scores, in this case, accuracy scores.
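
A sketch of this function is below, reusing the get_model() helper from the previous section; the epochs=100 training budget is an assumption chosen for illustration. Note that for multi-label targets, accuracy_score() reports subset accuracy: a prediction counts as correct only if all three labels match.

```python
# sketch: evaluate the MLP using repeated k-fold cross-validation
from sklearn.model_selection import RepeatedKFold
from sklearn.metrics import accuracy_score

def evaluate_model(X, y):
    results = list()
    n_inputs, n_outputs = X.shape[1], y.shape[1]
    # define the evaluation procedure: 10 folds, 3 repeats
    cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
    for train_ix, test_ix in cv.split(X):
        # split the data for this fold
        X_train, X_test = X[train_ix], X[test_ix]
        y_train, y_test = y[train_ix], y[test_ix]
        # define and fit a fresh model on the training set
        model = get_model(n_inputs, n_outputs)
        model.fit(X_train, y_train, verbose=0, epochs=100)
        # predict, round to crisp labels, and score the hold-out fold
        yhat = model.predict(X_test).round()
        acc = accuracy_score(y_test, yhat)
        print('>%.3f' % acc)
        results.append(acc)
    return results
```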


We can then load our dataset and evaluate the model and report the mean performance.

Tying this together, the complete example is listed below.
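
The driver code for this might look as follows, reusing the sketches above:

```python
# sketch: load the data, evaluate the model, and report performance
from numpy import mean, std
from sklearn.datasets import make_multilabel_classification

# load the dataset (as defined earlier)
X, y = make_multilabel_classification(n_samples=1000, n_features=10, n_classes=3, n_labels=2, random_state=1)
# evaluate the model with repeated k-fold cross-validation
results = evaluate_model(X, y)
# summarize performance across all folds and repeats
print('Accuracy: %.3f (%.3f)' % (mean(results), std(results)))
```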


Running the example reports the classification accuracy for each fold and each repeat, to give an idea of the evaluation progress.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

At the end, the mean and standard deviation accuracy is reported. In this case, the model is shown to achieve an accuracy of about 81.2 percent.

You can use this code as a template for evaluating MLP models on your own multi-label classification tasks. The number of nodes and layers in the model can easily be adapted and tailored to the complexity of your dataset.


Once a model configuration is chosen, we can use it to fit a final model on all available data and make a prediction for new data.

The example below demonstrates this by first fitting the MLP model on the entire multi-label classification dataset, then calling the predict() function on the fit model in order to make a prediction for a new row of data.
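
A sketch of this final step is below; the values in the new row are hypothetical.

```python
# sketch: fit a final model on all data and predict a new row
from numpy import asarray

n_inputs, n_outputs = X.shape[1], y.shape[1]
# fit the model on the entire dataset
model = get_model(n_inputs, n_outputs)
model.fit(X, y, verbose=0, epochs=100)
# a hypothetical new row with 10 input values
row = [3, 3, 6, 7, 8, 2, 11, 11, 1, 3]
# predict the probability of each of the three class labels
yhat = model.predict(asarray([row]))
print('Predicted: %s' % yhat[0])
```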


Running the example fits the model and makes a prediction for a new row. As expected, the prediction contains three output variables required for the multi-label classification task: the probabilities of each class label.


Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Summary

In this tutorial, you discovered how to develop deep learning models for multi-label classification.

Specifically, you learned:

  • Multi-label classification is a predictive modeling task that involves predicting zero or more mutually non-exclusive class labels.
  • Neural network models can be configured for multi-label classification tasks.
  • How to evaluate a neural network for multi-label classification and make a prediction for new data.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.



A Gentle Introduction to Computational Learning Theory

Last Updated on August 19, 2020

Computational learning theory, or statistical learning theory, refers to mathematical frameworks for quantifying learning tasks and algorithms.

These are sub-fields of machine learning that a machine learning practitioner does not need to know in great depth in order to achieve good results on a wide range of problems. Nevertheless, it is a sub-field where having a high-level understanding of some of the more prominent methods may provide insight into the broader task of learning from data.

In this post, you will discover a gentle introduction to computational learning theory for machine learning.

After reading this post, you will know:

  • Computational learning theory uses formal methods to study learning tasks and learning algorithms.
  • PAC learning provides a way to quantify the computational difficulty of a machine learning task.
  • VC Dimension provides a way to quantify the computational capacity of a machine learning algorithm.

Kick-start your project with my new book Probability for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

A Gentle Introduction to Computational Learning Theory
Photo by someone10x, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  • Computational Learning Theory
  • PAC Learning (Theory of Learning Problems)
  • VC Dimension (Theory of Learning Algorithms)

    Computational Learning Theory

    Computational learning theory, or CoLT for short, is a field of study concerned with the use of formal mathematical methods applied to learning systems.

    It seeks to use the tools of theoretical computer science to quantify learning problems. This includes characterizing the difficulty of learning specific tasks.

    Computational learning theory may be thought of as an extension or sibling of statistical learning theory, or SLT for short, that uses formal methods to quantify learning algorithms.

    • Computational Learning Theory (CoLT): Formal study of learning tasks.
    • Statistical Learning Theory (SLT): Formal study of learning algorithms.

    This division of learning tasks vs. learning algorithms is arbitrary, and in practice, there is a lot of overlap between the two fields.

    One can extend statistical learning theory by taking computational complexity of the learner into account. This field is called computational learning theory or COLT.

    — Page 210, Machine Learning: A Probabilistic Perspective, 2012.

    They might be considered synonyms in modern usage.

    … a theoretical framework known as computational learning theory, also sometimes called statistical learning theory.

    — Page 344, Pattern Recognition and Machine Learning, 2006.

    The focus in computational learning theory is typically on supervised learning tasks. Formal analysis of real problems and real algorithms is very challenging. As such, it is common to reduce the complexity of the analysis by focusing on binary classification tasks and even simple binary rule-based systems. Consequently, the practical application of the theorems may be limited or challenging to interpret for real problems and algorithms.

    The main unanswered question in learning is this: How can we be sure that our learning algorithm has produced a hypothesis that will predict the correct value for previously unseen inputs?

    — Page 713, Artificial Intelligence: A Modern Approach, 3rd edition, 2009.

    Questions explored in computational learning theory might include:

    • How do we know a model has a good approximation for the target function?
    • What hypothesis space should be used?
    • How do we know if we have a locally or globally good solution?
    • How do we avoid overfitting?
    • How many data examples are needed?

    As a machine learning practitioner, it can be useful to know about computational learning theory and some of the main areas of investigation. The field provides a useful grounding for what we are trying to achieve when fitting models on data, and it may provide insight into the methods.

    There are many subfields of study, although perhaps two of the most widely discussed areas of study from computational learning theory are:

    • PAC Learning.
    • VC Dimension.

    Tersely, we can say that PAC Learning is the theory of machine learning problems and VC dimension is the theory of machine learning algorithms.

    You may encounter the topics as a practitioner and it is useful to have a thumbnail idea of what they are about. Let’s take a closer look at each.

    If you would like to dive deeper into the field of computational learning theory, I recommend the book:

    PAC Learning (Theory of Learning Problems)

    Probably approximately correct learning, or PAC learning, refers to a theoretical machine learning framework developed by Leslie Valiant.

    PAC learning seeks to quantify the difficulty of a learning task and might be considered the premier sub-field of computational learning theory.

    Consider that in supervised learning, we are trying to approximate an unknown underlying mapping function from inputs to outputs. We don’t know what this mapping function looks like, but we suspect it exists, and we have examples of data produced by the function.

    PAC learning is concerned with how much computational effort is required to find a hypothesis (fit model) that is a close match for the unknown target function.

    For more on the use of “hypothesis” in machine learning to refer to a fit model, see the tutorial:

    The idea is that a bad hypothesis will be found out based on the predictions it makes on new data, e.g. based on its generalization error.

    A hypothesis that gets most or a large number of predictions correct, e.g. has a small generalization error, is probably a good approximation for the target function.

    The underlying principle is that any hypothesis that is seriously wrong will almost certainly be “found out” with high probability after a small number of examples, because it will make an incorrect prediction. Thus, any hypothesis that is consistent with a sufficiently large set of training examples is unlikely to be seriously wrong: that is, it must be probably approximately correct.

    — Page 714, Artificial Intelligence: A Modern Approach, 3rd edition, 2009.

    This probabilistic language gives the framework its name: “probably approximately correct.” That is, a hypothesis seeks to “approximate” a target function and is “probably” good if it has a low generalization error.

    A PAC learning algorithm refers to an algorithm that returns a hypothesis that is PAC.

    Using formal methods, a minimum generalization error can be specified for a supervised learning task. The theorem can then be used to estimate the expected number of samples from the problem domain that would be required to determine whether a hypothesis was PAC or not. That is, it provides a way to estimate the number of samples required to find a PAC hypothesis.
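
    For example, for a finite hypothesis space H, a classic bound (see Chapter 7 of Machine Learning, 1997) states the number of training examples m that a consistent learner needs:

        $$ m \ge \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right) $$

    Here, epsilon is the maximum acceptable generalization error, and 1 − δ is the desired probability that every hypothesis consistent with the m examples has error at most epsilon.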

    The goal of the PAC framework is to understand how large a data set needs to be in order to give good generalization. It also gives bounds for the computational cost of learning …

    — Page 344, Pattern Recognition and Machine Learning, 2006.

    Additionally, a hypothesis space (machine learning algorithm) is efficient under the PAC framework if an algorithm can find a PAC hypothesis (fit model) in polynomial time.

    A hypothesis space is said to be efficiently PAC-learnable if there is a polynomial time algorithm that can identify a function that is PAC.

    — Page 210, Machine Learning: A Probabilistic Perspective, 2012.

    For more on PAC learning, refer to the seminal book on the topic titled:

    VC Dimension (Theory of Learning Algorithms)

    Vapnik–Chervonenkis theory, or VC theory for short, refers to a theoretical machine learning framework developed by Vladimir Vapnik and Alexey Chervonenkis.

    VC theory seeks to quantify the capability of a learning algorithm and might be considered the premier sub-field of statistical learning theory.

    VC theory comprises many elements, most notably the VC dimension.

    The VC dimension quantifies the complexity of a hypothesis space, e.g. the models that could be fit given a representation and learning algorithm.

    One way to consider the complexity of a hypothesis space (space of models that could be fit) is based on the number of distinct hypotheses it contains and perhaps how the space might be navigated. The VC dimension is a clever approach that instead measures the number of examples from the target problem that can be discriminated by hypotheses in the space.

    The VC dimension measures the complexity of the hypothesis space […] by the number of distinct instances from X that can be completely discriminated using H.

    — Page 214, Machine Learning, 1997.

    The VC dimension estimates the capability or capacity of a classification machine learning algorithm for a specific dataset (number and dimensionality of examples).

    Formally, the VC dimension is the largest number of examples from the training dataset that the hypothesis space of the algorithm can “shatter.”

    The Vapnik-Chervonenkis dimension, VC(H), of hypothesis space H defined over instance space X is the size of the largest finite subset of X shattered by H.

    — Page 215, Machine Learning, 1997.

    In the case of a dataset, to shatter a set of points means that hypotheses in the space can separate the points from each other such that, whatever the labels of the examples happen to be, each group is labeled correctly.

    Whether a group of points can be shattered by an algorithm depends on the hypothesis space and the number of points.

    For example, a line (hypothesis space) can be used to shatter three points, but not four points.

    Any placement of three points on a 2d plane with class labels 0 or 1 can be “correctly” split by label with a line, e.g. shattered. But there exist placements of four points on a plane with binary class labels that cannot be correctly split by label with a line, e.g. cannot be shattered. Instead, another “algorithm” must be used, such as ovals.
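
    To make “shatter” concrete: a set S of n points is shattered by a hypothesis space H if, for every one of the 2^n possible labelings of S, some hypothesis in H classifies all n points correctly. A well-known consequence for linear separators is

        $$ \mathrm{VC}(\text{linear classifiers in } \mathbb{R}^d) = d + 1 $$

    so a line in the 2d plane (d = 2) has a VC dimension of 3, matching the three-point example above.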

    The figure below makes this clear.

    Example of a Line Hypothesis Shattering 3 Points and Ovals Shattering 4 Points
    Taken from Page 81, The Nature of Statistical Learning Theory, 1999.

    Therefore, the VC dimension of a machine learning algorithm is the largest number of data points in a dataset that a specific configuration of the algorithm (hyperparameters) or specific fit model can shatter.

    A classifier that predicts the same value in all cases will have a VC dimension of 0; it cannot shatter even a single point. A large VC dimension indicates that an algorithm is very flexible, although the flexibility may come at the cost of additional risk of overfitting.

    The VC dimension is used as part of the PAC learning framework.

    A key quantity in PAC learning is the Vapnik-Chervonenkis dimension, or VC dimension, which provides a measure of the complexity of a space of functions, and which allows the PAC framework to be extended to spaces containing an infinite number of functions.

    — Page 344, Pattern Recognition and Machine Learning, 2006.

    For more on PAC learning, refer to the seminal book on the topic titled:

    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Books
    Articles

    Summary

    In this post, you discovered a gentle introduction to computational learning theory for machine learning.

    Specifically, you learned:

    • Computational learning theory uses formal methods to study learning tasks and learning algorithms.
    • PAC learning provides a way to quantify the computational difficulty of a machine learning task.
    • VC Dimension provides a way to quantify the computational capacity of a machine learning algorithm.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.



    Why Do I Get Different Results Each Time in Machine Learning?

    Last Updated on August 27, 2020

    Are you getting different results for your machine learning algorithm?

    Perhaps your results differ from a tutorial and you want to understand why.

    Perhaps your model is making different predictions each time it is trained, even when it is trained on the same data set each time.

    This is to be expected and might even be a feature of the algorithm, not a bug.

    In this tutorial, you will discover why you can expect different results when using machine learning algorithms.

    After completing this tutorial, you will know:

    • Machine learning algorithms will train different models if the training dataset is changed.
    • Stochastic machine learning algorithms use randomness during learning, ensuring a different model is trained each run.
    • Differences in the development environment, such as software versions and CPU type, can cause rounding error differences in predictions and model evaluations.

    Let’s get started.

    Why Do I Get Different Results Each Time in Machine Learning?
    Photo by Bonnie Moreland, some rights reserved.

    Tutorial Overview

    This tutorial is divided into five parts; they are:

  • Help, I’m Getting Different Results!?
  • Differences Caused by Training Data
  • Differences Caused by Learning Algorithm
  • Differences Caused by Evaluation Procedure
  • Differences Caused by Platform

    1. Help, I’m Getting Different Results!?

    Don’t panic. Machine learning algorithms or models can give different results.

    It’s not your fault. In fact, it is often a feature, not a bug.

    We will clearly specify and explain the problem you are having.

    First, let’s get a handle on the basics.

    In applied machine learning, we run a machine learning “algorithm” on a dataset to get a machine learning “model.” The model can then be evaluated on data not used during training or used to make predictions on new data, also not seen during training.

    • Algorithm: Procedure run on data that results in a model (e.g. training or learning).
    • Model: Data structure and coefficients used to make predictions on data.

    For more on the difference between machine learning algorithms and models, see the tutorial:

    Supervised machine learning means we have examples (rows) with input and output variables (columns). We cannot write code to predict outputs given inputs because it is too hard, so we use machine learning algorithms to learn how to predict outputs from inputs given historical examples.

    This is called function approximation. We are learning or searching for a function that maps inputs to outputs for our specific prediction task in such a way that it has skill, meaning the performance of the mapping is better than random and ideally better than all other algorithms and algorithm configurations we have tried.

    • Supervised Learning: Automatically learn a mapping function from examples of inputs to examples of outputs.

    In this sense, a machine learning model is a program we intend to use for some project or application; it just so happens that the program was learned from examples (using an algorithm) rather than written explicitly with if-statements and such. It’s a type of automatic programming.

    • Machine Learning Model: A “program” automatically learned from historical data.

    Unlike the programming that we may be used to, the programs may not be entirely deterministic.

    The machine learning models may be different each time they are trained. In turn, the models may make different predictions, and when evaluated, may have a different level of error or accuracy.

    There are at least four cases where you will get different results; they are:

    • Different results because of differences in training data.
    • Different results because of stochastic learning algorithms.
    • Different results because of stochastic evaluation procedures.
    • Different results because of differences in platform.

    Let’s take a closer look at each in turn.

    Did I miss a possible cause of a difference in results?
    Let me know in the comments below.

    2. Differences Caused by Training Data

    You will get different results when you run the same algorithm on different data.

    This is referred to as the variance of the machine learning algorithm. You may have heard of it in the context of the bias-variance trade-off.

    The variance is a measure of how sensitive the algorithm is to the specific data used during training.

    • Variance: How sensitive the algorithm is to the specific data used during training.

    A more sensitive algorithm has a larger variance, which will result in more difference in the model, and in turn, the predictions made and evaluation of the model. Conversely, a less sensitive algorithm has a smaller variance and will result in less difference in the resulting model with different training data, and in turn, less difference in the resulting predictions and model evaluation.

    • High Variance: Algorithm is more sensitive to the specific data used during training.
    • Low Variance: Algorithm is less sensitive to the specific data used during training.

    For more on the variance and the bias-variance trade-off, see the tutorial:

    All useful machine learning algorithms will have some variance, and some of the most effective algorithms will have a high variance.

    Algorithms with a high variance often require more training data than those algorithms with less variance. This is intuitive if we consider the model approximating a mapping function from inputs and outputs and the law of large numbers.

    Nevertheless, when you train a machine learning algorithm on different training data, you will get a different model that has different behavior. This means different training data will give models that make different predictions and have a different estimate of performance (e.g. error or accuracy).

    The amount of difference in the results will be related to how different the training data is for each model, and the variance of the specific model and model configuration you have chosen.

    What Should I Do?

    You can often reduce the variance of the model by changing a hyperparameter of the algorithm.

    For example, the k in k-nearest neighbors controls the variance of the algorithm, where small values like k=1 result in high variance and large values like k=21 result in low variance.

    You can reduce the variance by changing the algorithm. For example, simpler algorithms like linear regression and logistic regression have a lower variance than other types of algorithms.

    You can also lower the variance with a high variance algorithm by increasing the size of the training dataset, meaning you may need to collect more data.

    3. Differences Caused by Learning Algorithm

    You can get different results when you run the same algorithm on the same data due to the nature of the learning algorithm.

    This is the most likely reason that you’re reading this tutorial.

    You run the same code on the same dataset and get a model that makes different predictions or has a different performance each time, and you think it’s a bug or something. Am I right?

    It’s not a bug, it’s a feature.

    Some machine learning algorithms are deterministic. Just like the programming that you’re used to. That means, when the algorithm is given the same dataset, it learns the same model every time. An example is a linear regression or logistic regression algorithm.

    Some algorithms are not deterministic; instead, they are stochastic. This means that their behavior incorporates elements of randomness.

    Stochastic does not mean random. Stochastic machine learning algorithms are not learning a random model; they are learning a model conditioned on the historical data you have provided. Rather, the specific small decisions made by the algorithm during the learning process can vary randomly.

    The impact is that each time the stochastic machine learning algorithm is run on the same data, it learns a slightly different model. In turn, the model may make slightly different predictions, and when evaluated using error or accuracy, may have a slightly different performance.

    For more on stochastic and what it means in machine learning, see the tutorial:

    Adding randomness to some of the decisions made by an algorithm can improve performance on hard problems. Learning a supervised learning mapping function with a limited sample of data from the domain is a very hard problem.

    Finding a good or best mapping function for a dataset is a type of search problem. We test different algorithms and test algorithm configurations that define the shape of the search space and give us a starting point in the search space. We then run the algorithms, which then navigate the search space to a single model.

    Adding randomness can help the search move beyond merely good solutions and find the really good and great solutions in the search space. It allows the model to escape local optima or deceptive local optima where the learning algorithm might otherwise get stuck, and helps it find better solutions, perhaps even a global optimum.

    For more on thinking about supervised learning as a search problem, see the tutorial:

    An example of an algorithm that uses randomness during learning is a neural network. It uses randomness in two ways:

    • Random initial weights (model coefficients).
    • Random shuffle of samples each epoch.

    Neural networks (deep learning) are a stochastic machine learning algorithm. The random initial weights allow the model to try learning from a different starting point in the search space each algorithm run and allow the learning algorithm to “break symmetry” during learning. The random shuffle of examples during training ensures that each gradient estimate and weight update is slightly different.

    For more on the stochastic nature of neural networks, see the tutorial:

    Another example is ensemble machine learning algorithms that are stochastic, such as bagging.

    Randomness is used in sampling the training dataset, which ensures that a different decision tree is prepared for each contributing member in the ensemble. In ensemble learning, this is called ensemble diversity and is an approach to simulating independent predictions from a single training dataset.

    For more on the stochastic nature of bagging ensembles, see the tutorial:

    What Should I Do?

    The randomness used by learning algorithms can be controlled.

    For example, you can set the seed used by the pseudorandom number generator to ensure that each time the algorithm is run, it gets the same randomness.

    For more on random number generators and fixing the seed, see the tutorial:
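
    A minimal sketch of fixing the seeds is below, assuming NumPy and the Python standard library are the sources of randomness in your pipeline; deep learning libraries expose their own seeds (e.g. tf.random.set_seed() in TensorFlow 2).

```python
# sketch: fix the pseudorandom number generator seeds for repeatability
import random
import numpy as np

random.seed(1)     # Python standard library generator
np.random.seed(1)  # NumPy generator (used by scikit-learn by default)
```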

    This can be a good approach for tutorials, but not a good approach in practice. It leads to questions like:

    • What is the best seed for the pseudorandom number generator?

    There is no best seed for a stochastic machine learning algorithm. You are fighting the nature of the algorithm, forcing stochastic learning to be deterministic.

    You could make a case that the final model is fit using a fixed seed to ensure the same model is created from the same data before being used in production prior to any pre-deployment system testing. Nevertheless, as soon as the training dataset changes, the model will change.

    A better approach is to embrace the stochastic nature of machine learning algorithms.

    Consider that there is not a single model for your dataset. Instead, there is a stochastic process (the algorithm pipeline) that can generate models for your problem.

    For more on this, see the tutorial:

    You can then summarize the performance of these models — of the algorithm pipeline — as a distribution with mean expected error or accuracy and a standard deviation.

    You can then ensure you achieve the average performance of the models by fitting multiple final models on your dataset and averaging their predictions when you need to make a prediction on new data.
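
    A sketch of this idea on synthetic data is below; the MLPClassifier stands in for any stochastic learning algorithm, and the generated dataset is a hypothetical stand-in for your own.

```python
# sketch: fit several final models and average their predictions
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# synthetic data stands in for a real training dataset and a new row
X, y = make_classification(n_samples=100, random_state=1)
X_new = X[:1]

# fit multiple final models; each run of the stochastic algorithm differs by seed
models = [MLPClassifier(max_iter=500, random_state=s).fit(X, y) for s in range(10)]
# average the predicted probabilities across the ensemble of final models
yhat = mean([m.predict_proba(X_new) for m in models], axis=0)
print(yhat)
```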

    For more on the ensemble approach to final models, see the tutorial:

    4. Differences Caused by Evaluation Procedure

    You can get different results when running the same algorithm with the same data due to the evaluation procedure.

    The two most common evaluation procedures are a train-test split and k-fold cross-validation.

    A train-test split involves randomly assigning rows either to the set used to train the model or to the set used to evaluate it, to meet a predefined train or test set size.

    For more on the train-test split, see the tutorial:

    The k-fold cross-validation procedure involves dividing a dataset into k non-overlapping partitions and using one fold as the test set and all other folds as the training set. A model is fit on the training set and evaluated on the holdout fold and this process is repeated k times, giving each fold an opportunity to be used as the holdout fold.

    For more on k-fold cross-validation, see the tutorial:

    Both of these model evaluation procedures are stochastic.

    Again, this does not mean that they are random; it means that small decisions made in the process involve randomness, specifically the choice of which rows are assigned to a given subset of the data.

    This use of randomness is a feature, not a bug.

    The use of randomness, in this case, allows the resampling to approximate an estimate of model performance that is independent of the specific data sample drawn from the domain. This approximation is biased because we only have a small sample of data to work with rather than the complete set of possible observations.

    Performance estimates provide an idea of the expected or average capability of the model when making predictions in the domain on data not seen during training, regardless (at least ideally) of the specific rows of data used to train or test the model.

    For more on the more general topic of statistical sampling, see the tutorial:

    As such, each evaluation of a deterministic machine learning algorithm, like a linear regression or a logistic regression, can give a different estimate of error or accuracy.
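
    The sketch below demonstrates this: a deterministic logistic regression evaluated with an unseeded, shuffled k-fold split will report a slightly different accuracy each time it is run.

```python
# sketch: the same deterministic model gives different scores each run
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=100, random_state=1)
# no random_state on the split: row assignment differs on every run
cv = KFold(n_splits=10, shuffle=True)
scores = cross_val_score(LogisticRegression(), X, y, cv=cv)
print('Mean accuracy: %.3f' % scores.mean())
```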

    What Should I Do?

    The solution in this case is much like the case for stochastic learning algorithms.

    The seed for the pseudorandom number generator can be fixed or the randomness of the procedure can be embraced.

    Unlike stochastic learning algorithms, both solutions are quite reasonable.

    If a large number of machine learning algorithms and algorithm configurations are being evaluated systematically on a predictive modeling task, it can be a good idea to fix the random seed of the evaluation procedure. Any value will do.

    The idea is that each candidate solution (each algorithm or configuration) will be evaluated in an identical manner. This ensures an apples-to-apples comparison. It also allows for the use of paired statistical hypothesis tests later, if needed, to check if differences between algorithms are statistically significant.

    Embracing the randomness can also be appropriate. This involves repeating the evaluation procedure many times and reporting a summary of the distribution of performance scores, such as the mean and standard deviation.

    Perhaps the least biased approach to repeated evaluation would be to use repeated k-fold cross-validation, such as three repeats with 10 folds (3×10), which is common, or five repeats with two folds (5×2), which is commonly used when comparing algorithms with statistical hypothesis tests.
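
    A sketch of a 3×10 repeated k-fold evaluation with a fixed seed, so that every candidate algorithm sees identical splits, is below.

```python
# sketch: repeated k-fold (3 repeats of 10 folds) with a fixed evaluation seed
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_classification(n_samples=100, random_state=1)
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)  # any fixed seed will do
scores = cross_val_score(LogisticRegression(), X, y, cv=cv)
print('Mean accuracy: %.3f (%.3f)' % (scores.mean(), scores.std()))
```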

    For a gentle introduction to using statistical hypothesis tests for comparing algorithms, see the tutorial:

    For a tutorial on comparing mean algorithm performance with a hypothesis test, see the tutorial:

    5. Differences Caused by Platform

    You can get different results when running the same algorithm on the same data on different computers.

    This can happen even if you fix the random number seed to address the stochastic nature of the learning algorithm and evaluation procedure.

    The cause in this case is the platform or development environment used to run the example, and the results are often different in minor ways, but not always.

    This includes:

    • Differences in the system architecture, e.g. CPU or GPU.
    • Differences in the operating system, e.g. MacOS or Linux.
    • Differences in the underlying math libraries, e.g. LAPACK or BLAS.
    • Differences in the Python version, e.g. 3.6 or 3.7.
    • Differences in the library version, e.g. scikit-learn 0.22 or 0.23.

    Machine learning algorithms are a type of numerical computation.

    This means that they typically involve a lot of math with floating point values. Differences in aspects such as the architecture and operating system can result in differences in rounding errors, which can compound over the number of calculations performed to give very different results.

    Additionally, differences in library versions can mean bug fixes and changes in functionality that can also result in different results.

    This also explains why you will get different results for the same algorithm on the same machine when it is implemented in different languages, such as R and Python. Small differences in the implementation and/or differences in the underlying math libraries used will cause differences in the resulting model and the predictions made by that model.

    What Should I Do?

    This does not mean that the platform itself can be treated as a hyperparameter and tuned for a predictive modeling problem.

    Instead, it means that the platform is an important factor when evaluating machine learning algorithms and should be fixed or fully described to ensure full reproducibility when moving from development to production, or in reporting performance in academic studies.

    One approach might be to use virtualization, such as Docker or a virtual machine instance, to ensure the environment is kept constant, if full reproducibility is critical to a project.

    Honestly, the effect is often very small in practice (at least in my limited experience) as long as major software versions are a good or close enough match.

    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Related Tutorials

    Summary

    In this tutorial, you discovered why you can expect different results when using machine learning algorithms.

    Specifically, you learned:

    • Machine learning algorithms will train different models if the training dataset is changed.
    • Stochastic machine learning algorithms use randomness during learning, ensuring a different model is trained each run.
    • Differences in the development environment, such as software versions and CPU type, can cause rounding error differences in predictions and model evaluations.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.

