
TPOT for Automated Machine Learning in Python

Automated Machine Learning (AutoML) refers to techniques for automatically discovering well-performing models for predictive modeling tasks with very little user involvement.

TPOT is an open-source library for performing AutoML in Python. It makes use of the popular Scikit-Learn machine learning library for data transforms and machine learning algorithms and uses a Genetic Programming stochastic global search procedure to efficiently discover a top-performing model pipeline for a given dataset.

In this tutorial, you will discover how to use TPOT for AutoML with Scikit-Learn machine learning algorithms in Python.

After completing this tutorial, you will know:

  • TPOT is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
  • How to use TPOT to automatically discover top-performing models for classification tasks.
  • How to use TPOT to automatically discover top-performing models for regression tasks.

Let’s get started.

TPOT for Automated Machine Learning in Python
Photo by Gwen, some rights reserved.

Tutorial Overview

This tutorial is divided into four parts; they are:

  • TPOT for Automated Machine Learning
  • Install and Use TPOT
  • TPOT for Classification
  • TPOT for Regression

TPOT for Automated Machine Learning

    Tree-based Pipeline Optimization Tool, or TPOT for short, is a Python library for automated machine learning.

    TPOT uses a tree-based structure to represent a model pipeline for a predictive modeling problem, including data preparation and modeling algorithms and model hyperparameters.

    … an evolutionary algorithm called the Tree-based Pipeline Optimization Tool (TPOT) that automatically designs and optimizes machine learning pipelines.

    — Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

An optimization procedure is then performed to find a tree structure that performs best for a given dataset. Specifically, a genetic programming algorithm is used, designed to perform a stochastic global optimization on programs represented as trees.

    TPOT uses a version of genetic programming to automatically design and optimize a series of data transformations and machine learning models that attempt to maximize the classification accuracy for a given supervised learning data set.

    — Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

The figure below, taken from the TPOT paper, shows the elements involved in the pipeline search, including data cleaning, feature selection, feature processing, feature construction, model selection, and hyperparameter optimization.

Overview of the TPOT Pipeline Search
    Taken from: Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

    Now that we are familiar with what TPOT is, let’s look at how we can install and use TPOT to find an effective model pipeline.

    Install and Use TPOT

    The first step is to install the TPOT library, which can be achieved using pip, as follows:
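
For example, on the command line (prefix with sudo or use a virtual environment as your setup requires):

pip install tpot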


    Once installed, we can import the library and print the version number to confirm it was installed successfully:
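
A minimal sketch of such a check (the tpot module exposes a __version__ attribute):

# check the tpot version
import tpot
# print the installed version number
print('tpot: %s' % tpot.__version__)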


    Running the example prints the version number.

    Your version number should be the same or higher.


    Using TPOT is straightforward.

    It involves creating an instance of the TPOTRegressor or TPOTClassifier class, configuring it for the search, and then exporting the model pipeline that was found to achieve the best performance on your dataset.

    Configuring the class involves two main elements.

    The first is how models will be evaluated, e.g. the cross-validation scheme and performance metric. I recommend explicitly specifying a cross-validation class with your chosen configuration and the performance metric to use.

For example, a RepeatedKFold with the ‘neg_mean_absolute_error‘ metric for regression:
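
A sketch of this configuration; the argument values are illustrative choices:

# example of configuring TPOT for regression (illustrative values)
from sklearn.model_selection import RepeatedKFold
from tpot import TPOTRegressor
# define the model evaluation procedure
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the search with the chosen metric and cross-validation scheme
model = TPOTRegressor(scoring='neg_mean_absolute_error', cv=cv)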


Or a RepeatedStratifiedKFold with the ‘accuracy‘ metric for classification:
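
Again a sketch, with illustrative values:

# example of configuring TPOT for classification (illustrative values)
from sklearn.model_selection import RepeatedStratifiedKFold
from tpot import TPOTClassifier
# define the model evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the search with the chosen metric and cross-validation scheme
model = TPOTClassifier(scoring='accuracy', cv=cv)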


    The other element is the nature of the stochastic global search procedure.

Because TPOT uses an evolutionary algorithm, this involves setting configuration options such as the size of the population, the number of generations to run, and potentially crossover and mutation rates. The former importantly control the extent of the search; the latter can be left on their default values if evolutionary search is new to you.

    For example, a modest population size of 100 and 5 or 10 generations is a good starting point.
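
For example, a sketch of such a configuration (the remaining arguments are illustrative):

# define the search with modest evolutionary-search settings
model = TPOTClassifier(generations=5, population_size=100, cv=cv, scoring='accuracy', n_jobs=-1)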


    At the end of a search, a Pipeline is found that performs the best.

    This Pipeline can be exported as code into a Python file that you can later copy-and-paste into your own project.
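
For example, assuming the search has already been run via model.fit(X, y), the best pipeline can be exported (the filename is an example):

# run the search, then export the best pipeline found as Python code
model.fit(X, y)
model.export('tpot_best_model.py')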


    Now that we are familiar with how to use TPOT, let’s look at some worked examples with real data.

    TPOT for Classification

    In this section, we will use TPOT to discover a model for the sonar dataset.

The sonar dataset is a standard machine learning dataset comprised of 208 rows of data with 60 numerical input variables and a target variable with two class values, i.e. binary classification.

    Using a test harness of repeated stratified 10-fold cross-validation with three repeats, a naive model can achieve an accuracy of about 53 percent. A top-performing model can achieve accuracy on this same test harness of about 88 percent. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting whether sonar returns indicate a rock or simulated mine.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.
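
A sketch of loading the data, assuming the copy hosted in the jbrownlee/Datasets GitHub repository is still available at this URL:

# load the sonar dataset and summarize its shape
from pandas import read_csv
# location of the dataset (assumes this URL remains available)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)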


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 208 rows of data with 60 input variables.
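
(208, 60) (208,)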


    Next, let’s use TPOT to find a good model for the sonar dataset.

    First, we can define the method for evaluating models. We will use a good practice of repeated stratified k-fold cross-validation with three repeats and 10 folds.
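
A sketch of the evaluation procedure:

# define the model evaluation procedure
from sklearn.model_selection import RepeatedStratifiedKFold
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)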


    We will use a population size of 50 for five generations for the search and use all cores on the system by setting “n_jobs” to -1.
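
For example, with the verbosity and random seed values as illustrative choices:

# define the search (illustrative settings)
model = TPOTClassifier(generations=5, population_size=50, cv=cv, scoring='accuracy', verbosity=2, random_state=1, n_jobs=-1)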


    Finally, we can start the search and ensure that the best-performing model is saved at the end of the run.
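
For example:

# perform the search, then export the best pipeline found
model.fit(X, y)
model.export('tpot_sonar_best_model.py')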


    Tying this together, the complete example is listed below.
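
A complete sketch is below; the dataset URL and search settings are the same illustrative choices as above, and the target is integer encoded to keep the data numeric for TPOT:

# example of using TPOT on the sonar classification dataset (a sketch)
from pandas import read_csv
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.preprocessing import LabelEncoder
from tpot import TPOTClassifier
# load the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# ensure the target is integer encoded
y = LabelEncoder().fit_transform(y)
# define the model evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the search
model = TPOTClassifier(generations=5, population_size=50, cv=cv, scoring='accuracy', verbosity=2, random_state=1, n_jobs=-1)
# perform the search
model.fit(X, y)
# export the best model
model.export('tpot_sonar_best_model.py')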


    Running the example may take a few minutes, and you will see a progress bar on the command line.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    The accuracy of top-performing models will be reported along the way.


In this case, we can see that the top-performing pipeline achieved a mean accuracy of about 86.6 percent. This is a skillful model, and close to a top-performing model on this dataset.

    The top-performing pipeline is then saved to a file named “tpot_sonar_best_model.py“.

    Opening this file, you can see that there is some generic code for loading a dataset and fitting the pipeline. An example is listed below.
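
The exact contents vary from run to run, but the exported file generally follows TPOT's template. A sketch consistent with the pipeline reported below (a Gaussian Naive Bayes model stacked into a Gradient Boosting classifier; the placeholders and hyperparameters are illustrative) looks something like this:

import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from tpot.builtins import StackingEstimator
# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = train_test_split(features, tpot_data['target'], random_state=1)
# the models and hyperparameters found by the search appear here
exported_pipeline = make_pipeline(StackingEstimator(estimator=GaussianNB()), GradientBoostingClassifier())
exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)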


    Note: as-is, this code does not execute, by design. It is a template that you can copy-and-paste into your project.

    In this case, we can see that the best-performing model is a pipeline comprised of a Naive Bayes model and a Gradient Boosting model.

    We can adapt this code to fit a final model on all available data and make a prediction for new data.

    The complete example is listed below.
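
A sketch of this adaptation; the pipeline mirrors the one reported above with placeholder hyperparameters, and the prediction row is a placeholder you would replace with a real new observation:

# fit the best model on all data and make a prediction for new data (a sketch)
from pandas import read_csv
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder
from tpot.builtins import StackingEstimator
# load the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
y = LabelEncoder().fit_transform(y)
# the pipeline reported by the search (hyperparameters are placeholders)
exported_pipeline = make_pipeline(StackingEstimator(estimator=GaussianNB()), GradientBoostingClassifier())
# fit the pipeline on all available data
exported_pipeline.fit(X, y)
# make a prediction for one row of new data (placeholder: substitute a new observation)
row = X[0, :]
yhat = exported_pipeline.predict([row])
print('Predicted: %d' % yhat[0])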


    Running the example fits the best-performing model on the dataset and makes a prediction for a single row of new data.


    TPOT for Regression

    In this section, we will use TPOT to discover a model for the auto insurance dataset.

    The auto insurance dataset is a standard machine learning dataset comprised of 63 rows of data with one numerical input variable and a numerical target variable.

Using a test harness of repeated 10-fold cross-validation with three repeats, a naive model can achieve a mean absolute error (MAE) of about 66. A top-performing model can achieve a MAE on this same test harness of about 28. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting the total amount in claims (thousands of Swedish Kronor) given the number of claims for different geographical regions.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.
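
A sketch of loading the data, assuming this URL remains available:

# load the auto insurance dataset and summarize its shape
from pandas import read_csv
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)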


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 63 rows of data with one input variable.
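
(63, 1) (63,)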


    Next, we can use TPOT to find a good model for the auto insurance dataset.

    First, we can define the method for evaluating models. We will use a good practice of repeated k-fold cross-validation with three repeats and 10 folds.
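
A sketch of the evaluation procedure:

# define the model evaluation procedure
from sklearn.model_selection import RepeatedKFold
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)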


    We will use a population size of 50 for 5 generations for the search and use all cores on the system by setting “n_jobs” to -1.
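
For example, with illustrative verbosity and random seed values:

# define the search (illustrative settings)
model = TPOTRegressor(generations=5, population_size=50, cv=cv, scoring='neg_mean_absolute_error', verbosity=2, random_state=1, n_jobs=-1)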


    Finally, we can start the search and ensure that the best-performing model is saved at the end of the run.
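
For example:

# perform the search, then export the best pipeline found
model.fit(X, y)
model.export('tpot_insurance_best_model.py')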


    Tying this together, the complete example is listed below.
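
A complete sketch, using the same illustrative URL and settings as above:

# example of using TPOT on the auto insurance regression dataset (a sketch)
from pandas import read_csv
from sklearn.model_selection import RepeatedKFold
from tpot import TPOTRegressor
# load the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# define the model evaluation procedure
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the search
model = TPOTRegressor(generations=5, population_size=50, cv=cv, scoring='neg_mean_absolute_error', verbosity=2, random_state=1, n_jobs=-1)
# perform the search
model.fit(X, y)
# export the best model
model.export('tpot_insurance_best_model.py')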


    Running the example may take a few minutes, and you will see a progress bar on the command line.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    The MAE of top-performing models will be reported along the way.


In this case, we can see that the top-performing pipeline achieved a mean MAE of about 29.14. This is a skillful model, and close to a top-performing model on this dataset.

    The top-performing pipeline is then saved to a file named “tpot_insurance_best_model.py“.

    Opening this file, you can see that there is some generic code for loading a dataset and fitting the pipeline. An example is listed below.
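
As before, the exact contents depend on the run, but a sketch consistent with the pipeline reported below (a linear support vector machine; the placeholders and hyperparameters are illustrative) looks something like this:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVR
# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = train_test_split(features, tpot_data['target'], random_state=1)
# the hyperparameters found by the search appear here
exported_pipeline = LinearSVR()
exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)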


    Note: as-is, this code does not execute, by design. It is a template that you can copy-paste into your project.

    In this case, we can see that the best-performing model is a pipeline comprised of a linear support vector machine model.

    We can adapt this code to fit a final model on all available data and make a prediction for new data.

    The complete example is listed below.
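
A sketch of this adaptation; the model hyperparameters are placeholders and the input row is a hypothetical number of claims:

# fit the final model on all data and make a prediction for new data (a sketch)
from pandas import read_csv
from sklearn.svm import LinearSVR
# load the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# the model reported by the search (hyperparameters are placeholders)
exported_pipeline = LinearSVR()
# fit the model on all available data
exported_pipeline.fit(X, y)
# make a prediction for a new, hypothetical number of claims
row = [13]
yhat = exported_pipeline.predict([row])
print('Predicted: %.3f' % yhat[0])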


    Running the example fits the best-performing model on the dataset and makes a prediction for a single row of new data.


    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Summary

    In this tutorial, you discovered how to use TPOT for AutoML with Scikit-Learn machine learning algorithms in Python.

    Specifically, you learned:

    • TPOT is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
    • How to use TPOT to automatically discover top-performing models for classification tasks.
    • How to use TPOT to automatically discover top-performing models for regression tasks.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.

    Discover Fast Machine Learning in Python!

    Master Machine Learning With Python
    Develop Your Own Models in Minutes

    …with just a few lines of scikit-learn code

    Learn how in my new Ebook:
    Machine Learning Mastery With Python

    Covers self-study tutorials and end-to-end projects like:
    Loading data, visualization, modeling, tuning, and much more…

    Finally Bring Machine Learning To

    Your Own Projects

    Skip the Academics. Just Results.

    See What’s Inside



    Hypothesis Test for Comparing Machine Learning Algorithms

    Machine learning models are chosen based on their mean performance, often calculated using k-fold cross-validation.

    The algorithm with the best mean performance is expected to be better than those algorithms with worse mean performance. But what if the difference in the mean performance is caused by a statistical fluke?

    The solution is to use a statistical hypothesis test to evaluate whether the difference in the mean performance between any two algorithms is real or not.

    In this tutorial, you will discover how to use statistical hypothesis tests for comparing machine learning algorithms.

    After completing this tutorial, you will know:

    • Performing model selection based on the mean model performance can be misleading.
    • The five repeats of two-fold cross-validation with a modified Student’s t-Test is a good practice for comparing machine learning algorithms.
• How to use the MLxtend machine learning library to compare algorithms using a statistical hypothesis test.

    Kick-start your project with my new book Statistics for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

    Let’s get started.

Hypothesis Test for Comparing Machine Learning Algorithms
    Photo by Frank Shepherd, some rights reserved.

    Tutorial Overview

    This tutorial is divided into three parts; they are:

  • Hypothesis Test for Comparing Algorithms
  • 5×2 Procedure With MLxtend
  • Comparing Classifier Algorithms

Hypothesis Test for Comparing Algorithms

    Model selection involves evaluating a suite of different machine learning algorithms or modeling pipelines and comparing them based on their performance.

    The model or modeling pipeline that achieves the best performance according to your performance metric is then selected as your final model that you can then use to start making predictions on new data.

    This applies to regression and classification predictive modeling tasks with classical machine learning algorithms and deep learning. It’s always the same process.

    The problem is, how do you know the difference between two models is real and not just a statistical fluke?

    This problem can be addressed using a statistical hypothesis test.

    One approach is to evaluate each model on the same k-fold cross-validation split of the data (e.g. using the same random number seed to split the data in each case) and calculate a score for each split. This would give a sample of 10 scores for 10-fold cross-validation. The scores can then be compared using a paired statistical hypothesis test because the same treatment (rows of data) was used for each algorithm to come up with each score. The Paired Student’s t-Test could be used.

A problem with using the Paired Student’s t-Test, in this case, is that each evaluation of the model is not independent. This is because the same rows of data are used to train the model multiple times — actually, each time except when a row of data is used in the hold-out test fold. This lack of independence in the evaluation means that the Paired Student’s t-Test is optimistically biased.

This statistical test can be adjusted to take the lack of independence into account. Additionally, the number of folds and repeats of the procedure can be configured to achieve a good sampling of model performance that generalizes well to a wide range of problems and algorithms. Specifically, two-fold cross-validation with five repeats, the so-called 5×2-fold cross-validation.

    This approach was proposed by Thomas Dietterich in his 1998 paper titled “Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms.”

    For more on this topic, see the tutorial:

    Thankfully, we don’t need to implement this procedure ourselves.

    5×2 Procedure With MLxtend

    The MLxtend library by Sebastian Raschka provides an implementation via the paired_ttest_5x2cv() function.

    First, you must install the mlxtend library, for example:
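
pip install mlxtend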


    To use the evaluation, you must first load your dataset, then define the two models that you wish to compare.
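
A sketch of this setup; the synthetic dataset settings are illustrative:

# define a dataset and the two models to compare
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
# define the dataset (illustrative settings)
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# define the two models to compare
model1 = LogisticRegression()
model2 = LinearDiscriminantAnalysis()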


    You can then call the paired_ttest_5x2cv() function and pass in your data and models and it will report the t-statistic value and the p-value as to whether the difference in the performance of the two algorithms is significant or not.
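
For example:

# check if the difference between the two algorithms is real
from mlxtend.evaluate import paired_ttest_5x2cv
t, p = paired_ttest_5x2cv(estimator1=model1, estimator2=model2, X=X, y=y)
# summarize the result
print('P-value: %.3f, t-Statistic: %.3f' % (p, t))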


    The p-value must be interpreted using an alpha value, which is the significance level that you are willing to accept.

If the p-value is less than or equal to the chosen alpha, we reject the null hypothesis that the models have the same mean performance, which means the difference is probably real. If the p-value is greater than alpha, we fail to reject the null hypothesis that the models have the same mean performance and any observed difference in the mean accuracies is probably a statistical fluke.

    The smaller the alpha value, the better, and a common value is 5 percent (0.05).


    Now that we are familiar with the way to use a hypothesis test to compare algorithms, let’s look at some examples.

    Comparing Classifier Algorithms

    In this section, let’s compare the performance of two machine learning algorithms on a binary classification task, then check if the observed difference is statistically significant or not.

    First, we can use the make_classification() function to create a synthetic dataset with 1,000 samples and 20 input variables.

    The example below creates the dataset and summarizes its shape.
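
A sketch of creating the dataset; the settings are illustrative:

# create the synthetic classification dataset and summarize its shape
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
print(X.shape, y.shape)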


    Running the example creates the dataset and summarizes the number of rows and columns, confirming our expectations.
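
(1000, 20) (1000,)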

    We can use this data as the basis for comparing two algorithms.


    We will compare the performance of two linear algorithms on this dataset. Specifically, a logistic regression algorithm and a linear discriminant analysis (LDA) algorithm.

The procedure I like is to use repeated stratified k-fold cross-validation with 10 folds and three repeats. We will use this procedure to evaluate each algorithm and report the mean classification accuracy.

    The complete example is listed below.
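
A complete sketch of this comparison; the dataset and evaluation settings are the illustrative choices described above:

# compare logistic regression and lda for binary classification (a sketch)
from numpy import mean, std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from matplotlib import pyplot
# define the dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# define the evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate each model
scores1 = cross_val_score(LogisticRegression(), X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('LogisticRegression Mean Accuracy: %.3f (%.3f)' % (mean(scores1), std(scores1)))
scores2 = cross_val_score(LinearDiscriminantAnalysis(), X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('LinearDiscriminantAnalysis Mean Accuracy: %.3f (%.3f)' % (mean(scores2), std(scores2)))
# plot the distributions of scores
pyplot.boxplot([scores1, scores2], labels=['LR', 'LDA'], showmeans=True)
pyplot.show()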


    Running the example first reports the mean classification accuracy for each algorithm.

    Your specific results may differ given the stochastic nature of the learning algorithms and evaluation procedure. Try running the example a few times.

    In this case, the results suggest that LDA has better performance if we just look at the mean scores: 89.2 percent for logistic regression and 89.3 percent for LDA.


    A box and whisker plot is also created summarizing the distribution of accuracy scores.

This plot would support my decision to choose LDA over LR.

Box and Whisker Plot of Classification Accuracy Scores for Two Algorithms

    Now we can use a hypothesis test to see if the observed results are statistically significant.

    First, we will use the 5×2 procedure to evaluate the algorithms and calculate a p-value and test statistic value.
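
For example:

# evaluate with the 5x2 procedure and calculate the test statistic and p-value
from mlxtend.evaluate import paired_ttest_5x2cv
t, p = paired_ttest_5x2cv(estimator1=model1, estimator2=model2, X=X, y=y)
print('P-value: %.3f, t-Statistic: %.3f' % (p, t))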


    We can then interpret the p-value using an alpha of 5 percent.
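
For example:

# interpret the result using an alpha of 5 percent
if p <= 0.05:
    print('Difference between mean performance is probably real')
else:
    print('Algorithms probably have the same performance')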


    Tying this together, the complete example is listed below.
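
A complete sketch, using the same illustrative dataset settings:

# use a 5x2 hypothesis test to compare two algorithms (a sketch)
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from mlxtend.evaluate import paired_ttest_5x2cv
# define the dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# define the two models
model1 = LogisticRegression()
model2 = LinearDiscriminantAnalysis()
# calculate the test statistic and p-value
t, p = paired_ttest_5x2cv(estimator1=model1, estimator2=model2, X=X, y=y)
print('P-value: %.3f, t-Statistic: %.3f' % (p, t))
# interpret the result
if p <= 0.05:
    print('Difference between mean performance is probably real')
else:
    print('Algorithms probably have the same performance')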


Running the example, we first evaluate the algorithms as before, then report the result of the statistical hypothesis test.

    Your specific results may differ given the stochastic nature of the learning algorithms and evaluation procedure. Try running the example a few times.

    In this case, we can see that the p-value is about 0.3, which is much larger than 0.05. This leads us to fail to reject the null hypothesis, suggesting that any observed difference between the algorithms is probably not real.

    We could just as easily choose logistic regression or LDA and both would perform about the same on average.

    This highlights that performing model selection based only on the mean performance may not be sufficient.


    Recall that we are reporting performance using a different procedure (3×10 CV) than the procedure used to estimate the performance in the statistical test (5×2 CV). Perhaps results would be different if we looked at scores using five repeats of two-fold cross-validation?

    The example below is updated to report classification accuracy for each algorithm using 5×2 CV.
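
A sketch of the updated example; note that five repeats of two-fold cross-validation corresponds to n_splits=2 and n_repeats=5:

# report accuracy using 5x2 cv and run the significance test (a sketch)
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from mlxtend.evaluate import paired_ttest_5x2cv
# define the dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# evaluate each model using five repeats of two-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=2, n_repeats=5, random_state=1)
model1 = LogisticRegression()
scores1 = cross_val_score(model1, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('LogisticRegression Mean Accuracy: %.3f' % mean(scores1))
model2 = LinearDiscriminantAnalysis()
scores2 = cross_val_score(model2, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('LinearDiscriminantAnalysis Mean Accuracy: %.3f' % mean(scores2))
# check if the difference is statistically significant
t, p = paired_ttest_5x2cv(estimator1=model1, estimator2=model2, X=X, y=y)
print('P-value: %.3f, t-Statistic: %.3f' % (p, t))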


    Running the example reports the mean accuracy for both algorithms and the results of the statistical test.

    Your specific results may differ given the stochastic nature of the learning algorithms and evaluation procedure. Try running the example a few times.

In this case, we can see that the difference in the mean performance for the two algorithms is even larger, 89.4 percent vs. 89.0 percent, this time in favor of logistic regression rather than LDA as we saw with 3×10 CV.


    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Tutorials
    Papers
    APIs

    Summary

    In this tutorial, you discovered how to use statistical hypothesis tests for comparing machine learning algorithms.

    Specifically, you learned:

    • Performing model selection based on the mean model performance can be misleading.
    • The five repeats of two-fold cross-validation with a modified Student’s t-Test is a good practice for comparing machine learning algorithms.
• How to use the MLxtend machine learning library to compare algorithms using a statistical hypothesis test.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.

    Get a Handle on Statistics for Machine Learning!

    Statistical Methods for Machine Learning
    Develop a working understanding of statistics

    …by writing lines of code in python

    Discover how in my new Ebook:
    Statistical Methods for Machine Learning

    It provides self-study tutorials on topics like:
    Hypothesis Tests, Correlation, Nonparametric Stats, Resampling, and much more…

    Discover how to Transform Data into Knowledge

    Skip the Academics. Just Results.

    See What’s Inside



    Time Series Forecasting With Prophet in Python

    Time series forecasting can be challenging as there are many different methods you could use and many different hyperparameters for each method.

    The Prophet library is an open-source library designed for making forecasts for univariate time series datasets. It is easy to use and designed to automatically find a good set of hyperparameters for the model in an effort to make skillful forecasts for data with trends and seasonal structure by default.

    In this tutorial, you will discover how to use the Facebook Prophet library for time series forecasting.

    After completing this tutorial, you will know:

    • Prophet is an open-source library developed by Facebook and designed for automatic forecasting of univariate time series data.
    • How to fit Prophet models and use them to make in-sample and out-of-sample forecasts.
    • How to evaluate a Prophet model on a hold-out dataset.

    Let’s get started.

Time Series Forecasting With Prophet in Python
    Photo by Rinaldo Wurglitsch, some rights reserved.

    Tutorial Overview

    This tutorial is divided into three parts; they are:

  • Prophet Forecasting Library
  • Car Sales Dataset
  • Load and Summarize Dataset
  • Load and Plot Dataset
  • Forecast Car Sales With Prophet
  • Fit Prophet Model
  • Make an In-Sample Forecast
  • Make an Out-of-Sample Forecast
  • Manually Evaluate Forecast Model

Prophet Forecasting Library

    Prophet, or “Facebook Prophet,” is an open-source library for univariate (one variable) time series forecasting developed by Facebook.

    Prophet implements what they refer to as an additive time series forecasting model, and the implementation supports trends, seasonality, and holidays.

    Implements a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects

    — Package ‘prophet’, 2019.

    It is designed to be easy and completely automatic, e.g. point it at a time series and get a forecast. As such, it is intended for internal company use, such as forecasting sales, capacity, etc.

    For a great overview of Prophet and its capabilities, see the post:

The library provides interfaces for both R and Python. We will focus on the Python interface in this tutorial.

    The first step is to install the Prophet library using Pip, as follows:
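
At the time of writing, the library was published as fbprophet (newer releases use the package name prophet instead), so for example:

pip install fbprophet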


    Next, we can confirm that the library was installed correctly.

    To do this, we can import the library and print the version number in Python. The complete example is listed below.
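
A minimal sketch, assuming the fbprophet package name:

# check the prophet version
import fbprophet
# print the installed version number
print('Prophet %s' % fbprophet.__version__)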


    Running the example prints the installed version of Prophet.

    You should have the same version or higher.


    Now that we have Prophet installed, let’s select a dataset we can use to explore using the library.

    Car Sales Dataset

    We will use the monthly car sales dataset.

    It is a standard univariate time series dataset that contains both a trend and seasonality. The dataset has 108 months of data and a naive persistence forecast can achieve a mean absolute error of about 3,235 sales, providing a lower error limit.

    No need to download the dataset as we will download it automatically as part of each example.

    Load and Summarize Dataset

    First, let’s load and summarize the dataset.

    Prophet requires data to be in Pandas DataFrames. Therefore, we will load and summarize the data using Pandas.

    We can load the data directly from the URL by calling the read_csv() Pandas function, then summarize the shape (number of rows and columns) of the data and view the first few rows of data.

    The complete example is listed below.
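
A sketch of loading and summarizing the data, assuming this URL remains available:

# load and summarize the monthly car sales dataset
from pandas import read_csv
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
df = read_csv(url)
# summarize the shape
print(df.shape)
# show the first few rows
print(df.head())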


    Running the example first reports the number of rows and columns, then lists the first five rows of data.

    We can see that as we expected, there are 108 months worth of data and two columns. The first column is the date and the second is the number of sales.

    Note that the first column in the output is a row index and is not a part of the dataset, just a helpful tool that Pandas uses to order rows.


    Load and Plot Dataset

    A time-series dataset does not make sense to us until we plot it.

    Plotting a time series helps us actually see if there is a trend, a seasonal cycle, outliers, and more. It gives us a feel for the data.

    We can plot the data easily in Pandas by calling the plot() function on the DataFrame.

    The complete example is listed below.
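
A sketch of loading and plotting the data:

# load and plot the monthly car sales dataset
from pandas import read_csv
from matplotlib import pyplot
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
df = read_csv(url)
# plot the time series
df.plot()
pyplot.show()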


    Running the example creates a plot of the time series.

    We can clearly see the trend in sales over time and a monthly seasonal pattern to the sales. These are patterns we expect the forecast model to take into account.

Line Plot of Car Sales Dataset

    Now that we are familiar with the dataset, let’s explore how we can use the Prophet library to make forecasts.

    Forecast Car Sales With Prophet

In this section, we will explore using Prophet to forecast the car sales dataset.

Let’s start by fitting a model on the dataset.

    Fit Prophet Model

    To use Prophet for forecasting, first, a Prophet() object is defined and configured, then it is fit on the dataset by calling the fit() function and passing the data.

    The Prophet() object takes arguments to configure the type of model you want, such as the type of growth, the type of seasonality, and more. By default, the model will work hard to figure out almost everything automatically.

    The fit() function takes a DataFrame of time series data. The DataFrame must have a specific format. The first column must have the name ‘ds‘ and contain the date-times. The second column must have the name ‘y‘ and contain the observations.

This means we must change the column names in the dataset. It also requires that the first column be converted to date-time objects, if they are not already (e.g. this can be done as part of loading the dataset with the right arguments to read_csv).

    For example, we can modify our loaded car sales dataset to have this expected structure, as follows:
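
For example:

# prepare the expected column names
df.columns = ['ds', 'y']
# convert the first column to date-time objects
df['ds'] = to_datetime(df['ds'])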


    The complete example of fitting a Prophet model on the car sales dataset is listed below.
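
A sketch of the complete example, assuming the fbprophet package name and dataset URL used above:

# fit a prophet model on the car sales dataset (a sketch)
from pandas import read_csv, to_datetime
from fbprophet import Prophet
# load the data
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
df = read_csv(url)
# prepare the expected column names and types
df.columns = ['ds', 'y']
df['ds'] = to_datetime(df['ds'])
# define and fit the model
model = Prophet()
model.fit(df)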


    Running the example loads the dataset, prepares the DataFrame in the expected format, and fits a Prophet model.

    By default, the library provides a lot of verbose output during the fit process. I think it’s a bad idea in general as it trains developers to ignore output.

    Nevertheless, the output summarizes what happened during the model fitting process, specifically the optimization processes that ran.


    I will not reproduce this output in subsequent sections when we fit the model.

    Next, let’s make a forecast.

    Make an In-Sample Forecast

    It can be useful to make a forecast on historical data.

    That is, we can make a forecast on data used as input to train the model. Ideally, the model has seen the data before and would make a perfect prediction.

    Nevertheless, this is not the case as the model tries to generalize across all cases in the data.

    This is called making an in-sample (in training set sample) forecast and reviewing the results can give insight into how good the model is. That is, how well it learned the training data.

    A forecast is made by calling the predict() function and passing a DataFrame that contains one column named ‘ds‘ and rows with date-times for all the intervals to be predicted.

    There are many ways to create this “forecast” DataFrame. In this case, we will loop over one year of dates, e.g. the last 12 months in the dataset, and create a string for each month. We will then convert the list of dates into a DataFrame and convert the string values into date-time objects.
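
A sketch of building that DataFrame, assuming the dataset ends in December 1968:

# define the period for the in-sample forecast: the last 12 months (1968)
from pandas import DataFrame, to_datetime
future = list()
for i in range(1, 13):
    date = '1968-%02d' % i
    future.append([date])
future = DataFrame(future)
future.columns = ['ds']
future['ds'] = to_datetime(future['ds'])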


    This DataFrame can then be provided to the predict() function to calculate a forecast.

    The result of the predict() function is a DataFrame that contains many columns. Perhaps the most important columns are the forecast date time (‘ds‘), the forecasted value (‘yhat‘), and the lower and upper bounds on the predicted value (‘yhat_lower‘ and ‘yhat_upper‘) that provide uncertainty of the forecast.

    For example, we can print the first few predictions as follows:
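
# make an in-sample forecast and summarize the first few predictions
forecast = model.predict(future)
print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].head())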


    Prophet also provides a built-in tool for visualizing the prediction in the context of the training dataset.

    This can be achieved by calling the plot() function on the model and passing it a result DataFrame. It will create a plot of the training dataset and overlay the prediction with the upper and lower bounds for the forecast dates.
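
For example:

# plot the forecast in the context of the training dataset
model.plot(forecast)
pyplot.show()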


    Tying this all together, a complete example of making an in-sample forecast is listed below.
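
A complete sketch, under the same assumptions as above (package name, dataset URL, and end date of the data):

# make an in-sample forecast with prophet (a sketch)
from pandas import read_csv, to_datetime, DataFrame
from fbprophet import Prophet
from matplotlib import pyplot
# load and prepare the data
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
df = read_csv(url)
df.columns = ['ds', 'y']
df['ds'] = to_datetime(df['ds'])
# define and fit the model
model = Prophet()
model.fit(df)
# define the period to forecast: the last 12 months of the data
future = list()
for i in range(1, 13):
    future.append(['1968-%02d' % i])
future = DataFrame(future)
future.columns = ['ds']
future['ds'] = to_datetime(future['ds'])
# make the forecast and summarize the first few predictions
forecast = model.predict(future)
print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].head())
# plot the forecast in the context of the training data
model.plot(forecast)
pyplot.show()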


    Running the example forecasts the last 12 months of the dataset.

    The first five months of the prediction are reported and we can see that values are not too different from the actual sales values in the dataset.


    Next, a plot is created. We can see the training data are represented as black dots and the forecast is a blue line with upper and lower bounds in a blue shaded area.

    We can see that the forecasted 12 months is a good match for the real observations, especially when the bounds are taken into account.

Plot of Time Series and In-Sample Forecast With Prophet

    Make an Out-of-Sample Forecast

    In practice, we really want a forecast model to make a prediction beyond the training data.

    This is called an out-of-sample forecast.

    We can achieve this in the same way as an in-sample forecast and simply specify a different forecast period.

In this case, a period beyond the end of the training dataset, starting in 1969-01.
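
For example:

# define the period for the out-of-sample forecast: the 12 months beyond the data
future = list()
for i in range(1, 13):
    future.append(['1969-%02d' % i])
future = DataFrame(future)
future.columns = ['ds']
future['ds'] = to_datetime(future['ds'])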


    Tying this together, the complete example is listed below.
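
A complete sketch, under the same assumptions as the in-sample example:

# make an out-of-sample forecast with prophet (a sketch)
from pandas import read_csv, to_datetime, DataFrame
from fbprophet import Prophet
from matplotlib import pyplot
# load and prepare the data
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
df = read_csv(url)
df.columns = ['ds', 'y']
df['ds'] = to_datetime(df['ds'])
# define and fit the model
model = Prophet()
model.fit(df)
# define the period to forecast: the 12 months beyond the dataset
future = list()
for i in range(1, 13):
    future.append(['1969-%02d' % i])
future = DataFrame(future)
future.columns = ['ds']
future['ds'] = to_datetime(future['ds'])
# make the forecast and summarize the first few predictions
forecast = model.predict(future)
print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].head())
# plot the forecast in the context of the training data
model.plot(forecast)
pyplot.show()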


    Running the example makes an out-of-sample forecast for the car sales data.

    The first five rows of the forecast are printed, although it is hard to get an idea of whether they are sensible or not.


    A plot is created to help us evaluate the prediction in the context of the training data.

    The new one-year forecast does look sensible, at least by eye.

Plot of Time Series and Out-of-Sample Forecast With Prophet

    Manually Evaluate Forecast Model

    It is critical to develop an objective estimate of a forecast model’s performance.

This can be achieved by holding some data back from the model, such as the last 12 months. Then, fitting the model on the first portion of the data, using it to make predictions on the held-back portion, and calculating an error measure, such as the mean absolute error across the forecasts. That is, a simulated out-of-sample forecast.

    The score gives an estimate of how well we might expect the model to perform on average when making an out-of-sample forecast.

We can do this with the car sales data by creating a new DataFrame for training with the last 12 months removed.
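
For example:

# create a training dataset with the last 12 months removed
train = df.drop(df.index[-12:])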


    A forecast can then be made on the last 12 months of date-times.

    We can then retrieve the forecast values and the expected values from the original dataset and calculate a mean absolute error metric using the scikit-learn library.
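
A sketch of the error calculation:

# calculate the MAE between the expected and predicted values
from sklearn.metrics import mean_absolute_error
y_true = df['y'][-12:].values
y_pred = forecast['yhat'].values
mae = mean_absolute_error(y_true, y_pred)
print('MAE: %.3f' % mae)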


    It can also be helpful to plot the expected vs. predicted values to see how well the out-of-sample prediction matches the known values.
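
For example:

# plot the expected vs. predicted values
pyplot.plot(y_true, label='Actual')
pyplot.plot(y_pred, label='Predicted')
pyplot.legend()
pyplot.show()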


    Tying this together, the example below demonstrates how to evaluate a Prophet model on a hold-out dataset.
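
A complete sketch, under the same assumptions as the earlier examples:

# evaluate a prophet model on a 12-month hold-out set (a sketch)
from pandas import read_csv, to_datetime, DataFrame
from fbprophet import Prophet
from sklearn.metrics import mean_absolute_error
from matplotlib import pyplot
# load and prepare the data
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
df = read_csv(url)
df.columns = ['ds', 'y']
df['ds'] = to_datetime(df['ds'])
# create a training set with the last 12 months removed
train = df.drop(df.index[-12:])
print(train.tail())
# define and fit the model on the training set
model = Prophet()
model.fit(train)
# define the hold-out period (1968) and make a forecast
future = list()
for i in range(1, 13):
    future.append(['1968-%02d' % i])
future = DataFrame(future)
future.columns = ['ds']
future['ds'] = to_datetime(future['ds'])
forecast = model.predict(future)
# calculate the MAE between the expected and predicted values
y_true = df['y'][-12:].values
y_pred = forecast['yhat'].values
mae = mean_absolute_error(y_true, y_pred)
print('MAE: %.3f' % mae)
# plot the expected vs. predicted values
pyplot.plot(y_true, label='Actual')
pyplot.plot(y_pred, label='Predicted')
pyplot.legend()
pyplot.show()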


    Running the example first reports the last few rows of the training dataset.

It confirms that the training data ends in the last month of 1967 and that 1968 will be used as the hold-out dataset.


    Next, a mean absolute error is calculated for the forecast period.

    In this case we can see that the error is approximately 1,336 sales, which is much lower (better) than a naive persistence model that achieves an error of 3,235 sales over the same period.


Finally, a plot is created comparing the actual vs. predicted values. In this case, we can see that the forecast is a good fit. The model has skill and produces a forecast that looks sensible.

Plot of Actual vs. Predicted Values for Last 12 Months of Car Sales

    The Prophet library also provides tools to automatically evaluate models and plot results, although those tools don’t appear to work well with data above one day in resolution.

    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Summary

    In this tutorial, you discovered how to use the Facebook Prophet library for time series forecasting.

    Specifically, you learned:

    • Prophet is an open-source library developed by Facebook and designed for automatic forecasting of univariate time series data.
    • How to fit Prophet models and use them to make in-sample and out-of-sample forecasts.
    • How to evaluate a Prophet model on a hold-out dataset.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.

    Want to Develop Time Series Forecasts with Python?

    Introduction to Time Series Forecasting With Python
    Develop Your Own Forecasts in Minutes

    …with just a few lines of python code

    Discover how in my new Ebook:
    Introduction to Time Series Forecasting With Python

    It covers self-study tutorials and end-to-end projects on topics like:
    Loading data, visualization, modeling, algorithm tuning, and much more…

    Finally Bring Time Series Forecasting to

    Your Own Projects

    Skip the Academics. Just Results.

    See What’s Inside



    How to Set Axis for Rows and Columns in NumPy

    NumPy arrays provide a fast and efficient way to store and manipulate data in Python.

    They are particularly useful for representing data as vectors and matrices in machine learning.

    Data in NumPy arrays can be accessed directly via column and row indexes, and this is reasonably straightforward. Nevertheless, sometimes we must perform operations on arrays of data such as sum or mean of values by row or column and this requires the axis of the operation to be specified.

    Unfortunately, the column-wise and row-wise operations on NumPy arrays do not match our intuitions gained from row and column indexing, and this can cause confusion for beginners and seasoned machine learning practitioners alike. Specifically, operations like sum can be performed column-wise using axis=0 and row-wise using axis=1.

    In this tutorial, you will discover how to access and operate on NumPy arrays by row and by column.

    After completing this tutorial, you will know:

    • How to define NumPy arrays with rows and columns of data.
    • How to access values in NumPy arrays by row and column indexes.
    • How to perform operations on NumPy arrays by row and column axis.

    Let’s get started.

How to Set NumPy Axis for Rows and Columns in Python
    Photo by Jonathan Cutrer, some rights reserved.

    Tutorial Overview

    This tutorial is divided into three parts; they are:

  • NumPy Array With Rows and Columns
  • Rows and Columns of Data in NumPy Arrays
  • NumPy Array Operations By Row and Column
  • Axis=None Array-Wise Operation
  • Axis=0 Column-Wise Operation
  • Axis=1 Row-Wise Operation

NumPy Array With Rows and Columns

    Before we dive into the NumPy array axis, let’s refresh our knowledge of NumPy arrays.

    Typically in Python, we work with lists of numbers or lists of lists of numbers. For example, we can define a two-dimensional matrix of two rows of three numbers as a list of numbers as follows:
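
# define data as a list of lists
data = [[1, 2, 3], [4, 5, 6]]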


A NumPy array allows us to define and operate upon vectors and matrices of numbers in an efficient manner, e.g. a lot more efficient than plain Python lists. The NumPy array type is called ndarray, and arrays can have virtually any number of dimensions, although, in machine learning, we are most commonly working with 1D and 2D arrays (or 3D arrays for images).

    For example, we can convert our list of lists matrix to a NumPy array via the asarray() function:
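
# convert the list of lists to a numpy array
from numpy import asarray
data = asarray(data)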


    We can print the array directly and expect to see two rows of numbers, where each row has three numbers or columns.
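
# print the array
print(data)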


    We can summarize the dimensionality of an array by printing the “shape” property, which is a tuple, where the number of values in the tuple defines the number of dimensions, and the integer in each position defines the size of the dimension.

    For example, we expect the shape of our array to be (2,3) for two rows and three columns.
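
# print the shape of the array
print(data.shape)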


    Tying this all together, a complete example is listed below.
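
# create and summarize a numpy array with two rows and three columns
from numpy import asarray
# define data as a list of lists
data = [[1, 2, 3], [4, 5, 6]]
# convert to a numpy array
data = asarray(data)
# summarize the array content and shape
print(data)
print(data.shape)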


    Running the example defines our data as a list of lists, converts it to a NumPy array, then prints the data and shape.

    We can see that when the array is printed, it has the expected shape of two rows with three columns. We can then see that the printed shape matches our expectations.
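
[[1 2 3]
 [4 5 6]]
(2, 3)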


    For more on the basics of NumPy arrays, see the tutorial:

    So far, so good.

    But how do we access data in the array by row or column? More importantly, how can we perform operations on the array by-row or by-column?

    Let’s take a closer look at these questions.

    Rows and Columns of Data in NumPy Arrays

    The “shape” property summarizes the dimensionality of our data.

    Importantly, the first dimension defines the number of rows and the second dimension defines the number of columns. For example (2,3) defines an array with two rows and three columns, as we saw in the last section.

    We can enumerate each row of data in an array by enumerating from index 0 to the first dimension of the array shape, e.g. shape[0]. We can access data in the array via the row and column index.

    For example, data[0, 0] is the value at the first row and the first column, whereas data[0, :] is the values in the first row and all columns, e.g. the complete first row in our matrix.

    The example below enumerates all rows in the data and prints each in turn.
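
# enumerate the rows of a 2d numpy array
from numpy import asarray
data = asarray([[1, 2, 3], [4, 5, 6]])
# step over rows via the first dimension of the shape
for row in range(data.shape[0]):
    print(data[row, :])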


    As expected, the results show the first row of data, then the second row of data.
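
[1 2 3]
[4 5 6]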


    We can achieve the same effect for columns.

    That is, we can enumerate data by columns. For example, data[:, 0] accesses all rows for the first column. We can enumerate all columns from column 0 to the final column defined by the second dimension of the “shape” property, e.g. shape[1].

    The example below demonstrates this by enumerating all columns in our matrix.
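
# enumerate the columns of a 2d numpy array
from numpy import asarray
data = asarray([[1, 2, 3], [4, 5, 6]])
# step over columns via the second dimension of the shape
for col in range(data.shape[1]):
    print(data[:, col])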


    Running the example enumerates and prints each column in the matrix.

Given that the matrix has three columns, we can see that the result is that we print three columns, each as a one-dimensional vector. That is, column 1 (index 0) has values 1 and 4, column 2 (index 1) has values 2 and 5, and column 3 (index 2) has values 3 and 6.

    It just looks funny because our columns don’t look like columns; they are turned on their side, rather than vertical.
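
[1 4]
[2 5]
[3 6]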


    Now we know how to access data in a numpy array by column and by row.

So far, so good, but what about operations on the array by column and by row? That’s next.

    NumPy Array Operations By Row and Column

    We often need to perform operations on NumPy arrays by column or by row.

    For example, we may need to sum values or calculate a mean for a matrix of data by row or by column.

    This can be achieved by using the sum() or mean() NumPy function and specifying the “axis” on which to perform the operation.

    We can specify the axis as the dimension across which the operation is to be performed, and this dimension does not match our intuition based on how we interpret the “shape” of the array and how we index data in the array.

    As such, this causes maximum confusion for beginners.

    That is, axis=0 will perform the operation column-wise and axis=1 will perform the operation row-wise. We can also specify the axis as None, which will perform the operation for the entire array.

    In summary:

    • axis=None: Apply operation array-wise.
    • axis=0: Apply operation column-wise, across all rows for each column.
    • axis=1: Apply operation row-wise, across all columns for each row.

    Let’s make this concrete with a worked example.

    We will sum values in our array by each of the three axes.

    Axis=None Array-Wise Operation

    Setting the axis=None when performing an operation on a NumPy array will perform the operation for the entire array.

    This is often the default for most operations, such as sum, mean, std, and so on.
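
For example:

# array-wise sum: the default behavior for many operations
result = data.sum(axis=None)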


    The example below demonstrates summing all values in an array, e.g. an array-wise operation.
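
# sum all values in a 2d array
from numpy import asarray
data = asarray([[1, 2, 3], [4, 5, 6]])
print(data)
# array-wise sum
result = data.sum(axis=None)
print(result)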


    Running the example first prints the array, then performs the sum operation array-wise and prints the result.

    We can see the array has six values that would sum to 21 if we add them manually and that the result of the sum operation performed array-wise matches this expectation.
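
[[1 2 3]
 [4 5 6]]
21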


    Axis=0 Column-Wise Operation

    Setting the axis=0 when performing an operation on a NumPy array will perform the operation column-wise, that is, across all rows for each column.
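
For example:

# sum the array column-wise, across all rows for each column
result = data.sum(axis=0)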


    For example, given our data with two rows and three columns:
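
[[1 2 3]
 [4 5 6]]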


We expect a column-wise sum with axis=0 to result in three values, one for each column, as follows:

    • Column 1: 1 + 4 = 5
    • Column 2: 2 + 5 = 7
    • Column 3: 3 + 6 = 9

    The example below demonstrates summing values in the array by column, e.g. a column-wise operation.
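
# column-wise sum of a 2d array
from numpy import asarray
data = asarray([[1, 2, 3], [4, 5, 6]])
print(data)
# sum across all rows for each column
result = data.sum(axis=0)
print(result)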


    Running the example first prints the array, then performs the sum operation column-wise and prints the result.

We can see the array has six values with two rows and three columns as expected; we can then see that the column-wise operation results in a vector with three values, one for the sum of each column, matching our expectation.
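
[[1 2 3]
 [4 5 6]]
[5 7 9]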


    Axis=1 Row-Wise Operation

    Setting the axis=1 when performing an operation on a NumPy array will perform the operation row-wise, that is across all columns for each row.
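
For example:

# sum the array row-wise, across all columns for each row
result = data.sum(axis=1)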


    For example, given our data with two rows and three columns:
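
[[1 2 3]
 [4 5 6]]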


We expect a row-wise sum with axis=1 to result in two values, one for each row, as follows:

    • Row 1: 1 + 2 + 3 = 6
    • Row 2: 4 + 5 + 6 = 15

    The example below demonstrates summing values in the array by row, e.g. a row-wise operation.
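
# row-wise sum of a 2d array
from numpy import asarray
data = asarray([[1, 2, 3], [4, 5, 6]])
print(data)
# sum across all columns for each row
result = data.sum(axis=1)
print(result)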


    Running the example first prints the array, then performs the sum operation row-wise and prints the result.

We can see the array has six values with two rows and three columns as expected; we can then see that the row-wise operation results in a vector with two values, one for the sum of each row, matching our expectation.
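
[[1 2 3]
 [4 5 6]]
[ 6 15]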


    We now have a concrete idea of how to set axis appropriately when performing operations on our NumPy arrays.

    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Tutorials
    APIs

    Summary

    In this tutorial, you discovered how to access and operate on NumPy arrays by row and by column.

    Specifically, you learned:

    • How to define NumPy arrays with rows and columns of data.
    • How to access values in NumPy arrays by row and column indexes.
    • How to perform operations on NumPy arrays by row and column axis.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.

    Get a Handle on Linear Algebra for Machine Learning!

    Linear Algebra for Machine Learning
Develop a working understanding of linear algebra

    …by writing lines of code in python

    Discover how in my new Ebook:
    Linear Algebra for Machine Learning

    It provides self-study tutorials on topics like:
    Vector Norms, Matrix Multiplication, Tensors, Eigendecomposition, SVD, PCA and much more…

    Finally Understand the Mathematics of Data

    Skip the Academics. Just Results.

    See What’s Inside



    Multi-Class Imbalanced Classification

    Last Updated on August 21, 2020

Imbalanced classification refers to those prediction tasks where the distribution of examples across class labels is not equal.

    Most imbalanced classification examples focus on binary classification tasks, yet many of the tools and techniques for imbalanced classification also directly support multi-class classification problems.

    In this tutorial, you will discover how to use the tools of imbalanced classification with a multi-class dataset.

    After completing this tutorial, you will know:

    • About the glass identification standard imbalanced multi-class prediction problem.
    • How to use SMOTE oversampling for imbalanced multi-class classification.
    • How to use cost-sensitive learning for imbalanced multi-class classification.

    Kick-start your project with my new book Imbalanced Classification with Python, including step-by-step tutorials and the Python source code files for all examples.

    Let’s get started.

Multi-Class Imbalanced Classification
    Photo by istolethetv, some rights reserved.

    Tutorial Overview

    This tutorial is divided into three parts; they are:

  • Glass Multi-Class Classification Dataset
  • SMOTE Oversampling for Multi-Class Classification
  • Cost-Sensitive Learning for Multi-Class Classification

Glass Multi-Class Classification Dataset

    In this tutorial, we will focus on the standard imbalanced multi-class classification problem referred to as “Glass Identification” or simply “glass.”

    The dataset describes the chemical properties of glass and involves classifying samples of glass using their chemical properties as one of six classes. The dataset was credited to Vina Spiehler in 1987.

    Ignoring the sample identification number, there are nine input variables that summarize the properties of the glass dataset; they are:

    • RI: Refractive Index
    • Na: Sodium
    • Mg: Magnesium
    • Al: Aluminum
    • Si: Silicon
    • K: Potassium
    • Ca: Calcium
    • Ba: Barium
    • Fe: Iron

    The chemical compositions are measured as the weight percent in corresponding oxide.

    There are seven types of glass listed; they are:

    • Class 1: building windows (float processed)
    • Class 2: building windows (non-float processed)
    • Class 3: vehicle windows (float processed)
    • Class 4: vehicle windows (non-float processed)
    • Class 5: containers
    • Class 6: tableware
    • Class 7: headlamps

    Float glass refers to the process used to make the glass.

    There are 214 observations in the dataset and the number of observations in each class is imbalanced. Note that there are no examples for class 4 (non-float processed vehicle windows) in the dataset.

    • Class 1: 70 examples
    • Class 2: 76 examples
    • Class 3: 17 examples
    • Class 4: 0 examples
    • Class 5: 13 examples
    • Class 6: 9 examples
    • Class 7: 29 examples

    Although there are minority classes, all classes are equally important in this prediction problem.

    The dataset can be divided into window glass (classes 1-4) and non-window glass (classes 5-7). There are 163 examples of window glass and 51 examples of non-window glass.

    • Window Glass: 163 examples
    • Non-Window Glass: 51 examples

    Another division of the observations would be between float processed glass and non-float processed glass, in the case of window glass only. This division is more balanced.

    • Float Glass: 87 examples
    • Non-Float Glass: 76 examples

    You can learn more about the dataset here:

    No need to download the dataset; we will download it automatically as part of the worked examples.

    Below is a sample of the first few rows of the data.


    We can see that all inputs are numeric and the target variable in the final column is the integer encoded class label.

    You can learn more about how to work through this dataset as part of a project in the tutorial:

    Now that we are familiar with the glass multi-class classification dataset, let’s explore how we can use standard imbalanced classification tools with it.





    SMOTE Oversampling for Multi-Class Classification

    Oversampling refers to copying or synthesizing new examples of the minority classes so that the number of examples in the minority class better resembles or matches the number of examples in the majority classes.

    Perhaps the most widely used approach to synthesizing new examples is called the Synthetic Minority Oversampling TEchnique, or SMOTE for short. This technique was described by Nitesh Chawla, et al. in their 2002 paper named for the technique titled “SMOTE: Synthetic Minority Over-sampling Technique.”

    You can learn more about SMOTE in the tutorial:

The imbalanced-learn library provides an implementation of SMOTE that is compatible with the popular scikit-learn library.

    First, the library must be installed. We can install it using pip as follows:

    sudo pip install imbalanced-learn

    We can confirm that the installation was successful by printing the version of the installed library:
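
# check the version of the imbalanced-learn library
import imblearn
# print the installed version number
print(imblearn.__version__)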


    Running the example will print the version number of the installed library; for example:


    Before we apply SMOTE, let’s first load the dataset and confirm the number of examples in each class.
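A sketch of this step is listed below; it assumes the glass CSV file is available at the jbrownlee Datasets URL shown, and it label encodes the target so that the six classes present are numbered 0 to 5.

# load and summarize the class distribution of the glass dataset
from pandas import read_csv
from collections import Counter
from sklearn.preprocessing import LabelEncoder
from matplotlib import pyplot
# load the csv file (assumed dataset location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
data = read_csv(url, header=None).values
# split into input and output elements
X, y = data[:, :-1], data[:, -1]
# label encode the target variable so the classes are numbered 0 to 5
y = LabelEncoder().fit_transform(y)
# summarize the class distribution
counter = Counter(y)
for k, v in sorted(counter.items()):
    print('Class=%d, n=%d (%.1f%%)' % (k, v, v / len(y) * 100))
# plot the class distribution as a bar chart
pyplot.bar(counter.keys(), counter.values())
pyplot.show()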


Running the example first downloads the dataset and splits it into input and output elements.

    The number of rows in each class is then reported, confirming that some classes, such as 0 and 1, have many more examples (more than 70) than other classes, such as 3 and 4 (less than 15).


    A bar chart is created providing a visualization of the class breakdown of the dataset.

    This gives a clearer idea that classes 0 and 1 have many more examples than classes 2, 3, 4 and 5.

Histogram of Examples in Each Class in the Glass Multi-Class Classification Dataset

    Next, we can apply SMOTE to oversample the dataset.

    By default, SMOTE will oversample all classes to have the same number of examples as the class with the most examples.

    In this case, class 1 has the most examples with 76, therefore, SMOTE will oversample all classes to have 76 examples.

    The complete example of oversampling the glass dataset with SMOTE is listed below.
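The listing below is a sketch under the same assumptions as the loading example above (dataset URL and 0-to-5 label encoding):

# example of oversampling the glass dataset with default SMOTE
from pandas import read_csv
from collections import Counter
from sklearn.preprocessing import LabelEncoder
from imblearn.over_sampling import SMOTE
from matplotlib import pyplot
# load the dataset (assumed dataset location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], LabelEncoder().fit_transform(data[:, -1])
# oversample all minority classes to match the majority class
X, y = SMOTE().fit_resample(X, y)
# summarize the new class distribution
counter = Counter(y)
for k, v in sorted(counter.items()):
    print('Class=%d, n=%d' % (k, v))
# plot the new class distribution
pyplot.bar(counter.keys(), counter.values())
pyplot.show()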


    Running the example first loads the dataset and applies SMOTE to it.

    The distribution of examples in each class is then reported, confirming that each class now has 76 examples, as we expected.


    A bar chart of the class distribution is also created, providing a strong visual indication that all classes now have the same number of examples.

Histogram of Examples in Each Class in the Glass Multi-Class Classification Dataset After Default SMOTE Oversampling

    Instead of using the default strategy of SMOTE to oversample all classes to the number of examples in the majority class, we could instead specify the number of examples to oversample in each class.

For example, we could oversample to 100 examples in classes 0 and 1 and 200 examples in the remaining classes. This can be achieved by creating a dictionary that maps each class label to the desired number of examples, then specifying it via the “sampling_strategy” argument to the SMOTE class. For example:
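A sketch of the configuration, using the 0-to-5 class numbering from the label encoding above:

# custom strategy: 100 examples for the two majority classes, 200 for the rest
strategy = {0:100, 1:100, 2:200, 3:200, 4:200, 5:200}
oversample = SMOTE(sampling_strategy=strategy)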


    Tying this together, the complete example of using a custom oversampling strategy for SMOTE is listed below.
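Again, this is a sketch under the same dataset assumptions as the examples above:

# example of oversampling the glass dataset with a custom SMOTE strategy
from pandas import read_csv
from collections import Counter
from sklearn.preprocessing import LabelEncoder
from imblearn.over_sampling import SMOTE
from matplotlib import pyplot
# load the dataset (assumed dataset location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], LabelEncoder().fit_transform(data[:, -1])
# transform the dataset with the custom strategy
strategy = {0:100, 1:100, 2:200, 3:200, 4:200, 5:200}
X, y = SMOTE(sampling_strategy=strategy).fit_resample(X, y)
# summarize the new class distribution
counter = Counter(y)
for k, v in sorted(counter.items()):
    print('Class=%d, n=%d' % (k, v))
# plot the new class distribution
pyplot.bar(counter.keys(), counter.values())
pyplot.show()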


    Running the example creates the desired sampling and summarizes the effect on the dataset, confirming the intended result.


    Note: you may see warnings that can be safely ignored for the purposes of this example, such as:


    A bar chart of the class distribution is also created confirming the specified class distribution after data sampling.

Histogram of Examples in Each Class in the Glass Multi-Class Classification Dataset After Custom SMOTE Oversampling

    Note: when using data sampling like SMOTE, it must only be applied to the training dataset, not the entire dataset. I recommend using a Pipeline to ensure that the SMOTE method is correctly used when evaluating models and making predictions with models.

    You can see an example of the correct usage of SMOTE in a Pipeline in this tutorial:

    Cost-Sensitive Learning for Multi-Class Classification

    Most machine learning algorithms assume that all classes have an equal number of examples.

    This is not the case in multi-class imbalanced classification. Algorithms can be modified to change the way learning is performed to bias towards those classes that have fewer examples in the training dataset. This is generally called cost-sensitive learning.

    For more on cost-sensitive learning, see the tutorial:

    The RandomForestClassifier class in scikit-learn supports cost-sensitive learning via the “class_weight” argument.

    By default, the random forest class assigns equal weight to each class.

    We can evaluate the classification accuracy of the default random forest class weighting on the glass imbalanced multi-class classification dataset.

    The complete example is listed below.
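The listing below is a sketch; the dataset location is assumed as before, and k=5 folds are used so that the smallest class (9 examples) can appear in every fold:

# evaluate a default random forest on the imbalanced glass dataset
from numpy import mean, std
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
# load the dataset (assumed dataset location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], LabelEncoder().fit_transform(data[:, -1])
# define the model with the default (equal) class weighting
model = RandomForestClassifier(n_estimators=1000)
# evaluate with repeated stratified k-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))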


    Running the example evaluates the default random forest algorithm with 1,000 trees on the glass dataset using repeated stratified k-fold cross-validation.

    The mean and standard deviation classification accuracy are reported at the end of the run.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    In this case, we can see that the default model achieved a classification accuracy of about 79.6 percent.


We can set the “class_weight” argument to the value “balanced”, which will automatically calculate a class weighting that ensures each class receives equal weight during the training of the model. For example:
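# define the model with a balanced class weighting
model = RandomForestClassifier(n_estimators=1000, class_weight='balanced')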


    Tying this together, the complete example is listed below.
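A sketch of the full listing, identical to the previous example except for the model definition:

# evaluate a balanced random forest on the imbalanced glass dataset
from numpy import mean, std
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
# load the dataset (assumed dataset location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], LabelEncoder().fit_transform(data[:, -1])
# define the model with a balanced class weighting
model = RandomForestClassifier(n_estimators=1000, class_weight='balanced')
# evaluate with repeated stratified k-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))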


    Running the example reports the mean and standard deviation classification accuracy of the cost-sensitive version of random forest on the glass dataset.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the cost-sensitive model achieved a lift in classification accuracy over the cost-insensitive version of the algorithm, with 80.2 percent classification accuracy vs. 79.6 percent.


    The “class_weight” argument takes a dictionary of class labels mapped to a class weighting value.

We can use this to specify a custom weighting, such as the default weighting of 1.0 for classes 0 and 1 that have many examples, and a double class weighting of 2.0 for the other classes. For example:
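A sketch of the configuration, again using the 0-to-5 class numbering:

# custom class weighting: 1.0 for the two majority classes, 2.0 for the rest
weights = {0:1.0, 1:1.0, 2:2.0, 3:2.0, 4:2.0, 5:2.0}
model = RandomForestClassifier(n_estimators=1000, class_weight=weights)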


    Tying this together, the complete example of using a custom class weighting for cost-sensitive learning on the glass multi-class imbalanced classification problem is listed below.
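A sketch of the full listing, under the same dataset assumptions as before:

# evaluate a random forest with a custom class weighting on the glass dataset
from numpy import mean, std
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
# load the dataset (assumed dataset location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], LabelEncoder().fit_transform(data[:, -1])
# define the model with a custom class weighting
weights = {0:1.0, 1:1.0, 2:2.0, 3:2.0, 4:2.0, 5:2.0}
model = RandomForestClassifier(n_estimators=1000, class_weight=weights)
# evaluate with repeated stratified k-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))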


    Running the example reports the mean and standard deviation classification accuracy of the cost-sensitive version of random forest on the glass dataset with custom weights.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    In this case, we can see that we achieved a further lift in accuracy from about 80.2 percent with balanced class weighting to 80.8 percent with a more biased class weighting.


    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Related Tutorials
    APIs

    Summary

    In this tutorial, you discovered how to use the tools of imbalanced classification with a multi-class dataset.

    Specifically, you learned:

    • About the glass identification standard imbalanced multi-class prediction problem.
    • How to use SMOTE oversampling for imbalanced multi-class classification.
    • How to use cost-sensitive learning for imbalanced multi-class classification.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.




    Auto-Sklearn for Automated Machine Learning in Python

    Automated Machine Learning (AutoML) refers to techniques for automatically discovering well-performing models for predictive modeling tasks with very little user involvement.

    Auto-Sklearn is an open-source library for performing AutoML in Python. It makes use of the popular Scikit-Learn machine learning library for data transforms and machine learning algorithms and uses a Bayesian Optimization search procedure to efficiently discover a top-performing model pipeline for a given dataset.

    In this tutorial, you will discover how to use Auto-Sklearn for AutoML with Scikit-Learn machine learning algorithms in Python.

    After completing this tutorial, you will know:

    • Auto-Sklearn is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
    • How to use Auto-Sklearn to automatically discover top-performing models for classification tasks.
    • How to use Auto-Sklearn to automatically discover top-performing models for regression tasks.

    Let’s get started.

Auto-Sklearn for Automated Machine Learning in Python
Photo by Richard, some rights reserved.

    Tutorial Overview

    This tutorial is divided into four parts; they are:

  • AutoML With Auto-Sklearn
  • Install and Use Auto-Sklearn
  • Auto-Sklearn for Classification
  • Auto-Sklearn for Regression
AutoML With Auto-Sklearn

    Automated Machine Learning, or AutoML for short, is a process of discovering the best-performing pipeline of data transforms, model, and model configuration for a dataset.

    AutoML often involves the use of sophisticated optimization algorithms, such as Bayesian Optimization, to efficiently navigate the space of possible models and model configurations and quickly discover what works well for a given predictive modeling task. It allows non-expert machine learning practitioners to quickly and easily discover what works well or even best for a given dataset with very little technical background or direct input.

    Auto-Sklearn is an open-source Python library for AutoML using machine learning models from the scikit-learn machine learning library.

    It was developed by Matthias Feurer, et al. and described in their 2015 paper titled “Efficient and Robust Automated Machine Learning.”

    … we introduce a robust new AutoML system based on scikit-learn (using 15 classifiers, 14 feature preprocessing methods, and 4 data preprocessing methods, giving rise to a structured hypothesis space with 110 hyperparameters).

    — Efficient and Robust Automated Machine Learning, 2015.

The benefit of Auto-Sklearn is that, in addition to discovering the data preparation and model that perform well for a dataset, it is also able to learn from models that performed well on similar datasets and can automatically create an ensemble of top-performing models discovered as part of the optimization process.

    This system, which we dub AUTO-SKLEARN, improves on existing AutoML methods by automatically taking into account past performance on similar datasets, and by constructing ensembles from the models evaluated during the optimization.

    — Efficient and Robust Automated Machine Learning, 2015.

    The authors provide a useful depiction of their system in the paper, provided below.

Overview of the Auto-Sklearn System.
Taken from: Efficient and Robust Automated Machine Learning, 2015.

Install and Use Auto-Sklearn

    The first step is to install the Auto-Sklearn library, which can be achieved using pip, as follows:
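For example (the PyPI package name is auto-sklearn; a Linux environment is assumed, as the library does not support Windows):

sudo pip install auto-sklearn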


    Once installed, we can import the library and print the version number to confirm it was installed successfully:
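A minimal check might look as follows:

# check autosklearn version
import autosklearn
print('autosklearn: %s' % autosklearn.__version__)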


    Running the example prints the version number.

    Your version number should be the same or higher.


    Using Auto-Sklearn is straightforward.

    Depending on whether your prediction task is classification or regression, you create and configure an instance of the AutoSklearnClassifier or AutoSklearnRegressor class, fit it on your dataset, and that’s it. The resulting model can then be used to make predictions directly or saved to file (using pickle) for later use.


There are a ton of configuration options provided as arguments to the AutoSklearnClassifier and AutoSklearnRegressor classes.

    By default, the search will use a train-test split of your dataset during the search, and this default is recommended both for speed and simplicity.

    Importantly, you should set the “n_jobs” argument to the number of cores in your system, e.g. 8 if you have 8 cores.

The optimization process will run for as long as you allow, measured in minutes. By default, it will run for one hour.

I recommend setting the “time_left_for_this_task” argument to the number of seconds you want the process to run. For example, 5 to 10 minutes is probably plenty for many small predictive modeling tasks (fewer than 1,000 rows).

    We will use 5 minutes (300 seconds) for the examples in this tutorial. We will also limit the time allocated to each model evaluation to 30 seconds via the “per_run_time_limit” argument. For example:
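A sketch of the configuration:

# define the search: 5 minutes overall, 30 seconds per model, 8 cores
model = AutoSklearnClassifier(time_left_for_this_task=300, per_run_time_limit=30, n_jobs=8)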


    You can limit the algorithms considered in the search, as well as the data transforms.

By default, the search will create an ensemble of top-performing models discovered as part of the search. Sometimes this can lead to overfitting; it can be disabled by setting the “ensemble_size” argument to 1 and “initial_configurations_via_metalearning” to 0. For example:
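# disable the ensemble and the meta-learning warm start
model = AutoSklearnClassifier(ensemble_size=1, initial_configurations_via_metalearning=0)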


    At the end of a run, the list of models can be accessed, as well as other details.

    Perhaps the most useful feature is the sprint_statistics() function that summarizes the search and the performance of the final model.
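For example:

# summarize the search and the performance of the final model
print(model.sprint_statistics())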


    Now that we are familiar with the Auto-Sklearn library, let’s look at some worked examples.

    Auto-Sklearn for Classification

    In this section, we will use Auto-Sklearn to discover a model for the sonar dataset.

    The sonar dataset is a standard machine learning dataset comprised of 208 rows of data with 60 numerical input variables and a target variable with two class values, e.g. binary classification.

    Using a test harness of repeated stratified 10-fold cross-validation with three repeats, a naive model can achieve an accuracy of about 53 percent. A top-performing model can achieve accuracy on this same test harness of about 88 percent. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting whether sonar returns indicate a rock or simulated mine.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.
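A sketch of this step, assuming the sonar CSV file is available at the URL shown:

# summarize the sonar dataset (assumed dataset location)
from pandas import read_csv
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
data = read_csv(url, header=None).values
# split into input and output elements
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)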


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 208 rows of data with 60 input variables.


    We will use Auto-Sklearn to find a good model for the sonar dataset.

    First, we will split the dataset into train and test sets and allow the process to find a good model on the training set, then later evaluate the performance of what was found on the holdout test set.


    The AutoSklearnClassifier is configured to run for 5 minutes with 8 cores and limit each model evaluation to 30 seconds.


    The search is then performed on the training dataset.


    Afterward, a summary of the search and best-performing model is reported.


    Finally, we evaluate the performance of the model that was prepared on the holdout test dataset.


    Tying this together, the complete example is listed below.
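The listing below is a sketch, assuming the dataset location above and a supported auto-sklearn installation:

# example of auto-sklearn for the sonar classification dataset
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
from autosklearn.classification import AutoSklearnClassifier
# load the dataset (assumed dataset location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
data = read_csv(url, header=None).values
X = data[:, :-1].astype('float32')
y = LabelEncoder().fit_transform(data[:, -1].astype('str'))
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# define and perform the search
model = AutoSklearnClassifier(time_left_for_this_task=300, per_run_time_limit=30, n_jobs=8)
model.fit(X_train, y_train)
# summarize the search
print(model.sprint_statistics())
# evaluate the best model on the holdout test set
y_hat = model.predict(X_test)
print('Accuracy: %.3f' % accuracy_score(y_test, y_hat))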


    Running the example will take about five minutes, given the hard limit we imposed on the run.

    At the end of the run, a summary is printed showing that 1,054 models were evaluated and the estimated performance of the final model was 91 percent.

    Your specific results may vary given the stochastic nature of the optimization algorithm.


We then evaluate the model on the holdout dataset and see that a classification accuracy of about 81.2 percent was achieved, which is reasonably skillful.


    Auto-Sklearn for Regression

    In this section, we will use Auto-Sklearn to discover a model for the auto insurance dataset.

    The auto insurance dataset is a standard machine learning dataset comprised of 63 rows of data with one numerical input variable and a numerical target variable.

Using a test harness of repeated 10-fold cross-validation with three repeats, a naive model can achieve a mean absolute error (MAE) of about 66. A top-performing model can achieve a MAE on this same test harness of about 28. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting the total amount in claims (thousands of Swedish Kronor) given the number of claims for different geographical regions.

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.
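A sketch of this step, assuming the auto insurance CSV file is available at the URL shown:

# summarize the auto insurance dataset (assumed dataset location)
from pandas import read_csv
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
data = read_csv(url, header=None).values
# split into input and output elements
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)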


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 63 rows of data with one input variable.


    We will use Auto-Sklearn to find a good model for the auto insurance dataset.

    We can use the same process as was used in the previous section, although we will use the AutoSklearnRegressor class instead of the AutoSklearnClassifier.


    By default, the regressor will optimize the R^2 metric.

    In this case, we are interested in the mean absolute error, or MAE, which we can specify via the “metric” argument when calling the fit() function.


    The complete example is listed below.
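The listing below is a sketch; note that the “metric” argument to fit() reflects the version of auto-sklearn used for this tutorial, and newer releases may take the metric in the constructor instead:

# example of auto-sklearn for the auto insurance regression dataset
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from autosklearn.regression import AutoSklearnRegressor
from autosklearn.metrics import mean_absolute_error as auto_mean_absolute_error
# load the dataset (assumed dataset location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1].astype('float32'), data[:, -1].astype('float32')
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# define the search
model = AutoSklearnRegressor(time_left_for_this_task=300, per_run_time_limit=30, n_jobs=8)
# perform the search, optimizing MAE (metric argument location is version dependent)
model.fit(X_train, y_train, metric=auto_mean_absolute_error)
# summarize the search and evaluate the best model on the holdout set
print(model.sprint_statistics())
y_hat = model.predict(X_test)
print('MAE: %.3f' % mean_absolute_error(y_test, y_hat))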


    Running the example will take about five minutes, given the hard limit we imposed on the run.

    You might see some warning messages during the run and you can safely ignore them, such as:


    At the end of the run, a summary is printed showing that 1,759 models were evaluated and the estimated performance of the final model was a MAE of 29.


    We then evaluate the model on the holdout dataset and see that a MAE of 26 was achieved, which is a great result.


    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Summary

    In this tutorial, you discovered how to use Auto-Sklearn for AutoML with Scikit-Learn machine learning algorithms in Python.

    Specifically, you learned:

    • Auto-Sklearn is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
    • How to use Auto-Sklearn to automatically discover top-performing models for classification tasks.
    • How to use Auto-Sklearn to automatically discover top-performing models for regression tasks.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.




    How to Use AutoKeras for Classification and Regression

    AutoML refers to techniques for automatically discovering the best-performing model for a given dataset.

    When applied to neural networks, this involves both discovering the model architecture and the hyperparameters used to train the model, generally referred to as neural architecture search.

    AutoKeras is an open-source library for performing AutoML for deep learning models. The search is performed using so-called Keras models via the TensorFlow tf.keras API.

    It provides a simple and effective approach for automatically finding top-performing models for a wide range of predictive modeling tasks, including tabular or so-called structured classification and regression datasets.

    In this tutorial, you will discover how to use AutoKeras to find good neural network models for classification and regression tasks.

    After completing this tutorial, you will know:

    • AutoKeras is an implementation of AutoML for deep learning that uses neural architecture search.
    • How to use AutoKeras to find a top-performing model for a binary classification dataset.
    • How to use AutoKeras to find a top-performing model for a regression dataset.

    Let’s get started.

How to Use AutoKeras for Classification and Regression
Photo by kanu101, some rights reserved.

    Tutorial Overview

    This tutorial is divided into three parts; they are:

  • AutoKeras for Deep Learning
  • AutoKeras for Classification
  • AutoKeras for Regression
AutoKeras for Deep Learning

    Automated Machine Learning, or AutoML for short, refers to automatically finding the best combination of data preparation, model, and model hyperparameters for a predictive modeling problem.

    The benefit of AutoML is allowing machine learning practitioners to quickly and effectively address predictive modeling tasks with very little input, e.g. fire and forget.

    Automated Machine Learning (AutoML) has become a very important research topic with wide applications of machine learning techniques. The goal of AutoML is to enable people with limited machine learning background knowledge to use machine learning models easily.

    — Auto-keras: An efficient neural architecture search system, 2019.

    AutoKeras is an implementation of AutoML for deep learning models using the Keras API, specifically the tf.keras API provided by TensorFlow 2.

    It uses a process of searching through neural network architectures to best address a modeling task, referred to more generally as Neural Architecture Search, or NAS for short.

    … we have developed a widely adopted open-source AutoML system based on our proposed method, namely Auto-Keras. It is an open-source AutoML system, which can be downloaded and installed locally.

    — Auto-keras: An efficient neural architecture search system, 2019.

    In the spirit of Keras, AutoKeras provides an easy-to-use interface for different tasks, such as image classification, structured data classification or regression, and more. The user is only required to specify the location of the data and the number of models to try and is returned a model that achieves the best performance (under the configured constraints) on that dataset.

    Note: AutoKeras provides a TensorFlow 2 Keras model (e.g. tf.keras) and not a Standalone Keras model. As such, the library assumes that you have Python 3 and TensorFlow 2.1 or higher installed.

    To install AutoKeras, you can use Pip, as follows:
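sudo pip install autokeras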


    You can confirm the installation was successful and check the version number as follows:
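A minimal check, using the library's __version__ attribute:

# check autokeras version
import autokeras
print(autokeras.__version__)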


    You should see output like the following:


    Once installed, you can then apply AutoKeras to find a good or great neural network model for your predictive modeling task.

    We will take a look at two common examples where you may want to use AutoKeras, classification and regression on tabular data, so-called structured data.

    AutoKeras for Classification

    AutoKeras can be used to discover a good or great model for classification tasks on tabular data.

    Recall tabular data are those datasets composed of rows and columns, such as a table or data as you would see in a spreadsheet.

    In this section, we will develop a model for the Sonar classification dataset for classifying sonar returns as rocks or mines. This dataset consists of 208 rows of data with 60 input features and a target class label of 0 (rock) or 1 (mine).

    A naive model can achieve a classification accuracy of about 53.4 percent via repeated 10-fold cross-validation, which provides a lower-bound. A good model can achieve an accuracy of about 88.2 percent, providing an upper-bound.

    You can learn more about the dataset here:

    No need to download the dataset; we will download it automatically as part of the example.

    First, we can download the dataset and split it into a randomly selected train and test set, holding 33 percent for test and using 67 percent for training.

    The complete example is listed below.
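A sketch of the listing, assuming the sonar CSV file is available at the URL shown:

# load and split the sonar dataset (assumed dataset location)
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
print(dataframe.shape)
# split into input and output elements
data = dataframe.values
X = data[:, :-1].astype('float32')
y = LabelEncoder().fit_transform(data[:, -1].astype('str'))
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)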


    Running the example first downloads the dataset and summarizes the shape, showing the expected number of rows and columns.

    The dataset is then split into input and output elements, then these elements are further split into train and test datasets.


    We can use AutoKeras to automatically discover an effective neural network model for this dataset.

    This can be achieved by using the StructuredDataClassifier class and specifying the number of models to search. This defines the search to perform.
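For example (the choice of 15 trials here is an illustrative assumption):

# define the search; max_trials is the number of models to try
search = StructuredDataClassifier(max_trials=15)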


    We can then execute the search using our loaded dataset.


    This may take a few minutes and will report the progress of the search.

    Next, we can evaluate the model on the test dataset to see how it performs on new data.


    We then use the model to make a prediction for a new row of data.


    We can retrieve the final model, which is an instance of a TensorFlow Keras model.


    We can then summarize the structure of the model to see what was selected.


    Finally, we can save the model to file for later use, which can be loaded using the TensorFlow load_model() function.


    Tying this together, the complete example of applying AutoKeras to find an effective neural network model for the Sonar dataset is listed below.
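The listing below is a sketch, combining the steps above under the same dataset and max_trials assumptions:

# use autokeras to find a model for the sonar dataset
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from autokeras import StructuredDataClassifier
# load the dataset (assumed dataset location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
print(dataframe.shape)
data = dataframe.values
X = data[:, :-1].astype('float32')
y = LabelEncoder().fit_transform(data[:, -1].astype('str'))
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# define the search
search = StructuredDataClassifier(max_trials=15)
# perform the search
search.fit(x=X_train, y=y_train, verbose=0)
# evaluate the best model on the holdout test set
loss, acc = search.evaluate(X_test, y_test, verbose=0)
print('Accuracy: %.3f' % acc)
# get the best performing model and summarize its structure
model = search.export_model()
model.summary()
# save the best performing model to file for later use
model.save('model_sonar')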


    Running the example will report a lot of debug information about the progress of the search.

    The models and results are all saved in a folder called “structured_data_classifier” in your current working directory.


    The best-performing model is then evaluated on the hold-out test dataset.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    In this case, we can see that the model achieved a classification accuracy of about 82.6 percent.


    Next, the architecture of the best-performing model is reported.

    We can see a model with two hidden layers with dropout and ReLU activation.


    AutoKeras for Regression

    AutoKeras can also be used for regression tasks, that is, predictive modeling problems where a numeric value is predicted.

    We will use the auto insurance dataset that involves predicting the total payment from claims given the total number of claims. The dataset has 63 rows and one input and one output variable.

    A naive model can achieve a mean absolute error (MAE) of about 66 using repeated 10-fold cross-validation, providing a lower-bound on expected performance. A good model can achieve a MAE of about 28, providing a performance upper-bound.

    You can learn more about this dataset here:

    We can load the dataset and split it into input and output elements and then train and test datasets.

    The complete example is listed below.
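A sketch of the listing, assuming the dataset location used earlier:

# load and split the auto insurance dataset (assumed dataset location)
from pandas import read_csv
from sklearn.model_selection import train_test_split
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
print(dataframe.shape)
# split into input and output elements
data = dataframe.values.astype('float32')
X, y = data[:, :-1], data[:, -1]
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)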


    Running the example loads the dataset, confirming the number of rows and columns, then splits the dataset into train and test sets.


    AutoKeras can be applied to a regression task using the StructuredDataRegressor class and configured for the number of models to trial.
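For example (again, 15 trials is an illustrative assumption):

# define the search, optimizing mean absolute error
search = StructuredDataRegressor(max_trials=15, loss='mean_absolute_error')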


    The search can then be run and the best model saved, much like in the classification case.


We can then use the best-performing model, evaluate it on the hold-out dataset, make a prediction on new data, and summarize its structure.


    Tying this together, the complete example of using AutoKeras to discover an effective neural network model for the auto insurance dataset is listed below.
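The listing below is a sketch under the same assumptions; note that the exact values returned by evaluate() can vary with the AutoKeras version:

# use autokeras to find a model for the auto insurance dataset
from pandas import read_csv
from sklearn.model_selection import train_test_split
from autokeras import StructuredDataRegressor
# load the dataset (assumed dataset location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
print(dataframe.shape)
data = dataframe.values.astype('float32')
X, y = data[:, :-1], data[:, -1]
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# define the search, optimizing mean absolute error
search = StructuredDataRegressor(max_trials=15, loss='mean_absolute_error')
# perform the search
search.fit(x=X_train, y=y_train, verbose=0)
# evaluate the best model on the holdout test set (first value is the MAE loss)
print(search.evaluate(X_test, y_test, verbose=0))
# get the best performing model and summarize its structure
model = search.export_model()
model.summary()
# save the best performing model to file for later use
model.save('model_insurance')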


    Running the example will report a lot of debug information about the progress of the search.

    The models and results are all saved in a folder called “structured_data_regressor” in your current working directory.


    The best-performing model is then evaluated on the hold-out test dataset.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    In this case, we can see that the model achieved a MAE of about 24.


    Next, the architecture of the best-performing model is reported.

    We can see a model with two hidden layers with ReLU activation.


    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Summary

    In this tutorial, you discovered how to use AutoKeras to find good neural network models for classification and regression tasks.

    Specifically, you learned:

    • AutoKeras is an implementation of AutoML for deep learning that uses neural architecture search.
    • How to use AutoKeras to find a top-performing model for a binary classification dataset.
    • How to use AutoKeras to find a top-performing model for a regression dataset.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.




    Scikit-Optimize for Hyperparameter Tuning in Machine Learning

    Hyperparameter optimization refers to performing a search in order to discover the set of specific model configuration arguments that result in the best performance of the model on a specific dataset.

    There are many ways to perform hyperparameter optimization, although modern methods, such as Bayesian Optimization, are fast and effective. The Scikit-Optimize library is an open-source Python library that provides an implementation of Bayesian Optimization that can be used to tune the hyperparameters of machine learning models from the scikit-Learn Python library.

    You can easily use the Scikit-Optimize library to tune the models on your next machine learning project.

    In this tutorial, you will discover how to use the Scikit-Optimize library to use Bayesian Optimization for hyperparameter tuning.

    After completing this tutorial, you will know:

    • Scikit-Optimize provides a general toolkit for Bayesian Optimization that can be used for hyperparameter tuning.
    • How to manually use the Scikit-Optimize library to tune the hyperparameters of a machine learning model.
    • How to use the built-in BayesSearchCV class to perform model hyperparameter tuning.

    Let’s get started.

Scikit-Optimize for Hyperparameter Tuning in Machine Learning
Photo by Dan Nevill, some rights reserved.

    Tutorial Overview

    This tutorial is divided into four parts; they are:

  • Scikit-Optimize
  • Machine Learning Dataset and Model
  • Manually Tune Algorithm Hyperparameters
  • Automatically Tune Algorithm Hyperparameters
Scikit-Optimize

    Scikit-Optimize, or skopt for short, is an open-source Python library for performing optimization tasks.

    It offers efficient optimization algorithms, such as Bayesian Optimization, and can be used to find the minimum or maximum of arbitrary cost functions.

    Bayesian Optimization provides a principled technique based on Bayes Theorem to direct a search of a global optimization problem that is efficient and effective. It works by building a probabilistic model of the objective function, called the surrogate function, that is then searched efficiently with an acquisition function before candidate samples are chosen for evaluation on the real objective function.

    For more on the topic of Bayesian Optimization, see the tutorial:

    Importantly, the library provides support for tuning the hyperparameters of machine learning algorithms offered by the scikit-learn library, so-called hyperparameter optimization. As such, it offers an efficient alternative to less efficient hyperparameter optimization procedures such as grid search and random search.

    The scikit-optimize library can be installed using pip, as follows:
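sudo pip install scikit-optimize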


    Once installed, we can import the library and print the version number to confirm the library was installed successfully and can be accessed.

    The complete example is listed below.
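# report scikit-optimize version number
import skopt
print('skopt %s' % skopt.__version__)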


    Running the example reports the currently installed version number of scikit-optimize.

    Your version number should be the same or higher.


    For more installation instructions, see the documentation:

    Now that we are familiar with what Scikit-Optimize is and how to install it, let’s explore how we can use it to tune the hyperparameters of a machine learning model.

    Machine Learning Dataset and Model

    First, let’s select a standard dataset and a model to address it.

We will use the ionosphere machine learning dataset. This is a standard machine learning dataset comprising 351 rows of data with 34 numerical input variables and a target variable with two class values, e.g. binary classification.

    Using a test harness of repeated stratified 10-fold cross-validation with three repeats, a naive model can achieve an accuracy of about 64 percent. A top performing model can achieve accuracy on this same test harness of about 94 percent. This provides the bounds of expected performance on this dataset.

    The dataset involves predicting whether measurements of the ionosphere indicate a specific structure or not.

    You can learn more about the dataset here:

    No need to download the dataset; we will download it automatically as part of our worked examples.

    The example below downloads the dataset and summarizes its shape.
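A sketch of this step, assuming the ionosphere CSV file is available at the URL shown:

# summarize the ionosphere dataset (assumed dataset location)
from pandas import read_csv
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
data = read_csv(url, header=None).values
# split into input and output elements
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)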


    Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 351 rows of data with 34 input variables.


    We can evaluate a support vector machine (SVM) model on this dataset using repeated stratified cross-validation.

    We can report the mean model performance on the dataset averaged over all folds and repeats, which will provide a reference for model hyperparameter tuning performed in later sections.

    The complete example is listed below.
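The listing below is a sketch under the same dataset assumption:

# evaluate an svm with default hyperparameters on the ionosphere dataset
from numpy import mean
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
# load the dataset (assumed dataset location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
data = read_csv(url, header=None).values
X = data[:, :-1].astype('float32')
y = LabelEncoder().fit_transform(data[:, -1].astype('str'))
# define and evaluate the model
model = SVC()
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Mean Accuracy: %.3f' % mean(scores))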


    Running the example first loads and prepares the dataset, then evaluates the SVM model on the dataset.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    In this case, we can see that the SVM with default hyperparameters achieved a mean classification accuracy of about 83.7 percent, which is skillful and close to the top performance on the problem of 94 percent.


    Next, let’s see if we can improve performance by tuning the model hyperparameters using the scikit-optimize library.

    Manually Tune Algorithm Hyperparameters

    The Scikit-Optimize library can be used to tune the hyperparameters of a machine learning model.

    We can achieve this manually by using the Bayesian Optimization capabilities of the library.

    This requires that we first define a search space. In this case, this will be the hyperparameters of the model that we wish to tune, and the scope or range of each hyperparameter.

    We will tune the following hyperparameters of the SVM model:

    • C, the regularization parameter.
    • kernel, the type of kernel used in the model.
    • degree, used for the polynomial kernel.
    • gamma, used in most other kernels.

    For the numeric hyperparameters C and gamma, we will define a log scale to search between a small value of 1e-6 and 100. Degree is an integer and we will search values between 1 and 5. Finally, the kernel is a categorical variable with specific named values.

We can define the search space for these four hyperparameters as a list of data types from the skopt library, as follows:
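# define the space of hyperparameters to search
from skopt.space import Integer, Real, Categorical
search_space = list()
search_space.append(Real(1e-6, 100.0, 'log-uniform', name='C'))
search_space.append(Categorical(['linear', 'poly', 'rbf', 'sigmoid'], name='kernel'))
search_space.append(Integer(1, 5, name='degree'))
search_space.append(Real(1e-6, 100.0, 'log-uniform', name='gamma'))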


    Note the data type, the range, and the name of the hyperparameter specified for each.

We can then define a function that will be called by the search procedure. This function, expected by the optimization procedure later, takes a specific set of hyperparameters, evaluates a model configured with them, and returns a score for that set of hyperparameters.

    In our case, we want to evaluate the model using repeated stratified 10-fold cross-validation on our ionosphere dataset. We want to maximize classification accuracy, e.g. find the set of model hyperparameters that give the best accuracy. By default, the process minimizes the score returned from this function, therefore, we will return one minus the accuracy, e.g. perfect skill will be (1 – accuracy) or 0.0, and the worst skill will be 1.0.

    The evaluate_model() function below implements this and takes a specific set of hyperparameters.
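A sketch of the function, continuing from the search space above and assuming the loaded X and y from the previous section:

# define the function used to evaluate a given configuration
from numpy import mean
from skopt.utils import use_named_args

@use_named_args(search_space)
def evaluate_model(**params):
    # configure the model with the specific hyperparameters
    model = SVC()
    model.set_params(**params)
    # evaluate with repeated stratified 10-fold cross-validation
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    result = cross_val_score(model, X, y, cv=cv, n_jobs=-1, scoring='accuracy')
    # return one minus accuracy so the procedure can minimize
    return 1.0 - mean(result)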


    Next, we can execute the search by calling the gp_minimize() function and passing the name of the function to call to evaluate each model and the search space to optimize.
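# perform optimization
from skopt import gp_minimize
result = gp_minimize(evaluate_model, search_space)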


    The procedure will run until it converges and returns a result.

The result object contains lots of details, but importantly, we can access the score of the best-performing configuration and the hyperparameters used by the best-performing model.


    Tying this together, the complete example of manually tuning the hyperparameters of an SVM on the ionosphere dataset is listed below.
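The listing below is a sketch, combining the steps above under the same dataset assumption:

# manually tune svm hyperparameters on the ionosphere dataset with skopt
from numpy import mean
from pandas import read_csv
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from skopt import gp_minimize
from skopt.space import Integer, Real, Categorical
from skopt.utils import use_named_args
# load the dataset (assumed dataset location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
data = read_csv(url, header=None).values
X = data[:, :-1].astype('float32')
y = LabelEncoder().fit_transform(data[:, -1].astype('str'))
# define the space of hyperparameters to search
search_space = [
    Real(1e-6, 100.0, 'log-uniform', name='C'),
    Categorical(['linear', 'poly', 'rbf', 'sigmoid'], name='kernel'),
    Integer(1, 5, name='degree'),
    Real(1e-6, 100.0, 'log-uniform', name='gamma')]
# define the function used to evaluate a given configuration
@use_named_args(search_space)
def evaluate_model(**params):
    model = SVC()
    model.set_params(**params)
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    result = cross_val_score(model, X, y, cv=cv, n_jobs=-1, scoring='accuracy')
    # return one minus accuracy so the procedure can minimize
    return 1.0 - mean(result)
# perform optimization
result = gp_minimize(evaluate_model, search_space)
# summarize the finding
print('Best Accuracy: %.3f' % (1.0 - result.fun))
print('Best Parameters: %s' % result.x)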


    Running the example may take a few moments, depending on the speed of your machine.

    You may see some warning messages that you can safely ignore, such as:


    At the end of the run, the best-performing configuration is reported.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the best configuration, reported in order of the search space list, was a modest C value, an RBF kernel, a degree of 2 (ignored by the RBF kernel), and a modest gamma value.

Importantly, we can see that the skill of this model was approximately 94.7 percent, which is a top-performing model.


    This is not the only way to use the Scikit-Optimize library for hyperparameter tuning. In the next section, we can see a more automated approach.

    Automatically Tune Algorithm Hyperparameters

    The Scikit-Learn machine learning library provides tools for tuning model hyperparameters.

    Specifically, it provides the GridSearchCV and RandomizedSearchCV classes that take a model, a search space, and a cross-validation configuration.

    The benefit of these classes is that the search procedure is performed automatically, requiring minimal configuration.

    Similarly, the Scikit-Optimize library provides a similar interface for performing a Bayesian Optimization of model hyperparameters via the BayesSearchCV class.

    This class can be used in the same way as the Scikit-Learn equivalents.

    First, the search space must be defined as a dictionary with hyperparameter names used as the key and the scope of the variable as the value.
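For example:

# define the search space as a dictionary mapping hyperparameter names to ranges
from skopt.space import Integer, Real, Categorical
params = dict()
params['C'] = Real(1e-6, 100.0, 'log-uniform')
params['kernel'] = Categorical(['linear', 'poly', 'rbf', 'sigmoid'])
params['degree'] = Integer(1, 5)
params['gamma'] = Real(1e-6, 100.0, 'log-uniform')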


    We can then define the BayesSearchCV configuration taking the model we wish to evaluate, the hyperparameter search space, and the cross-validation configuration.
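For example:

# define the evaluation procedure and the search
from skopt import BayesSearchCV
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
search = BayesSearchCV(estimator=SVC(), search_spaces=params, n_jobs=-1, cv=cv)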


    We can then execute the search and report the best result and configuration at the end.


    Tying this together, the complete example of automatically tuning SVM hyperparameters using the BayesSearchCV class on the ionosphere dataset is listed below.
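The listing below is a sketch under the same dataset assumption:

# automatically tune svm hyperparameters on the ionosphere dataset with BayesSearchCV
from pandas import read_csv
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from skopt import BayesSearchCV
from skopt.space import Integer, Real, Categorical
# load the dataset (assumed dataset location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
data = read_csv(url, header=None).values
X = data[:, :-1].astype('float32')
y = LabelEncoder().fit_transform(data[:, -1].astype('str'))
# define the search space as a dictionary
params = dict()
params['C'] = Real(1e-6, 100.0, 'log-uniform')
params['kernel'] = Categorical(['linear', 'poly', 'rbf', 'sigmoid'])
params['degree'] = Integer(1, 5)
params['gamma'] = Real(1e-6, 100.0, 'log-uniform')
# define the evaluation procedure and the search
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
search = BayesSearchCV(estimator=SVC(), search_spaces=params, n_jobs=-1, cv=cv)
# perform the search
search.fit(X, y)
# report the best result
print(search.best_score_)
print(search.best_params_)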


    Running the example may take a few moments, depending on the speed of your machine.

    You may see some warning messages that you can safely ignore, such as:


    At the end of the run, the best-performing configuration is reported.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the model exceeded the previously reported top performance on this dataset, achieving a mean classification accuracy of about 95.2 percent.

    The search discovered a large C value, an RBF kernel, and a small gamma value.


    This provides a template that you can use to tune the hyperparameters on your machine learning project.

    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Related Tutorials
    APIs

    Summary

    In this tutorial, you discovered how to use the Scikit-Optimize library to use Bayesian Optimization for hyperparameter tuning.

    Specifically, you learned:

    • Scikit-Optimize provides a general toolkit for Bayesian Optimization that can be used for hyperparameter tuning.
    • How to manually use the Scikit-Optimize library to tune the hyperparameters of a machine learning model.
    • How to use the built-in BayesSearchCV class to perform model hyperparameter tuning.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.




    Plot a Decision Surface for Machine Learning Algorithms in Python

    Last Updated on August 26, 2020

    Classification algorithms learn how to assign class labels to examples, although their decisions can appear opaque.

    A popular diagnostic for understanding the decisions made by a classification algorithm is the decision surface. This is a plot that shows how a fit machine learning algorithm predicts a coarse grid across the input feature space.

    A decision surface plot is a powerful tool for understanding how a given model “sees” the prediction task and how it has decided to divide the input feature space by class label.

    In this tutorial, you will discover how to plot a decision surface for a classification machine learning algorithm.

    After completing this tutorial, you will know:

    • Decision surface is a diagnostic tool for understanding how a classification algorithm divides up the feature space.
    • How to plot a decision surface for using crisp class labels for a machine learning algorithm.
    • How to plot and interpret a decision surface using predicted probabilities.


    Let’s get started.

Plot a Decision Surface for Machine Learning Algorithms in Python
Photo by Tony Webster, some rights reserved.

    Tutorial Overview

    This tutorial is divided into three parts; they are:

  • Decision Surface
  • Dataset and Model
  • Plot a Decision Surface
Decision Surface

    Classification machine learning algorithms learn to assign labels to input examples.

    Consider numeric input features for the classification task defining a continuous input feature space.

    We can think of each input feature defining an axis or dimension on a feature space. Two input features would define a feature space that is a plane, with dots representing input coordinates in the input space. If there were three input variables, the feature space would be a three-dimensional volume.

Each point in the space can be assigned a class label. In terms of a two-dimensional feature space, we can think of each point on the plane having a different color according to its assigned class.

    The goal of a classification algorithm is to learn how to divide up the feature space such that labels are assigned correctly to points in the feature space, or at least, as correctly as is possible.

    This is a useful geometric understanding of classification predictive modeling. We can take it one step further.

    Once a classification machine learning algorithm divides a feature space, we can then classify each point in the feature space, on some arbitrary grid, to get an idea of how exactly the algorithm chose to divide up the feature space.

    This is called a decision surface or decision boundary, and it provides a diagnostic tool for understanding a model on a classification predictive modeling task.

    Although the notion of a “surface” suggests a two-dimensional feature space, the method can be used with feature spaces with more than two dimensions, where a surface is created for each pair of input features.

    Now that we are familiar with what a decision surface is, next, let’s define a dataset and model for which we later explore the decision surface.

    Dataset and Model

    In this section, we will define a classification task and predictive model to learn the task.

    Synthetic Classification Dataset

We can use the make_blobs() scikit-learn function to define a classification task with a two-dimensional numerical feature space, where each point is assigned one of two class labels, e.g. a binary classification task. For example:
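A sketch of the dataset definition (the specific argument values are illustrative):

# generate a binary classification dataset with a two-dimensional feature space
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)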


Once defined, we can then create a scatter plot of the feature space with the first feature defining the x-axis, the second feature defining the y-axis, and each sample represented as a point in the feature space.

    We can then color points in the scatter plot according to their class label as either 0 or 1.


    Tying this together, the complete example of defining and plotting a synthetic classification dataset is listed below.
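A sketch of the full listing, under the same illustrative arguments:

# scatter plot of a synthetic binary classification dataset, colored by class label
from numpy import where
from matplotlib import pyplot
from sklearn.datasets import make_blobs
# generate the dataset
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
# create a scatter plot for samples from each class
for class_value in range(2):
    # get row indexes for samples with this class
    row_ix = where(y == class_value)
    # plot the points for this class
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
pyplot.show()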


    Running the example creates the dataset, then plots the dataset as a scatter plot with points colored by class label.

    We can see a clear separation between examples from the two classes and we can imagine how a machine learning model might draw a line to separate the two classes, e.g. perhaps a diagonal line right through the middle of the two groups.

Scatter Plot of Binary Classification Dataset With 2D Feature Space

    Fit Classification Predictive Model

    We can now fit a model on our dataset.

    In this case, we will fit a logistic regression algorithm because we can predict both crisp class labels and probabilities, both of which we can use in our decision surface.

    We can define the model, then fit it on the training dataset.
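For example (X and y are the dataset generated above):

# define the model
model = LogisticRegression()
# fit the model on the dataset
model.fit(X, y)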


    Once defined, we can use the model to make a prediction for the training dataset to get an idea of how well it learned to divide the feature space of the training dataset and assign labels.


    The predictions can be evaluated using classification accuracy.


    Tying this together, the complete example of fitting and evaluating a model on the synthetic binary classification dataset is listed below.
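A sketch of the full listing:

# fit and evaluate a logistic regression model on the synthetic dataset
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
# generate the dataset
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
# define and fit the model
model = LogisticRegression()
model.fit(X, y)
# make predictions for the training dataset
yhat = model.predict(X)
# evaluate the predictions with classification accuracy
acc = accuracy_score(y, yhat)
print('Accuracy: %.3f' % acc)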


    Running the example fits the model and makes a prediction for each example.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    In this case, we can see that the model achieved a performance of about 97.2 percent.


    Now that we have a dataset and model, let’s explore how we can develop a decision surface.

    Plot a Decision Surface

    We can create a decision surface by fitting a model on the training dataset, then using the model to make predictions for a grid of values across the input domain.

    Once we have the grid of predictions, we can plot the values and their class label.

    A scatter plot could be used if a fine enough grid was taken. A better approach is to use a contour plot that can interpolate the colors between the points.

    The contourf() Matplotlib function can be used.

    This requires a few steps.

    First, we need to define a grid of points across the feature space.

    To do this, we can find the minimum and maximum values for each feature and expand the grid one step beyond that to ensure the whole feature space is covered.
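For example:

# define the bounds of the domain, one step beyond the observed min and max
min1, max1 = X[:, 0].min()-1, X[:, 0].max()+1
min2, max2 = X[:, 1].min()-1, X[:, 1].max()+1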


    We can then create a uniform sample across each dimension using the arange() function at a chosen resolution. We will use a resolution of 0.1 in this case.
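# define the x and y scale at a resolution of 0.1
from numpy import arange
x1grid = arange(min1, max1, 0.1)
x2grid = arange(min2, max2, 0.1)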


    Now we need to turn this into a grid.

    We can use the meshgrid() NumPy function to create a grid from these two vectors.

    If the first feature x1 is our x-axis of the feature space, then we need one row of x1 values of the grid for each point on the y-axis.

    Similarly, if we take x2 as our y-axis of the feature space, then we need one column of x2 values of the grid for each point on the x-axis.

The meshgrid() function will do this for us, duplicating the rows and columns as needed. It returns two grids for the two input vectors: the first grid of x-values and the second of y-values, each organized in an appropriately sized grid of rows and columns across the feature space.
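# create all of the rows and columns of the grid
from numpy import meshgrid
xx, yy = meshgrid(x1grid, x2grid)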


    We then need to flatten out the grid to create samples that we can feed into the model and make a prediction.

    To do this, first, we flatten each grid into a vector.
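# flatten each grid to a vector and reshape into column vectors
r1, r2 = xx.flatten(), yy.flatten()
r1, r2 = r1.reshape((len(r1), 1)), r2.reshape((len(r2), 1))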


    Then we stack the vectors side by side as columns in an input dataset, e.g. like our original training dataset, but at a much higher resolution.
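# horizontally stack the vectors to create input for the model
from numpy import hstack
grid = hstack((r1, r2))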


    We can then feed this into our model and get a prediction for each point in the grid.
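Continuing the snippets above (model is the fitted logistic regression from the previous section):

# make predictions for every point on the grid
yhat = model.predict(grid)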


    So far, so good.

    We have a grid of values across the feature space and the class labels as predicted by our model.

    Next, we need to plot the grid of values as a contour plot.

    The contourf() function takes separate grids for each axis, just like what was returned from our prior call to meshgrid(). Great!

    So we can use xx and yy that we prepared earlier and simply reshape the predictions (yhat) from the model to have the same shape.
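# reshape the predictions back into a grid
zz = yhat.reshape(xx.shape)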


    We then plot the decision surface with a two-color colormap.
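Continuing the snippets above:

# plot the grid of class predictions as a contour plot with a two-color colormap
pyplot.contourf(xx, yy, zz, cmap='Paired')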


    We can then plot the actual points of the dataset over the top to see how well they were separated by the logistic regression decision surface.

    The complete example of plotting a decision surface for a logistic regression model on our synthetic binary classification dataset is listed below.
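A sketch of the full listing, combining the steps above:

# plot a decision surface for a logistic regression model
from numpy import arange, hstack, meshgrid, where
from matplotlib import pyplot
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
# generate the dataset
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
# define the bounds of the domain
min1, max1 = X[:, 0].min()-1, X[:, 0].max()+1
min2, max2 = X[:, 1].min()-1, X[:, 1].max()+1
# define the x and y scale
x1grid = arange(min1, max1, 0.1)
x2grid = arange(min2, max2, 0.1)
# create all of the rows and columns of the grid
xx, yy = meshgrid(x1grid, x2grid)
# flatten each grid to a vector and stack as input for the model
r1 = xx.flatten().reshape((-1, 1))
r2 = yy.flatten().reshape((-1, 1))
grid = hstack((r1, r2))
# define and fit the model
model = LogisticRegression()
model.fit(X, y)
# make predictions for the grid and reshape back into a grid
yhat = model.predict(grid)
zz = yhat.reshape(xx.shape)
# plot the grid of class values as a contour plot
pyplot.contourf(xx, yy, zz, cmap='Paired')
# plot the actual points of the dataset over the top
for class_value in range(2):
    row_ix = where(y == class_value)
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
pyplot.show()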


    Running the example fits the model and uses it to predict outcomes for the grid of values across the feature space and plots the result as a contour plot.

    We can see, as we might have suspected, logistic regression divides the feature space using a straight line. It is a linear model, after all; this is all it can do.

    Creating a decision surface is almost like magic. It gives immediate and meaningful insight into how the model has learned the task.

    Try it with different algorithms, like an SVM or decision tree.
    Post your resulting maps as links in the comments below!

    Decision Surface for Logistic Regression on a Binary Classification Task

    We can add more depth to the decision surface by using the model to predict probabilities instead of class labels.
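
    For example (a sketch, continuing from the listing above):

        # predict the probability of class membership for each grid point,
        # keeping just the probability of the positive class (class 1)
        yhat = model.predict_proba(grid)
        yhat = yhat[:, 1]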


    When plotted, we can see how confident or likely it is that each point in the feature space belongs to each of the class labels, as seen by the model.

    We can use a different color map that has gradations, and show a legend so we can interpret the colors.
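
    For example ('RdBu' is one reasonable choice of graded colormap):

        # reshape the probabilities into a grid, plot them, and add a colorbar legend
        zz = yhat.reshape(xx.shape)
        c = pyplot.contourf(xx, yy, zz, cmap='RdBu')
        pyplot.colorbar(c)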


    The complete example of creating a decision surface using probabilities is listed below.
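
    A self-contained sketch is given below, under the same dataset assumptions as before:

        # sketch: probability decision surface for logistic regression
        from numpy import arange, hstack, meshgrid
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from matplotlib import pyplot
        # generate a two-feature binary classification dataset (arguments assumed)
        X, y = make_classification(n_samples=1000, n_features=2, n_informative=2,
            n_redundant=0, n_clusters_per_class=1, random_state=1)
        # build a 0.1-resolution grid across the padded feature space
        min1, max1 = X[:, 0].min() - 1, X[:, 0].max() + 1
        min2, max2 = X[:, 1].min() - 1, X[:, 1].max() + 1
        xx, yy = meshgrid(arange(min1, max1, 0.1), arange(min2, max2, 0.1))
        r1, r2 = xx.flatten(), yy.flatten()
        grid = hstack((r1.reshape((len(r1), 1)), r2.reshape((len(r2), 1))))
        # fit the model and predict the probability of class 1 for each grid point
        model = LogisticRegression()
        model.fit(X, y)
        yhat = model.predict_proba(grid)[:, 1]
        # plot the probabilities with a graded colormap and a colorbar legend
        zz = yhat.reshape(xx.shape)
        c = pyplot.contourf(xx, yy, zz, cmap='RdBu')
        pyplot.colorbar(c)
        # overlay the training points, one scatter per class
        for class_value in range(2):
            row_ix = y == class_value
            pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
        pyplot.show()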


    Running the example predicts the probability of class membership for each point on the grid across the feature space and plots the result.

    Here, we can see that the model is unsure (lighter colors) around the middle of the domain, given the sampling noise in that area of the feature space. We can also see that the model is very confident (full colors) in the bottom-left and top-right halves of the domain.

    Together, the crisp class and probability decision surfaces are powerful diagnostic tools for understanding your model and how it divides the feature space for your predictive modeling task.

    Probability Decision Surface for Logistic Regression on a Binary Classification Task


    Summary

    In this tutorial, you discovered how to plot a decision surface for a classification machine learning algorithm.

    Specifically, you learned:

    • A decision surface is a diagnostic tool for understanding how a classification algorithm divides up the feature space.
    • How to plot a decision surface using crisp class labels for a machine learning algorithm.
    • How to plot and interpret a decision surface using predicted probabilities.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.



    How to use Seaborn Data Visualization for Machine Learning

    Last Updated on August 19, 2020

    Data visualization provides insight into the distribution and relationships between variables in a dataset.

    This insight can be helpful in selecting data preparation techniques to apply prior to modeling and the types of algorithms that may be most suited to the data.

    Seaborn is a data visualization library for Python that runs on top of the popular Matplotlib data visualization library, and it provides a simpler interface and aesthetically better-looking plots.

    In this tutorial, you will discover a gentle introduction to Seaborn data visualization for machine learning.

    After completing this tutorial, you will know:

    • How to summarize the distribution of variables using bar charts, histograms, and box and whisker plots.
    • How to summarize relationships using line plots and scatter plots.
    • How to compare the distribution and relationships of variables for different class values on the same plot.

    Kick-start your project with my new book Machine Learning Mastery With Python, including step-by-step tutorials and the Python source code files for all examples.

    Let’s get started.

    How to use Seaborn Data Visualization for Machine Learning
    Photo by Martin Pettitt, some rights reserved.

    Tutorial Overview

    This tutorial is divided into six parts; they are:

    • Seaborn Data Visualization Library
    • Line Plots
    • Bar Chart Plots
    • Histogram Plots
    • Box and Whisker Plots
    • Scatter Plots

    Seaborn Data Visualization Library

    The primary plotting library for Python is called Matplotlib.

    Seaborn is a plotting library that offers a simpler interface, sensible defaults for plots needed for machine learning, and most importantly, the plots are aesthetically better looking than those in Matplotlib.

    Seaborn requires that Matplotlib is installed first.

    You can install Matplotlib directly using pip, as follows:
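
    For example:

        pip install matplotlib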


    Once installed, you can confirm that the library can be loaded and used by printing the version number, as follows:
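
    A minimal check:

        # check the installed Matplotlib version
        import matplotlib
        print('matplotlib: %s' % matplotlib.__version__)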


    Running the example prints the current version of the Matplotlib library.


    Next, the Seaborn library can be installed, also using pip:
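
    For example:

        pip install seaborn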


    Once installed, we can also confirm the library can be loaded and used by printing the version number, as follows:
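
    A minimal check:

        # check the installed Seaborn version
        import seaborn
        print('seaborn: %s' % seaborn.__version__)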


    Running the example prints the current version of the Seaborn library.


    To create Seaborn plots, you must import the Seaborn library and call functions to create the plots.

    Importantly, Seaborn plotting functions expect data to be provided as Pandas DataFrames. This means that if you are loading your data from CSV files, you must use Pandas functions like read_csv() to load the data as a DataFrame. When plotting, columns can then be specified via their column name or column index.

    To show the plot, you can call the show() function from the Matplotlib library.
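
    For example:

        # display the current figure
        from matplotlib import pyplot
        pyplot.show()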


    Alternatively, the plots can be saved to file, such as a PNG formatted image file. The savefig() Matplotlib function can be used to save images.
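
    For example (the filename here is hypothetical):

        # save the current figure to a PNG file
        pyplot.savefig('my_plot.png')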


    Now that we have Seaborn installed, let’s look at some common plots we may need when working with machine learning data.

    Line Plots

    A line plot is generally used to present observations collected at regular intervals.

    The x-axis represents the regular interval, such as time. The y-axis shows the observations, ordered by the x-axis and connected by a line.

    A line plot can be created in Seaborn by calling the lineplot() function and passing the x-axis data for the regular interval, and y-axis for the observations.

    We can demonstrate a line plot using a time series dataset of monthly car sales.

    The dataset has two columns: “Month” and “Sales.” Month will be used as the x-axis and Sales will be plotted on the y-axis.
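
    For example (a sketch, assuming the CSV has been loaded into a Pandas DataFrame named dataset and Seaborn imported as sns):

        # line plot of Sales (y-axis) against Month (x-axis)
        sns.lineplot(x='Month', y='Sales', data=dataset)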


    Tying this together, the complete example is listed below.
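
    A self-contained sketch; the hosted copy of the monthly car sales dataset is an assumption:

        # sketch: line plot of a monthly car sales time series
        from pandas import read_csv
        from matplotlib import pyplot
        import seaborn as sns
        # load the dataset as a DataFrame (hosted copy assumed)
        url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
        dataset = read_csv(url, header=0)
        # create the line plot and display it
        sns.lineplot(x='Month', y='Sales', data=dataset)
        pyplot.show()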


    Running the example first loads the time series dataset and creates a line plot of the data, clearly showing a trend and seasonality in the sales data.

    Line Plot of a Time Series Dataset

    For more great examples of line plots with Seaborn, see: Visualizing statistical relationships.

    Bar Chart Plots

    A bar chart is generally used to present relative quantities for multiple categories.

    The x-axis represents the categories that are spaced evenly. The y-axis represents the quantity for each category and is drawn as a bar from the baseline to the appropriate level on the y-axis.

    A bar chart can be created in Seaborn by calling the countplot() function and passing the data.

    We will demonstrate a bar chart with a variable from the breast cancer classification dataset that is comprised of categorical input variables.

    We will just plot one variable, in this case, the first variable, which is the age bracket.
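
    For example (a sketch, assuming the dataset has been loaded with header=None so that columns are addressed by integer index):

        # bar chart (counts) of the age bracket variable in column 0
        sns.countplot(x=0, data=dataset)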


    Tying this together, the complete example is listed below.
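
    A self-contained sketch; the hosted copy of the breast cancer dataset is an assumption:

        # sketch: bar chart of the age bracket variable of the breast cancer dataset
        from pandas import read_csv
        from matplotlib import pyplot
        import seaborn as sns
        # load the dataset with no header so columns are indexed 0..9 (hosted copy assumed)
        url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/breast-cancer.csv'
        dataset = read_csv(url, header=None)
        # count plot of the first column (age bracket)
        sns.countplot(x=0, data=dataset)
        pyplot.show()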


    Running the example first loads the breast cancer dataset and creates a bar chart plot of the data, showing each age group and the number of individuals (samples) that fall within each group.

    Bar Chart Plot of Age Range Categorical Variable

    We might also want to plot the counts for each category for a variable, such as the first variable, against the class label.

    This can be achieved using the countplot() function and specifying the class variable (column index 9) via the “hue” argument, as follows:
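
    For example (a sketch):

        # counts for each age bracket, split by the class label in column 9
        sns.countplot(x=0, hue=9, data=dataset)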


    Tying this together, the complete example is listed below.
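
    A self-contained sketch, under the same assumptions as above:

        # sketch: bar chart of age bracket counts, separated by class label
        from pandas import read_csv
        from matplotlib import pyplot
        import seaborn as sns
        url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/breast-cancer.csv'
        dataset = read_csv(url, header=None)
        # count plot of column 0, with one bar per class label (column 9)
        sns.countplot(x=0, hue=9, data=dataset)
        pyplot.show()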


    Running the example first loads the breast cancer dataset and creates a bar chart plot of the data, showing each age group and the number of individuals (samples) that fall within each group separated by the two class labels for the dataset.

    Bar Chart Plot of Age Range Categorical Variable by Class Label

    For more great examples of bar chart plots with Seaborn, see: Plotting with categorical data.

    Histogram Plots

    A histogram plot is generally used to summarize the distribution of a numerical data sample.

    The x-axis represents discrete bins or intervals for the observations. For example, observations with values between 1 and 10 may be split into five bins: values [1,2] would be allocated to the first bin, [3,4] to the second bin, and so on.

    The y-axis represents the frequency or count of the number of observations in the dataset that belong to each bin.

    Essentially, a data sample is transformed into a bar chart where each category on the x-axis represents an interval of observation values.

    A histogram can be created in Seaborn by calling the distplot() function and passing the variable.

    We will demonstrate a histogram with a numerical variable from the diabetes classification dataset. We will just plot one variable, in this case, the first variable, which is the number of times that a patient was pregnant.
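
    For example (a sketch; note that recent Seaborn releases deprecate distplot() in favor of histplot() and displot()):

        # histogram (with a density estimate) of the first column
        sns.distplot(dataset[0])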


    Tying this together, the complete example is listed below.
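
    A self-contained sketch; the hosted copy of the Pima Indians diabetes dataset is an assumption:

        # sketch: histogram of the number of times pregnant in the diabetes dataset
        from pandas import read_csv
        from matplotlib import pyplot
        import seaborn as sns
        # load the dataset with no header so columns are indexed 0..8 (hosted copy assumed)
        url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.csv'
        dataset = read_csv(url, header=None)
        # histogram plus kernel density estimate of the first column
        sns.distplot(dataset[0])
        pyplot.show()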


    Running the example first loads the diabetes dataset and creates a histogram plot of the variable, showing the distribution of the values with a hard cut-off at zero.

    The plot shows both the histogram (counts of bins) as well as a smooth estimate of the probability density function.

    Histogram Plot of Number of Times Pregnant Numerical Variable

    For more great examples of histogram plots with Seaborn, see: Visualizing the distribution of a dataset.

    Box and Whisker Plots

    A box and whisker plot, or boxplot for short, is generally used to summarize the distribution of a data sample.

    The x-axis is used to represent the data sample, where multiple boxplots can be drawn side by side on the x-axis if desired.

    The y-axis represents the observation values. A box is drawn to summarize the middle 50 percent of the dataset starting at the observation at the 25th percentile and ending at the 75th percentile. This is called the interquartile range, or IQR. The median, or 50th percentile, is drawn with a line.

    Lines called whiskers are drawn extending from both ends of the box, at a length of 1.5 times the IQR, to demonstrate the expected range of sensible values in the distribution. Observations outside the whiskers might be outliers and are drawn with small circles.

    A boxplot can be created in Seaborn by calling the boxplot() function and passing the data.

    We will demonstrate a boxplot with a numerical variable from the diabetes classification dataset. We will just plot one variable, in this case, the first variable, which is the number of times that a patient was pregnant.
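
    For example (a sketch, under the same DataFrame assumptions as before):

        # box and whisker plot of the first column
        sns.boxplot(y=0, data=dataset)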


    Tying this together, the complete example is listed below.
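
    A self-contained sketch, under the same assumptions as above:

        # sketch: boxplot of the number of times pregnant in the diabetes dataset
        from pandas import read_csv
        from matplotlib import pyplot
        import seaborn as sns
        url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.csv'
        dataset = read_csv(url, header=None)
        # boxplot of the first column
        sns.boxplot(y=0, data=dataset)
        pyplot.show()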


    Running the example first loads the diabetes dataset and creates a boxplot of the first input variable, showing the distribution of the number of times patients were pregnant.

    We can see that the median is just above 2.5 times, with some outliers up around 15 times (wow!).

    Box and Whisker Plot of Number of Times Pregnant Numerical Variable

    We might also want to plot the distribution of the numerical variable for each value of a categorical variable, such as the first variable, against the class label.

    This can be achieved by calling the boxplot() function and passing the class variable as the x-axis and the numerical variable as the y-axis.
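
    For example (a sketch; column 8 holds the class label):

        # one boxplot of column 0 for each value of the class label in column 8
        sns.boxplot(x=8, y=0, data=dataset)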


    Tying this together, the complete example is listed below.
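
    A self-contained sketch, under the same assumptions as above:

        # sketch: boxplot of times pregnant, separated by class label
        from pandas import read_csv
        from matplotlib import pyplot
        import seaborn as sns
        url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.csv'
        dataset = read_csv(url, header=None)
        # one boxplot of the first column per class label (column 8)
        sns.boxplot(x=8, y=0, data=dataset)
        pyplot.show()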


    Running the example first loads the diabetes dataset and creates a boxplot of the data, showing the distribution of the number of times pregnant as a numerical variable for the two class labels.

    Box and Whisker Plot of Number of Times Pregnant Numerical Variable by Class Label

    Scatter Plots

    A scatter plot, or scatterplot, is generally used to summarize the relationship between two paired data samples.

    Paired data samples mean that two measures were recorded for a given observation, such as the weight and height of a person.

    The x-axis represents observation values for the first sample, and the y-axis represents the observation values for the second sample. Each point on the plot represents a single observation.

    A scatterplot can be created in Seaborn by calling the scatterplot() function and passing the two numerical variables.

    We will demonstrate a scatterplot with two numerical variables from the diabetes classification dataset. We will plot the first variable, the number of times that a patient was pregnant, against the second, the plasma glucose concentration from a two-hour oral glucose tolerance test.
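
    For example (a sketch, under the same DataFrame assumptions as before):

        # scatter plot of the first two numerical columns
        sns.scatterplot(x=0, y=1, data=dataset)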


    Tying this together, the complete example is listed below.
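
    A self-contained sketch, under the same assumptions as above:

        # sketch: scatter plot of times pregnant vs. plasma glucose
        from pandas import read_csv
        from matplotlib import pyplot
        import seaborn as sns
        url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.csv'
        dataset = read_csv(url, header=None)
        # scatter plot of columns 0 and 1
        sns.scatterplot(x=0, y=1, data=dataset)
        pyplot.show()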


    Running the example first loads the diabetes dataset and creates a scatter plot of the first two input variables.

    We can see that the points are spread fairly uniformly across the domain, suggesting little relationship between the two variables.

    Scatter Plot of Number of Times Pregnant vs. Plasma Glucose Numerical Variables

    We might also want to plot the relationship for the pair of numerical variables against the class label.

    This can be achieved using the scatterplot() function and specifying the class variable (column index 8) via the “hue” argument, as follows:
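
    For example (a sketch):

        # scatter plot of columns 0 and 1, colored by the class label in column 8
        sns.scatterplot(x=0, y=1, hue=8, data=dataset)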


    Tying this together, the complete example is listed below.
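
    A self-contained sketch, under the same assumptions as above:

        # sketch: scatter plot of times pregnant vs. plasma glucose, by class label
        from pandas import read_csv
        from matplotlib import pyplot
        import seaborn as sns
        url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.csv'
        dataset = read_csv(url, header=None)
        # scatter plot of columns 0 and 1, one color per class label (column 8)
        sns.scatterplot(x=0, y=1, hue=8, data=dataset)
        pyplot.show()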


    Running the example first loads the diabetes dataset and creates a scatter plot of the first two variables vs. class label.

    Scatter Plot of Number of Times Pregnant vs. Plasma Glucose Numerical Variables by Class Label


    Summary

    In this tutorial, you discovered a gentle introduction to Seaborn data visualization for machine learning.

    Specifically, you learned:

    • How to summarize the distribution of variables using bar charts, histograms, and box and whisker plots.
    • How to summarize relationships using line plots and scatter plots.
    • How to compare the distribution and relationships of variables for different class values on the same plot.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.

