
What jumps out in a photo changes the longer we look | MIT News

What seizes your attention at first glance might change with a closer look. That elephant dressed in red wallpaper might initially grab your eye until your gaze moves to the woman on the living room couch and the surprising realization that the pair appear to be sharing a quiet moment together.

In a study being presented at the virtual Computer Vision and Pattern Recognition conference this week, researchers show that our attention moves in distinctive ways the longer we stare at an image, and that these viewing patterns can be replicated by artificial intelligence models. The work suggests immediate ways of improving how visual content is teased and eventually displayed online. For example, an automated cropping tool might zoom in on the elephant for a thumbnail preview or zoom out to include the intriguing details that become visible once a reader clicks on the story.

“In the real world, we look at the scenes around us and our attention also moves,” says Anelise Newman, the study’s co-lead author and a master’s student at MIT. “What captures our interest over time varies.” The study’s senior authors are Zoya Bylinskii PhD ’18, a research scientist at Adobe Research, and Aude Oliva, co-director of the MIT Quest for Intelligence and a senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory.

What researchers know about saliency, and how humans perceive images, comes from experiments in which participants are shown pictures for a fixed period of time. But in the real world, human attention often shifts abruptly. To simulate this variability, the researchers used a crowdsourcing user interface called CodeCharts to show participants photos at three durations — half a second, 3 seconds, and 5 seconds — in a set of online experiments. 

When the image disappeared, participants were asked to report where they had last looked by typing in a three-digit code on a gridded map corresponding to the image. In the end, the researchers were able to gather heat maps of where in a given image participants had collectively focused their gaze at different moments in time. 

At the split-second interval, viewers focused on faces or a visually dominant animal or object. By 3 seconds, their gaze had shifted to action-oriented features, like a dog on a leash, an archery target, or an airborne frisbee. At 5 seconds, their gaze either shot back, boomerang-like, to the main subject, or it lingered on the suggestive details. 

“We were surprised at just how consistent these viewing patterns were at different durations,” says the study’s other lead author, Camilo Fosco, a PhD student at MIT.

With real-world data in hand, the researchers next trained a deep learning model to predict the focal points of images it had never seen before, at different viewing durations. To reduce the size of their model, they included a recurrent module that works on compressed representations of the input image, mimicking the human gaze as it explores an image at varying durations. When tested, their model outperformed the state of the art at predicting saliency across viewing durations.

The model has potential applications for editing and rendering compressed images and even improving the accuracy of automated image captioning. In addition to guiding an editing tool to crop an image for shorter or longer viewing durations, it could prioritize which elements in a compressed image to render first for viewers. By clearing away the visual clutter in a scene, it could improve the overall accuracy of current photo-captioning techniques. It could also generate captions for images meant for split-second viewing only. 

“The content that you consider most important depends on the time you have to look at it,” says Bylinskii. “If you see the full image at once, you may not have time to absorb it all.”

As more images and videos are shared online, the need for better tools to find and make sense of relevant content is growing. Research on human attention offers insights for technologists. Just as computers and camera-equipped mobile phones helped create the data overload, they are also giving researchers new platforms for studying human attention and designing better tools to help us cut through the noise.

In a related study accepted to the ACM Conference on Human Factors in Computing Systems, researchers outline the relative benefits of four web-based user interfaces, including CodeCharts, for gathering human attention data at scale. All four tools capture attention without relying on traditional eye-tracking hardware in a lab, either by collecting self-reported gaze data, as CodeCharts does, or by recording where subjects click their mouse or zoom in on an image.

“There’s no one-size-fits-all interface that works for all use cases, and our paper focuses on teasing apart these trade-offs,” says Newman, lead author of the study.

By making it faster and cheaper to gather human attention data, the platforms may help to generate new knowledge on human vision and cognition. “The more we learn about how humans see and understand the world, the more we can build these insights into our AI tools to make them more useful,” says Oliva.

Other authors of the CVPR paper are Pat Sukhum, Yun Bin Zhang, and Nanxuan Zhao. The research was supported by the Vannevar Bush Faculty Fellowship program, an Ignite grant from the SystemsThatLearn@CSAIL, and cloud computing services from MIT Quest.



Combined Algorithm Selection and Hyperparameter Optimization (CASH Optimization)

Machine learning model selection and configuration may be the biggest challenge in applied machine learning.

Controlled experiments must be performed in order to discover what works best for a given classification or regression predictive modeling task. This can feel overwhelming given the large number of data preparation schemes, learning algorithms, and model hyperparameters that could be considered.

The common approach is to use a shortcut, such as using a popular algorithm or testing a small number of algorithms with default hyperparameters.

A modern alternative is to consider the selection of data preparation, learning algorithm, and algorithm hyperparameters one large global optimization problem. This characterization is generally referred to as Combined Algorithm Selection and Hyperparameter Optimization, or “CASH Optimization” for short.

In this post, you will discover the challenge of machine learning model selection and the modern solution referred to as CASH Optimization.

After reading this post, you will know:

  • The challenge of machine learning model and hyperparameter selection.
  • The shortcuts of using popular models or making a series of sequential decisions.
  • The characterization of Combined Algorithm Selection and Hyperparameter Optimization that underlies modern AutoML.

Let’s get started.

Combined Algorithm Selection and Hyperparameter Optimization (CASH Optimization)
Photo by Bernard Spragg. NZ, some rights reserved.

Overview

This tutorial is divided into three parts; they are:

  • Challenge of Model and Hyperparameter Selection
  • Solutions to Model and Hyperparameter Selection
  • Combined Algorithm Selection and Hyperparameter Optimization
Challenge of Model and Hyperparameter Selection

    There is no definitive mapping of machine learning algorithms to predictive modeling tasks.

    We cannot look at a dataset and know the best algorithm to use, let alone the best data transforms to use to prepare the data or the best configuration for a given model.

    Instead, we must use controlled experiments to discover what works best for a given dataset.

    As such, applied machine learning is an empirical discipline. It is engineering and art more than science.

    The problem is that there are tens, if not hundreds, of machine learning algorithms to choose from. Each algorithm may have up to tens of hyperparameters to be configured.

    To a beginner, the scope of the problem is overwhelming.

    • Where do you start?
    • What do you start with?
    • When do you discard a model?
    • When do you double down on a model?

    There are a few standard solutions to this problem adopted by most practitioners, experienced and otherwise.

    Solutions to Model and Hyperparameter Selection

    Let’s look at two of the most common short-cuts to this problem of selecting data transforms, machine learning models, and model hyperparameters.

    Use a Popular Algorithm

    One approach is to use a popular machine learning algorithm.

    It can be challenging to make the right choice when faced with these degrees of freedom, leaving many users to select algorithms based on reputation or intuitive appeal, and/or to leave hyperparameters set to default values. Of course, this approach can yield performance far worse than that of the best method and hyperparameter settings.

    — Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms, 2012.

    For example, if it seems like everyone is talking about “random forest,” then random forest becomes the right algorithm for all classification and regression problems you encounter, and you limit the experimentation to the hyperparameters of the random forest algorithm.

• Short-Cut #1: Use a popular algorithm like “random forest” or “xgboost”.

Random forest indeed performs well on a wide range of prediction tasks. But we cannot know whether it will be a good, let alone the best, choice for a given dataset. The risk is that we miss out on better results that a much simpler model, such as a linear model, could achieve.

    A workaround might be to test a range of popular algorithms, leading into the next shortcut.

    Sequentially Test Transforms, Models, and Hyperparameters

Another approach is to treat the problem as a series of sequential decisions.

    For example, review the data and select data transforms that make data more Gaussian, remove outliers, etc. Then test a suite of algorithms with default hyperparameters and select one or a few that perform well. Then tune the hyperparameters of those top-performing models.

    • Short-Cut #2: Sequentially select data transforms, models, and model hyperparameters.

This is the approach that I recommend for getting good results quickly.

This shortcut, too, can be effective, and it reduces the likelihood of missing an algorithm that performs well on your dataset. The downside is more subtle and matters if you are seeking excellent results rather than merely good results quickly.

The risk is that selecting data transforms before selecting models might mean you miss the data preparation sequence that gets the most out of a given algorithm.

Similarly, selecting a model or subset of models before tuning hyperparameters means you might miss a model that, configured with non-default hyperparameters, would perform better than any of the selected models and their subsequent configurations.

    Two important problems in AutoML are that (1) no single machine learning method performs best on all datasets and (2) some machine learning methods (e.g., non-linear SVMs) crucially rely on hyperparameter optimization.

    — Page 115, Automated Machine Learning: Methods, Systems, Challenges, 2019.

A workaround might be to spot-check known good or well-performing configurations of each algorithm alongside the defaults, but this is only a partial solution.

    There is a better approach.

    Combined Algorithm Selection and Hyperparameter Optimization

    Selecting a data preparation pipeline, machine learning model, and model hyperparameters is a search problem.

    The possible choices at each step define a search space, and a single combination represents a point in that space that can be evaluated with a dataset.

Finding the best-performing point in this search space efficiently is referred to as global optimization.

    This has been well understood for a long time in the field of machine learning, although perhaps tacitly, with focus typically on one element of the problem, such as hyperparameter optimization.

The important insight is that there are dependencies between the steps, which influence the size and structure of the search space.

    … [the problem] can be viewed as a single hierarchical hyperparameter optimization problem, in which even the choice of algorithm itself is considered a hyperparameter.

    — Page 82, Automated Machine Learning: Methods, Systems, Challenges, 2019.

This requires that the data preparation, the machine learning model, and the model hyperparameters together form the scope of the optimization problem, and that the optimization algorithm is aware of the dependencies between them.
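To make the framing concrete, here is a minimal sketch, assuming scikit-learn, of treating the data preparation step, the learning algorithm itself, and the per-algorithm hyperparameters as one joint search space. The dataset, candidate transforms, algorithms, and value ranges are illustrative assumptions, and a real CASH system would replace the exhaustive grid search below with a smarter optimizer such as Bayesian Optimization.

# sketch: the algorithm is treated as just another (categorical) hyperparameter
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
# a pipeline with a placeholder data preparation step and a placeholder model
pipeline = Pipeline([('prep', StandardScaler()), ('model', LogisticRegression())])
# a list of grids: each branch fixes one algorithm and searches only the
# hyperparameters that are meaningful for that algorithm (the conditional,
# hierarchical structure described above)
search_space = [
    {'prep': [StandardScaler(), MinMaxScaler()],
     'model': [LogisticRegression(max_iter=1000)],
     'model__C': [0.01, 0.1, 1.0, 10.0]},
    {'prep': [StandardScaler(), MinMaxScaler()],
     'model': [RandomForestClassifier()],
     'model__n_estimators': [10, 100]},
]
search = GridSearchCV(pipeline, search_space, scoring='accuracy', cv=3)
search.fit(X, y)
print(search.best_params_)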

    This is a challenging global optimization problem, notably because of the dependencies, but also because estimating the performance of a machine learning model on a dataset is stochastic, resulting in a noisy distribution of performance scores (e.g. via repeated k-fold cross-validation).

    … the combined space of learning algorithms and their hyperparameters is very challenging to search: the response function is noisy and the space is high dimensional, involves both categorical and continuous choices, and contains hierarchical dependencies (e.g., the hyperparameters of a learning algorithm are only meaningful if that algorithm is chosen; the algorithm choices in an ensemble method are only meaningful if that ensemble method is chosen; etc).

    — Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms, 2012.

    This challenge was perhaps best characterized by Chris Thornton, et al. in their 2013 paper titled “Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms.” In the paper, they refer to this problem as “Combined Algorithm Selection And Hyperparameter Optimization,” or “CASH Optimization” for short.

    … a natural challenge for machine learning: given a dataset, to automatically and simultaneously choose a learning algorithm and set its hyperparameters to optimize empirical performance. We dub this the combined algorithm selection and hyperparameter optimization problem (short: CASH).

    — Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms, 2012.

    This characterization is also sometimes referred to as “Full Model Selection,” or FMS for short.

    The FMS problem consists of the following: given a pool of preprocessing methods, feature selection and learning algorithms, select the combination of these that obtains the lowest classification error for a given data set. This task also includes the selection of hyperparameters for the considered methods, resulting in a vast search space that is well suited for stochastic optimization techniques.

    — Particle Swarm Model Selection, 2009.

Thornton, et al. used global optimization algorithms that are aware of these dependencies, so-called sequential model-based optimization algorithms, such as specific versions of Bayesian Optimization. They then implemented their approach for the WEKA machine learning workbench and called it Auto-WEKA.

    A promising approach is Bayesian Optimization, and in particular Sequential Model-Based Optimization (SMBO), a versatile stochastic optimization framework that can work with both categorical and continuous hyperparameters, and that can exploit hierarchical structure stemming from conditional parameters.

    — Page 85, Automated Machine Learning: Methods, Systems, Challenges, 2019.

    This now provides the dominant paradigm for a field of study referred to as “Automated Machine Learning,” or AutoML for short. AutoML is concerned with providing tools that allow practitioners with modest technical skill to quickly find effective solutions to machine learning tasks, such as classification and regression predictive modeling.

    AutoML aims to provide effective off-the-shelf learning systems to free experts and non-experts alike from the tedious and time-consuming tasks of selecting the right algorithm for a dataset at hand, along with the right preprocessing method and the various hyperparameters of all involved components.

— Page 136, Automated Machine Learning: Methods, Systems, Challenges, 2019.

    AutoML techniques are provided by machine learning libraries and increasingly as services, so-called machine learning as a service, or MLaaS for short.


    Summary

    In this post, you discovered the challenge of machine learning model selection and the modern solution referred to as CASH Optimization.

    Specifically, you learned:

    • The challenge of machine learning model and hyperparameter selection.
    • The shortcuts of using popular models or making a series of sequential decisions.
    • The characterization of Combined Algorithm Selection and Hyperparameter Optimization that underlies modern AutoML.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.




    Multi-Core Machine Learning in Python With Scikit-Learn

    Many computationally expensive tasks for machine learning can be made parallel by splitting the work across multiple CPU cores, referred to as multi-core processing.

    Common machine learning tasks that can be made parallel include training models like ensembles of decision trees, evaluating models using resampling procedures like k-fold cross-validation, and tuning model hyperparameters, such as grid and random search.

    Using multiple cores for common machine learning tasks can dramatically decrease the execution time as a factor of the number of cores available on your system. A common laptop and desktop computer may have 2, 4, or 8 cores. Larger server systems may have 32, 64, or more cores available, allowing machine learning tasks that take hours to be completed in minutes.

    In this tutorial, you will discover how to configure scikit-learn for multi-core machine learning.

    After completing this tutorial, you will know:

    • How to train machine learning models using multiple cores.
    • How to make the evaluation of machine learning models parallel.
    • How to use multiple cores to tune machine learning model hyperparameters.

    Let’s get started.

Multi-Core Machine Learning in Python With Scikit-Learn
Photo by ER Bauer, some rights reserved.

    Tutorial Overview

    This tutorial is divided into five parts; they are:

  • Multi-Core Scikit-Learn
  • Multi-Core Model Training
  • Multi-Core Model Evaluation
  • Multi-Core Hyperparameter Tuning
  • Recommendations
Multi-Core Scikit-Learn

    Machine learning can be computationally expensive.

There are three main sources of this computational cost; they are:

    • Training machine learning models.
    • Evaluating machine learning models.
    • Hyperparameter tuning machine learning models.

    Worse, these concerns compound.

    For example, evaluating machine learning models using a resampling technique like k-fold cross-validation requires that the training process is repeated multiple times.

    • Evaluation Requires Repeated Training

Tuning model hyperparameters compounds this further, as it requires the evaluation procedure to be repeated for each combination of hyperparameters tested.

    • Tuning Requires Repeated Evaluation

    Most, if not all, modern computers have multi-core CPUs. This includes your workstation, your laptop, as well as larger servers.

    You can configure your machine learning models to harness multiple cores of your computer, dramatically speeding up computationally expensive operations.

    The scikit-learn Python machine learning library provides this capability via the n_jobs argument on key machine learning tasks, such as model training, model evaluation, and hyperparameter tuning.

    This configuration argument allows you to specify the number of cores to use for the task. The default is None, which will use a single core. You can also specify a number of cores as an integer, such as 1 or 2. Finally, you can specify -1, in which case the task will use all of the cores available on your system.

    • n_jobs: Specify the number of cores to use for key machine learning tasks.

    Common values are:

    • n_jobs=None: Use a single core or the default configured by your backend library.
    • n_jobs=4: Use the specified number of cores, in this case 4.
    • n_jobs=-1: Use all available cores.

    What is a core?

    A CPU may have multiple physical CPU cores, which is essentially like having multiple CPUs. Each core may also have hyper-threading, a technology that under many circumstances allows you to double the number of cores.

    For example, my workstation has four physical cores, which are doubled to eight cores due to hyper-threading. Therefore, I can experiment with 1-8 cores or specify -1 to use all cores on my workstation.
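If you are unsure how many logical cores your own machine exposes, a quick check using only the Python standard library is:

# report the number of logical cores visible to Python
# (with hyper-threading this is typically double the physical core count)
import os
print(os.cpu_count())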

    Now that we are familiar with the scikit-learn library’s capability to support multi-core parallel processing for machine learning, let’s work through some examples.

    You will get different timings for all of the examples in this tutorial; share your results in the comments. You may also need to change the number of cores to match the number of cores on your system.

Note: Yes, I am aware of the timeit API, but chose against it for this tutorial. We are not profiling the code examples per se; instead, I want you to focus on how and when to use the multi-core capabilities of scikit-learn and on the real benefits they offer. I wanted the code examples to be clean and simple to read, even for beginners. I leave it as an extension for you to update the examples to use the timeit API and get more accurate timings. Share your results in the comments.

    Multi-Core Model Training

    Many machine learning algorithms support multi-core training via an n_jobs argument when the model is defined.

    This affects not just the training of the model, but also the use of the model when making predictions.

    A popular example is the ensemble of decision trees, such as bagged decision trees, random forest, and gradient boosting.

    In this section we will explore accelerating the training of a RandomForestClassifier model using multiple cores. We will use a synthetic classification task for our experiments.

    In this case, we will define a random forest model with 500 trees and use a single core to train the model.


We can record the time before and after the call to the fit() function using the time() function. We can then subtract the start time from the end time and report the execution time in the number of seconds.

    The complete example of evaluating the execution time of training a random forest model with a single core is listed below.
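The listing below is a minimal sketch of that experiment. The synthetic dataset size (10,000 rows, 20 features) is an assumption chosen for illustration; your timings will differ.

# time the training of a random forest model on a single core
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# define a synthetic binary classification dataset (sizes are illustrative)
X, y = make_classification(n_samples=10000, n_features=20, random_state=1)
# define the model with 500 trees, restricted to a single core
model = RandomForestClassifier(n_estimators=500, n_jobs=1)
# record the start time, fit the model, then report the elapsed time
start = time()
model.fit(X, y)
print('Elapsed: %.3f seconds' % (time() - start))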


    Running the example reports the time taken to train the model with a single core.

    In this case, we can see that it takes about 10 seconds.

    How long does it take on your system? Share your results in the comments below.


    We can now change the example to use all of the physical cores on the system, in this case, four.


    The complete example of multi-core training of the model with four cores is listed below.
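The sketch below is identical to the single-core example except for the n_jobs argument on the model.

# time the training of a random forest model on four cores
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=10000, n_features=20, random_state=1)
# use four cores for training via the n_jobs argument
model = RandomForestClassifier(n_estimators=500, n_jobs=4)
start = time()
model.fit(X, y)
print('Elapsed: %.3f seconds' % (time() - start))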


Running the example reports the time taken to train the model with four cores.

In this case, we can see that the execution time more than halved, to about 3.151 seconds.

    How long does it take on your system? Share your results in the comments below.


    We can now change the number of cores to eight to account for the hyper-threading supported by the four physical cores.


    We can achieve the same effect by setting n_jobs to -1 to automatically use all cores; for example:
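A sketch of the change, with everything else in the example left as before:

from sklearn.ensemble import RandomForestClassifier
# let scikit-learn use every core available on the system
model = RandomForestClassifier(n_estimators=500, n_jobs=-1)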


    We will stick to manually specifying the number of cores for now.

    The complete example of multi-core training of the model with eight cores is listed below.
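A sketch of the eight-core version; only the n_jobs value changes from the earlier listings.

# time the training of a random forest model on eight (logical) cores
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=10000, n_features=20, random_state=1)
# use all eight logical cores for training
model = RandomForestClassifier(n_estimators=500, n_jobs=8)
start = time()
model.fit(X, y)
print('Elapsed: %.3f seconds' % (time() - start))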


Running the example reports the time taken to train the model with eight cores.

In this case, we can see that we got another drop in execution time, from about 3.151 seconds to about 2.521 seconds, by using all logical cores.

    How long does it take on your system? Share your results in the comments below.


    We can make the relationship between the number of cores used during training and execution speed more concrete by comparing all values between one and eight and plotting the result.

    The complete example is listed below.
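A sketch of the comparison, assuming matplotlib is available for the plot; the dataset and model are the same assumptions as in the previous listings.

# compare training time for different numbers of cores
from time import time
from matplotlib import pyplot
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=10000, n_features=20, random_state=1)
results = list()
n_cores = [1, 2, 3, 4, 5, 6, 7, 8]
for n in n_cores:
    # define the model with the given number of cores and time its training
    model = RandomForestClassifier(n_estimators=500, n_jobs=n)
    start = time()
    model.fit(X, y)
    elapsed = time() - start
    results.append(elapsed)
    print('>cores=%d: %.3f seconds' % (n, elapsed))
# plot the number of cores vs the training time
pyplot.plot(n_cores, results)
pyplot.xlabel('Number of Cores')
pyplot.ylabel('Execution Time (seconds)')
pyplot.show()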


Running the example first reports the execution time for each number of cores used during training.

We can see a steady decrease in execution time from one to eight cores, although the dramatic benefits stop after four physical cores.

    How long does it take on your system? Share your results in the comments below.


A plot is also created to show the relationship between the number of cores used during training and the execution time, showing that we continue to see a benefit all the way to eight cores.

Line Plot of Number of Cores Used During Training vs. Execution Speed

    Now that we are familiar with the benefit of multi-core training of machine learning models, let’s look at multi-core model evaluation.

    Multi-Core Model Evaluation

    The gold standard for model evaluation is k-fold cross-validation.

    This is a resampling procedure that requires that the model is trained and evaluated k times on different partitioned subsets of the dataset. The result is an estimate of the performance of a model when making predictions on data not used during training that can be used to compare and select a good or best model for a dataset.

    In addition, it is also a good practice to repeat this evaluation process multiple times, referred to as repeated k-fold cross-validation.

The evaluation procedure can be configured to use multiple cores, where each model training and evaluation happens on a separate core. This can be done by setting the n_jobs argument on the call to the cross_val_score() function; for example:
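A minimal, self-contained sketch of the call; the smaller dataset and 100-tree forest used in this section are assumptions chosen so the repeated evaluations stay quick.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = RandomForestClassifier(n_estimators=100, n_jobs=1)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# each of the 30 fit-and-evaluate cycles can run on its own core
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)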

    We can explore the effect of multiple cores on model evaluation.

    First, let’s evaluate the model using a single core.


    We will evaluate the random forest model and use a single core in the training of the model (for now).


    The complete example is listed below.
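A sketch of the single-core evaluation, timed in the same way as the training examples.

# time single-core model evaluation with repeated k-fold cross-validation
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = RandomForestClassifier(n_estimators=100, n_jobs=1)
# 10-fold cross-validation with three repeats
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate the model on a single core and report the elapsed time
start = time()
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=1)
print('Elapsed: %.3f seconds' % (time() - start))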


    Running the example evaluates the model using 10-fold cross-validation with three repeats.

    In this case, we see that the evaluation of the model took about 6.412 seconds.

    How long does it take on your system? Share your results in the comments below.


    We can update the example to use all eight cores of the system and expect a large speedup.


    The complete example is listed below.
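A sketch in which only the n_jobs argument of cross_val_score() changes from the previous listing.

# time model evaluation with all eight cores assigned to cross-validation
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = RandomForestClassifier(n_estimators=100, n_jobs=1)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# assign all eight cores to the evaluation procedure
start = time()
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=8)
print('Elapsed: %.3f seconds' % (time() - start))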


    Running the example evaluates the model using multiple cores.

In this case, we can see the execution time dropped from about 6.412 seconds to about 2.371 seconds, giving a welcome speedup.

    How long does it take on your system? Share your results in the comments below.


As we did in the previous section, we can time the execution for each number of cores from one to eight to get an idea of the relationship.

    The complete example is listed below.
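A sketch of the comparison, mirroring the training experiment above.

# compare evaluation time for different numbers of cores
from time import time
from matplotlib import pyplot
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
results = list()
n_cores = [1, 2, 3, 4, 5, 6, 7, 8]
for n in n_cores:
    model = RandomForestClassifier(n_estimators=100, n_jobs=1)
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    # assign the given number of cores to the evaluation procedure
    start = time()
    scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=n)
    elapsed = time() - start
    results.append(elapsed)
    print('>cores=%d: %.3f seconds' % (n, elapsed))
# plot the number of cores vs the evaluation time
pyplot.plot(n_cores, results)
pyplot.xlabel('Number of Cores')
pyplot.ylabel('Execution Time (seconds)')
pyplot.show()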


    Running the example first reports the execution time in seconds for each number of cores for evaluating the model.

    We can see that there is not a dramatic improvement above four physical cores.

We can also see a difference in the eight-core result compared to the previous experiment. In this case, evaluating the model took about 1.492 seconds, whereas the earlier standalone run took about 2.371 seconds.

    This highlights the limitation of the evaluation methodology we are using where we are reporting the performance of a single run rather than repeated runs. There is some spin-up time required to load classes into memory and perform any JIT optimization.

    Regardless of the accuracy of our flimsy profiling, we do see the familiar speedup of model evaluation with the increase of cores used during the process.

    How long does it take on your system? Share your results in the comments below.


    A plot of the relationship between the number of cores and the execution speed is also created.

Line Plot of Number of Cores Used During Evaluation vs. Execution Speed

    We can also make the model training process parallel during the model evaluation procedure.

    Although this is possible, should we?

    To explore this question, let’s first consider the case where model training uses all cores and model evaluation uses a single core.


    The complete example is listed below.
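A sketch of this configuration: the model uses all cores, the evaluation procedure a single core.

# all cores for model training, a single core for model evaluation
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
# each model trains with all available cores
model = RandomForestClassifier(n_estimators=100, n_jobs=-1)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# the cross-validation procedure itself runs on a single core
start = time()
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=1)
print('Elapsed: %.3f seconds' % (time() - start))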


Running the example evaluates the model using a single core, but each model is trained using all cores.

    In this case, we can see that the model evaluation takes more than 10 seconds, much longer than the 1 or 2 seconds when we use a single core for training and all cores for parallel model evaluation.

    How long does it take on your system? Share your results in the comments below.


    What if we split the number of cores between the training and evaluation procedures?


    The complete example is listed below.
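A sketch of the split, assigning four cores to model training and four to the evaluation procedure.

# split the cores between model training and model evaluation
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
# each model trains with four cores
model = RandomForestClassifier(n_estimators=100, n_jobs=4)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# the cross-validation procedure also uses four cores
start = time()
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=4)
print('Elapsed: %.3f seconds' % (time() - start))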


    Running the example evaluates the model using four cores, and each model is trained using four different cores.

    We can see an improvement over training with all cores and evaluating with one core, but at least for this model on this dataset, it is more efficient to use all cores for model evaluation and a single core for model training.

    How long does it take on your system? Share your results in the comments below.


    Multi-Core Hyperparameter Tuning

    It is common to tune the hyperparameters of a machine learning model using a grid search or a random search.

    The scikit-learn library provides these capabilities via the GridSearchCV and RandomizedSearchCV classes respectively.

    Both of these search procedures can be made parallel by setting the n_jobs argument, assigning each hyperparameter configuration to a core for evaluation.

The model evaluation itself could also be multi-core, as we saw in the previous section, and the model training for a given evaluation could also be multi-core, as we saw in the section before that. Therefore, the stack of potentially multi-core processes is starting to get challenging to configure.

In this specific implementation, we can make the model training parallel, but we do not have direct control over how the hyperparameter search and each model evaluation are split across cores. The documentation is not clear at the time of writing, but I would guess that each hyperparameter configuration, evaluated with a single core, is dispatched as a separate job.

    Let’s explore the benefits of performing model hyperparameter tuning using multiple cores.

    First, let’s evaluate a grid of different configurations of the random forest algorithm using a single core.


    The complete example is listed below.
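A sketch of the single-core grid search; the grid of max_features values is an illustrative assumption.

# time a single-core grid search of random forest hyperparameters
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = RandomForestClassifier(n_estimators=100, n_jobs=1)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# grid of max_features values to evaluate
grid = {'max_features': [1, 2, 3, 4, 5]}
# run the grid search on a single core and report the elapsed time
search = GridSearchCV(model, grid, scoring='accuracy', cv=cv, n_jobs=1)
start = time()
search.fit(X, y)
print('Elapsed: %.3f seconds' % (time() - start))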


    Running the example tests different values of the max_features configuration for random forest, where each configuration is evaluated using repeated k-fold cross-validation.

    In this case, the grid search on a single core takes about 28.838 seconds.

    How long does it take on your system? Share your results in the comments below.


    We can now configure the grid search to use all available cores on the system, in this case, eight cores.


We can then evaluate how long this multi-core grid search takes to execute. The complete example is listed below.
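A sketch in which only the n_jobs argument of GridSearchCV changes from the previous listing.

# time a grid search with all eight cores assigned to the search
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = RandomForestClassifier(n_estimators=100, n_jobs=1)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
grid = {'max_features': [1, 2, 3, 4, 5]}
# assign all eight cores to the grid search itself
search = GridSearchCV(model, grid, scoring='accuracy', cv=cv, n_jobs=8)
start = time()
search.fit(X, y)
print('Elapsed: %.3f seconds' % (time() - start))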


    Running the example reports execution time for the grid search.

In this case, we see a speedup of about a factor of four, from roughly 28.838 seconds down to around 7.418 seconds.

    How long does it take on your system? Share your results in the comments below.


    Intuitively, we would expect that making the grid search multi-core should be the focus and not model training.

    Nevertheless, we can divide the number of cores between model training and the grid search to see if it offers a benefit for this model on this dataset.


    The complete example of multi-core model training and multi-core hyperparameter tuning is listed below.
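A sketch of the split, with four cores assigned to model training and four to the grid search.

# split the cores between model training and the grid search
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
# each model trains with four cores
model = RandomForestClassifier(n_estimators=100, n_jobs=4)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
grid = {'max_features': [1, 2, 3, 4, 5]}
# the grid search uses the other four cores
search = GridSearchCV(model, grid, scoring='accuracy', cv=cv, n_jobs=4)
start = time()
search.fit(X, y)
print('Elapsed: %.3f seconds' % (time() - start))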


In this case, we do see a decrease in execution time compared to the single-core case, but not as much benefit as assigning all cores to the grid search process.

    How long does it take on your system? Share your results in the comments below.


    Recommendations

    This section lists some general recommendations when using multiple cores for machine learning.

    • Confirm the number of cores available on your system.
    • Consider using an AWS EC2 instance with many cores to get an immediate speed up.
    • Check the API documentation to see if the model/s you are using support multi-core training.
    • Confirm multi-core training offers a measurable benefit on your system.
    • When using k-fold cross-validation, it is probably better to assign cores to the resampling procedure and leave model training single core.
• When using hyperparameter tuning, it is probably better to make the search multi-core and leave the model training and evaluation single core.

    Do you have any recommendations of your own?


    Summary

    In this tutorial, you discovered how to configure scikit-learn for multi-core machine learning.

    Specifically, you learned:

    • How to train machine learning models using multiple cores.
    • How to make the evaluation of machine learning models parallel.
    • How to use multiple cores to tune machine learning model hyperparameters.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.


