
Learning the ropes and throwing lifelines | MIT News

In March, as her friends and neighbors were scrambling to pack up and leave campus due to the Covid-19 pandemic, Geeticka Chauhan found her world upended in yet another way. Just weeks earlier, she had been elected council president of MIT’s largest graduate residence, Sidney-Pacific. Suddenly the fourth-year PhD student was plunged into rounds of emergency meetings with MIT administrators.

From her apartment in Sidney-Pacific, where she has stayed put due to travel restrictions in her home country of India, Chauhan is still learning the ropes of her new position. With others, she has been busy preparing to meet the future challenge of safely redensifying the living space of more than 1,000 people: how to regulate high-density common areas, handle noise complaints as people spend more time in their rooms, and care for the mental and physical well-being of a community that can only congregate virtually. “It’s just such a crazy time,” she says.

She’s prepared for the challenge. During her time at MIT, while pursuing her research using artificial intelligence to understand human language, Chauhan has worked to strengthen the bonds of her community in numerous ways, often drawing on her experience as an international student to do so.

Adventures in brunching

When Chauhan first came to MIT in 2017, she quickly fell in love with Sidney-Pacific’s thriving and freewheeling “helper culture.” “These are all researchers, but they’re maybe making brownies, doing crazy experiments that they would do in lab, except in the kitchen,” she says. “That was my first introduction to the MIT spirit.”

Next thing she knew, she was teaching Budokon yoga, mashing chickpeas into guacamole, and immersing herself in the complex operations of a monthly brunch attended by hundreds of graduate students, many of whom came to MIT from outside the U.S. In addition to the genuine thrill of cracking 300 eggs in 30 minutes, working on the brunches kept her grounded in a place thousands of miles from her home in New Delhi. “It gave me a sense of community and made me feel like I have a family here,” she says.

Chauhan has found additional ways to address the particular difficulties that international students face. As a member of the Presidential Advisory Council this year, she gathered international student testimonies on visa difficulties and presented them to MIT’s president and the director of the International Students Office. And when a friend from mainland China had to self-quarantine on Valentine’s Day, Chauhan knew she had to act. As brunch chair, she organized food delivery, complete with chocolates and notes, for Sidney-Pacific residents who couldn’t make it to the monthly event. “Initially when you come back to the U.S. from your home country, you really miss your family,” she says. “I thought self-quarantining students should feel their MIT community cares for them.”

Culture shock

Growing up in New Delhi, Chauhan says, she initially counted math among her weaknesses, and she was scared and confused by her early introduction to coding. Her mother and grandmother, with stern kindness and chocolates, encouraged her to face these fears. “My mom used to teach me that with hard work, you can make your biggest weakness your biggest strength,” she explains. She soon set her sights on a future in computer science.

However, as Chauhan found her life increasingly dominated by the high-pressure culture of preparing for college, she began to long for a feeling of wholeness, and for the person she left behind on the way. “I used to have a lot of artistic interests but didn’t get to explore them,” she says. She quit her weekend engineering classes, enrolled in a black and white photography class, and after learning about the extracurricular options at American universities, landed a full scholarship to attend Florida International University.

It was a culture shock. She didn’t know many Indian students in Miami and felt herself struggling to reconcile the individualistic mindset around her with the community and family-centered life at home. She says the people she met got her through, including Mark Finlayson, a professor studying the science of narrative from the viewpoint of natural language processing. Under Finlayson’s guidance she developed a fascination with the way AI techniques could be used to better understand the patterns and structures in human narratives. She learned that studying AI wasn’t just a way of imitating human thinking, but rather an approach for deepening our understanding of ourselves as reflected by our language. “It was due to Mark’s mentorship that I got involved in research” and applied to MIT, she says.

The holistic researcher

Chauhan now works in the Clinical Decision Making Group led by Peter Szolovits at the Computer Science and Artificial Intelligence Laboratory, where she is focusing on the ways natural language processing can address health care problems. For her master’s project, she worked on the problem of relation extraction and built a tool to digest clinical literature that would, for example, help pharmacologists easily assess negative drug interactions. Now, she’s finishing up a project integrating visual analysis of chest radiographs and textual analysis of radiology reports for quantifying pulmonary edema, to help clinicians manage the fluid status of their patients who have suffered acute heart failure.

“In routine clinical practice, patient care is interweaved with a lot of bureaucratic work,” she says. “The goal of my lab is to assist with clinical decision making and give clinicians the full freedom and time to devote to patient care.”

It’s an exciting moment for Chauhan, who recently submitted a paper she co-first authored with another grad student, and is starting to think about her next project: interpretability, or how to elucidate a decision-making model’s “thought process” by highlighting the data from which it draws its conclusions. She continues to find the intersection of computer vision and natural language processing an exciting area of research. But there have been challenges along the way.

After the initial flurry of excitement her first year, personal and faculty expectations of students’ independence and publishing success grew, and she began to experience uncertainty and imposter syndrome. “I didn’t know what I was capable of,” she says. “That initial period of convincing yourself that you belong is difficult. I am fortunate to have a supportive advisor that understands that.”

Finally, one of her first-year projects showed promise, and she came up with a master’s thesis plan in a month and submitted the project that semester. To get through, she says, she drew on her “survival skills”: allowing herself to be a full person beyond her work as a researcher so that one setback didn’t become a sense of complete failure. For Chauhan, that meant working as a teaching assistant, drawing henna designs, singing, enjoying yoga, and staying involved in student government. “I used to try to separate that part of myself with my work side,” she says. “I needed to give myself some space to learn and grow, rather than compare myself to others.”

Citing a study showing that women are more likely to drop out of STEM disciplines when they receive a B grade in a challenging course, Chauhan says she wishes she could tell her younger self not to compare herself with an ideal version of herself. Dismantling imposter syndrome requires an understanding that qualification and success can come from a broad range of experiences, she says: It’s about “seeing people for who they are holistically, rather than what is seen on the resume.”



Multi-Core Machine Learning in Python With Scikit-Learn

Many computationally expensive tasks for machine learning can be made parallel by splitting the work across multiple CPU cores, referred to as multi-core processing.

Common machine learning tasks that can be made parallel include training models like ensembles of decision trees, evaluating models using resampling procedures like k-fold cross-validation, and tuning model hyperparameters, such as grid and random search.

Using multiple cores for common machine learning tasks can dramatically decrease execution time, often roughly in proportion to the number of cores available on your system. A typical laptop or desktop computer may have 2, 4, or 8 cores. Larger server systems may have 32, 64, or more cores available, allowing machine learning tasks that take hours to be completed in minutes.

In this tutorial, you will discover how to configure scikit-learn for multi-core machine learning.

After completing this tutorial, you will know:

  • How to train machine learning models using multiple cores.
  • How to make the evaluation of machine learning models parallel.
  • How to use multiple cores to tune machine learning model hyperparameters.

Let’s get started.

Multi-Core Machine Learning in Python With Scikit-Learn
Photo by ER Bauer, some rights reserved.

Tutorial Overview

This tutorial is divided into five parts; they are:

  • Multi-Core Scikit-Learn
  • Multi-Core Model Training
  • Multi-Core Model Evaluation
  • Multi-Core Hyperparameter Tuning
  • Recommendations
Multi-Core Scikit-Learn

    Machine learning can be computationally expensive.

    There are three main centers of this computational cost; they are:

    • Training machine learning models.
    • Evaluating machine learning models.
    • Hyperparameter tuning machine learning models.

    Worse, these concerns compound.

    For example, evaluating machine learning models using a resampling technique like k-fold cross-validation requires that the training process is repeated multiple times.

    • Evaluation Requires Repeated Training

Tuning model hyperparameters compounds this further, as it requires that the evaluation procedure be repeated for each combination of hyperparameters tested.

    • Tuning Requires Repeated Evaluation

    Most, if not all, modern computers have multi-core CPUs. This includes your workstation, your laptop, as well as larger servers.

    You can configure your machine learning models to harness multiple cores of your computer, dramatically speeding up computationally expensive operations.

    The scikit-learn Python machine learning library provides this capability via the n_jobs argument on key machine learning tasks, such as model training, model evaluation, and hyperparameter tuning.

    This configuration argument allows you to specify the number of cores to use for the task. The default is None, which will use a single core. You can also specify a number of cores as an integer, such as 1 or 2. Finally, you can specify -1, in which case the task will use all of the cores available on your system.

    • n_jobs: Specify the number of cores to use for key machine learning tasks.

    Common values are:

    • n_jobs=None: Use a single core or the default configured by your backend library.
    • n_jobs=4: Use the specified number of cores, in this case 4.
    • n_jobs=-1: Use all available cores.

    What is a core?

A CPU may have multiple physical CPU cores, which is essentially like having multiple CPUs. Each core may also support hyper-threading, a technology that, under many circumstances, allows each physical core to appear as two logical cores.

    For example, my workstation has four physical cores, which are doubled to eight cores due to hyper-threading. Therefore, I can experiment with 1-8 cores or specify -1 to use all cores on my workstation.

    Now that we are familiar with the scikit-learn library’s capability to support multi-core parallel processing for machine learning, let’s work through some examples.

    You will get different timings for all of the examples in this tutorial; share your results in the comments. You may also need to change the number of cores to match the number of cores on your system.

Note: Yes, I am aware of the timeit API, but chose against it for this tutorial. We are not profiling the code examples per se; instead, I want you to focus on how and when to use the multi-core capabilities of scikit-learn and on the real benefits they offer. I wanted the code examples to be clean and simple to read, even for beginners. Updating all of the examples to use the timeit API and get more accurate timings is left as an extension; share your results in the comments.

    Multi-Core Model Training

    Many machine learning algorithms support multi-core training via an n_jobs argument when the model is defined.

    This affects not just the training of the model, but also the use of the model when making predictions.

    A popular example is the ensemble of decision trees, such as bagged decision trees, random forest, and gradient boosting.

    In this section we will explore accelerating the training of a RandomForestClassifier model using multiple cores. We will use a synthetic classification task for our experiments.

    In this case, we will define a random forest model with 500 trees and use a single core to train the model.


We can record the time before and after the call to the fit() function using the time() function. We can then subtract the start time from the end time and report the execution time in seconds.

    The complete example of evaluating the execution time of training a random forest model with a single core is listed below.
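
A minimal sketch of such an example is shown below. The synthetic dataset (10,000 rows, 20 input features) and the random seed are assumptions used for illustration, not prescribed values, so your timings will differ.

# sketch: time the training of a 500-tree random forest on a single core
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# define an assumed synthetic binary classification dataset
X, y = make_classification(n_samples=10000, n_features=20, n_informative=15, n_redundant=5, random_state=3)
# define the model with 500 trees, restricted to a single core
model = RandomForestClassifier(n_estimators=500, n_jobs=1)
# record the start time, fit the model, then record the end time
start = time()
model.fit(X, y)
end = time()
# report the execution time in seconds
print('%.3f seconds' % (end - start))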


    Running the example reports the time taken to train the model with a single core.

    In this case, we can see that it takes about 10 seconds.

    How long does it take on your system? Share your results in the comments below.


    We can now change the example to use all of the physical cores on the system, in this case, four.


    The complete example of multi-core training of the model with four cores is listed below.
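
A minimal sketch, identical to the single-core version above except for the n_jobs argument in the model definition (the dataset is the same assumed synthetic task):

# sketch: time the training of the same forest using four cores
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=10000, n_features=20, n_informative=15, n_redundant=5, random_state=3)
# the only change: train the 500-tree forest using four cores
model = RandomForestClassifier(n_estimators=500, n_jobs=4)
start = time()
model.fit(X, y)
print('%.3f seconds' % (time() - start))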


Running the example reports the time taken to train the model with four cores.

In this case, we can see that the execution time more than halved, to about 3.151 seconds.

    How long does it take on your system? Share your results in the comments below.


    We can now change the number of cores to eight to account for the hyper-threading supported by the four physical cores.


    We can achieve the same effect by setting n_jobs to -1 to automatically use all cores; for example:
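
A sketch of that change:

# use all of the cores available on the system to train the model
model = RandomForestClassifier(n_estimators=500, n_jobs=-1)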


    We will stick to manually specifying the number of cores for now.

    The complete example of multi-core training of the model with eight cores is listed below.


Running the example reports the time taken to train the model with eight cores.

In this case, we can see that we got a further drop in execution time, from about 3.151 seconds to about 2.521 seconds, by using all cores.

    How long does it take on your system? Share your results in the comments below.


    We can make the relationship between the number of cores used during training and execution speed more concrete by comparing all values between one and eight and plotting the result.

    The complete example is listed below.
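
A minimal sketch of the comparison, again using the assumed synthetic dataset and matplotlib for the plot:

# sketch: compare training time for one to eight cores and plot the result
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from matplotlib import pyplot

# define the assumed synthetic classification dataset
X, y = make_classification(n_samples=10000, n_features=20, n_informative=15, n_redundant=5, random_state=3)
results = []
n_cores = [1, 2, 3, 4, 5, 6, 7, 8]
for n in n_cores:
    # define the model with the given number of cores
    model = RandomForestClassifier(n_estimators=500, n_jobs=n)
    # time the training of the model
    start = time()
    model.fit(X, y)
    result = time() - start
    print('>cores=%d: %.3f seconds' % (n, result))
    results.append(result)
# plot the number of cores vs the training time
pyplot.plot(n_cores, results)
pyplot.show()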


Running the example first reports the execution time for each number of cores used during training.

We can see a steady decrease in execution time from one to eight cores, although the dramatic benefits stop after four physical cores.

    How long does it take on your system? Share your results in the comments below.


    A plot is also created to show the relationship between the number of cores used during training and the execution speed, showing that we continue to see a benefit all the way to eight cores.

Line Plot of Number of Cores Used During Training vs. Execution Speed

    Now that we are familiar with the benefit of multi-core training of machine learning models, let’s look at multi-core model evaluation.

    Multi-Core Model Evaluation

    The gold standard for model evaluation is k-fold cross-validation.

    This is a resampling procedure that requires that the model is trained and evaluated k times on different partitioned subsets of the dataset. The result is an estimate of the performance of a model when making predictions on data not used during training that can be used to compare and select a good or best model for a dataset.

    In addition, it is also a good practice to repeat this evaluation process multiple times, referred to as repeated k-fold cross-validation.

The evaluation procedure can be configured to use multiple cores, where each model training and evaluation happens on a separate core. This can be done by setting the n_jobs argument on the call to the cross_val_score() function; for example:

    We can explore the effect of multiple cores on model evaluation.

    First, let’s evaluate the model using a single core.


    We will evaluate the random forest model and use a single core in the training of the model (for now).


    The complete example is listed below.
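
A minimal sketch is shown below. The dataset size and the 100-tree forest are assumptions chosen to keep the experiment quick, so your timings will differ from those quoted.

# sketch: time single-core evaluation of a random forest with repeated k-fold cross-validation
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold

# define an assumed synthetic classification dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=3)
# define the model, trained on a single core (for now)
model = RandomForestClassifier(n_estimators=100, n_jobs=1)
# define the evaluation procedure: 10-fold cross-validation with three repeats
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# time the evaluation using a single core
start = time()
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=1)
print('%.3f seconds' % (time() - start))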


    Running the example evaluates the model using 10-fold cross-validation with three repeats.

    In this case, we see that the evaluation of the model took about 6.412 seconds.

    How long does it take on your system? Share your results in the comments below.


    We can update the example to use all eight cores of the system and expect a large speedup.


    The complete example is listed below.


    Running the example evaluates the model using multiple cores.

    In this case, we can see the execution timing dropped from 6.412 seconds to about 2.371 seconds, giving a welcome speedup.

    How long does it take on your system? Share your results in the comments below.


As we did in the previous section, we can time the execution for each number of cores from one to eight to get an idea of the relationship.

    The complete example is listed below.


    Running the example first reports the execution time in seconds for each number of cores for evaluating the model.

    We can see that there is not a dramatic improvement above four physical cores.

We can also see a difference from the eight-core case in the previous experiment: in this run, evaluation took about 1.492 seconds, whereas the standalone case above took about 2.371 seconds.

    This highlights the limitation of the evaluation methodology we are using where we are reporting the performance of a single run rather than repeated runs. There is some spin-up time required to load classes into memory and perform any JIT optimization.

    Regardless of the accuracy of our flimsy profiling, we do see the familiar speedup of model evaluation with the increase of cores used during the process.

    How long does it take on your system? Share your results in the comments below.


    A plot of the relationship between the number of cores and the execution speed is also created.

Line Plot of Number of Cores Used During Evaluation vs. Execution Speed

    We can also make the model training process parallel during the model evaluation procedure.

    Although this is possible, should we?

    To explore this question, let’s first consider the case where model training uses all cores and model evaluation uses a single core.


    The complete example is listed below.
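
A minimal sketch, with all cores assigned to model training and a single core assigned to the cross-validation procedure (the dataset and model sizes are the same assumptions as above):

# sketch: all cores for model training, a single core for model evaluation
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold

X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=3)
# all eight cores are used to train each model
model = RandomForestClassifier(n_estimators=100, n_jobs=8)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# the evaluation procedure itself runs on a single core
start = time()
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=1)
print('%.3f seconds' % (time() - start))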


Running the example evaluates the model using a single core for the evaluation procedure, but each model is trained using all eight cores.

    In this case, we can see that the model evaluation takes more than 10 seconds, much longer than the 1 or 2 seconds when we use a single core for training and all cores for parallel model evaluation.

    How long does it take on your system? Share your results in the comments below.


    What if we split the number of cores between the training and evaluation procedures?


    The complete example is listed below.


    Running the example evaluates the model using four cores, and each model is trained using four different cores.

    We can see an improvement over training with all cores and evaluating with one core, but at least for this model on this dataset, it is more efficient to use all cores for model evaluation and a single core for model training.

    How long does it take on your system? Share your results in the comments below.


    Multi-Core Hyperparameter Tuning

    It is common to tune the hyperparameters of a machine learning model using a grid search or a random search.

    The scikit-learn library provides these capabilities via the GridSearchCV and RandomizedSearchCV classes respectively.

    Both of these search procedures can be made parallel by setting the n_jobs argument, assigning each hyperparameter configuration to a core for evaluation.

The model evaluation itself could also be multi-core, as we saw in the previous section, and the model training for a given evaluation can also be multi-core, as we saw in the section before that. Therefore, the stack of potentially multi-core processes is starting to get challenging to configure.

In this specific implementation, we can make the model training parallel, but we don’t have fine-grained control over how each hyperparameter configuration and each model evaluation is made multi-core. The documentation is not clear at the time of writing, but I would guess that each hyperparameter configuration is dispatched as a separate job and evaluated using a single core.

    Let’s explore the benefits of performing model hyperparameter tuning using multiple cores.

    First, let’s evaluate a grid of different configurations of the random forest algorithm using a single core.


    The complete example is listed below.
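
A minimal sketch is shown below; the grid of max_features values and the dataset and model sizes are assumptions, so your timings will differ from those quoted.

# sketch: time a single-core grid search over max_features for a random forest
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RepeatedStratifiedKFold

# define an assumed synthetic classification dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=3)
# define the model, trained on a single core
model = RandomForestClassifier(n_estimators=100, n_jobs=1)
# define the evaluation procedure used for each configuration
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# define an assumed grid of max_features values to search
grid = {'max_features': [1, 2, 3, 4, 5]}
# define the grid search, restricted to a single core
search = GridSearchCV(model, grid, scoring='accuracy', cv=cv, n_jobs=1)
# time the search
start = time()
search.fit(X, y)
print('%.3f seconds' % (time() - start))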


    Running the example tests different values of the max_features configuration for random forest, where each configuration is evaluated using repeated k-fold cross-validation.

    In this case, the grid search on a single core takes about 28.838 seconds.

    How long does it take on your system? Share your results in the comments below.


    We can now configure the grid search to use all available cores on the system, in this case, eight cores.
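
In a sketch of that change, only the n_jobs argument on the grid search object from the previous sketch needs to be updated:

# assign all eight cores to the grid search procedure
search = GridSearchCV(model, grid, scoring='accuracy', cv=cv, n_jobs=8)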


We can then evaluate how long this multi-core grid search takes to execute. The complete example is listed below.


    Running the example reports execution time for the grid search.

    In this case, we see a factor of about four speed up from roughly 28.838 seconds to around 7.418 seconds.

    How long does it take on your system? Share your results in the comments below.


    Intuitively, we would expect that making the grid search multi-core should be the focus and not model training.

    Nevertheless, we can divide the number of cores between model training and the grid search to see if it offers a benefit for this model on this dataset.


    The complete example of multi-core model training and multi-core hyperparameter tuning is listed below.


In this case, we do see a decrease in execution time compared to the single-core case, but not as much benefit as assigning all cores to the grid search process.

    How long does it take on your system? Share your results in the comments below.


    Recommendations

    This section lists some general recommendations when using multiple cores for machine learning.

    • Confirm the number of cores available on your system.
    • Consider using an AWS EC2 instance with many cores to get an immediate speed up.
    • Check the API documentation to see if the model/s you are using support multi-core training.
    • Confirm multi-core training offers a measurable benefit on your system.
    • When using k-fold cross-validation, it is probably better to assign cores to the resampling procedure and leave model training single core.
• When using hyperparameter tuning, it is probably better to make the search multi-core and leave the model training and evaluation single core.

    Do you have any recommendations of your own?

    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Related Tutorials
    APIs
    Articles

    Summary

    In this tutorial, you discovered how to configure scikit-learn for multi-core machine learning.

    Specifically, you learned:

    • How to train machine learning models using multiple cores.
    • How to make the evaluation of machine learning models parallel.
    • How to use multiple cores to tune machine learning model hyperparameters.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.




    How to Train to the Test Set in Machine Learning

    Training to the test set is a type of overfitting where a model is prepared that intentionally achieves good performance on a given test set at the expense of increased generalization error.

    It is a type of overfitting that is common in machine learning competitions where a complete training dataset is provided and where only the input portion of a test set is provided. One approach to training to the test set involves constructing a training set that most resembles the test set and then using it as the basis for training a model. The model is expected to have better performance on the test set, but most likely worse performance on the training dataset and on any new data in the future.

    Although overfitting the test set is not desirable, it can be interesting to explore as a thought experiment and provide more insight into both machine learning competitions and avoiding overfitting generally.

    In this tutorial, you will discover how to intentionally train to the test set for classification and regression problems.

    After completing this tutorial, you will know:

    • Training to the test set is a type of data leakage that may occur in machine learning competitions.
    • One approach to training to the test set involves creating a training dataset that is most similar to a provided test set.
    • How to use a KNN model to construct a training dataset and train to the test set with a real dataset.

    Kick-start your project with my new book Data Preparation for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

    Let’s get started.

How to Train to the Test Set in Machine Learning
Photo by ND Strupler, some rights reserved.

    Tutorial Overview

    This tutorial is divided into three parts; they are:

  • Train to the Test Set
  • Train to Test Set for Classification
  • Train to Test Set for Regression
Train to the Test Set

    In applied machine learning, we seek a model that learns the relationship between the input and output variables using the training dataset.

    The hope and goal is that we learn a relationship that generalizes to new examples beyond the training dataset. This goal motivates why we use resampling techniques like k-fold cross-validation to estimate the performance of the model when making predictions on data not used during training.

    In the case of machine learning competitions, like those on Kaggle, we are given access to the complete training dataset and the inputs of the test dataset and are required to make predictions for the test dataset.

    This leads to a possible situation where we may accidentally or choose to train a model to the test set. That is, tune the model behavior to achieve the best performance on the test dataset rather than develop a model that performs well in general, using a technique like k-fold cross-validation.

    Another, more overt path to information leakage, can sometimes be seen in machine learning competitions where the training and test set data are given at the same time.

    — Page 56, Feature Engineering and Selection: A Practical Approach for Predictive Models, 2019.

    Training to the test set is often a bad idea.

    It is an explicit type of data leakage. Nevertheless, it is an interesting thought experiment.

    One approach to training to the test set is to contrive a training dataset that is most similar to the test set. For example, we could discard all rows in the training set that are too different from the test set and only train on those rows in the training set that are maximally similar to rows in the test set.

    While the test set data often have the outcome data blinded, it is possible to “train to the test” by only using the training set samples that are most similar to the test set data. This may very well improve the model’s performance scores for this particular test set but might ruin the model for predicting on a broader data set.

    — Page 56, Feature Engineering and Selection: A Practical Approach for Predictive Models, 2019.

    We would expect the model to overfit the test set, but this is the whole point of this thought experiment.

    Let’s explore this approach to training to the test set in this tutorial.

We can use a k-nearest neighbor model to select those instances of the training set that are most similar to the test set. The KNeighborsRegressor and KNeighborsClassifier both provide the kneighbors() function that will return indexes into the training dataset for the rows that are most similar to given data, such as a test set.
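
For example, a minimal sketch of querying kneighbors() on a small assumed synthetic dataset:

# sketch: find, for each row in the test set, the most similar row in the training set
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# a small assumed synthetic dataset, purely to illustrate the call
X, y = make_classification(n_samples=100, n_features=5, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
# fit the KNN model on the training set so that kneighbors() can be queried
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
# for each test row, get the index of the single most similar training row
neighbor_ix = knn.kneighbors(X_test, n_neighbors=1, return_distance=False)
print(neighbor_ix.shape)  # (number of test rows, 1)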


    We might want to try removing duplicates from the selected row indexes.


    We can then use those row indexes to construct a custom training dataset and fit a model.
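
Continuing the sketch above (neighbor_ix, X_train, and y_train as defined there):

# flatten to a 1D array of row indexes; duplicates could be dropped here with numpy.unique()
ix = neighbor_ix.flatten()
# build the contrived training dataset from the selected rows
X_train_neigh, y_train_neigh = X_train[ix], y_train[ix]
# fit a model of the same type on the contrived training dataset
model = KNeighborsClassifier()
model.fit(X_train_neigh, y_train_neigh)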


    Given that we are using a KNN model to construct the training set from the test set, we will also use the same type of model to make predictions on the test set. This is not required, but it makes the examples simpler.

    Using this approach, we can now experiment with training to the test set for both classification and regression datasets.





    Train to Test Set for Classification

    We will use the diabetes dataset as the basis for exploring training for the test set for classification problems.

    Each record describes the medical details of a female and the prediction is the onset of diabetes within the next five years.

    The dataset has eight input variables and 768 rows of data; the input variables are all numeric and the target has two class labels, e.g. it is a binary classification task.

A sample of the first five rows of the dataset is provided below.


    First, we can load the dataset directly from the URL, split it into input and output elements, then split the dataset into train and test sets, holding thirty percent back for the test set. We can then evaluate a KNN model with default model hyperparameters by training it on the training set and making predictions on the test set.

    The complete example is listed below.
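
A minimal sketch is shown below. The dataset location is an assumption (a commonly used public copy of the Pima Indians diabetes data); adjust the url to wherever your copy of the file lives.

# sketch: baseline KNN classifier on the diabetes dataset
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# assumed location of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.csv'
df = read_csv(url, header=None)
print(df.shape)
# split into input and output columns
data = df.values
X, y = data[:, :-1], data[:, -1]
# split into train and test sets, holding 30 percent back for the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
# fit a KNN model with default hyperparameters and evaluate it on the test set
model = KNeighborsClassifier()
model.fit(X_train, y_train)
yhat = model.predict(X_test)
print('Accuracy: %.3f' % (accuracy_score(y_test, yhat) * 100))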


    Running the example first loads the dataset and summarizes the number of rows and columns, matching our expectations. The shape of the train and test sets are then reported, showing we have about 230 rows in the test set.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Finally, the classification accuracy of the model is reported to be about 77.056 percent.


    Now, let’s see if we can achieve better performance on the test set by preparing a model that is trained directly for it.

First, we will construct a training dataset using the most similar example in the training set for each row in the test set.


    Next, we will train the model on this new dataset and evaluate it on the test set as we did before.


    The complete example is listed below.
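
A minimal sketch, reusing the assumed dataset location from the baseline example above:

# sketch: train to the test set by selecting the most similar training row for each test row
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# assumed location of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], data[:, -1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
# fit a KNN model on the training set and find the most similar training row for each test row
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
neighbor_ix = knn.kneighbors(X_test, n_neighbors=1, return_distance=False).flatten()
# build the contrived training dataset and report its size
X_train_neigh, y_train_neigh = X_train[neighbor_ix], y_train[neighbor_ix]
print(X_train_neigh.shape, y_train_neigh.shape)
# fit the final model on the contrived training dataset and evaluate it on the test set
model = KNeighborsClassifier()
model.fit(X_train_neigh, y_train_neigh)
yhat = model.predict(X_test)
print('Accuracy: %.3f' % (accuracy_score(y_test, yhat) * 100))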


    Running the example, we can see that the reported size of the new training dataset is the same size as the test set, as we expected.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    We can see that we have achieved a lift in performance by training to the test set over training the model on the entire training dataset. In this case, we achieved a classification accuracy of about 79.654 percent compared to 77.056 percent when the entire training dataset is used.


    You might want to try selecting different numbers of neighbors from the training set for each example in the test set to see if you can achieve better performance.

    Also, you might want to try keeping unique row indexes in the training set and see if that makes a difference.

    Finally, it might be interesting to hold back a final validation dataset and compare how different “train-to-the-test-set” techniques affect performance on the holdout dataset. E.g. see how training to the test set impacts generalization error.

    Report your findings in the comments below.

    Now that we know how to train to the test set for classification, let’s look at an example for regression.

    Train to Test Set for Regression

    We will use the housing dataset as the basis for exploring training for the test set for regression problems.

    The housing dataset involves the prediction of a house price in thousands of dollars given details of the house and its neighborhood.

    It is a regression problem, meaning we are predicting a numerical value. There are 506 observations with 13 input variables and one output variable.

    A sample of the first five rows is listed below.


First, we can load the dataset, split it, and evaluate a KNN model on it directly using the entire training dataset. We will report performance on this regression task using the mean absolute error (MAE).

    The complete example is listed below.
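
A minimal sketch is shown below; as before, the dataset location is an assumption and should be adjusted to your copy of the data.

# sketch: baseline KNN regressor on the housing dataset, scored with MAE
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error

# assumed location of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
df = read_csv(url, header=None)
print(df.shape)
data = df.values
X, y = data[:, :-1], data[:, -1]
# split into train and test sets, holding 30 percent back for the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
# fit a KNN regressor with default hyperparameters and report MAE on the test set
model = KNeighborsRegressor()
model.fit(X_train, y_train)
yhat = model.predict(X_test)
print('MAE: %.3f' % mean_absolute_error(y_test, yhat))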


    Running the example first loads the dataset and summarizes the number of rows and columns, matching our expectations. The shape of the train and test sets are then reported, showing we have about 150 rows in the test set.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Finally, the MAE of the model is reported to be about 4.488.


    Now, let’s see if we can achieve better performance on the test set by preparing a model that is trained to it.

First, we will construct a training dataset using the most similar example in the training set for each row in the test set.


    Next, we will train the model on this new dataset and evaluate it on the test set as we did before.


    The complete example is listed below.


    Running the example, we can see that the reported size of the new training dataset is the same size as the test set, as we expected.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    We can see that we have achieved a lift in performance by training to the test set over training the model on the entire training dataset. In this case, we achieved a MAE of about 4.433 compared to 4.488 when the entire training dataset is used.

    Again, you might want to explore using a different number of neighbors when constructing the new training set and see if keeping unique rows in the training dataset makes a difference. Report your findings in the comments below.


    Further Reading

    This section provides more resources on the topic if you are looking to go deeper.

    Books
    APIs

    Summary

    In this tutorial, you discovered how to intentionally train to the test set for classification and regression problems.

    Specifically, you learned:

    • Training to the test set is a type of data leakage that may occur in machine learning competitions.
    • One approach to training to the test set involves creating a training dataset that is most similar to a provided test set.
    • How to use a KNN model to construct a training dataset and train to the test set with a real dataset.

    Do you have any questions?
    Ask your questions in the comments below and I will do my best to answer.


