How to Organize your Machine Learning Project [ML Project Planning]

Organizing your machine learning project sounds straightforward. However, I never paid much attention to it until I had to do it myself.

 

When you are new to ML, you tend to write quick, dirty code that fits the problem in front of you. In the professional world, however, it is not only about solving the problem. You are working with other people and within a system. So, in addition to solving the problem at hand, you are expected to organize your work in a way that makes it easy for a reader to understand.

 

One of the best ways to get started is to get hands-on and create a project, and there are several free materials available online. In this tutorial, you will discover why it is vital to organize your project and how you should structure your machine learning project. Below is a common framework that was taught to me at university and was originally created by a data scientist named Jeremy Jordan.

 

This article presents a common machine learning project management framework. If you do not have time to read the whole article, that's fine. You can find the sample project structure here:

 

Why Is It Important To Organize your Machine Learning Project?

1. Increased Productivity:

If your project is well-organized and everything is in the same directory, you don’t waste time looking for files, datasets, scripts, models, and so on.

2. Replicability:

You’ll notice that many of your data science projects involve at least some repetition. With good organization, you can, for instance, go back and reuse the same script to split your data.

3. Comprehensibility:

When a well-organized project is uploaded to GitHub, other data scientists can readily understand it.

 

How to Setup and Plan your Machine Learning Project?

It may be tempting to skip this section and jump straight to seeing what the models can accomplish. But all too frequently, you’ll waste time by postponing discussions of the project’s goals and model assessment criteria. Instead, from the outset of the project, everyone should be working toward a single purpose.

 

Machine learning projects are highly iterative; as you advance through the ML lifecycle, you’ll find yourself iterating on a piece until it meets your expectations, then moving on to the next task. Furthermore, a project is not complete once the initial version ships; you gather feedback from real-world use.

 

It’s worth mentioning that establishing the model task isn’t always easy. There are frequently several ways to solve a problem, and it is not always obvious which is best. If your challenge is hazy and the modeling objective is unclear, read my piece on setting requirements for machine learning projects first.

 

How to Organize the Priority of your Machine Learning Projects?

Here are some approaches to consider for assessing and prioritizing your machine learning projects:

  • Look for complicated rule-based software where you could learn the rules from data rather than writing them by hand.
  • Rank projects by how easy it would be to build a Minimum Viable Product, and by the difficulty, time, and resources required to complete the task.
  • Weight the criteria (as in the previous point), evaluate and score your projects, then derive your prioritized list of projects.
  • When reviewing projects, a common vocabulary and an awareness of the distinctions between traditional software and machine learning software can be beneficial.

 

When building a machine learning project, there are two general software engineering paradigms that you should know, namely Software 1.0 and Software 2.0.

 

Software 1.0:

It is created with explicit instructions written by a programmer in a programming language. A person develops the logic so that the system performs the desired behavior when given data.

 

Software 2.0:

Given data, an optimization algorithm writes implicit instructions using the parameters of a defined model architecture. The system’s logic is inferred from a set of data samples and the desired behavior they represent.

 

A short word about Software 1.0 and Software 2.0: they are not mutually exclusive concepts. Typically, Software 2.0 is used to scale the logic component of classic software systems, using massive volumes of data to enable more complicated or nuanced decision logic.
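
To make the contrast concrete, here is a minimal sketch in Python, using a toy spam-filter example of my own (not from the original framework). The Software 1.0 version encodes the rules by hand; the Software 2.0 version learns them from labeled examples.

# Software 1.0: a human writes the decision logic explicitly.
def is_spam_v1(email_text):
    # Hand-written rule: flag emails containing known spam phrases.
    return any(p in email_text.lower() for p in ("free money", "click here"))

# Software 2.0: the decision logic is learned from labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = ["free money, click here now", "meeting at 10am tomorrow"]  # toy data
labels = [1, 0]  # 1 = spam, 0 = not spam

# The "instructions" here are the learned parameters of the pipeline.
is_spam_v2 = make_pipeline(CountVectorizer(), LogisticRegression())
is_spam_v2.fit(emails, labels)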

 

To summarize, machine learning can provide significant value in situations where decision logic is too complex or nuanced for humans to define but relatively simple for computers to learn. Next, let’s establish how to determine whether a task is reasonably easy for computers to learn.

 

 

How to determine if a task is reasonably easy for a Machine Learning System to Learn?

When considering the feasibility of a machine learning project, consider the following questions:

  • Consequences of incorrect predictions
    • How often does the system need to be correct to be helpful?
    • In what situations would a wrong prediction cost us a lot of money?
    • What metric should we use to establish the performance of our predictions?
    • How does a wrong classification affect our stakeholders?
    • What are the risks involved when our system predicts incorrectly?
  • Hardware requirements
    • What is the minimum interaction speed between our system and the host?
    • Where should the model be deployed? What are the advantages and disadvantages of various hosting service providers?
    • Will the model be used in a resource-constrained setting?
    • What is the maximum cost you are willing to invest in a Minimum Viable Product?
  • Data Acquisition
    • How difficult is it to obtain the data?
    • How many sources would we use to obtain the data?
    • What is the data labeling process? How much would it cost?
    • How much data is enough data?
    • Are there any privacy concerns related to the data and its handling?
    • Is accessing the data paid or free? How can we access the data?
  • Similar Work
    • Is there any other project that did the same? How did they do it? Where did they fail? Where did they succeed? How should we differ?
    • Is there enough literature on the subject?
    • Has anyone already reduced the problem to practice?
    • Are there any pre-trained models we can use?
    • Is there any framework that could be useful and integrated into our system?

 

You can ask more questions and discuss them with your team. We use the above questions within our team before starting any machine learning project, and more questions inevitably come up as the discussion goes on.

 

How do you Specify Machine Learning Project Requirements?

Like software engineers, we use use cases to specify machine learning project requirements. Unlike traditional software, however, our measure of success is how well the project predicts on data, so we focus on the metric itself (accuracy, precision, recall, etc.).

 

Indeed, we create a single value optimization metric for the project. You can additionally define numerous satisficing criteria when evaluating models, but you should optimize only one metric.

 

As an example (a code sketch follows):

  • Optimize for recall
  • Coverage of at least 60 percent
  • Prediction latency under 10 milliseconds
  • No more than 500 MB of RAM for the model
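
As a quick illustration of how such criteria combine, here is a minimal sketch; the candidate models and their numbers are hypothetical. The satisficing criteria act as hard filters, and the single optimizing metric (recall) picks the winner among what remains.

# Hypothetical candidate models with measured metrics.
candidates = [
    {"name": "baseline", "recall": 0.71, "latency_ms": 4,  "ram_mb": 120},
    {"name": "rnn",      "recall": 0.88, "latency_ms": 12, "ram_mb": 480},
    {"name": "cnn",      "recall": 0.85, "latency_ms": 8,  "ram_mb": 450},
]

# Satisficing criteria: hard constraints every model must meet.
feasible = [m for m in candidates if m["latency_ms"] < 10 and m["ram_mb"] <= 500]

# Optimizing metric: among feasible models, pick the best recall.
best = max(feasible, key=lambda m: m["recall"])
print(best["name"])  # -> "cnn"; the rnn has higher recall but breaks the latency budget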

 

The optimization metric might be a weighted sum of several elements that matter to us. Revisit this metric as performance improves. Some teams may opt to set aside a specific requirement at the start of the project and revise their solution once they have learned more.

 

Next, determine when you will deploy your first model. The reasoning behind this is that the first deployment should involve a modest model, with the emphasis on building the machine learning pipeline required for prediction. This lets you deliver value quickly while avoiding the trap of spending too much time trying to “squeeze the juice.”

 

How to Set Up a Machine Learning Codebase

To organize a machine learning project, you will need to set up a machine learning codebase. Data processing, model creation, model training, and experiment management should be modularized in a well-organized machine learning codebase. Check the links at the beginning of the article to clone such a structure straight onto your machine.

 

Example codebase organization:

configs/
    baseline.yaml
    latest.yaml
data/
docker/ 
project_name/
  api/
    app.py
  models/
    base.py
    simple_baseline.py
    rnn.py
    cnn.py
    mlp.py
  datasets.py
  train.py
  experiment.py
scripts/
README.md

 

  • data/ is a directory where you may store raw and processed data for your project.
  • docker/ is where you may define one or more Dockerfiles for the project. Docker helps ensure consistency across various machines and deployments.
  • api/app.py exposes the model through a REST API for predictions. Rather than importing straight from your library, you will most likely want to load the (trained) model from a model registry.
  • models/ provides a set of machine learning models for the task, tied together via a common API defined in base.py. These models include code for any required data preprocessing and output normalization.
  • datasets.py is in charge of building the dataset. It handles data pipelining/staging, shuffling, and disk access.
  • experiment.py manages the process of evaluating multiple models/ideas during experimentation. It creates the dataset and models for a specific experiment.
  • train.py defines the model’s actual training loop. This code drives the optimization and handles logging during training.
  • README.md documents the project.

 

How do you organize Data Collection And Labeling in your Machine Learning Project?

A perfect machine learning pipeline uses data that labels itself. Tesla Autopilot, for example, runs a model that predicts when cars are about to cut into your lane. To collect labeled data systematically, the system notices when a vehicle moves from a nearby lane into the Tesla’s lane and then rewinds the video stream to label that the car will cut into the lane.

 

As another example, imagine Facebook developing a model to predict user engagement when deciding how to order items in the newsfeed. After serving the user content based on a prediction, they can monitor engagement and transform the interaction into a labeled observation without any human effort.

 

However, make sure to think this process through, so that your self-labeling system does not become trapped in a feedback loop with itself.

 

Many other scenarios require us to manually label data for the task we want to automate. The accuracy of your data labels therefore has a significant impact on the upper bound of model performance.

 

Most data labeling efforts involve several people, which makes labeling documentation necessary. Even if you’re the only one labeling the data, it’s a good idea to record your labeling criteria to ensure consistency. One challenging situation is when you decide to modify your labeling process after you’ve already tagged data. If this happens, tag “hard-to-label” instances in some way so that you can readily identify all comparable instances if you modify your labeling process in the future. You should also version your dataset and tie each model to a specific dataset version.

 

How to organize a large amount of unlabeled data in your Machine Learning Project?

When you have a large amount of unlabeled data and need to decide which data to label, active learning comes in handy. Labeling data can be costly, so keep the time spent on this process to a minimum.

In contrast, if you can afford to label your whole dataset, you should probably do so. Active learning adds a new level of complication.

General Approach (a code sketch follows the list):

  • Begin with an unlabeled dataset and create a “seed” dataset by labeling a small selection of instances.
  • Train a basic model on the seed dataset.
  • Predict the labels of the remaining unlabeled observations.
  • Use the model’s prediction uncertainty to prioritize which of the remaining data to label.
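
Here is a minimal sketch of one such round using uncertainty sampling; the function name, the toy random data, and the choice of classifier are all illustrative assumptions, not part of the original framework.

import numpy as np
from sklearn.linear_model import LogisticRegression

def select_for_labeling(model, X_labeled, y_labeled, X_unlabeled, budget=10):
    """One round of uncertainty sampling: train, score, pick the least certain."""
    model.fit(X_labeled, y_labeled)
    proba = model.predict_proba(X_unlabeled)
    uncertainty = 1.0 - proba.max(axis=1)     # low top-class confidence = high uncertainty
    return np.argsort(uncertainty)[-budget:]  # indices worth sending to human labelers

# Toy usage with random data standing in for a real dataset.
rng = np.random.default_rng(0)
X_seed, y_seed = rng.normal(size=(20, 5)), rng.integers(0, 2, 20)
X_pool = rng.normal(size=(200, 5))
to_label = select_for_labeling(LogisticRegression(), X_seed, y_seed, X_pool)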

 

How to Organize People Doing the Data Labeling for your Machine Learning Project?

Assigning humans to provide ground-truth labels is costly. How can you maximize the value of your data when you have access to enormous expanses of unlabeled data but a limited labeling budget? In certain circumstances, your data may contain information that offers a noisy approximation of the true label.

For example, if you’re classifying Instagram photographs, you might be able to use the hashtags in the image’s caption. Other times, you may have subject matter experts who can help you develop data heuristics.

As a company, you usually have 4 options for data labeling:

  • Employees: Ask your own employees to do the data labeling for you
  • Crowdsource: Use a platform with a large number of data labelers at once
  • Contractors: Use freelancers to do the labeling
  • Managed team: Use a specialized vetted team of data labelers.

The choice of who is going to do the data labeling depends on the scope, size, and funds available for your machine learning project.

 

How to organize the Model Exploration of your Machine Learning Project?

Create performance baselines for your problem. Baselines can be used to set a lower bound of expected performance as well as a target performance level. Useful baselines can be out-of-the-box scikit-learn models or simple heuristics. Without these baselines, it is hard to assess the value of added model complexity.

 

Here’s how you can establish a baseline (a code sketch follows the list):

  • If your problem has been well researched, examine the literature to approximate a baseline based on published results for very comparable tasks/datasets.
  • If at all possible, estimate human-level performance on the given task. Don’t expect someone to complete the task perfectly; many seemingly simple tasks are genuinely hard!
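
As one way to set the lower bound, here is a minimal sketch using scikit-learn’s DummyClassifier; the built-in dataset is only for illustration. Any real model should comfortably beat the dummy score.

from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Lower bound: always predict the majority class.
dummy = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
print("dummy baseline:", dummy.score(X_test, y_test))

# Simple out-of-the-box model for comparison.
simple = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("simple model:  ", simple.score(X_test, y_test))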

 

Practical Advice on Model Exploration

Begin simply and progressively increase complexity

This usually means starting with a basic model, but it might also mean beginning with a simplified version of your task. Sometimes a simpler model works better than a more complicated one, and non-technical people tend to understand simpler models more easily than complex ones.

Examine the literature.

Look for articles on ResearchGate that describe model architectures for similar challenges, and talk to other practitioners to find out which techniques have been most effective in practice. Then pick a state-of-the-art method and use it as a baseline model.

Recreate a known outcome.

If you’re employing a well-studied model, ensure that your model’s performance on a commonly used dataset matches what’s reported in the literature.

Overfit a single batch of data.

Don’t employ regularization just yet, since you first want to check that the unconstrained model is capable of learning from the data.
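
Here is a minimal PyTorch sketch of the idea, with random data and a toy architecture purely for illustration: train on one fixed batch and confirm the loss collapses toward zero.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One fixed batch; a healthy unconstrained model should drive its loss to ~0.
xb, yb = torch.randn(32, 20), torch.randint(0, 2, (32,))
for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    opt.step()
print(loss.item())  # should be near zero; if not, suspect a bug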

Learn how model performance scales when additional data is added. 

First, plot model performance as a function of dataset size for the baseline models you’ve investigated. Then examine how the performance of each model scales.

 

How to Refine your Model in your Machine Learning Project?

Refining your models is a routine part of organizing your machine learning project. Once you have a rough notion of which model architectures and methodologies are effective for your problem, focus your efforts on extracting performance gains from the model.

 

There are 4 general aspects to think about when refining your models.

 

1. Use the bias-variance decomposition.

Divide error into 4 categories (a worked example follows the list):

  • Irreducible error
  • Avoidable bias (the difference between train error and irreducible error)
  • Variance (the difference between validation error and train error)
  • Validation set overfitting (the difference between test error and validation error)
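
As a worked illustration, with invented error rates (the numbers are hypothetical, not from the article):

# Hypothetical error rates, for illustration only.
irreducible = 0.05   # e.g., estimated from human-level performance
train_err   = 0.08
val_err     = 0.11
test_err    = 0.12

avoidable_bias  = train_err - irreducible  # 0.03 -> underfitting signal
variance        = val_err - train_err      # 0.03 -> overfitting signal
val_overfitting = test_err - val_err       # 0.01 -> validation set overuse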

2. Create a scalable data pipeline. 

By this point you’ve identified which sorts of data are required for your model, and you can now focus on designing a performant pipeline.

Addressing Underfitting

  • Increase model capacity
  • Add more features
  • Decrease regularization
  • Perform error analysis
  • Choose a more advanced architecture
  • Tune hyperparameters

Addressing Overfitting

  • Get more training data
  • Use regularization
  • Reduce model size
  • Implement data augmentation
  • Perform error analysis
  • Tune hyperparameters

Addressing Distribution Shift

– Conduct an error analysis to understand the nature of the distribution shift.
– Apply domain adaptation strategies.
– Synthesize data that better matches the test distribution.

 

3. Collect data in a targeted manner to address current failure modes. 

Create a mechanism for systematically assessing the flaws in your current model. If possible, categorize these mistakes and collect more data that targets them.

 

4. Use a coarse-to-fine approach for intelligent random search over hyperparameters.

Begin with a broad hyperparameter space and iteratively narrow it down to the best-performing part of the hyperparameter space.
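
Here is a minimal sketch of the idea, with a toy objective standing in for an actual training run; the evaluate function, ranges, and sample sizes are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

def evaluate(lr):
    # Stand-in for "train a model, return its validation score";
    # this toy objective peaks near lr = 3e-3.
    return -abs(np.log10(lr) + 2.5)

# Coarse stage: sample learning rates log-uniformly over a wide range.
coarse = 10 ** rng.uniform(-6, -1, size=20)
best = max(coarse, key=evaluate)

# Fine stage: narrow the search to a band around the coarse winner.
lo, hi = np.log10(best) - 0.5, np.log10(best) + 0.5
fine = 10 ** rng.uniform(lo, hi, size=20)
best = max(fine, key=evaluate)
print(f"best lr ~ {best:.2e}")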

 

 

How to Debug a Machine Learning Project:

1. Understand why your model is underperforming

You can look for clues in:

  • Implementation flaws
  • Dataset creation
  • Hyperparameter options
  • Data/model compatibility

Start simply and gradually increase complexity to accomplish machine learning tasks effectively. Begin with a solid foundation and build incrementally on it.

 

According to Andrej Karpathy, the most common neural net mistakes are (a sketch avoiding pitfalls 2–4 follows the list):

1) you didn’t try to overfit a single batch first.

2) you forgot to toggle train/eval mode for the net.

3) you forgot to .zero_grad() (in PyTorch) before .backward().

4) you passed softmaxed outputs to a loss that expects raw logits.

5) you didn’t use bias=False for your Linear/Conv2d layer when using BatchNorm, or conversely, forgot to include it for the output layer.
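
To make pitfalls 2–4 concrete, here is a minimal PyTorch sketch of a training step that avoids them; the model and random data are only for illustration.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()  # expects raw logits, NOT softmaxed outputs (pitfall 4)

batches = [(torch.randn(16, 20), torch.randint(0, 2, (16,))) for _ in range(10)]

model.train()                  # toggle train mode before training (pitfall 2)
for xb, yb in batches:
    opt.zero_grad()            # clear stale gradients before backward (pitfall 3)
    loss = loss_fn(model(xb), yb)
    loss.backward()
    opt.step()
model.eval()                   # toggle eval mode before validation (pitfall 2)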

 

2. Discover Failure Modes in your Machine Learning Project

Clustering may be used to find failure modes and enhance error analysis:

  • Choose all of the wrong predictions. (Alternatively, sort your observations by their loss to surface the most significant mistakes.)
  • Apply a clustering method, such as DBSCAN, on a subset of data.
  • Manually search the clusters for common characteristics that make prediction difficult.

Categorize the observations with inaccurate predictions and decide what action to take in the model refinement step to improve performance on these cases.
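
A minimal sketch of this workflow, with random stand-ins for the model and the validation data; on real features, eps and min_samples would need tuning.

import numpy as np
from sklearn.cluster import DBSCAN

# Stand-ins: X_val/y_val are validation features/labels, y_pred the model output.
rng = np.random.default_rng(0)
X_val = rng.random((200, 8))
y_val = rng.integers(0, 2, 200)
y_pred = rng.integers(0, 2, 200)  # replace with model.predict(X_val)

wrong = X_val[y_pred != y_val]          # keep only the misclassified examples
clusters = DBSCAN(eps=0.5, min_samples=5).fit_predict(wrong)

# Manually inspect each cluster for shared traits that make prediction hard.
for label in set(clusters) - {-1}:      # -1 is DBSCAN's noise label
    print(f"cluster {label}: {np.sum(clusters == label)} failures")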

 

How to Test and Evaluate your Machine Learning Project:

 

Unit tests are the way to go here. If you have not done so yet, you should start doing it now.

There are several components of an ML product to examine:

 

  1. Training System:

The training system processes the input data, runs the experiments, manages the results, and stores the weights.

 

Required Tests:

Test the whole training pipeline (from raw data to trained model) to check that no upstream changes have occurred in how data from your application is stored. These tests should run on a nightly/weekly basis.

 

  2. Prediction System:

The prediction system builds the network, loads the previously saved weights, and produces predictions.

 

Required Tests:

Run inference on the validation data to check that the model score does not decline with the new model/weights. These tests should run with each code push.

You should also have a simple functionality test that runs on a few essential cases to confirm that you haven’t broken functionality while developing rapidly. These tests serve as a sanity check while creating new code.

Consider scenarios that your model could encounter, and create tests to check that new models continue to perform well on them. A “test case” is a human-defined scenario represented by a curated set of observations.
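
For instance, a couple of pytest-style sanity checks might look like this minimal sketch; the dataset, model, and score floor are stand-ins for your own.

# test_prediction_system.py -- run with `pytest`; everything here is illustrative.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X_train, X_val, y_train, y_val = train_test_split(*load_iris(return_X_y=True), random_state=0)
MODEL = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # stand-in for a registry load

def test_score_has_not_regressed():
    """New weights must not push the validation score below a floor."""
    assert MODEL.score(X_val, y_val) >= 0.90

def test_prediction_shape_and_range():
    """Sanity check: one prediction per row, labels from the known set."""
    preds = MODEL.predict(X_val)
    assert len(preds) == len(X_val)
    assert set(preds) <= {0, 1, 2}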

 

  3. Serving System:

The serving system is exposed to “real world” input and performs inference on production data. In addition, this system must be able to scale in response to demand.

 

Required Tests:

– Downtime and error notifications.
– Monitoring the data for shifts in distribution.

 

How to organize your machine learning project to evaluate its Production Readiness

Google published a fairly detailed document on how to ensure that your system is production-ready. A summary is below, and you can find the original document here.

 

Model Monitoring Checklist

– Dependency changes trigger notifications.
– Training and serving are not skewed.
– Models are not too stale.
– Models are numerically stable.
– The computing performance has not deteriorated.
– The accuracy of predictions has not deteriorated.

Data Production Readiness Checklist

– Feature expectations are captured in a schema.
– All features are beneficial.
– No feature is too expensive.
– Features adhere to meta-level requirements.
– The data pipeline has appropriate privacy controls.
– New features can be added quickly.
– All input feature code is tested.

Model Production Readiness Checklist

– Model specifications are reviewed and submitted.
– Offline and online metrics correlate.
– All hyperparameters have been tuned.
– The impact of model staleness is known.
– A simpler model is not better.
– Model quality is sufficient on important data slices.
– The model is tested for considerations of inclusion.

Infrastructure Production Readiness Checklist

– Training is reproducible.
– Model specifications are unit tested.
– The ML pipeline has undergone integration testing.
– Model quality is validated before serving.
– The model can be debugged.
– Models are canaried before serving.
– Serving models are reversible.

 

Organize the Model Deployment Phase in your Machine Learning Project:

When you organize your machine learning project, make sure that you have a versioning system in place for:

  • The training dataset
  • The validation dataset
  • The feature pipeline
  • The model parameters
  • The model configuration

 

A popular method for deploying a model is to bundle it into a Docker container and offer a REST API for inference.
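
As a minimal sketch of that pattern (FastAPI and joblib are my choices here, not prescribed by the article, and the model file name is hypothetical):

# app.py -- a minimal sketch of serving a model behind a REST API.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # load trained weights once at startup

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    pred = model.predict([features.values])[0]
    return {"prediction": int(pred)}

Inside the container, you would launch this with a server such as uvicorn (uvicorn app:app).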

 

Shadow Mode: 

Ship the new model alongside the current model, continuing to use the current model for predictions while storing the results from both. Measuring the difference between the new and existing models’ predictions will give you an idea of how much will change when you switch to the new model.
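
In code, shadow mode can be as simple as this sketch (the function and model names are illustrative):

import logging

def predict_with_shadow(current_model, shadow_model, features):
    """Serve the current model; log the shadow model's output for comparison."""
    served = current_model.predict(features)
    shadowed = shadow_model.predict(features)  # computed but never returned to the user
    logging.info("shadow_compare served=%s shadow=%s", served, shadowed)
    return served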

 

Canarying

Serve the new model to a small group of users (e.g., 5%) while serving the previous model to the rest. Check for a smooth rollout before deploying the new model to the remaining users.

 

 

How to Perform Ongoing Model Maintenance:

 

1. Model performance will decrease over time

As time passes, there is always some distribution shift, and because of it your model’s performance is likely to decrease. Build periods into your plan for retraining your models so that they keep learning from real-world data.

 

2. CACE: Changing Anything Changes Everything

Machine learning systems are tightly coupled; their components share many dependencies. Consequently, changes to hyperparameters, the learning rate, the models, etc. will have a direct or indirect impact on the rest of the system.

How can you deal with that?

  • Work in blocks; compartmentalize. Separate the problem into distinct components and test them.
  • Add validation tests whenever you add new code.

 

3. Outside components can affect your model

Assuming that your end product is widely used, other components within your production infrastructure can end up depending on your model without your knowledge.

How can you deal with that?

Set up access control and monitoring for any components outside your model.

 

4. Do not depend on input that changes over time

Any external input that may change over time can negatively impact your model’s performance. If you take inputs from another website or from a source you do not control, you may end up with a non-performing model.

How do you resolve that?

Create a versioned copy of every external dataset within your project pipeline.

 

5. Do not use features that are not useful

Your feature space needs constant monitoring. After some time, removing certain redundant features may become necessary. Remember, a model should contain only the relevant/important variables for the business use case you are solving.

Many resources online already give a detailed account of feature selection. I suggest you have a quick look at them.

 

A final word

These are most of the things to take into account when organizing and planning your machine learning project. Building a model is the easy part; it is everything around building that model that is difficult. If not done properly, it can cause your project to fail. So do take the concepts mentioned above into account, and let me know in the comment section below if you do anything differently.

 

If you made it this far in the article, thank you very much.

 

I hope this information was of use to you. 

Feel free to use any information from this page. I’d appreciate it if you can simply link to this article as the source. If you have any additional questions, you can reach out to malick@malicksarr.com  or message me on Twitter. If you want more content like this, join my email list to receive the latest articles. I promise I do not spam. 

 


 
