Regularization In Machine Learning

I am Amit Shekhar, Co-Founder @ Outcome School. I have taught and mentored many developers, and their efforts landed them high-paying tech jobs. I have helped many tech companies solve their unique problems and created many open-source libraries that are used by top companies. I am passionate about sharing knowledge through open-source, blogs, and videos.

Join Outcome School and get a high-paying tech job: Outcome School

Before we begin, we’d like to mention that we’ve launched our YouTube channel. Subscribe to the Outcome School YouTube Channel.

In this blog, we will learn about Regularization in Machine Learning.

Regularization is a technique used to solve the overfitting problem in machine learning models.

Solving the overfitting problem with regularization

What is overfitting?

Overfitting is a phenomenon that occurs when a model learns the detail and noise in the training data to such an extent that it negatively impacts the model's performance on new data.

So overfitting is a major problem, as it negatively impacts the model's performance on unseen data.

Regularization technique to the rescue.

Generally, a good model does not give excessive weight to any particular feature; the weights remain small and evenly distributed. This can be achieved by applying regularization, which penalizes large weights.

There are two types of regularization, as follows:

  • L1 Regularization or Lasso Regularization
  • L2 Regularization or Ridge Regularization

L1 Regularization or Lasso Regularization

L1 Regularization or Lasso Regularization adds a penalty to the error function. The penalty is the sum of the absolute values of the weights.

Cost = Loss + p * (|w1| + |w2| + ... + |wn|)

Here, p is the tuning parameter that decides how much we want to penalize the model.
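
As a rough sketch, assuming NumPy is available, the L1-penalized cost can be computed directly from the formula above (the data, weights, and value of p below are made up purely for illustration):

```python
import numpy as np

# Hypothetical data, weights, and tuning parameter, chosen only to illustrate the formula.
X = np.array([[1.0, 2.0], [2.0, 0.5], [3.0, 1.5]])
y = np.array([3.0, 2.5, 4.5])
w = np.array([0.8, 0.6])
p = 0.1  # tuning parameter

predictions = X @ w
loss = np.mean((y - predictions) ** 2)      # mean squared error
l1_penalty = p * np.sum(np.abs(w))          # p * (|w1| + |w2| + ... + |wn|)
l1_cost = loss + l1_penalty
print(l1_cost)
```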

L2 Regularization or Ridge Regularization

L2 Regularization or Ridge Regularization also adds a penalty to the error function. But here, the penalty is the sum of the squared values of the weights.

Cost = Loss + p * (w1^2 + w2^2 + ... + wn^2)

Similar to L1, in L2 also, p is the tuning parameter that decides how much we want to penalize the model.
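
Similarly, here is a minimal sketch of the L2-penalized cost, followed by how both penalties are typically applied in practice with scikit-learn's Lasso (L1) and Ridge (L2), assuming NumPy and scikit-learn are installed; their alpha parameter plays a role similar to p here:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Hypothetical data, weights, and tuning parameter, chosen only to illustrate the formula.
X = np.array([[1.0, 2.0], [2.0, 0.5], [3.0, 1.5]])
y = np.array([3.0, 2.5, 4.5])
w = np.array([0.8, 0.6])
p = 0.1  # tuning parameter

loss = np.mean((y - X @ w) ** 2)            # mean squared error
l2_penalty = p * np.sum(w ** 2)             # p * (w1^2 + w2^2 + ... + wn^2)
l2_cost = loss + l2_penalty
print(l2_cost)

# In practice, Lasso (L1) and Ridge (L2) add these penalties for us.
lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=0.1).fit(X, y)
print(lasso.coef_)
print(ridge.coef_)
```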

This is Regularization.

Prepare yourself for the Machine Learning Interview: Machine Learning Interview Questions

That's it for now.

Thanks

Amit Shekhar
Co-Founder @ Outcome School

Read all of our high-quality blogs here.