What is L1 & L2 Regularization? Why is it useful?
L1 regularization adds a penalty proportional to the absolute value of each weight (λ·Σ|wᵢ|), which tends to drive many weights exactly to zero and produces sparse models. L2 regularization adds a penalty proportional to the squared value of each weight (λ·Σwᵢ²), which shrinks weights toward zero during training but rarely makes them exactly zero. Both rest on the assumption that models with smaller weights are simpler and generalize better, so they are useful for reducing overfitting; L1 is additionally useful as an implicit feature selector because of the sparsity it induces.
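A minimal sketch of the difference, assuming scikit-learn is available (Lasso uses an L1 penalty, Ridge an L2 penalty); the synthetic dataset and the alpha value here are illustrative choices, not part of the original answer:

```python
# Compare L1 (Lasso) vs L2 (Ridge) regularization on synthetic data
# where only a few of the 20 features are actually informative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)   # adds alpha * sum(|w|) to the loss
ridge = Ridge(alpha=1.0).fit(X, y)   # adds alpha * sum(w^2) to the loss

# L1 drives many coefficients exactly to zero (sparse model);
# L2 only shrinks them, so essentially none end up exactly zero.
print("Lasso zero coefficients:", np.sum(lasso.coef_ == 0))
print("Ridge zero coefficients:", np.sum(ridge.coef_ == 0))
```

Running this typically shows Lasso zeroing out most of the uninformative features while Ridge keeps all coefficients nonzero but small, which is the sparsity-versus-shrinkage distinction described above.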