What does the Learning rate determine in Gradient Descent?

The learning rate determines how big a step Gradient Descent takes in the direction of the negative gradient at each update. If it is too small, convergence is slow; if it is too large, the updates can overshoot the minimum or diverge. Note also that networks with nonlinear activations have non-convex loss surfaces with local minima, and SGD works better in practice than batch gradient descent for optimizing such non-convex functions.
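A minimal sketch of the effect of the learning rate, using plain gradient descent on f(x) = x² (the function, starting point, and step counts are illustrative choices, not from the original answer):

```python
def gradient_descent(grad, x0, lr, steps):
    """Minimal gradient descent: repeatedly step against the gradient.

    lr (the learning rate) scales how big each step is.
    """
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = x^2, whose gradient is 2x; the minimum is at x = 0.
grad = lambda x: 2 * x

small = gradient_descent(grad, x0=5.0, lr=0.01, steps=100)  # tiny steps: still far from 0
good  = gradient_descent(grad, x0=5.0, lr=0.1,  steps=100)  # converges close to 0
big   = gradient_descent(grad, x0=5.0, lr=1.1,  steps=100)  # overshoots and diverges

print(small, good, big)
```

Running this shows the three regimes: a learning rate that is too small makes slow progress, a moderate one converges, and one that is too large makes the iterates blow up.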
