Thursday, June 28, 2018
Neural Networks: Common Ways to Improve Generalization and Reduce Overfitting
1. Data augmentation
It is the easiest and most common way to reduce overfitting in machine learning. For images, you can generate new training examples by translating or flipping the images in the training set. More generally, you can also enrich your data representation, for example with PCA-derived features combined with feature selection.
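As a minimal sketch, this is what random flips and shifts might look like with torchvision (the CIFAR10 dataset and the ./data path are just illustrative choices):

import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10

# Randomly flip and crop each training image so the network
# rarely sees exactly the same input twice.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),   # mirror left/right half the time
    transforms.RandomCrop(32, padding=4),     # shift by up to 4 pixels
    transforms.ToTensor(),
])

train_set = CIFAR10(root="./data", train=True, download=True,
                    transform=train_transform)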
2. Regularization
Add an L1 or L2 regularization term (the latter is also known as weight decay) to the loss function in order to penalize certain parameter configurations.
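A minimal sketch of adding an L2 penalty to the loss by hand in PyTorch (the toy model and the strength lam are illustrative; passing weight_decay to an optimizer such as torch.optim.SGD achieves the same L2 effect):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)   # toy model; any nn.Module works
criterion = nn.MSELoss()
lam = 1e-4                 # regularization strength (an illustrative small value)

x, y = torch.randn(8, 10), torch.randn(8, 1)
data_loss = criterion(model(x), y)

# L2 penalty: sum of squared parameters, scaled by lam.
l2_penalty = sum(p.pow(2).sum() for p in model.parameters())
loss = data_loss + lam * l2_penalty
loss.backward()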
Reference:
http://deeplearning.net/tutorial/gettingstarted.html
http://www.quora.com/What-is-the-difference-between-L1-and-L2-regularization
3. Early stopping
It's a strategy for stopping training before the learner begins to overfit. Simply stated, early stopping halts training when the error on the validation set starts increasing instead of decreasing.
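A minimal sketch of the logic, assuming hypothetical train_one_epoch and evaluate helpers that stand in for your own training and validation code:

best_val_loss = float("inf")
patience, bad_epochs = 5, 0   # stop after 5 epochs with no improvement
max_epochs = 100

for epoch in range(max_epochs):
    train_one_epoch(model)                 # hypothetical training step
    val_loss = evaluate(model, val_set)    # hypothetical validation step

    if val_loss < best_val_loss:
        best_val_loss = val_loss
        bad_epochs = 0
        # save a checkpoint of the best model here
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break   # validation error stopped improving; stop training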
Reference:
https://en.wikipedia.org/wiki/Early_stopping
4. Dropout
Dropout works at the level of the activations: during training, each neuron's output is randomly set to 0 with some probability p (0.5 is a common choice). Srivastava et al. (2014) found that dropout helps prevent overfitting to a large extent over long training runs: the decrease in validation accuracy due to overfitting is much smaller than in networks trained without dropout.
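To make the mechanics concrete, here is a small NumPy sketch of "inverted" dropout, which rescales the surviving activations so their expected value is unchanged when dropout is switched off at test time:

import numpy as np

def dropout(activations, p=0.5, training=True):
    """Zero each activation with probability p during training."""
    if not training:
        return activations   # no dropout at test time
    mask = np.random.rand(*activations.shape) >= p
    # Inverted dropout: rescale by 1/(1-p) so the expected value is unchanged.
    return activations * mask / (1.0 - p)

h = np.random.randn(4, 8)        # toy hidden-layer activations
h_dropped = dropout(h, p=0.5)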
Reference:
[Srivastava et al. 2014] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research 15:1929-1958, 2014.