@nrailgun
2016-05-30T17:07:23.000000Z
Paper notes
This is a paper cited in Feifei Li's computer vision course at Stanford; it covers some of the "black art that is hard to find in textbooks".
A common misconception about overfitting is that it is caused by noise. In fact, a model can overfit even when the training data is completely noise-free, simply by fitting chance regularities of the particular training sample.
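A minimal sketch of this, assuming NumPy and scikit-learn are available (the data-generating rule is invented for illustration): the labels are a deterministic function of the inputs, with zero label noise, yet an unrestricted decision tree fitted on a small sample still shows a clear train/test gap.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 10))
# Deterministic labeling rule: no label noise at all.
y = (np.sin(3 * X[:, 0]) + X[:, 1] ** 2 > 0.5).astype(int)

X_train, y_train = X[:100], y[:100]   # small training sample
X_test, y_test = X[100:], y[100:]

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", tree.score(X_train, y_train))  # typically 1.0
print("test accuracy: ", tree.score(X_test, y_test))    # typically noticeably lower
```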
ML is not a one-shot process of building a data set and running a learner, but rather an iterative process of running the learner, analyzing the results, modifying the data and/or the learner, and repeating.
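A hedged sketch of that loop in miniature, assuming scikit-learn; the candidate pipelines below are arbitrary examples of "modify and re-run", not a prescription from the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

candidates = {
    "raw features + logistic":   make_pipeline(LogisticRegression(max_iter=5000)),
    "scaled features + logistic": make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)),
    "scaled + stronger L2":       make_pipeline(StandardScaler(), LogisticRegression(C=0.1, max_iter=5000)),
}

# Run the learner, analyze validation results, modify the pipeline, repeat.
for name, model in candidates.items():
    model.fit(X_train, y_train)
    print(f"{name:26s} validation accuracy = {model.score(X_val, y_val):.3f}")
```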
Suppose you have constructed the best set of features you can, but the classifiers you are getting are still not accurate enough. There are two main choices: design a better learning algorithm, or gather more data.
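One way to inform that choice (a sketch, assuming scikit-learn and its digits dataset; not a method from the paper) is to look at a learning curve: if the validation score is still climbing as the training set grows, gathering more data is likely to pay off before redesigning the learner.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=5000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"n={n:4d}  train={tr:.3f}  validation={va:.3f}")
```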
As a rule, it pays to try the simplest learner first.
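A sketch of "simplest learner first", assuming scikit-learn and an arbitrary built-in dataset: start from a trivial majority-class baseline and a naive Bayes model, and only reach for something heavier if they fall short.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.dummy import DummyClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
for name, model in [("majority baseline", DummyClassifier(strategy="most_frequent")),
                    ("naive Bayes", GaussianNB())]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:18s} cross-validated accuracy = {score:.3f}")
```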
There is no necessary connection between the number of parameters of a model and its tendency to overfit.
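A sketch of this claim, assuming scikit-learn (the paper gives its own examples; this one is mine): a single unrestricted decision tree has far fewer parameters than a 300-tree random forest, yet the forest usually shows the smaller train/test accuracy gap.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, m in [("single tree", DecisionTreeClassifier(random_state=0)),
                ("300-tree random forest", RandomForestClassifier(n_estimators=300, random_state=0))]:
    m.fit(X_tr, y_tr)
    print(f"{name:22s} train={m.score(X_tr, y_tr):.3f}  test={m.score(X_te, y_te):.3f}")
```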