Professor Cai reviewed the Adaptive Lasso with us. She also provided a framework for understanding the similarities among various machine learning formalisms (e.g., the hinge loss function and the SVM).
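As a concrete reference, here is a minimal sketch of the two-step adaptive lasso (my own illustration, not Professor Cai's exact formulation): get a pilot estimate of the coefficients, turn it into data-driven penalty weights, and solve the weighted-L1 problem by rescaling the columns of X and running a standard lasso. The function names (`adaptive_lasso`, `hinge_loss`) and the choice of a ridge pilot estimate are assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0, eps=1e-6):
    """Two-step adaptive lasso via column rescaling (illustrative sketch)."""
    # Step 1: pilot estimate of the coefficients (ridge keeps this stable when p is large).
    beta_init = Ridge(alpha=1.0).fit(X, y).coef_
    # Step 2: penalty weights -- large initial coefficients get small penalties.
    w = 1.0 / (np.abs(beta_init) ** gamma + eps)
    # Minimizing ||y - X b||^2 + alpha * sum_j w_j |b_j| is equivalent to a plain
    # lasso on the rescaled design X_j / w_j, with coefficients mapped back afterwards.
    lasso = Lasso(alpha=alpha).fit(X / w, y)
    return lasso.coef_ / w

# The hinge loss used by the soft-margin SVM fits the same loss-plus-penalty template:
# average max(0, 1 - y_i * f(x_i)) over the sample, plus an L2 penalty on the weights.
def hinge_loss(y, scores):
    return np.mean(np.maximum(0.0, 1.0 - y * scores))
```

Seen this way, the lasso (squared loss + weighted L1 penalty) and the SVM (hinge loss + L2 penalty) are instances of the same regularized-loss template, which is the kind of unifying view the lecture pointed at.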
She also discussed the fundamental bias-vs.-variance tradeoff: approximation error (the bias that comes from restricting the model to a chosen feature space) versus variance (the error that comes from fitting a finite sample). The two trade off against each other, so a model with lower approximation error can afford a higher sample error, and vice versa.
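A small simulation (my own illustration, not from the lecture) makes the split concrete: fit polynomials of degree 1 and degree 8 to repeated samples from the same smooth curve, then compare the squared bias (approximation error) and the variance (sample-to-sample error) of the fitted curves. The helper name `fit_and_predict` and the specific noise level are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
x_grid = np.linspace(0, 1, 200)
f_true = np.sin(2 * np.pi * x_grid)              # target function

def fit_and_predict(degree, n=30, sigma=0.3):
    # Draw a fresh noisy sample and fit a least-squares polynomial of the given degree.
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + sigma * rng.normal(size=n)
    coefs = np.polyfit(x, y, degree)
    return np.polyval(coefs, x_grid)

for degree in (1, 8):
    preds = np.array([fit_and_predict(degree) for _ in range(500)])
    mean_pred = preds.mean(axis=0)
    bias_sq = np.mean((mean_pred - f_true) ** 2)   # approximation error (squared bias)
    variance = np.mean(preds.var(axis=0))          # variance across resampled fits
    print(f"degree {degree}: bias^2 = {bias_sq:.3f}, variance = {variance:.3f}")
```

The degree-1 fit shows high squared bias and low variance, while the degree-8 fit shows the reverse, which is the tradeoff described above.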