should scale_penalty_with_samples = true be default? #149
Original issue:

Just sinking my teeth into MLJLinearModels and I see that scale_penalty_with_samples = true is the default. Playing around with a number of toy datasets, it seems that scale_penalty_with_samples = true does not produce intuitive results, while scale_penalty_with_samples = false does. Given this, should scale_penalty_with_samples = false be made the default, or is there a logical reason that it is not?

Comments:

@tlienart thanks for the link to the previous discussion. To me it seems likely a smaller value for …

Isn't that what the end of the discussion and the relevant commit did? In any case, if you look at scikit-learn, for instance, which is one of the reference implementations out there, they use scaling, and they use an L2 penalty with a nontrivial lambda by default for logistic regression. I personally don't have a strong opinion on this; to me it's a matter of having correct docs and potentially guiding the user in what they should do (e.g. hyperparameter tuning). If you feel that the docs were unclear, please consider opening a PR for it.

I think expanding the docs with examples of the different regression options would be a good idea. I'll open a new issue for this with the hopes of supplying a PR soon.