Consider sklearn way of fitting #10
I don't think there is a necessity to be married to the sklearn architecture; the more recent ML frameworks are moving away from that paradigm (particularly PyTorch). It can also be memory hungry if we keep copying data structures the sklearn way. For all intents and purposes, if we have a means to compile, fit models in parallel, perform cross-validation, and enable predictions, that should be OK.
The primary concern was that currently the data is input in the model constructor rather than in a `fit` method.

Also, I do not think PyTorch's interface, or even pytorch-lightning, could be considered "moving away" from the sklearn paradigm.

One of the huge benefits of sticking to, or at least having an adapter for, the sklearn interface is that the models then plug directly into the existing sklearn ecosystem.
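To illustrate the adapter idea: any class that follows sklearn's `fit`/`predict` contract (hyperparameters in the constructor, data in `fit`, learned attributes with a trailing underscore, `fit` returning `self`) gets sklearn's utilities like `cross_val_score`, `Pipeline`, and `GridSearchCV` for free. A minimal sketch with a toy estimator (the class name `MeanRegressor` is hypothetical, not from this repo):

```python
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.model_selection import cross_val_score

class MeanRegressor(BaseEstimator, RegressorMixin):
    """Toy estimator: predicts the training-set mean of y."""
    def fit(self, X, y):
        X = np.asarray(X)
        self.mean_ = float(np.mean(y))    # learned state: trailing underscore
        self.n_features_in_ = X.shape[1]
        return self                       # fit returns self by convention

    def predict(self, X):
        X = np.asarray(X)
        return np.full(X.shape[0], self.mean_)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)

# Because the estimator honors the interface, sklearn utilities just work:
scores = cross_val_score(MeanRegressor(), X, y, cv=5)
print(len(scores))
```

The point is not the model itself but that nothing else had to be written to get cross-validation.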
Got it. I'm pretty ambivalent about the design choice here; I think the current implementation is certainly off to a good start.
I agree that the wrinkle of the Bayesian implementation makes taking advantage of the traditional sklearn interface difficult.
One thing to note here is that having a single round of cross-validation is highly advantageous for posterior predictive checks. I see that the arviz loo and posterior predictive checks appear to address this, but it is not obvious how to specify which samples are used to fit the model and which are used for validation.
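One way to sidestep that ambiguity is to make the fit/validation split explicit outside the model entirely, so there is no question which samples the model was fit on. A minimal sketch using sklearn's `train_test_split` (synthetic data; the downstream posterior-predictive step is only indicated in comments):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 4))
y = rng.normal(size=100)

# Hold out 20% of the samples explicitly, up front.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Fit the model on (X_train, y_train) only, then generate posterior
# predictive draws for X_val and compare them against y_val.
print(X_train.shape, X_val.shape)
```

This keeps the split reproducible (via `random_state`) and independent of whatever the library's internal cross-validation machinery does.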
Brought up by @gwarmstrong
In sklearn the `fit` function is where the data is input, rather than the model constructor. This would be more intuitive to users familiar with sklearn, but could require some rethinking of how feature names, sample names, etc. are kept.
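The two designs being compared can be sketched side by side (both class names are hypothetical, for illustration only):

```python
import numpy as np

class ConstructorStyleModel:
    """Current style: the data enters through the constructor."""
    def __init__(self, table):
        self.table = np.asarray(table)   # data fixed at construction time

    def fit(self):
        # fit() takes no data; feature/sample names can live alongside table
        self.coef_ = self.table.mean(axis=0)
        return self

class SklearnStyleModel:
    """sklearn style: the constructor takes only hyperparameters."""
    def __init__(self, num_iters=1000):
        self.num_iters = num_iters       # a hyperparameter, not data

    def fit(self, X):
        # data enters here, so names must be captured inside fit() instead
        X = np.asarray(X)
        self.coef_ = X.mean(axis=0)
        return self

X = np.arange(12.0).reshape(4, 3)
a = ConstructorStyleModel(X).fit()
b = SklearnStyleModel().fit(X)
print(np.allclose(a.coef_, b.coef_))  # True
```

Both reach the same result; the difference is where bookkeeping such as feature and sample names has to happen, which is the rethinking mentioned above.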