Wrap models not machines? #5
Comments
@ablaom Currently MLJ models can give deterministic & probabilistic predictions (GLM predicts an entire distribution). For example: Note: at some point it would be great to compare these with predictions/prediction intervals from NGBoost.py etc.
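A hedged sketch of the distinction mentioned above (not the elided example from the thread): in MLJ, a `Probabilistic` model's `predict` returns distributions, which can be collapsed to points. Assumes MLJ and the GLM interface package are installed.

```julia
using MLJ

# Synthetic regression data via MLJ's built-in generator.
X, y = make_regression(100, 3)

# GLM's LinearRegressor is a `Probabilistic` model.
LinearRegressor = @load LinearRegressor pkg=GLM
mach = machine(LinearRegressor(), X, y)
fit!(mach)

yhat = predict(mach, X)   # vector of Normal distributions, one per row
predict_mean(mach, X)     # collapse the distributions to point predictions
```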
Yes, I guess that's the design I am suggesting. So, similar to the way
Hi @ablaom 👋🏽 Thanks very much for this suggestion and sorry for the delayed response (been battling Covid this week while also trying to finish my JuliaCon proceedings submission 😅). I will implement this first thing once I turn back to working on this package some time next week 👍🏽
Did you not need new abstract model subtype(s) at MLJModelInterface? For set-predictions (we already have
Yup, you're right. Have done that now in #20
@pat-alt
Congratulations on the launch of this new package 🎉 Great to have the integration with MLJ!
I'm not familiar with conformal prediction, but I nevertheless wonder why this package wraps MLJ machines rather than models. If you wrap models, then you buy into MLJ's model composition. So, a "conformally wrapped model" will behave like any other model: you can insert it in a pipeline, wrap it in a tuning strategy, and so forth.
New models in MLJ generally implement the "basement level" model API. Machines are a higher level abstraction for: (i) user interaction; and (ii) syntax for building learning networks which are ultimately "exported" as standalone model types.
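A hedged sketch of what "wrapping the model, not the machine" buys. The `conformal_model` constructor here is hypothetical (standing in for whatever constructor this package eventually exposes); the composition calls (`|>` pipelines, `TunedModel`) are the standard MLJ API.

```julia
using MLJ

Tree = @load DecisionTreeClassifier pkg=DecisionTree

# Hypothetical wrapper: because the result is itself a model...
conf_model = conformal_model(Tree())

# ...it drops into a pipeline like any other model:
pipe = ContinuousEncoder() |> conf_model

# ...and into a tuning strategy (field name `model` is assumed
# to be how the wrapper exposes the atomic model):
tuned = TunedModel(model=conf_model,
                   range=range(conf_model, :(model.max_depth), lower=1, upper=5),
                   resampling=CV(nfolds=3),
                   measure=log_loss)
```

A machine, by contrast, is bound to data, so a machine-based wrapper cannot participate in this kind of composition.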
Here are other examples of model wrapping in MLJ: `EnsembleModel` (docs), `BinaryThresholdPredictor`, `TunedModel`, `IteratedModel`. What makes things a little complicated is the model hierarchy: the model supertype for the wrapped model depends on the supertype of the atomic model. So, for example, we don't just have `EnsembleModel`; we have `DeterministicEnsembleModel` (for ordinary point predictors) and `ProbabilisticEnsembleModel` (for probabilistic predictors), but the user only sees a single constructor, `EnsembleModel`; see here. (A longer-term goal is to drop the hierarchy in favour of a pure trait interface, which will simplify things, but that's a little ways off yet.)

Happy to provide further guidance.
cc @azev77