fix bug in TuneWrapper for param used in predict function, issue 2472 #2479
Conversation
R/TuneWrapper.R
Outdated
predictLearner(lrn, .model$learner.model$next.model, .newdata, ...)
arglist = list(.learner = lrn, .model = .model$learner.model$next.model, .newdata = .newdata)
arglist = insert(arglist, list(...))
arglist = insert(arglist, .model$learner.model$opt.result$x)
Should probably only insert the parameters that have when %in% c("predict", "both"). Maybe one could just copy the lines from predict.R, if we are sure there are no other wrappers that introduce a discrepancy between the par.vals of a learner and the ... parameters on purpose.
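As a rough sketch of that suggestion (assumptions, not the actual patch: the relevant param set is reachable via getParamSet(lrn) and every tuned parameter carries a when slot), the filter could look like this:

# Keep only those tuned values whose parameter is used at predict time.
ps = getParamSet(lrn)
opt.x = .model$learner.model$opt.result$x
use.at.predict = vapply(names(opt.x),
  function(pn) ps$pars[[pn]]$when %in% c("predict", "both"), logical(1L))
arglist = insert(arglist, opt.x[use.at.predict])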
Some notes regarding my observation:
we should probably not rely on this; right now the parameter values are given both inside the
How would that happen (assuming the user does not directly modify the
You mean the ones being tuned over? It might be worth considering to remove the parameters found in the tuning param set from the wrapped learner's param set, so the user gets an error message when he tries to set a param that will be overwritten later.
I'm confused again, how would that happen? Does any part of
That is the architecture of mlr at the moment and we are relying on it everywhere. In
Like that, yes.
A warning somewhere might be useful. But I am afraid we would generate warnings for cases where we have set the par.vals in the definition of the learner. This can entail a lot of work.
My suggestion: We should merge this PR asap and can add convenience later.
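For illustration only, such a warning could look roughly like this (placement and helper names are assumptions, not part of the PR); note it would also fire for par.vals set in the learner definition, which is exactly the noise concern above:

# Warn when a tuned value will overwrite a manually set hyperparameter.
manually.set = names(getHyperPars(.learner$next.learner))
overlap = intersect(names(.model$learner.model$opt.result$x), manually.set)
if (length(overlap) > 0L)
  warningf("Tuning result overwrites manually set hyperparameters: %s", collapse(overlap))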
Do you consider this to be a bug?
has " |
I do not understand the unit test. Can you please add at least some comments?
Also please note: a PR for fixing a certain bug is probably not the best place to discuss general architecture questions. Please move this to a clean issue if you want to discuss this...
I would not call it a bug as it's just affecting some error message. But the whole code is strange. How do we know that it's really the wrong family that leads to NAs? But this is another issue and should not be discussed here.
Better now?
@berndbischl ping
R/TuneWrapper.R
Outdated
@@ -74,8 +74,16 @@ trainLearner.TuneWrapper = function(.learner, .task, .subset = NULL, ...) {

#' @export
predictLearner.TuneWrapper = function(.learner, .model, .newdata, ...) {
  lrn = setHyperPars(.learner$next.learner, par.vals = .model$learner.model$opt.result$x)
FYI I am relatively sure that removing this line (and not putting .learner = lrn further down etc.) will break things.
You mean .learner$next.learner? That moved to another line. But I added setHyperPars now again.
2a0d9d0 is exactly what I meant, with .learner = lrn. I meant the list entry ".learner".
👍
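Putting the two outdated hunks above together, the patched predict path presumably ends in a do.call; a hedged reconstruction (the final do.call line is an assumption, the rest is taken from the diffs and not necessarily the merged version):

#' @export
predictLearner.TuneWrapper = function(.learner, .model, .newdata, ...) {
  # apply the tuned hyperparameters to the wrapped learner
  lrn = setHyperPars(.learner$next.learner, par.vals = .model$learner.model$opt.result$x)
  # collect all predict arguments, including the tuned (predict-time) values
  arglist = list(.learner = lrn, .model = .model$learner.model$next.model, .newdata = .newdata)
  arglist = insert(arglist, list(...))
  arglist = insert(arglist, .model$learner.model$opt.result$x)
  do.call(predictLearner, arglist)
}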
@jakob-r If the build passes, can this be merged? I would add an entry to NEWS.md
fixes issue #2472
Unit tests are missing. Please help and merge.