Add temporary params to Learners / Pipeops #304
API suggestion: PipeOp::add_hyper_par(par, map). NB: "input" is a bit problematic and needs to be properly defined. Currently I see this as the input which is given to the PipeOp, but we have a training and a test phase. I guess this is the input during the training phase, only and exactly.
So, is this something we want to do in mlr3pipelines or in paradox?
I think it must be in mlr3pipelines. Some code from paradox could then MAYBE be removed. And I also guess this eliminates ALL the problems we had with expressions. But I would like some feedback from @mllg @jakob-r @ja-thomas
I like the idea. Some thoughts
Yup, my names were absolutely just an initial proposal. "fun" might be good for now?
As it acts on a PipeOp, yes.
Sure, it takes the current input (which will be a task here) and computes on it. The "p" of the task is different before and after feature filtering.
Good point! It should be stored in the par_vals after it is computed, I guess.
I don't get this question.
Question: Which k is now in effect for train? This is just a stupid example, but I could imagine that such problems can occur in more complicated pipelines. You do not even have to have two …
I still totally don't get what you are asking... your code is incomplete. Can you please write down the complete pipeline and the PipeOps?
It's just an imaginary example where you have defined the parameter k at two different positions of the pipeline, and the question now is how that should be handled.
I am not trying to be mean or obtuse, I really don't understand your example. Can you PLEASE make it more complete? You can make stuff up as you want. It does not need to run. But right now it does not make sense.
We talked, and the design looks like this. Example:

```r
op1 = PipeOpLearner$new("classif.rf")
op1$add_temp_hyperpar(par = ParamNum$new("p.perc"),
  fun = function(x, input) list(p = round(x * input$n.features)))
```
For me, it would be nice to have something like:

```r
op1 = PipeOpXXX$new()
op1$params$[TAB] # gives me a hint about all available parameters
# however, all elements of params are active bindings
op1$params$p.perc = ParamRemap$new(
  new = paradox::ParamReal$new("p.perc"),
  replace = op1$params$p,
  function(val, task) {
    return(val * length(task$feature_names))
  })
```

And after running the code above, the … Then, when the user would like to replace more parameters with one hyperparam, it will be as easy as:

```r
op1 = PipeOpXXX$new()
# however, all elements of params are active bindings
op1$params$p.perc = ParamRemap$new(
  new = paradox::ParamReal$new("p.perc"),
  replace = list(
    op1$params$p,
    op1$params$p2
  ),
  function(val, task) {
    return(list(val * length(task$feature_names),
                val * sqrt(length(task$feature_names))))
  })
```

And in this case both …
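To make the mechanism under discussion concrete, here is a minimal, language-agnostic sketch in Python (not the mlr3/paradox API; the names `ParamRemap`, `PipeOpSketch`, and `resolve` are all hypothetical). The idea: a derived "temporary" parameter is stored together with a function that, at train time, maps its value plus the training input to concrete values of the parameters it replaces.

```python
class ParamRemap:
    """A derived parameter plus a function mapping it to concrete params."""
    def __init__(self, new, replaces, fun):
        self.new = new            # name of the derived parameter
        self.replaces = replaces  # names of the parameters it overrides
        self.fun = fun            # (value, task) -> dict of concrete values

class PipeOpSketch:
    def __init__(self, param_values):
        self.param_values = dict(param_values)  # user-set parameter values
        self.remaps = []

    def add_remap(self, remap):
        self.remaps.append(remap)

    def resolve(self, task):
        """At train time, expand derived params into concrete ones."""
        resolved = dict(self.param_values)
        for remap in self.remaps:
            if remap.new in resolved:
                value = resolved.pop(remap.new)
                resolved.update(remap.fun(value, task))
        return resolved

# Usage: 'p.perc' replaces 'p' by scaling with the number of features,
# so its concrete value is only fixed once the training task is seen.
op = PipeOpSketch({"p.perc": 0.5})
op.add_remap(ParamRemap(
    new="p.perc",
    replaces=["p"],
    fun=lambda val, task: {"p": round(val * len(task["feature_names"]))},
))
task = {"feature_names": ["x1", "x2", "x3", "x4"]}
print(op.resolve(task))  # {'p': 2}
```

Resolving lazily at train time is what makes the value stay correct even if an upstream step (e.g. feature filtering) changes the number of features.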
This has some disadvantages
Bernd proposed something like this:
Then you directly have the information which param gets replaced, but you have to write a function for each, and it will get tiresome to set multiple params based on one single temporary one. Although I see that it is obviously more elegant to hide the parameter that gets overwritten, I think it is too much effort. And maybe sometimes you want to set the param directly to a specific value and don't want to modify the pipeline. Who knows. Simply throwing an error is the easiest solution that allows the most flexibility and needs less coding.
I am still not convinced that replacing parameters is something we should do in mlr3pipelines and not paradox. This is more or less an operation that works on ParamSets or Params.
This would IMHO be concise and sensible. I am not sure I agree with points raised by @jakob-r:
Then you just set it?
I get this, but we might want to be as robust as we sensibly can.
100% agree. We could even modify the
It's nice composition.
I discussed this with Martin; the idea was this:
Problem is: when do we call the trafo? In a pipeline we have multiple steps...
Use case 1: you want to tune RF::mtry, but from [0, 1] as a percentage, not as an integer from 1..k.
Use case 2: you have created some smart heuristics to set params, like a, b, c, which execute some code.
Proposal: you can add an HP plus some piece of code to a PipeOp, which maps the setting of that new param to values of already existing ones. You can then use (or tune) the new one.
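Use case 1 can be sketched in a few lines (again in Python for concreteness, not the mlr3 API; `mtry_from_fraction` is a made-up helper name): the tuner searches over a fraction in [0, 1], and the integer mtry is derived only once the number of features k is known at train time.

```python
def mtry_from_fraction(fraction, n_features):
    """Map a fraction in [0, 1] to an integer mtry in 1..n_features."""
    if not 0.0 <= fraction <= 1.0:
        raise ValueError("fraction must lie in [0, 1]")
    # Clamp to at least 1 so the derived value is always a valid mtry.
    return max(1, round(fraction * n_features))

# The tuner varies the fraction; the concrete mtry is computed per task,
# so it stays valid even after upstream feature filtering changes k.
print(mtry_from_fraction(0.25, 20))  # 5
print(mtry_from_fraction(0.0, 20))   # 1 (clamped to the lower bound)
```

The same pattern covers use case 2: the "piece of code" attached to the new HP can set several existing params (a, b, c) from one setting.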