Regarding the cv_results_ #503
The other question is: is it possible to estimate the …
It's the mean of one repetition -> it's the score on the holdout set.
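To illustrate the distinction, here is a minimal stdlib-only sketch (hypothetical scores, not auto-sklearn code): with a holdout split there is only one evaluation, so the reported "mean" is just that single holdout score, while with cross-validation it averages all folds.

```python
def mean_test_score(fold_scores):
    """Average the per-split scores, as a cv_results_-style report would."""
    return sum(fold_scores) / len(fold_scores)

# Holdout: a single train/validation split, so the "mean" is just
# that one holdout score.
holdout = mean_test_score([0.83])

# Cross-validation: the reported value averages all folds.
cv = mean_test_score([0.81, 0.85, 0.79, 0.84, 0.86])

print(holdout, cv)
```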
In case you're using cv, the time limit is for all five folds. If you use …
Possible reasons for running over the memory limit are …
Yes. The time and memory limits apply to the execution of the complete pipeline.
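These limits are set when constructing the estimator. A hedged sketch, assuming the parameter names used by auto-sklearn at the time of this issue (newer releases renamed `ml_memory_limit` to `memory_limit`); treat the values as placeholders:

```python
import autosklearn.classification

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=3600,  # seconds for the entire search
    per_run_time_limit=360,        # seconds for one complete pipeline fit
    ml_memory_limit=3072,          # MB allowed for one complete pipeline fit
)
```

Each candidate pipeline is fit in a limited subprocess, so a single run that exceeds `per_run_time_limit` or `ml_memory_limit` is killed without ending the overall search.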
Potentially yes, but we're not doing this.
@mfeurer Thanks for answering all my earlier questions. I did find that target algorithms like random forest have a parameter … Apart from that, when … Thank you
Not really. You could either change the code or create a new component with this hyperparameter activated and then deactivate the original random forest.
No, they are chosen according to a kNN algorithm, as described in Initializing Bayesian Hyperparameter Optimization via Meta-Learning.
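The idea from that paper can be sketched in a few lines (a simplified stdlib-only illustration, not auto-sklearn's implementation; the dataset names and meta-feature values are toy examples): describe each dataset by a meta-feature vector, find the k known datasets closest to the new one, and warm-start the search with the configurations that worked best on them.

```python
import math

def k_nearest_datasets(new_metafeatures, known, k=2):
    """known: {dataset_name: metafeature_vector}; returns the k closest names."""
    def dist(vec):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(new_metafeatures, vec)))
    return sorted(known, key=lambda name: dist(known[name]))[:k]

# Toy meta-features: [n_samples, n_features, n_classes]
known = {
    "iris":   [150, 4, 3],
    "digits": [1797, 64, 10],
    "wine":   [178, 13, 3],
}

# The new dataset resembles iris most, then wine.
print(k_nearest_datasets([160, 5, 3], known))
```

In practice the meta-features are richer (landmarkers, statistics of the data distribution) and are normalized before computing distances.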
Hi,
I was reading the results from cv_results_ and trying to understand them. The ml_memory_limit is ~3 GB (3072 MB), which is quite large. I'm wondering why some of the algorithm fits take more than that and hence run out of memory, even though my dataset is less than 50 MB. Can you provide some of the scenarios? Thank you
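A back-of-the-envelope sketch (hypothetical sizes, stdlib only) of why fitting can need far more memory than the raw dataset: some intermediate representations are quadratic in the number of samples, or multiply the number of features.

```python
BYTES_PER_FLOAT64 = 8

def mb(n_bytes):
    """Convert bytes to mebibytes."""
    return n_bytes / (1024 ** 2)

# A dense kernel/Gram matrix (e.g. for an SVM or kernel-based
# preprocessor) is n_samples x n_samples, regardless of file size.
n_samples = 50_000
gram_mb = mb(n_samples * n_samples * BYTES_PER_FLOAT64)

# Densely one-hot encoding a high-cardinality categorical column
# multiplies the feature count.
n_categories = 5_000
onehot_mb = mb(n_samples * n_categories * BYTES_PER_FLOAT64)

print(f"dense Gram matrix:   {gram_mb:.0f} MB")   # far above 3072 MB
print(f"dense one-hot block: {onehot_mb:.0f} MB")
```

Ensembling, copies made during preprocessing, and parallel fits add further overhead on top of these intermediates.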