finetune 1.2.0
CRAN release: 2024-03-21
New Features
finetune now fully supports models in the “censored regression” mode. These models can be fit, tuned, and evaluated like the regression and classification modes. tidymodels.org has more information and tutorials on how to work with survival analysis models.
Improved error message from tune_sim_anneal() when values in the supplied param_info do not encompass all values evaluated in the initial grid. This most often happens when a user mistakenly supplies different parameter sets to the function that generated the initial results and tune_sim_anneal().
autoplot() methods for racing objects will now use integers in x-axis breaks (#75).
Enabling the verbose_elim control option for tune_race_anova() now also prints a message confirming that the function is evaluating against the burn-in resamples.
Updates based on the new version of tune, primarily for survival analysis models (#104).
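To illustrate the improved param_info check above, here is a hedged sketch (the model, formula, and ranges are hypothetical, not from the package docs): the parameter set passed to tune_sim_anneal() must encompass every value evaluated in the initial grid.

```r
library(tidymodels)
library(finetune)

# A model with one tunable parameter (hypothetical example)
spec <- boost_tree(trees = tune()) %>%
  set_engine("xgboost") %>%
  set_mode("regression")

# The set used to build the initial grid covers trees in [1, 2000] ...
wide_set <- extract_parameter_set_dials(spec) %>%
  update(trees = trees(c(1L, 2000L)))

# ... so a narrower set does not encompass the initial results:
narrow_set <- extract_parameter_set_dials(spec) %>%
  update(trees = trees(c(1L, 100L)))

# init <- tune_grid(spec, mpg ~ ., resamples = rs, param_info = wide_set)
# This now errors with a message explaining the mismatch, rather than
# failing obscurely:
# tune_sim_anneal(spec, mpg ~ ., resamples = rs,
#                 initial = init, param_info = narrow_set)
```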
Bug Fixes
Fixed bug where tune_sim_anneal() would fail when supplied parameters that need finalization. The function now finalizes the needed parameter ranges internally (#39).
Fixed bug where packages specified in control_race(pkgs) were not actually loaded in tune_race_anova() (#74).
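A short sketch of the two fixes above (the resamples object and model specification are placeholders):

```r
library(tidymodels)
library(finetune)

# (#74) Packages listed in `pkgs` are now actually loaded by
# tune_race_anova():
ctrl <- control_race(pkgs = "splines", verbose_elim = TRUE)

# (#39) A specification whose `mtry` range depends on the data no longer
# needs finalize() before tune_sim_anneal(); the range is finalized
# internally:
rf_spec <- rand_forest(mtry = tune()) %>%
  set_engine("ranger") %>%
  set_mode("classification")

# tune_sim_anneal(rf_spec, Class ~ ., resamples = rs, iter = 10,
#                 control = control_sim_anneal())
```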
Breaking Change
Ellipses (...) are now used consistently in the package to require optional arguments to be named. The collect_predictions(), collect_metrics(), and show_best() methods previously had ellipses at the end of the function signature; these have been moved to follow the last argument without a default value. Optional arguments previously passed by position will now error informatively, prompting users to name them (#105).
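For example (the result object race_res is a placeholder), a call that previously relied on positional matching must now name its optional arguments:

```r
# Before: optional arguments could be matched by position
# show_best(race_res, "roc_auc", 3)

# Now: positional use errors informatively; name the arguments instead
# show_best(race_res, metric = "roc_auc", n = 3)
```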
diff --git a/dev/pkgdown.yml b/dev/pkgdown.yml
index 901ea62..d634821 100644
--- a/dev/pkgdown.yml
+++ b/dev/pkgdown.yml
@@ -2,7 +2,7 @@ pandoc: 3.1.11
pkgdown: 2.0.7
pkgdown_sha: ~
articles: {}
-last_built: 2024-03-20T21:16Z
+last_built: 2024-03-21T10:36Z
urls:
reference: https://finetune.tidymodels.org/reference
article: https://finetune.tidymodels.org/articles
diff --git a/dev/reference/collect_predictions.html b/dev/reference/collect_predictions.html
index f7ce3fa..6c2dbee 100644
--- a/dev/reference/collect_predictions.html
+++ b/dev/reference/collect_predictions.html
@@ -10,7 +10,7 @@
finetune
- 1.1.0.9006
+ 1.2.0.9000
diff --git a/dev/reference/control_race.html b/dev/reference/control_race.html
index 7b289cf..3ce04d0 100644
--- a/dev/reference/control_race.html
+++ b/dev/reference/control_race.html
@@ -10,7 +10,7 @@
finetune
- 1.1.0.9006
+ 1.2.0.9000
diff --git a/dev/reference/control_sim_anneal.html b/dev/reference/control_sim_anneal.html
index f9866be..80864fe 100644
--- a/dev/reference/control_sim_anneal.html
+++ b/dev/reference/control_sim_anneal.html
@@ -10,7 +10,7 @@
finetune
- 1.1.0.9006
+ 1.2.0.9000
diff --git a/dev/reference/finetune-package.html b/dev/reference/finetune-package.html
index d75cbbc..9b9b38e 100644
--- a/dev/reference/finetune-package.html
+++ b/dev/reference/finetune-package.html
@@ -14,7 +14,7 @@
finetune
- 1.1.0.9006
+ 1.2.0.9000
diff --git a/dev/reference/index.html b/dev/reference/index.html
index 3a7de12..c412351 100644
--- a/dev/reference/index.html
+++ b/dev/reference/index.html
@@ -10,7 +10,7 @@
finetune
- 1.1.0.9006
+ 1.2.0.9000
diff --git a/dev/reference/plot_race.html b/dev/reference/plot_race.html
index cd965c7..bece7b2 100644
--- a/dev/reference/plot_race.html
+++ b/dev/reference/plot_race.html
@@ -12,7 +12,7 @@
finetune
- 1.1.0.9006
+ 1.2.0.9000
diff --git a/dev/reference/show_best.html b/dev/reference/show_best.html
index cf6bcaa..ea38126 100644
--- a/dev/reference/show_best.html
+++ b/dev/reference/show_best.html
@@ -10,7 +10,7 @@
finetune
- 1.1.0.9006
+ 1.2.0.9000
diff --git a/dev/reference/tune_race_anova.html b/dev/reference/tune_race_anova.html
index a56e659..334bdb5 100644
--- a/dev/reference/tune_race_anova.html
+++ b/dev/reference/tune_race_anova.html
@@ -20,7 +20,7 @@
finetune
- 1.1.0.9006
+ 1.2.0.9000
diff --git a/dev/reference/tune_race_win_loss.html b/dev/reference/tune_race_win_loss.html
index ae30e99..d49cef8 100644
--- a/dev/reference/tune_race_win_loss.html
+++ b/dev/reference/tune_race_win_loss.html
@@ -24,7 +24,7 @@
finetune
- 1.1.0.9006
+ 1.2.0.9000
diff --git a/dev/reference/tune_sim_anneal.html b/dev/reference/tune_sim_anneal.html
index 797ccf5..0ea2b3f 100644
--- a/dev/reference/tune_sim_anneal.html
+++ b/dev/reference/tune_sim_anneal.html
@@ -16,7 +16,7 @@
finetune
- 1.1.0.9006
+ 1.2.0.9000
diff --git a/dev/search.json b/dev/search.json
index 7a3228a..be5b428 100644
--- a/dev/search.json
+++ b/dev/search.json
@@ -1 +1 @@
-[{"path":[]},{"path":"https://finetune.tidymodels.org/dev/CODE_OF_CONDUCT.html","id":"our-pledge","dir":"","previous_headings":"","what":"Our Pledge","title":"Contributor Covenant Code of Conduct","text":"members, contributors, leaders pledge make participation community harassment-free experience everyone, regardless age, body size, visible invisible disability, ethnicity, sex characteristics, gender identity expression, level experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, sexual identity orientation. pledge act interact ways contribute open, welcoming, diverse, inclusive, healthy community.","code":""},{"path":"https://finetune.tidymodels.org/dev/CODE_OF_CONDUCT.html","id":"our-standards","dir":"","previous_headings":"","what":"Our Standards","title":"Contributor Covenant Code of Conduct","text":"Examples behavior contributes positive environment community include: Demonstrating empathy kindness toward people respectful differing opinions, viewpoints, experiences Giving gracefully accepting constructive feedback Accepting responsibility apologizing affected mistakes, learning experience Focusing best just us individuals, overall community Examples unacceptable behavior include: use sexualized language imagery, sexual attention advances kind Trolling, insulting derogatory comments, personal political attacks Public private harassment Publishing others’ private information, physical email address, without explicit permission conduct reasonably considered inappropriate professional setting","code":""},{"path":"https://finetune.tidymodels.org/dev/CODE_OF_CONDUCT.html","id":"enforcement-responsibilities","dir":"","previous_headings":"","what":"Enforcement Responsibilities","title":"Contributor Covenant Code of Conduct","text":"Community leaders responsible clarifying enforcing standards acceptable behavior take appropriate fair corrective action response behavior deem inappropriate, threatening, 
offensive, harmful. Community leaders right responsibility remove, edit, reject comments, commits, code, wiki edits, issues, contributions aligned Code Conduct, communicate reasons moderation decisions appropriate.","code":""},{"path":"https://finetune.tidymodels.org/dev/CODE_OF_CONDUCT.html","id":"scope","dir":"","previous_headings":"","what":"Scope","title":"Contributor Covenant Code of Conduct","text":"Code Conduct applies within community spaces, also applies individual officially representing community public spaces. Examples representing community include using official e-mail address, posting via official social media account, acting appointed representative online offline event.","code":""},{"path":"https://finetune.tidymodels.org/dev/CODE_OF_CONDUCT.html","id":"enforcement","dir":"","previous_headings":"","what":"Enforcement","title":"Contributor Covenant Code of Conduct","text":"Instances abusive, harassing, otherwise unacceptable behavior may reported community leaders responsible enforcement codeofconduct@posit.co. complaints reviewed investigated promptly fairly. community leaders obligated respect privacy security reporter incident.","code":""},{"path":"https://finetune.tidymodels.org/dev/CODE_OF_CONDUCT.html","id":"enforcement-guidelines","dir":"","previous_headings":"","what":"Enforcement Guidelines","title":"Contributor Covenant Code of Conduct","text":"Community leaders follow Community Impact Guidelines determining consequences action deem violation Code Conduct:","code":""},{"path":"https://finetune.tidymodels.org/dev/CODE_OF_CONDUCT.html","id":"id_1-correction","dir":"","previous_headings":"Enforcement Guidelines","what":"1. Correction","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Use inappropriate language behavior deemed unprofessional unwelcome community. Consequence: private, written warning community leaders, providing clarity around nature violation explanation behavior inappropriate. 
public apology may requested.","code":""},{"path":"https://finetune.tidymodels.org/dev/CODE_OF_CONDUCT.html","id":"id_2-warning","dir":"","previous_headings":"Enforcement Guidelines","what":"2. Warning","title":"Contributor Covenant Code of Conduct","text":"Community Impact: violation single incident series actions. Consequence: warning consequences continued behavior. interaction people involved, including unsolicited interaction enforcing Code Conduct, specified period time. includes avoiding interactions community spaces well external channels like social media. Violating terms may lead temporary permanent ban.","code":""},{"path":"https://finetune.tidymodels.org/dev/CODE_OF_CONDUCT.html","id":"id_3-temporary-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"3. Temporary Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: serious violation community standards, including sustained inappropriate behavior. Consequence: temporary ban sort interaction public communication community specified period time. public private interaction people involved, including unsolicited interaction enforcing Code Conduct, allowed period. Violating terms may lead permanent ban.","code":""},{"path":"https://finetune.tidymodels.org/dev/CODE_OF_CONDUCT.html","id":"id_4-permanent-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"4. Permanent Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Demonstrating pattern violation community standards, including sustained inappropriate behavior, harassment individual, aggression toward disparagement classes individuals. 
Consequence: permanent ban sort public interaction within community.","code":""},{"path":"https://finetune.tidymodels.org/dev/CODE_OF_CONDUCT.html","id":"attribution","dir":"","previous_headings":"","what":"Attribution","title":"Contributor Covenant Code of Conduct","text":"Code Conduct adapted Contributor Covenant, version 2.1, available https://www.contributor-covenant.org/version/2/1/code_of_conduct.html. Community Impact Guidelines inspired [Mozilla’s code conduct enforcement ladder][https://github.com/mozilla/inclusion]. answers common questions code conduct, see FAQ https://www.contributor-covenant.org/faq. Translations available https://www.contributor-covenant.org/translations.","code":""},{"path":"https://finetune.tidymodels.org/dev/LICENSE.html","id":null,"dir":"","previous_headings":"","what":"MIT License","title":"MIT License","text":"Copyright (c) 2023 finetune authors Permission hereby granted, free charge, person obtaining copy software associated documentation files (“Software”), deal Software without restriction, including without limitation rights use, copy, modify, merge, publish, distribute, sublicense, /sell copies Software, permit persons Software furnished , subject following conditions: copyright notice permission notice shall included copies substantial portions Software. SOFTWARE PROVIDED “”, WITHOUT WARRANTY KIND, EXPRESS IMPLIED, INCLUDING LIMITED WARRANTIES MERCHANTABILITY, FITNESS PARTICULAR PURPOSE NONINFRINGEMENT. EVENT SHALL AUTHORS COPYRIGHT HOLDERS LIABLE CLAIM, DAMAGES LIABILITY, WHETHER ACTION CONTRACT, TORT OTHERWISE, ARISING , CONNECTION SOFTWARE USE DEALINGS SOFTWARE.","code":""},{"path":"https://finetune.tidymodels.org/dev/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Max Kuhn. Author, maintainer. . 
Copyright holder, funder.","code":""},{"path":"https://finetune.tidymodels.org/dev/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Kuhn M (2024). finetune: Additional Functions Model Tuning. R package version 1.1.0.9006, https://finetune.tidymodels.org, https://github.com/tidymodels/finetune.","code":"@Manual{, title = {finetune: Additional Functions for Model Tuning}, author = {Max Kuhn}, year = {2024}, note = {R package version 1.1.0.9006, https://finetune.tidymodels.org}, url = {https://github.com/tidymodels/finetune}, }"},{"path":"https://finetune.tidymodels.org/dev/index.html","id":"finetune-","dir":"","previous_headings":"","what":"Additional Functions for Model Tuning","title":"Additional Functions for Model Tuning","text":"finetune contains extra functions model tuning extend currently tune package. can install CRAN version package following code: install development version package, run: two main sets tools package: simulated annealing racing. Tuning via simulated annealing optimization iterative search tool finding good values: second set methods racing. start small set resamples grid points, statistically testing see ones dropped investigated . two methods based Kuhn (2014). example, using ANOVA-type analysis filter parameter combinations: tune_race_win_loss() can also used. 
treats tuning parameters sports teams tournament computed win/loss statistics.","code":"install.packages(\"finetune\") # install.packages(\"pak\") pak::pak(\"tidymodels/finetune\") library(tidymodels) library(finetune) # Syntax very similar to `tune_grid()` or `tune_bayes()`: ## ----------------------------------------------------------------------------- data(two_class_dat, package = \"modeldata\") set.seed(1) rs <- bootstraps(two_class_dat, times = 10) # more resamples usually needed # Optimize a regularized discriminant analysis model library(discrim) rda_spec <- discrim_regularized(frac_common_cov = tune(), frac_identity = tune()) %>% set_engine(\"klaR\") ## ----------------------------------------------------------------------------- set.seed(2) sa_res <- rda_spec %>% tune_sim_anneal(Class ~ ., resamples = rs, iter = 20, initial = 4) #> Optimizing roc_auc #> Initial best: 0.86480 #> 1 ♥ new best roc_auc=0.87327 (+/-0.004592) #> 2 ♥ new best roc_auc=0.87915 (+/-0.003864) #> 3 ◯ accept suboptimal roc_auc=0.87029 (+/-0.004994) #> 4 + better suboptimal roc_auc=0.87171 (+/-0.004717) #> 5 ◯ accept suboptimal roc_auc=0.86944 (+/-0.005081) #> 6 ◯ accept suboptimal roc_auc=0.86812 (+/-0.0052) #> 7 ♥ new best roc_auc=0.88172 (+/-0.003647) #> 8 ◯ accept suboptimal roc_auc=0.87678 (+/-0.004276) #> 9 ◯ accept suboptimal roc_auc=0.8627 (+/-0.005784) #> 10 + better suboptimal roc_auc=0.87003 (+/-0.005106) #> 11 + better suboptimal roc_auc=0.87088 (+/-0.004962) #> 12 ◯ accept suboptimal roc_auc=0.86803 (+/-0.005195) #> 13 ◯ accept suboptimal roc_auc=0.85294 (+/-0.006498) #> 14 ─ discard suboptimal roc_auc=0.84689 (+/-0.006867) #> 15 ✖ restart from best roc_auc=0.85021 (+/-0.006623) #> 16 ◯ accept suboptimal roc_auc=0.87607 (+/-0.004318) #> 17 ◯ accept suboptimal roc_auc=0.87245 (+/-0.004799) #> 18 + better suboptimal roc_auc=0.87706 (+/-0.004131) #> 19 ◯ accept suboptimal roc_auc=0.87213 (+/-0.004791) #> 20 ◯ accept suboptimal roc_auc=0.86218 (+/-0.005773) show_best(sa_res, 
metric = \"roc_auc\", n = 2) #> # A tibble: 2 × 9 #> frac_common_cov frac_identity .metric .estimator mean n std_err .config #> #> 1 0.308 0.0166 roc_auc binary 0.882 10 0.00365 Iter7 #> 2 0.121 0.0474 roc_auc binary 0.879 10 0.00386 Iter2 #> # ℹ 1 more variable: .iter set.seed(3) grid <- rda_spec %>% extract_parameter_set_dials() %>% grid_max_entropy(size = 20) ctrl <- control_race(verbose_elim = TRUE) set.seed(4) grid_anova <- rda_spec %>% tune_race_anova(Class ~ ., resamples = rs, grid = grid, control = ctrl) #> ℹ Racing will maximize the roc_auc metric. #> ℹ Resamples are analyzed in a random order. #> ℹ Bootstrap10: 14 eliminated; 6 candidates remain. #> ℹ Bootstrap04: 2 eliminated; 4 candidates remain. #> ℹ Bootstrap03: All but one parameter combination were eliminated. show_best(grid_anova, metric = \"roc_auc\", n = 2) #> # A tibble: 1 × 8 #> frac_common_cov frac_identity .metric .estimator mean n std_err .config #> #> 1 0.831 0.0207 roc_auc binary 0.881 10 0.00386 Preproce… set.seed(4) grid_win_loss<- rda_spec %>% tune_race_win_loss(Class ~ ., resamples = rs, grid = grid, control = ctrl) #> ℹ Racing will maximize the roc_auc metric. #> ℹ Resamples are analyzed in a random order. #> ℹ Bootstrap10: 3 eliminated; 17 candidates remain. #> ℹ Bootstrap04: 2 eliminated; 15 candidates remain. #> ℹ Bootstrap03: 2 eliminated; 13 candidates remain. #> ℹ Bootstrap01: 1 eliminated; 12 candidates remain. #> ℹ Bootstrap07: 1 eliminated; 11 candidates remain. #> ℹ Bootstrap05: 1 eliminated; 10 candidates remain. #> ℹ Bootstrap08: 1 eliminated; 9 candidates remain. 
show_best(grid_win_loss, metric = \"roc_auc\", n = 2) #> # A tibble: 2 × 8 #> frac_common_cov frac_identity .metric .estimator mean n std_err .config #> #> 1 0.831 0.0207 roc_auc binary 0.881 10 0.00386 Preproce… #> 2 0.119 0.0470 roc_auc binary 0.879 10 0.00387 Preproce…"},{"path":"https://finetune.tidymodels.org/dev/index.html","id":"contributing","dir":"","previous_headings":"","what":"Contributing","title":"Additional Functions for Model Tuning","text":"project released Contributor Code Conduct. contributing project, agree abide terms. questions discussions tidymodels packages, modeling, machine learning, please post Posit Community. think encountered bug, please submit issue. Either way, learn create share reprex (minimal, reproducible example), clearly communicate code. Check details contributing guidelines tidymodels packages get help.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/collect_predictions.html","id":null,"dir":"Reference","previous_headings":"","what":"Obtain and format results produced by racing functions — collect_predictions","title":"Obtain and format results produced by racing functions — collect_predictions","text":"Obtain format results produced racing functions","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/collect_predictions.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Obtain and format results produced by racing functions — collect_predictions","text":"","code":"# S3 method for tune_race collect_predictions( x, ..., summarize = FALSE, parameters = NULL, all_configs = FALSE ) # S3 method for tune_race collect_metrics( x, ..., summarize = TRUE, type = c(\"long\", \"wide\"), all_configs = FALSE )"},{"path":"https://finetune.tidymodels.org/dev/reference/collect_predictions.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Obtain and format results produced by racing functions — collect_predictions","text":"x results 
tune_grid(), tune_bayes(), fit_resamples(), last_fit(). collect_predictions(), control option save_pred = TRUE used. ... currently used. summarize logical; metrics summarized resamples (TRUE) return values individual resample. Note , x created last_fit(), summarize effect. object types, method summarizing predictions detailed . parameters optional tibble tuning parameter values can used filter predicted values processing. tibble columns tuning parameter identifier (e.g. \"my_param\" tune(\"my_param\") used). all_configs logical: return complete set model configurations just made end race (default). type One \"long\" (default) \"wide\". type = \"long\", output columns .metric one .estimate mean. .estimate/mean gives values .metric. type = \"wide\", metric column n std_err columns removed, exist.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/collect_predictions.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Obtain and format results produced by racing functions — collect_predictions","text":"tibble. column names depend results mode model.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/collect_predictions.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Obtain and format results produced by racing functions — collect_predictions","text":"collect_metrics() collect_predictions(), unsummarized, columns tuning parameter (using id tune(), ). collect_metrics() also columns .metric, .estimator. results summarized, columns mean, n, std_err. summarized, additional columns resampling identifier(s) .estimate. collect_predictions(), additional columns resampling identifier(s), columns predicted values (e.g., .pred, .pred_class, etc.), column outcome(s) using original column name(s) data. collect_predictions() can summarize various results replicate --sample predictions. 
example, using bootstrap, row original training set multiple holdout predictions (across assessment sets). convert results format every training set single predicted value, results averaged replicate predictions. regression cases, numeric predictions simply averaged. classification models, problem complex. class probabilities used, averaged re-normalized make sure add one. hard class predictions also exist data, determined summarized probability estimates (match). hard class predictions results, mode used summarize. racing results, best collect model configurations finished race (.e., completely resampled). Comparing performance metrics configurations averaged different resamples likely lead inappropriate results.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/control_race.html","id":null,"dir":"Reference","previous_headings":"","what":"Control aspects of the grid search racing process — control_race","title":"Control aspects of the grid search racing process — control_race","text":"Control aspects grid search racing process","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/control_race.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Control aspects of the grid search racing process — control_race","text":"","code":"control_race( verbose = FALSE, verbose_elim = FALSE, allow_par = TRUE, extract = NULL, save_pred = FALSE, burn_in = 3, num_ties = 10, alpha = 0.05, randomize = TRUE, pkgs = NULL, save_workflow = FALSE, event_level = \"first\", parallel_over = \"everything\", backend_options = NULL )"},{"path":"https://finetune.tidymodels.org/dev/reference/control_race.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Control aspects of the grid search racing process — control_race","text":"verbose logical logging results (warnings errors, always shown) generated training single R process. using parallel backends, argument typically result logging. 
using dark IDE theme, logging messages might hard see; try setting tidymodels.dark option options(tidymodels.dark = TRUE) print lighter colors. verbose_elim logical whether logging elimination tuning parameter combinations occur. allow_par logical allow parallel processing (parallel backend registered). extract optional function least one argument (NULL) can used retain arbitrary objects model fit object, recipe, elements workflow. save_pred logical whether --sample predictions saved model evaluated. burn_in integer many resamples completed grid combinations parameter filtering begins. num_ties integer tie-breaking occur. two final parameter combinations evaluated, num_ties specified many resampling iterations evaluated. num_ties iterations, parameter combination current best results retained. alpha alpha level one-sided confidence interval parameter combination. randomize resamples evaluated random order? default, resamples evaluated random order random number seed control prior calling method (reproducible). repeated cross-validation randomization occurs within repeat. pkgs optional character string R package names loaded (namespace) parallel processing. save_workflow logical whether workflow appended output attribute. event_level single string containing either \"first\" \"second\". argument passed yardstick metric functions type class prediction made, specifies level outcome considered \"event\". parallel_over single string containing either \"resamples\" \"everything\" describing use parallel processing. Alternatively, NULL allowed, chooses \"resamples\" \"everything\" automatically. \"resamples\", tuning performed parallel resamples alone. Within resample, preprocessor (.e. recipe formula) processed , reused across models need fit. \"everything\", tuning performed parallel two levels. outer parallel loop iterate resamples. Additionally, inner parallel loop iterate unique combinations preprocessor model tuning parameters specific resample. 
result preprocessor re-processed multiple times, can faster processing extremely fast. NULL, chooses \"resamples\" one resample, otherwise chooses \"everything\" attempt maximize core utilization. Note switching parallel_over strategies guaranteed use random number generation schemes. However, re-tuning model using parallel_over strategy guaranteed reproducible runs. backend_options object class \"tune_backend_options\" created tune::new_backend_options(), used pass arguments specific tuning backend. Defaults NULL default backend options.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/control_race.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Control aspects of the grid search racing process — control_race","text":"object class control_race echos argument values.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/control_race.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Control aspects of the grid search racing process — control_race","text":"","code":"control_race() #> Racing method control object"},{"path":"https://finetune.tidymodels.org/dev/reference/control_sim_anneal.html","id":null,"dir":"Reference","previous_headings":"","what":"Control aspects of the simulated annealing search process — control_sim_anneal","title":"Control aspects of the simulated annealing search process — control_sim_anneal","text":"Control aspects simulated annealing search process","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/control_sim_anneal.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Control aspects of the simulated annealing search process — control_sim_anneal","text":"","code":"control_sim_anneal( verbose = FALSE, verbose_iter = TRUE, no_improve = Inf, restart = 8L, radius = c(0.05, 0.15), flip = 3/4, cooling_coef = 0.02, extract = NULL, save_pred = FALSE, time_limit = NA, pkgs = NULL, 
save_workflow = FALSE, save_history = FALSE, event_level = \"first\", parallel_over = NULL, allow_par = TRUE, backend_options = NULL )"},{"path":"https://finetune.tidymodels.org/dev/reference/control_sim_anneal.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Control aspects of the simulated annealing search process — control_sim_anneal","text":"verbose logical logging results (warnings errors, always shown) generated training single R process. using parallel backends, argument typically result logging. using dark IDE theme, logging messages might hard see; try setting tidymodels.dark option options(tidymodels.dark = TRUE) print lighter colors. verbose_iter logical logging results search process. Defaults FALSE. using dark IDE theme, logging messages might hard see; try setting tidymodels.dark option options(tidymodels.dark = TRUE) print lighter colors. no_improve integer cutoff number iterations without better results. restart number iterations improvement new tuning parameter candidates generated last, overall best conditions. radius Two real numbers (0, 1) describing value \"neighborhood\" current result . numeric parameters scaled [0, 1] scale, values set min. max. radius circle used generate new numeric parameter values. flip real number [0, 1] probability changing non-numeric parameter values iteration. cooling_coef real, positive number influence cooling schedule. Larger values decrease probability accepting sub-optimal parameter setting. extract optional function least one argument (NULL) can used retain arbitrary objects model fit object, recipe, elements workflow. save_pred logical whether --sample predictions saved model evaluated. time_limit number minimum number minutes (elapsed) function execute. elapsed time evaluated internal checkpoints , time, results time returned (warning). means time_limit exact limit, minimum time limit. 
pkgs optional character string R package names loaded (namespace) parallel processing. save_workflow logical whether workflow appended output attribute. save_history logical save iteration details search. saved tempdir() named sa_history.RData. results deleted R session ends. option useful teaching purposes. event_level single string containing either \"first\" \"second\". argument passed yardstick metric functions type class prediction made, specifies level outcome considered \"event\". parallel_over single string containing either \"resamples\" \"everything\" describing use parallel processing. Alternatively, NULL allowed, chooses \"resamples\" \"everything\" automatically. \"resamples\", tuning performed parallel resamples alone. Within resample, preprocessor (.e. recipe formula) processed , reused across models need fit. \"everything\", tuning performed parallel two levels. outer parallel loop iterate resamples. Additionally, inner parallel loop iterate unique combinations preprocessor model tuning parameters specific resample. result preprocessor re-processed multiple times, can faster processing extremely fast. NULL, chooses \"resamples\" one resample, otherwise chooses \"everything\" attempt maximize core utilization. Note switching parallel_over strategies guaranteed use random number generation schemes. However, re-tuning model using parallel_over strategy guaranteed reproducible runs. allow_par logical allow parallel processing (parallel backend registered). backend_options object class \"tune_backend_options\" created tune::new_backend_options(), used pass arguments specific tuning backend. 
Defaults NULL default backend options.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/control_sim_anneal.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Control aspects of the simulated annealing search process — control_sim_anneal","text":"object class control_sim_anneal echos argument values.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/control_sim_anneal.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Control aspects of the simulated annealing search process — control_sim_anneal","text":"","code":"control_sim_anneal() #> Simulated annealing control object"},{"path":"https://finetune.tidymodels.org/dev/reference/finetune-package.html","id":null,"dir":"Reference","previous_headings":"","what":"finetune: Additional Functions for Model Tuning — finetune-package","title":"finetune: Additional Functions for Model Tuning — finetune-package","text":"ability tune models important. 'finetune' enhances 'tune' package providing specialized methods finding reasonable values model tuning parameters. Two racing methods described Kuhn (2014) arXiv:1405.6974 included. iterative search method using generalized simulated annealing (Bohachevsky, Johnson Stein, 1986) doi:10.1080/00401706.1986.10488128 also included.","code":""},{"path":[]},{"path":"https://finetune.tidymodels.org/dev/reference/finetune-package.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"finetune: Additional Functions for Model Tuning — finetune-package","text":"Maintainer: Max Kuhn max@posit.co (ORCID) contributors: Posit Software, PBC [copyright holder, funder]","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/plot_race.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot racing results — plot_race","title":"Plot racing results — plot_race","text":"Plot model results stages racing results. 
A line is given for each submodel that was tested.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/plot_race.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot racing results — plot_race","text":"","code":"plot_race(x)"},{"path":"https://finetune.tidymodels.org/dev/reference/plot_race.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot racing results — plot_race","text":"x An object with class tune_results.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/plot_race.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot racing results — plot_race","text":"A ggplot object.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/show_best.html","id":null,"dir":"Reference","previous_headings":"","what":"Investigate best tuning parameters — show_best.tune_race","title":"Investigate best tuning parameters — show_best.tune_race","text":"show_best() displays the top sub-models and their performance estimates.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/show_best.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Investigate best tuning parameters — show_best.tune_race","text":"","code":"# S3 method for tune_race show_best( x, ..., metric = NULL, eval_time = NULL, n = 5, call = rlang::current_env() )"},{"path":"https://finetune.tidymodels.org/dev/reference/show_best.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Investigate best tuning parameters — show_best.tune_race","text":"x The results of tune_grid() or tune_bayes(). ... For select_by_one_std_err() and select_by_pct_loss(), this argument is passed directly to dplyr::arrange() so that the user can sort the models from most simple to most complex. That is, for a parameter p, pass the unquoted expression p if smaller values of p indicate a simpler model, or desc(p) if larger values indicate a simpler model. At least one term is required for these two functions. See the examples below. 
metric A character value for the metric that will be used to sort the models. (See https://yardstick.tidymodels.org/articles/metric-types.html for more details.) Not required if a single metric exists in x. If there are multiple metrics and none are given, the first in the metric set is used (and a warning is issued). eval_time A single numeric time point where dynamic event time metrics should be chosen (e.g., the time-dependent ROC curve, etc). The values should be consistent with the values used to create x. The NULL default will automatically use the first evaluation time used by x. n An integer for the maximum number of top results/rows to return. call The call to be shown in errors and warnings.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/show_best.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Investigate best tuning parameters — show_best.tune_race","text":"For racing results (from the finetune package), it is best to only report the configurations that finished the race (i.e., were completely resampled). Comparing performance metrics for configurations averaged over different resamples is likely to lead to inappropriate results.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":null,"dir":"Reference","previous_headings":"","what":"Efficient grid search via racing with ANOVA models — tune_race_anova","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"tune_race_anova() computes a set of performance metrics (e.g. accuracy or RMSE) for a pre-defined set of tuning parameters that correspond to a model or recipe across one or more resamples of the data. After an initial number of resamples have been evaluated, the process eliminates tuning parameter combinations that are unlikely to be the best results using a repeated measures ANOVA model.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"","code":"tune_race_anova(object, ...) 
# S3 method for model_spec tune_race_anova( object, preprocessor, resamples, ..., param_info = NULL, grid = 10, metrics = NULL, eval_time = NULL, control = control_race() ) # S3 method for workflow tune_race_anova( object, resamples, ..., param_info = NULL, grid = 10, metrics = NULL, eval_time = NULL, control = control_race() )"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"object A parsnip model specification or a workflows::workflow(). ... Not currently used. preprocessor A traditional model formula or a recipe created using recipes::recipe(). This is only required when object is not a workflow. resamples An rset() object with multiple resamples (i.e., not a validation set). param_info A dials::parameters() object or NULL. If none is given, a parameter set is derived from other arguments. Passing this argument can be useful when parameter ranges need to be customized. grid A data frame of tuning combinations or a positive integer. The data frame should have columns for each parameter being tuned and rows for tuning parameter candidates. An integer denotes the number of candidate parameter sets to be created automatically. metrics A yardstick::metric_set() or NULL. eval_time A numeric vector of time points where dynamic event time metrics should be computed (e.g. the time-dependent ROC curve, etc). The values must be non-negative and should probably be no greater than the largest event time in the training set (see Details below). control An object used to modify the tuning process. 
See control_race() for more details.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"An object with primary class tune_race in the same standard format as objects produced by tune::tune_grid().","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"The technical details of this method are described in Kuhn (2014). Racing methods are efficient approaches to grid search. Initially, the function evaluates all tuning parameters on a small initial set of resamples. The burn_in argument of control_race() sets the number of initial resamples. The performance statistics from these resamples are analyzed to determine which tuning parameters are not statistically different from the current best setting. If a parameter is statistically different, it is excluded from further resampling. The next resample is used with the remaining parameter combinations and the statistical analysis is updated. More candidate parameters may be excluded with each new resample that is processed. This function determines statistical significance using a repeated measures ANOVA model where the performance statistic (e.g., RMSE, accuracy, etc.) is the outcome data and the random effect is due to resamples. The control_race() function contains a parameter for the significance cutoff applied to the ANOVA results, as well as other relevant arguments. There is benefit to using racing methods in conjunction with parallel processing. The following section shows benchmark results for one dataset and model.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"censored-regression-models","dir":"Reference","previous_headings":"","what":"Censored regression models","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"With dynamic performance metrics (e.g. Brier or ROC curves), performance is calculated for every value of eval_time, but only the first evaluation time given by the user (e.g., eval_time[1]) is analyzed during racing. Also, values of eval_time should be less than the largest observed event time in the training data. For many non-parametric models, the results beyond the largest time corresponding to an event are constant (or NA).","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"benchmarking-results","dir":"Reference","previous_headings":"","what":"Benchmarking results","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"To demonstrate, we use an SVM model with the kernlab package and tune only the model parameters (i.e., no recipe tuning). We time grid search and ANOVA racing with and without parallel processing: racing alone gives a 5.56-fold speed-up. Parallel processing of grid search was 6.01-fold faster than sequential grid search. Parallel processing with racing was 35.51-fold faster than sequential grid search. There is a compounding effect of racing and parallel processing, but its magnitude depends on the type of model, the number of resamples, the number of tuning parameters, and so on.","code":"library(kernlab) library(tidymodels) library(finetune) library(doParallel) ## ----------------------------------------------------------------------------- data(cells, package = \"modeldata\") cells <- cells %>% select(-case) ## ----------------------------------------------------------------------------- set.seed(6376) rs <- bootstraps(cells, times = 25) ## ----------------------------------------------------------------------------- svm_spec <- svm_rbf(cost = tune(), rbf_sigma = tune()) %>% set_engine(\"kernlab\") %>% set_mode(\"classification\") svm_rec <- recipe(class ~ ., data = cells) %>% step_YeoJohnson(all_predictors()) %>% step_normalize(all_predictors()) svm_wflow <- workflow() %>% add_model(svm_spec) %>% add_recipe(svm_rec) set.seed(1) svm_grid <- svm_spec %>% parameters() %>% grid_latin_hypercube(size = 25) ## ----------------------------------------------------------------------------- ## Regular grid search system.time({ set.seed(2) svm_wflow 
%>% tune_grid(resamples = rs, grid = svm_grid) }) ## user system elapsed ## 741.660 19.654 761.357 ## ----------------------------------------------------------------------------- ## With racing system.time({ set.seed(2) svm_wflow %>% tune_race_anova(resamples = rs, grid = svm_grid) }) ## user system elapsed ## 133.143 3.675 136.822 ## ----------------------------------------------------------------------------- ## Parallel processing setup cores <- parallel::detectCores(logical = FALSE) cores ## [1] 10 cl <- makePSOCKcluster(cores) registerDoParallel(cl) ## ----------------------------------------------------------------------------- ## Parallel grid search system.time({ set.seed(2) svm_wflow %>% tune_grid(resamples = rs, grid = svm_grid) }) ## user system elapsed ## 1.112 0.190 126.650 ## ----------------------------------------------------------------------------- ## Parallel racing system.time({ set.seed(2) svm_wflow %>% tune_race_anova(resamples = rs, grid = svm_grid) }) ## user system elapsed ## 1.908 0.261 21.442"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"Kuhn, M 2014. 
\"Futility Analysis in the Cross-Validation of Machine Learning Models.\" https://arxiv.org/abs/1405.6974.","code":""},{"path":[]},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"","code":"# \\donttest{ library(parsnip) library(rsample) library(dials) #> Loading required package: scales ## ----------------------------------------------------------------------------- if (rlang::is_installed(c(\"discrim\", \"lme4\", \"modeldata\"))) { library(discrim) data(two_class_dat, package = \"modeldata\") set.seed(6376) rs <- bootstraps(two_class_dat, times = 10) ## ----------------------------------------------------------------------------- # optimize a regularized discriminant analysis model rda_spec <- discrim_regularized(frac_common_cov = tune(), frac_identity = tune()) %>% set_engine(\"klaR\") ## ----------------------------------------------------------------------------- ctrl <- control_race(verbose_elim = TRUE) set.seed(11) grid_anova <- rda_spec %>% tune_race_anova(Class ~ ., resamples = rs, grid = 10, control = ctrl) # Shows only the fully resampled parameters show_best(grid_anova, metric = \"roc_auc\", n = 2) plot_race(grid_anova) } #> #> Attaching package: ‘discrim’ #> The following object is masked from ‘package:dials’: #> #> smoothness #> ℹ Evaluating against the initial 3 burn-in resamples. #> ℹ Racing will maximize the roc_auc metric. #> ℹ Resamples are analyzed in a random order. #> ℹ Bootstrap05: All but one parameter combination were eliminated. 
# }"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":null,"dir":"Reference","previous_headings":"","what":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"tune_race_win_loss() computes a set of performance metrics (e.g. accuracy or RMSE) for a pre-defined set of tuning parameters that correspond to a model or recipe across one or more resamples of the data. After an initial number of resamples have been evaluated, the process eliminates tuning parameter combinations that are unlikely to be the best results using a statistical model. For each pairwise combination of tuning parameters, win/loss statistics are calculated and a logistic regression model is used to measure how likely each combination is to win overall.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"","code":"tune_race_win_loss(object, ...) # S3 method for model_spec tune_race_win_loss( object, preprocessor, resamples, ..., param_info = NULL, grid = 10, metrics = NULL, eval_time = NULL, control = control_race() ) # S3 method for workflow tune_race_win_loss( object, resamples, ..., param_info = NULL, grid = 10, metrics = NULL, eval_time = NULL, control = control_race() )"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"object A parsnip model specification or a workflows::workflow(). ... Not currently used. preprocessor A traditional model formula or a recipe created using recipes::recipe(). This is only required when object is not a workflow. resamples An rset() object with multiple resamples (i.e., not a validation set). param_info A dials::parameters() object or NULL. If none is given, a parameter set is derived from other arguments. 
Passing this argument can be useful when parameter ranges need to be customized. grid A data frame of tuning combinations or a positive integer. The data frame should have columns for each parameter being tuned and rows for tuning parameter candidates. An integer denotes the number of candidate parameter sets to be created automatically. metrics A yardstick::metric_set() or NULL. eval_time A numeric vector of time points where dynamic event time metrics should be computed (e.g. the time-dependent ROC curve, etc). The values must be non-negative and should probably be no greater than the largest event time in the training set (see Details below). control An object used to modify the tuning process. See control_race() for more details.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"An object with primary class tune_race in the same standard format as objects produced by tune::tune_grid().","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"The technical details of this method are described in Kuhn (2014). Racing methods are efficient approaches to grid search. Initially, the function evaluates all tuning parameters on a small initial set of resamples. The burn_in argument of control_race() sets the number of initial resamples. The performance statistics from the current set of resamples are converted to win/loss/tie results. For example, for two parameters (j and k) in a classification model that have each been resampled three times: after the third resample, parameter k has a 2:1 win/loss ratio versus j. Parameters with equal results are treated as a half-win for each setting. These statistics are determined for all pairwise combinations of the parameters and a Bradley-Terry model is used to model the win/loss/tie statistics. This model can compute the ability of a parameter combination to win overall. A confidence interval for the winning ability is computed, and any settings whose interval includes zero are retained for future resamples (since they are not statistically different from the best results). 
The next resample is used with the remaining parameter combinations and the statistical analysis is updated. More candidate parameters may be excluded with each new resample that is processed. The control_race() function contains a parameter for the significance cutoff applied to the Bradley-Terry model results, as well as other relevant arguments.","code":"| area under the ROC curve | ----------------------------- resample | parameter j | parameter k | winner --------------------------------------------- 1 | 0.81 | 0.92 | k 2 | 0.95 | 0.94 | j 3 | 0.79 | 0.81 | k ---------------------------------------------"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"censored-regression-models","dir":"Reference","previous_headings":"","what":"Censored regression models","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"With dynamic performance metrics (e.g. Brier or ROC curves), performance is calculated for every value of eval_time, but only the first evaluation time given by the user (e.g., eval_time[1]) is analyzed during racing. Also, values of eval_time should be less than the largest observed event time in the training data. For many non-parametric models, the results beyond the largest time corresponding to an event are constant (or NA).","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"Kuhn, M 2014. 
\"Futility Analysis in the Cross-Validation of Machine Learning Models.\" https://arxiv.org/abs/1405.6974.","code":""},{"path":[]},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"","code":"# \\donttest{ library(parsnip) library(rsample) library(dials) ## ----------------------------------------------------------------------------- if (rlang::is_installed(c(\"discrim\", \"modeldata\"))) { library(discrim) data(two_class_dat, package = \"modeldata\") set.seed(6376) rs <- bootstraps(two_class_dat, times = 10) ## ----------------------------------------------------------------------------- # optimize a regularized discriminant analysis model rda_spec <- discrim_regularized(frac_common_cov = tune(), frac_identity = tune()) %>% set_engine(\"klaR\") ## ----------------------------------------------------------------------------- ctrl <- control_race(verbose_elim = TRUE) set.seed(11) grid_wl <- rda_spec %>% tune_race_win_loss(Class ~ ., resamples = rs, grid = 10, control = ctrl) # Shows only the fully resampled parameters show_best(grid_wl, metric = \"roc_auc\") plot_race(grid_wl) } #> ℹ Racing will maximize the roc_auc metric. #> ℹ Resamples are analyzed in a random order. #> ℹ Bootstrap05: 1 eliminated; 9 candidates remain. #> ℹ Bootstrap07: 1 eliminated; 8 candidates remain. #> ℹ Bootstrap10: 1 eliminated; 7 candidates remain. #> ℹ Bootstrap01: 1 eliminated; 6 candidates remain. #> ℹ Bootstrap08: 1 eliminated; 5 candidates remain. #> ℹ Bootstrap03: 1 eliminated; 4 candidates remain. #> ℹ Bootstrap09: 1 eliminated; 3 candidates remain. 
# }"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":null,"dir":"Reference","previous_headings":"","what":"Optimization of model parameters via simulated annealing — tune_sim_anneal","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"tune_sim_anneal() uses an iterative search procedure to generate new candidate tuning parameter combinations based on previous results. It uses the generalized simulated annealing method of Bohachevsky, Johnson, and Stein (1986).","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"","code":"tune_sim_anneal(object, ...) # S3 method for model_spec tune_sim_anneal( object, preprocessor, resamples, ..., iter = 10, param_info = NULL, metrics = NULL, eval_time = NULL, initial = 1, control = control_sim_anneal() ) # S3 method for workflow tune_sim_anneal( object, resamples, ..., iter = 10, param_info = NULL, metrics = NULL, eval_time = NULL, initial = 1, control = control_sim_anneal() )"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"object A parsnip model specification or a workflows::workflow(). ... Not currently used. preprocessor A traditional model formula or a recipe created using recipes::recipe(). This is only required when object is not a workflow. resamples An rset() object. iter The maximum number of search iterations. param_info A dials::parameters() object or NULL. If none is given, a parameter set is derived from other arguments. Passing this argument can be useful when parameter ranges need to be customized. metrics A yardstick::metric_set() object containing information on how models will be evaluated for performance. The first metric in metrics is the one that will be optimized. 
eval_time A numeric vector of time points where dynamic event time metrics should be computed (e.g. the time-dependent ROC curve, etc). The values must be non-negative and should probably be no greater than the largest event time in the training set (see Details below). initial An initial set of results in a tidy format (as a result of tune_grid(), tune_bayes(), tune_race_win_loss(), or tune_race_anova()) or a positive integer. If the initial object is from a sequential search method, the simulated annealing iterations start after the last iteration of the initial results. control The results of control_sim_anneal().","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"A tibble of results that mirror those generated by tune_grid(). However, these results contain an .iter column and replicate the rset object multiple times over the iterations (at limited additional memory cost).","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"Simulated annealing is a global optimization method. For model tuning, it can be used to iteratively search the parameter space for optimal tuning parameter combinations. At each iteration, a new parameter combination is created by perturbing the current parameters in a small way so that they stay within a small neighborhood. This new parameter combination is used to fit a model, and that model's performance is measured using resampling (or a simple validation set). If the new settings have better results than the current settings, they are accepted and the process continues. If the new settings have worse performance, a probability threshold is computed for accepting these sub-optimal values. The probability is a function of how sub-optimal the results are as well as how many iterations have elapsed. This is referred to as the \"cooling schedule\" for the algorithm. If the sub-optimal results are accepted, the next iteration's settings are based on these inferior results. Otherwise, new parameter values are generated from the previous iteration's settings. 
This process continues for a pre-defined number of iterations and the overall best settings are recommended for use. The control_sim_anneal() function can specify the number of iterations without improvement for early stopping. Also, that function can be used to specify a restart threshold; if globally best results have not been discovered within a certain number of iterations, the process can restart using the last known globally best settings.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"creating-new-settings","dir":"Reference","previous_headings":"","what":"Creating new settings","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"For each numeric parameter, the range of possible values is known, as well as any transformations. The current values are transformed and scaled to have values between zero and one (based on the possible range of values). A candidate set of values on a sphere with random radii between rmin and rmax is generated. Infeasible values are removed and one value is chosen at random. This value is back-transformed to the original units and scale and used as the new settings. The radius argument of control_sim_anneal() controls the range of neighborhood sizes. Categorical and integer parameters are each changed with a pre-defined probability. The flip argument of control_sim_anneal() can be used to specify this probability. For integer parameters, a nearby integer value is used. Simulated annealing search may not be the preferred method when many of the parameters are non-numeric or integers with few unique values. In these cases, it is likely that the same candidate set may be tested more than once.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"cooling-schedule","dir":"Reference","previous_headings":"","what":"Cooling schedule","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"To determine the probability of accepting a new value, the percent difference in performance is calculated. If the performance metric is to be maximized, this would be d = (new-old)/old*100. The probability is calculated as p = exp(d * coef * iter), where coef is a user-defined constant that can be used to increase or decrease the probabilities. 
The cooling_coef argument of control_sim_anneal() can be used for this purpose.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"termination-criterion","dir":"Reference","previous_headings":"","what":"Termination criterion","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"The restart counter is reset when a new global best result is found. The termination counter resets when a new global best is located or when a suboptimal result is improved.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"parallelism","dir":"Reference","previous_headings":"","what":"Parallelism","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"The tune and finetune packages currently parallelize over resamples. Specifying a parallel back-end will improve the generation of the initial set of sub-models (if any). Each iteration of the search is also run in parallel if a parallel backend is registered.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"censored-regression-models","dir":"Reference","previous_headings":"","what":"Censored regression models","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"With dynamic performance metrics (e.g. Brier or ROC curves), performance is calculated for every value of eval_time, but only the first evaluation time given by the user (e.g., eval_time[1]) is used to guide the optimization. Also, values of eval_time should be less than the largest observed event time in the training data. 
For many non-parametric models, the results beyond the largest time corresponding to an event are constant (or NA).","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"Bohachevsky, Johnson, and Stein (1986) \"Generalized Simulated Annealing for Function Optimization\", Technometrics, 28:3, 209-217.","code":""},{"path":[]},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"","code":"# \\donttest{ library(finetune) library(rpart) #> #> Attaching package: ‘rpart’ #> The following object is masked from ‘package:dials’: #> #> prune library(dplyr) #> #> Attaching package: ‘dplyr’ #> The following object is masked from ‘package:MASS’: #> #> select #> The following objects are masked from ‘package:stats’: #> #> filter, lag #> The following objects are masked from ‘package:base’: #> #> intersect, setdiff, setequal, union library(tune) library(rsample) library(parsnip) library(workflows) library(ggplot2) ## ----------------------------------------------------------------------------- if (rlang::is_installed(\"modeldata\")) { data(two_class_dat, package = \"modeldata\") set.seed(5046) bt <- bootstraps(two_class_dat, times = 5) ## ----------------------------------------------------------------------------- cart_mod <- decision_tree(cost_complexity = tune(), min_n = tune()) %>% set_engine(\"rpart\") %>% set_mode(\"classification\") ## ----------------------------------------------------------------------------- # For reproducibility, set the seed before running. 
set.seed(10) sa_search <- cart_mod %>% tune_sim_anneal(Class ~ ., resamples = bt, iter = 10) autoplot(sa_search, metric = \"roc_auc\", type = \"parameters\") + theme_bw() ## ----------------------------------------------------------------------------- # More iterations. `initial` can be any other tune_* object or an integer # (for new values). set.seed(11) more_search <- cart_mod %>% tune_sim_anneal(Class ~ ., resamples = bt, iter = 10, initial = sa_search) autoplot(more_search, metric = \"roc_auc\", type = \"performance\") + theme_bw() } #> Optimizing roc_auc #> Initial best: 0.81147 #> 1 ♥ new best roc_auc=0.82651 (+/-0.004242) #> 2 ♥ new best roc_auc=0.83439 (+/-0.009983) #> 3 ♥ new best roc_auc=0.83912 (+/-0.007893) #> 4 ♥ new best roc_auc=0.84267 (+/-0.006609) #> 5 ◯ accept suboptimal roc_auc=0.83949 (+/-0.008792) #> 6 ◯ accept suboptimal roc_auc=0.83751 (+/-0.01065) #> 7 + better suboptimal roc_auc=0.84225 (+/-0.009044) #> 8 ◯ accept suboptimal roc_auc=0.83736 (+/-0.01063) #> 9 + better suboptimal roc_auc=0.84225 (+/-0.009044) #> 10 ◯ accept suboptimal roc_auc=0.84225 (+/-0.009044) #> There were 10 previous iterations #> Optimizing roc_auc #> 10 ✔ initial roc_auc=0.84267 (+/-0.006609) #> 11 ◯ accept suboptimal roc_auc=0.83949 (+/-0.008792) #> 12 ◯ accept suboptimal roc_auc=0.83736 (+/-0.01063) #> 13 + better suboptimal roc_auc=0.84225 (+/-0.009044) #> 14 ◯ accept suboptimal roc_auc=0.84225 (+/-0.009044) #> 15 ◯ accept suboptimal roc_auc=0.83751 (+/-0.01065) #> 16 + better suboptimal roc_auc=0.84225 (+/-0.009044) #> 17 ◯ accept suboptimal roc_auc=0.83949 (+/-0.008792) #> 18 ✖ restart from best roc_auc=0.8421 (+/-0.007206) #> 19 ◯ accept suboptimal roc_auc=0.83928 (+/-0.007538) #> 20 ─ discard suboptimal roc_auc=0.82378 (+/-0.01017) # }"},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-development-version","dir":"Changelog","previous_headings":"","what":"finetune (development version)","title":"finetune (development 
version)","text":"Improved error message from tune_sim_anneal() when values in the supplied param_info do not encompass all values evaluated in the initial grid. This most often happens when a user mistakenly supplies different parameter sets to the function that generated the initial results and tune_sim_anneal(). Fixed a bug where tune_sim_anneal() would fail when supplied parameters needing finalization. The function will now finalize needed parameter ranges internally (#39). Fixed a bug where packages specified in control_race(pkgs) were not actually loaded in tune_race_anova() (#74). autoplot() methods for racing objects will now use integers in x-axis breaks (#75). Enabling the verbose_elim control option for tune_race_anova() will now additionally introduce a message confirming that the function is evaluating against the burn-in resamples. Updates based on the new version of tune, primarily for survival analysis models (#104).","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"breaking-change-development-version","dir":"Changelog","previous_headings":"","what":"Breaking Change","title":"finetune (development version)","text":"Ellipses (…) are now used consistently in the package to require optional arguments to be named. collect_predictions(), collect_metrics(), and show_best() methods previously had ellipses at the end of the function signature that have been moved to follow the last argument without a default value. Optional arguments previously passed by position will now error informatively, prompting them to be named (#105).","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-110","dir":"Changelog","previous_headings":"","what":"finetune 1.1.0","title":"finetune 1.1.0","text":"CRAN release: 2023-04-19. Various minor changes to keep up with developments in the tune and dplyr packages (#60) (#62) (#67) (#68). Corrects the .config output when save_pred = TRUE in tune_sim_anneal(). The function previously output a constant Model1_Preprocessor1 in the .predictions slot, and now provides .config values that align with .metrics (#57). 
eval_time attribute added tune objects produced finetune.","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-101","dir":"Changelog","previous_headings":"","what":"finetune 1.0.1","title":"finetune 1.0.1","text":"CRAN release: 2022-10-12 racing: collect_metrics() collect_predictions() 'complete' argument returns results model configurations fully resampled. select_best() show_best() now show results model configurations fully resampled. tune_race_anova(), tune_race_win_loss(), tune_sim_anneal() longer error control argument isn’t corresponding control_*() object. work long object passed control includes elements required control_*() object. control_sim_anneal() got new argument verbose_iter used control verbosity iterative calculations. change means verbose argument passed tune_grid() control verbosity.","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-100","dir":"Changelog","previous_headings":"","what":"finetune 1.0.0","title":"finetune 1.0.0","text":"CRAN release: 2022-09-05 informative error given enough resamples racing (#33). tune_sim_anneal() passing arguments tune_grid() (#40).","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-020","dir":"Changelog","previous_headings":"","what":"finetune 0.2.0","title":"finetune 0.2.0","text":"CRAN release: 2022-03-24 Maintenance release CRAN requirements. Use extract_parameter_set_dials() instead parameters() get parameter sets. 
Removed pillar-related S3 methods currently live tune.","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-011","dir":"Changelog","previous_headings":"","what":"finetune 0.1.1","title":"finetune 0.1.1","text":"CRAN release: 2022-02-21 tune_sim_anneal() overwrites tuning parameter information originally contain unknowns.","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-010","dir":"Changelog","previous_headings":"","what":"finetune 0.1.0","title":"finetune 0.1.0","text":"CRAN release: 2021-07-21 check added make sure lme4 BradleyTerry2 installed (#8) Added pillar methods formatting tune objects list columns. Fixed bug random_integer_neighbor_calc() keep values inside range (#10) tune_sim_anneal() now retains finalized parameter set replaces existing parameter set finalized (#14) bug win/loss racing fixed cases one tuning parameter results bad broke Bradley-Terry model (#7)","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-001","dir":"Changelog","previous_headings":"","what":"finetune 0.0.1","title":"finetune 0.0.1","text":"CRAN release: 2020-11-20 First CRAN release","code":""}]
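The breaking change noted for the development version can be easy to miss in existing scripts. A minimal sketch of what changes for callers, assuming a hypothetical racing result object `res` (not defined in this document):

```r
# Before the change, optional arguments to show_best() and the collect_*()
# methods could be passed by position. They must now be named.
# `res` is a hypothetical tune_race result object.

# Previously worked by position; now errors informatively:
# show_best(res, "roc_auc", 3)

# Works after the change: optional arguments are named.
show_best(res, metric = "roc_auc", n = 3)
```

Passing a metric positionally will now raise an informative error asking for the argument to be named.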
Contributor Covenant Code of Conduct

Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

Our Standards

Examples of behavior that contributes to a positive environment for our community include: demonstrating empathy and kindness toward other people; being respectful of differing opinions, viewpoints, and experiences; giving and gracefully accepting constructive feedback; accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience; and focusing on what is best not just for us as individuals, but for the overall community.

Examples of unacceptable behavior include: the use of sexualized language or imagery, and sexual attention or advances of any kind; trolling, insulting or derogatory comments, and personal or political attacks; public or private harassment; publishing others' private information, such as a physical or email address, without their explicit permission; and other conduct which could reasonably be considered inappropriate in a professional setting.

Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at codeofconduct@posit.co. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

1. Correction

Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

2. Warning

Community Impact: A violation through a single incident or series of actions.

Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

3. Temporary Ban

Community Impact: A serious violation of community standards, including sustained inappropriate behavior.

Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

4. Permanent Ban

Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

Consequence: A permanent ban from any sort of public interaction within the community.

Attribution

This Code of Conduct is adapted from the Contributor Covenant, version 2.1, available at https://www.contributor-covenant.org/version/2/1/code_of_conduct.html. Community Impact Guidelines were inspired by Mozilla's code of conduct enforcement ladder (https://github.com/mozilla/inclusion). For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.

MIT License

Copyright (c) 2023 finetune authors

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Authors and Citation

Max Kuhn. Author, maintainer.
Posit Software, PBC. Copyright holder, funder.

Citation

Kuhn M (2024). finetune: Additional Functions for Model Tuning. R package version 1.2.0.9000, https://finetune.tidymodels.org, https://github.com/tidymodels/finetune.

```
@Manual{,
  title = {finetune: Additional Functions for Model Tuning},
  author = {Max Kuhn},
  year = {2024},
  note = {R package version 1.2.0.9000, https://finetune.tidymodels.org},
  url = {https://github.com/tidymodels/finetune},
}
```

Additional Functions for Model Tuning

finetune contains some extra functions for model tuning that extend what is currently in the tune package.

You can install the CRAN version of the package with the following code:

```r
install.packages("finetune")
```

To install the development version of the package, run:

```r
# install.packages("pak")
pak::pak("tidymodels/finetune")
```

There are two main sets of tools in the package: simulated annealing and racing.

Tuning via simulated annealing optimization is an iterative search tool for finding good values:

```r
library(tidymodels)
library(finetune)

# Syntax very similar to `tune_grid()` or `tune_bayes()`:

## -----------------------------------------------------------------------------

data(two_class_dat, package = "modeldata")

set.seed(1)
rs <- bootstraps(two_class_dat, times = 10) # more resamples usually needed

# Optimize a regularized discriminant analysis model
library(discrim)
rda_spec <-
  discrim_regularized(frac_common_cov = tune(), frac_identity = tune()) %>%
  set_engine("klaR")

## -----------------------------------------------------------------------------

set.seed(2)
sa_res <-
  rda_spec %>%
  tune_sim_anneal(Class ~ ., resamples = rs, iter = 20, initial = 4)
#> Optimizing roc_auc
#> Initial best: 0.86480
#>  1 ♥ new best           roc_auc=0.87327 (+/-0.004592)
#>  2 ♥ new best           roc_auc=0.87915 (+/-0.003864)
#>  3 ◯ accept suboptimal  roc_auc=0.87029 (+/-0.004994)
#>  4 + better suboptimal  roc_auc=0.87171 (+/-0.004717)
#>  5 ◯ accept suboptimal  roc_auc=0.86944 (+/-0.005081)
#>  6 ◯ accept suboptimal  roc_auc=0.86812 (+/-0.0052)
#>  7 ♥ new best           roc_auc=0.88172 (+/-0.003647)
#>  8 ◯ accept suboptimal  roc_auc=0.87678 (+/-0.004276)
#>  9 ◯ accept suboptimal  roc_auc=0.8627 (+/-0.005784)
#> 10 + better suboptimal  roc_auc=0.87003 (+/-0.005106)
#> 11 + better suboptimal  roc_auc=0.87088 (+/-0.004962)
#> 12 ◯ accept suboptimal  roc_auc=0.86803 (+/-0.005195)
#> 13 ◯ accept suboptimal  roc_auc=0.85294 (+/-0.006498)
#> 14 ─ discard suboptimal roc_auc=0.84689 (+/-0.006867)
#> 15 ✖ restart from best  roc_auc=0.85021 (+/-0.006623)
#> 16 ◯ accept suboptimal  roc_auc=0.87607 (+/-0.004318)
#> 17 ◯ accept suboptimal  roc_auc=0.87245 (+/-0.004799)
#> 18 + better suboptimal  roc_auc=0.87706 (+/-0.004131)
#> 19 ◯ accept suboptimal  roc_auc=0.87213 (+/-0.004791)
#> 20 ◯ accept suboptimal  roc_auc=0.86218 (+/-0.005773)

show_best(sa_res, metric = "roc_auc", n = 2)
#> # A tibble: 2 × 9
#>   frac_common_cov frac_identity .metric .estimator  mean     n std_err .config
#> 1           0.308        0.0166 roc_auc binary     0.882    10 0.00365 Iter7
#> 2           0.121        0.0474 roc_auc binary     0.879    10 0.00386 Iter2
#> # ℹ 1 more variable: .iter
```

The second set of methods is for racing. We start off by doing a small set of resamples for all of the grid points, then statistically testing to see which ones should be dropped or investigated more. The two methods here are based on those described by Kuhn (2014).

For example, using an ANOVA-type analysis to filter out parameter combinations:

```r
set.seed(3)
grid <-
  rda_spec %>%
  extract_parameter_set_dials() %>%
  grid_max_entropy(size = 20)

ctrl <- control_race(verbose_elim = TRUE)

set.seed(4)
grid_anova <-
  rda_spec %>%
  tune_race_anova(Class ~ ., resamples = rs, grid = grid, control = ctrl)
#> ℹ Evaluating against the initial 3 burn-in resamples.
#> ℹ Racing will maximize the roc_auc metric.
#> ℹ Resamples are analyzed in a random order.
#> ℹ Bootstrap10: 14 eliminated; 6 candidates remain.
#> ℹ Bootstrap04:  2 eliminated; 4 candidates remain.
#> ℹ Bootstrap03: All but one parameter combination were eliminated.

show_best(grid_anova, metric = "roc_auc", n = 2)
#> # A tibble: 1 × 8
#>   frac_common_cov frac_identity .metric .estimator  mean     n std_err .config
#> 1           0.831        0.0207 roc_auc binary     0.881    10 0.00386 Preproce…
```

tune_race_win_loss() can also be used. It treats the tuning parameters as sports teams in a tournament and computes win/loss statistics:

```r
set.seed(4)
grid_win_loss <-
  rda_spec %>%
  tune_race_win_loss(Class ~ ., resamples = rs, grid = grid, control = ctrl)
#> ℹ Racing will maximize the roc_auc metric.
#> ℹ Resamples are analyzed in a random order.
#> ℹ Bootstrap10: 3 eliminated; 17 candidates remain.
#> ℹ Bootstrap04: 2 eliminated; 15 candidates remain.
#> ℹ Bootstrap03: 2 eliminated; 13 candidates remain.
#> ℹ Bootstrap01: 1 eliminated; 12 candidates remain.
#> ℹ Bootstrap07: 1 eliminated; 11 candidates remain.
#> ℹ Bootstrap05: 1 eliminated; 10 candidates remain.
#> ℹ Bootstrap08: 1 eliminated; 9 candidates remain.

show_best(grid_win_loss, metric = "roc_auc", n = 2)
#> # A tibble: 2 × 8
#>   frac_common_cov frac_identity .metric .estimator  mean     n std_err .config
#> 1           0.831        0.0207 roc_auc binary     0.881    10 0.00386 Preproce…
#> 2           0.119        0.0470 roc_auc binary     0.879    10 0.00387 Preproce…
```

Contributing

This project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

For questions and discussions about tidymodels packages, modeling, and machine learning, please post on Posit Community. If you think you have encountered a bug, please submit an issue. Either way, learn how to create and share a reprex (a minimal, reproducible example) to clearly communicate about your code. Check out further details on the contributing guidelines for tidymodels packages and on how to get help.
Obtain and format results produced by racing functions — collect_predictions

Usage

```r
# S3 method for tune_race
collect_predictions(
  x,
  ...,
  summarize = FALSE,
  parameters = NULL,
  all_configs = FALSE
)

# S3 method for tune_race
collect_metrics(
  x,
  ...,
  summarize = TRUE,
  type = c("long", "wide"),
  all_configs = FALSE
)
```

Arguments

x: The results of tune_grid(), tune_bayes(), fit_resamples(), or last_fit(). For collect_predictions(), the control option save_pred = TRUE must have been used.

...: Not currently used.

summarize: A logical; should metrics be summarized over resamples (TRUE), or should the values for each individual resample be returned? Note that, if x is created by last_fit(), summarize has no effect. For the other object types, the method of summarizing predictions is detailed below.

parameters: An optional tibble of tuning parameter values that can be used to filter the predicted values before processing. This tibble should have columns for each tuning parameter identifier (e.g. "my_param" if tune("my_param") was used).

all_configs: A logical: should the complete set of model configurations be returned, or just those that made it to the end of the race (the default)?

type: One of "long" (the default) or "wide". When type = "long", the output has columns .metric and one of .estimate or mean; .estimate/mean gives the values for the .metric. When type = "wide", each metric has its own column and the n and std_err columns are removed, if they exist.

Value

A tibble. The column names depend on the results and the mode of the model.

Details

For collect_metrics() and collect_predictions(), when unsummarized, there are columns for each tuning parameter (using the id from tune(), if any). collect_metrics() also has columns .metric and .estimator. When the results are summarized, there are columns for mean, n, and std_err. When not summarized, there are additional columns for the resampling identifier(s) and .estimate. For collect_predictions(), there are additional columns for the resampling identifier(s), columns for the predicted values (e.g., .pred, .pred_class, etc.), and a column for the outcome(s) using the original column name(s) in the data.

collect_predictions() can summarize the various results over replicate out-of-sample predictions. For example, when using the bootstrap, each row of the original training set has multiple holdout predictions (across assessment sets). To convert these results to a format where every training set sample has a single predicted value, the results are averaged over replicate predictions. For regression cases, the numeric predictions are simply averaged. For classification models, the problem is more complex. When class probabilities are used, these are averaged and then re-normalized to make sure that they add up to one. If hard class predictions also exist in the data, then these are determined from the summarized probability estimates (so that they match). If only hard class predictions are in the results, then the mode is used to summarize.

For racing results, it is best to only collect model configurations that finished the race (i.e., were completely resampled). Comparing performance metrics for configurations averaged with different resamples is likely to lead to inappropriate results.
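A short usage sketch of the methods described above; `race_res` is a hypothetical tune_race object (e.g., from tune_race_anova()), and the argument names are those documented in the Usage section:

```r
# Metrics only for configurations that finished the race (the default):
collect_metrics(race_res)

# All configurations, including those eliminated early. Use with care:
# their metrics are averaged over fewer resamples than the finishers.
collect_metrics(race_res, all_configs = TRUE)

# One column per metric instead of the long .metric/.estimate layout:
collect_metrics(race_res, type = "wide")

# Unsummarized out-of-sample predictions; requires that the race was run
# with save_pred = TRUE in the control object.
collect_predictions(race_res, summarize = FALSE)
```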
Control aspects of the grid search racing process — control_race

Usage

```r
control_race(
  verbose = FALSE,
  verbose_elim = FALSE,
  allow_par = TRUE,
  extract = NULL,
  save_pred = FALSE,
  burn_in = 3,
  num_ties = 10,
  alpha = 0.05,
  randomize = TRUE,
  pkgs = NULL,
  save_workflow = FALSE,
  event_level = "first",
  parallel_over = "everything",
  backend_options = NULL
)
```

Arguments

verbose: A logical for logging results (other than warnings and errors, which are always shown) as they are generated during training in a single R process. When using parallel backends, this argument typically will not result in any logging. If using a dark IDE theme, some logging messages might be hard to see; try setting the tidymodels.dark option with options(tidymodels.dark = TRUE) to print lighter colors.

verbose_elim: A logical for whether logging of the elimination of tuning parameter combinations should occur.

allow_par: A logical to allow parallel processing (if a parallel backend is registered).

extract: An optional function with at least one argument (or NULL) that can be used to retain arbitrary objects from the model fit object, recipe, or other elements of the workflow.

save_pred: A logical for whether the out-of-sample predictions should be saved for each model evaluated.

burn_in: An integer for how many resamples should be completed for all grid combinations before parameter filtering begins.

num_ties: An integer for when tie-breaking should occur. If there are two final parameter combinations being evaluated, num_ties specifies how many more resampling iterations should be evaluated. After num_ties more iterations, the parameter combination with the current best results is retained.

alpha: The alpha level for a one-sided confidence interval for each parameter combination.

randomize: Should the resamples be evaluated in a random order? By default, the resamples are evaluated in a random order, so the random number seed should be set prior to calling this method (to be reproducible). For repeated cross-validation, the randomization occurs within each repeat.

pkgs: An optional character string of R package names that should be loaded (by namespace) during parallel processing.

save_workflow: A logical for whether the workflow should be appended to the output as an attribute.

event_level: A single string containing either "first" or "second". This argument is passed on to yardstick metric functions when any type of class prediction is made, and specifies which level of the outcome is considered the "event".

parallel_over: A single string containing either "resamples" or "everything" describing how to use parallel processing. Alternatively, NULL is allowed, which chooses between "resamples" and "everything" automatically. If "resamples", then tuning will be performed in parallel over resamples alone. Within each resample, the preprocessor (i.e. recipe or formula) is processed once and is then reused across all models that need to be fit. If "everything", then tuning will be performed in parallel at two levels. An outer parallel loop will iterate over resamples. Additionally, an inner parallel loop will iterate over all unique combinations of preprocessor and model tuning parameters for that specific resample. This will result in the preprocessor being re-processed multiple times, but can be faster if that processing is extremely fast. If NULL, this chooses "resamples" if there is more than one resample, otherwise it chooses "everything" in an attempt to maximize core utilization. Note that switching between parallel_over strategies is not guaranteed to use the same random number generation schemes. However, re-tuning a model using the same parallel_over strategy is guaranteed to be reproducible between runs.

backend_options: An object of class "tune_backend_options" as created by tune::new_backend_options(), used to pass arguments to specific tuning backends. Defaults to NULL for the default backend options.

Value

An object of class control_race that echos the argument values.

Examples

```r
control_race()
#> Racing method control object
```
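The defaults above are often fine, but the racing behavior can be tuned. A hedged sketch (these particular values are illustrative choices, not recommendations from the package) combining the arguments documented above:

```r
library(finetune)

# A stricter, more patient race: wait 5 resamples before any elimination,
# use a more conservative confidence level for the filtering ANOVA, and
# log eliminations as they happen.
ctrl <- control_race(
  burn_in      = 5,     # resamples completed before filtering begins
  alpha        = 0.02,  # one-sided confidence interval level
  verbose_elim = TRUE,  # message when combinations are eliminated
  save_pred    = TRUE   # keep out-of-sample predictions for collect_predictions()
)
```

The resulting object is then passed as the control argument of tune_race_anova() or tune_race_win_loss().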
Control aspects of the simulated annealing search process — control_sim_anneal

Usage

```r
control_sim_anneal(
  verbose = FALSE,
  verbose_iter = TRUE,
  no_improve = Inf,
  restart = 8L,
  radius = c(0.05, 0.15),
  flip = 3/4,
  cooling_coef = 0.02,
  extract = NULL,
  save_pred = FALSE,
  time_limit = NA,
  pkgs = NULL,
  save_workflow = FALSE,
  save_history = FALSE,
  event_level = "first",
  parallel_over = NULL,
  allow_par = TRUE,
  backend_options = NULL
)
```

Arguments

verbose: A logical for logging results (other than warnings and errors, which are always shown) as they are generated during training in a single R process. When using parallel backends, this argument typically will not result in any logging. If using a dark IDE theme, some logging messages might be hard to see; try setting the tidymodels.dark option with options(tidymodels.dark = TRUE) to print lighter colors.

verbose_iter: A logical for logging the results of the search process. If using a dark IDE theme, some logging messages might be hard to see; try setting the tidymodels.dark option with options(tidymodels.dark = TRUE) to print lighter colors.

no_improve: An integer cutoff for the number of iterations without better results.

restart: The number of iterations with no improvement before new tuning parameter candidates are generated from the last, overall best conditions.

radius: Two real numbers on (0, 1) describing what a value "in the neighborhood" of the current result should be. If all numeric parameters were scaled to the [0, 1] scale, these values set the minimum and maximum radius of a circle used to generate new numeric parameter values.

flip: A real number between [0, 1] for the probability of changing any non-numeric parameter values at each iteration.

cooling_coef: A real, positive number to influence the cooling schedule. Larger values decrease the probability of accepting a sub-optimal parameter setting.

extract: An optional function with at least one argument (or NULL) that can be used to retain arbitrary objects from the model fit object, recipe, or other elements of the workflow.

save_pred: A logical for whether the out-of-sample predictions should be saved for each model evaluated.

time_limit: A number for the minimum number of minutes (elapsed) that the function should execute. The elapsed time is evaluated at internal checkpoints and, if over time, the results at that time are returned (with a warning). This means that the time_limit is not an exact limit, but a minimum time limit.

pkgs: An optional character string of R package names that should be loaded (by namespace) during parallel processing.

save_workflow: A logical for whether the workflow should be appended to the output as an attribute.

save_history: A logical to save the iteration details of the search. These are saved to tempdir() in a file named sa_history.RData. These results are deleted when the R session ends. This option is most useful for teaching purposes.

event_level: A single string containing either "first" or "second". This argument is passed on to yardstick metric functions when any type of class prediction is made, and specifies which level of the outcome is considered the "event".

parallel_over: A single string containing either "resamples" or "everything" describing how to use parallel processing. Alternatively, NULL is allowed, which chooses between "resamples" and "everything" automatically. If "resamples", then tuning will be performed in parallel over resamples alone. Within each resample, the preprocessor (i.e. recipe or formula) is processed once and is then reused across all models that need to be fit. If "everything", then tuning will be performed in parallel at two levels. An outer parallel loop will iterate over resamples. Additionally, an inner parallel loop will iterate over all unique combinations of preprocessor and model tuning parameters for that specific resample. This will result in the preprocessor being re-processed multiple times, but can be faster if that processing is extremely fast. If NULL, this chooses "resamples" if there is more than one resample, otherwise it chooses "everything" in an attempt to maximize core utilization. Note that switching between parallel_over strategies is not guaranteed to use the same random number generation schemes. However, re-tuning a model using the same parallel_over strategy is guaranteed to be reproducible between runs.

allow_par: A logical to allow parallel processing (if a parallel backend is registered).

backend_options: An object of class "tune_backend_options" as created by tune::new_backend_options(), used to pass arguments to specific tuning backends. Defaults to NULL for the default backend options.

Value

An object of class control_sim_anneal that echos the argument values.

Examples

```r
control_sim_anneal()
#> Simulated annealing control object
```

finetune: Additional Functions for Model Tuning — finetune-package

The ability to tune models is important. 'finetune' enhances the 'tune' package by providing more specialized methods for finding reasonable values of model tuning parameters. Two racing methods described by Kuhn (2014) arXiv:1405.6974 are included. An iterative search method using generalized simulated annealing (Bohachevsky, Johnson and Stein, 1986) doi:10.1080/00401706.1986.10488128 is also included.

Author

Maintainer: Max Kuhn max@posit.co (ORCID)

Other contributors: Posit Software, PBC [copyright holder, funder]
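To build intuition for how a cooling coefficient shapes a generalized simulated annealing search like the one referenced above: worse results are accepted with a probability that shrinks as the search progresses and as the performance loss grows. The sketch below is illustrative only, in the spirit of Bohachevsky et al. (1986); it is not finetune's internal code, and the function name and exact functional form are assumptions for the example.

```r
# Illustrative acceptance probability for a sub-optimal result.
# `iter` is the iteration number, `pct_loss` the percent loss in the
# performance metric versus the best so far, and `coef` plays the role
# of a cooling coefficient: larger values cool (reject) faster.
accept_prob <- function(iter, pct_loss, coef = 0.02) {
  exp(-iter * coef * pct_loss)
}

accept_prob(iter = 1,  pct_loss = 2)  # early in the search: near 1
accept_prob(iter = 30, pct_loss = 2)  # same loss much later: far lower
```

Raising `coef` makes the search greedier; lowering it lets the search wander longer before settling, which is the trade-off the `cooling_coef` argument controls.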
line given submodel tested.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/plot_race.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot racing results — plot_race","text":"","code":"plot_race(x)"},{"path":"https://finetune.tidymodels.org/dev/reference/plot_race.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot racing results — plot_race","text":"x object class tune_results","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/plot_race.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot racing results — plot_race","text":"ggplot object.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/show_best.html","id":null,"dir":"Reference","previous_headings":"","what":"Investigate best tuning parameters — show_best.tune_race","title":"Investigate best tuning parameters — show_best.tune_race","text":"show_best() displays top sub-models performance estimates.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/show_best.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Investigate best tuning parameters — show_best.tune_race","text":"","code":"# S3 method for tune_race show_best( x, ..., metric = NULL, eval_time = NULL, n = 5, call = rlang::current_env() )"},{"path":"https://finetune.tidymodels.org/dev/reference/show_best.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Investigate best tuning parameters — show_best.tune_race","text":"x results tune_grid() tune_bayes(). ... select_by_one_std_err() select_by_pct_loss(), argument passed directly dplyr::arrange() user can sort models simple complex. , parameter p, pass unquoted expression p smaller values p indicate simpler model, desc(p) larger values indicate simpler model. least one term required two functions. See examples . 
metric A character value for the metric that will be used to sort the models. (See https://yardstick.tidymodels.org/articles/metric-types.html for more details). Not required if a single metric exists in x. If there are multiple metrics and none are given, the first in the metric set is used (and a warning is issued). eval_time A single numeric time point where dynamic event time metrics should be chosen (e.g., the time-dependent ROC curve, etc). The values should be consistent with the values used to create x. The NULL default will automatically use the first evaluation time used by x. n An integer for the maximum number of top results/rows to return. call The call to be shown in errors and warnings.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/show_best.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Investigate best tuning parameters — show_best.tune_race","text":"For racing results (from the finetune package), it is best to only report configurations that finished the race (i.e., were completely resampled). Comparing performance metrics for configurations averaged over different resamples is likely to lead to inappropriate results.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":null,"dir":"Reference","previous_headings":"","what":"Efficient grid search via racing with ANOVA models — tune_race_anova","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"tune_race_anova() computes a set of performance metrics (e.g. accuracy or RMSE) for a pre-defined set of tuning parameters that correspond to a model or recipe across one or more resamples of the data. After an initial number of resamples have been evaluated, the process eliminates tuning parameter combinations that are unlikely to be the best results using a repeated measure ANOVA model.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"","code":"tune_race_anova(object, ...)
# S3 method for model_spec tune_race_anova( object, preprocessor, resamples, ..., param_info = NULL, grid = 10, metrics = NULL, eval_time = NULL, control = control_race() ) # S3 method for workflow tune_race_anova( object, resamples, ..., param_info = NULL, grid = 10, metrics = NULL, eval_time = NULL, control = control_race() )"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"object A parsnip model specification or a workflows::workflow(). ... Not currently used. preprocessor A traditional model formula or a recipe created using recipes::recipe(). This is only required when object is not a workflow. resamples An rset() object with multiple resamples (i.e., not a validation set). param_info A dials::parameters() object or NULL. If none is given, a parameters set is derived from other arguments. Passing this argument can be useful when parameter ranges need to be customized. grid A data frame of tuning combinations or a positive integer. The data frame should have columns for each parameter being tuned and rows for tuning parameter candidates. An integer denotes the number of candidate parameter sets to be created automatically. metrics A yardstick::metric_set() or NULL. eval_time A numeric vector of time points where dynamic event time metrics should be computed (e.g. the time-dependent ROC curve, etc). The values must be non-negative and should probably be no greater than the largest event time in the training set (See Details below). control An object used to modify the tuning process. See control_race() for more details.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"An object with primary class tune_race in the standard format of objects produced by tune::tune_grid().","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"The technical details of this method are described in Kuhn (2014). Racing methods are efficient approaches to grid search. Initially, the function evaluates all tuning parameters on a small initial set of resamples. The burn_in argument of control_race() sets the number of initial resamples. The performance statistics from these resamples are analyzed to determine which tuning parameters are not statistically different from the current best setting. If a parameter is statistically different, it is excluded from further resampling. The next resample is used with the remaining parameter combinations and the statistical analysis is updated. More candidate parameters may be excluded with each new resample that is processed. This function determines statistical significance using a repeated measures ANOVA model where the performance statistic (e.g., RMSE, accuracy, etc.) is the outcome data and the random effect is due to resamples. The control_race() function contains a parameter for the significance cutoff applied to the ANOVA results, as well as other relevant arguments. There is benefit to using racing methods in conjunction with parallel processing. The following section shows a benchmark of results for one dataset and model.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"censored-regression-models","dir":"Reference","previous_headings":"","what":"Censored regression models","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"With dynamic performance metrics (e.g.
Brier or ROC curves), performance is calculated for every value of eval_time, but the first evaluation time given by the user (e.g., eval_time[1]) is analyzed during racing. Also, values of eval_time should be less than the largest observed event time in the training data. For many non-parametric models, the results beyond the largest time corresponding to an event are constant (or NA).","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"benchmarking-results","dir":"Reference","previous_headings":"","what":"Benchmarking results","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"To demonstrate, we use an SVM model with the kernlab package. We'll only tune the model parameters (i.e., no recipe tuning): We'll get the times for grid search and ANOVA racing with and without parallel processing: a speed-up of 5.56-fold from racing. Parallel processing with grid search was 6.01-fold faster than sequential grid search. Parallel processing with racing was 35.51-fold faster than sequential grid search. There is a compounding effect of racing and parallel processing, but its magnitude depends on the type of model, the number of resamples, the number of tuning parameters, and so on.","code":"library(kernlab) library(tidymodels) library(finetune) library(doParallel) ## ----------------------------------------------------------------------------- data(cells, package = \"modeldata\") cells <- cells %>% select(-case) ## ----------------------------------------------------------------------------- set.seed(6376) rs <- bootstraps(cells, times = 25) ## ----------------------------------------------------------------------------- svm_spec <- svm_rbf(cost = tune(), rbf_sigma = tune()) %>% set_engine(\"kernlab\") %>% set_mode(\"classification\") svm_rec <- recipe(class ~ ., data = cells) %>% step_YeoJohnson(all_predictors()) %>% step_normalize(all_predictors()) svm_wflow <- workflow() %>% add_model(svm_spec) %>% add_recipe(svm_rec) set.seed(1) svm_grid <- svm_spec %>% parameters() %>% grid_latin_hypercube(size = 25) ## ----------------------------------------------------------------------------- ## Regular grid search system.time({ set.seed(2) svm_wflow
%>% tune_grid(resamples = rs, grid = svm_grid) }) ## user system elapsed ## 741.660 19.654 761.357 ## ----------------------------------------------------------------------------- ## With racing system.time({ set.seed(2) svm_wflow %>% tune_race_anova(resamples = rs, grid = svm_grid) }) ## user system elapsed ## 133.143 3.675 136.822 ## ----------------------------------------------------------------------------- ## Parallel processing setup cores <- parallel::detectCores(logical = FALSE) cores ## [1] 10 cl <- makePSOCKcluster(cores) registerDoParallel(cl) ## ----------------------------------------------------------------------------- ## Parallel grid search system.time({ set.seed(2) svm_wflow %>% tune_grid(resamples = rs, grid = svm_grid) }) ## user system elapsed ## 1.112 0.190 126.650 ## ----------------------------------------------------------------------------- ## Parallel racing system.time({ set.seed(2) svm_wflow %>% tune_race_anova(resamples = rs, grid = svm_grid) }) ## user system elapsed ## 1.908 0.261 21.442"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"Kuhn, M 2014. 
\"Futility Analysis Cross-Validation Machine Learning Models.\" https://arxiv.org/abs/1405.6974.","code":""},{"path":[]},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_anova.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Efficient grid search via racing with ANOVA models — tune_race_anova","text":"","code":"# \\donttest{ library(parsnip) library(rsample) library(dials) #> Loading required package: scales ## ----------------------------------------------------------------------------- if (rlang::is_installed(c(\"discrim\", \"lme4\", \"modeldata\"))) { library(discrim) data(two_class_dat, package = \"modeldata\") set.seed(6376) rs <- bootstraps(two_class_dat, times = 10) ## ----------------------------------------------------------------------------- # optimize an regularized discriminant analysis model rda_spec <- discrim_regularized(frac_common_cov = tune(), frac_identity = tune()) %>% set_engine(\"klaR\") ## ----------------------------------------------------------------------------- ctrl <- control_race(verbose_elim = TRUE) set.seed(11) grid_anova <- rda_spec %>% tune_race_anova(Class ~ ., resamples = rs, grid = 10, control = ctrl) # Shows only the fully resampled parameters show_best(grid_anova, metric = \"roc_auc\", n = 2) plot_race(grid_anova) } #> #> Attaching package: ‘discrim’ #> The following object is masked from ‘package:dials’: #> #> smoothness #> ℹ Evaluating against the initial 3 burn-in resamples. #> ℹ Racing will maximize the roc_auc metric. #> ℹ Resamples are analyzed in a random order. #> ℹ Bootstrap05: All but one parameter combination were eliminated. 
# }"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":null,"dir":"Reference","previous_headings":"","what":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"tune_race_win_loss() computes set performance metrics (e.g. accuracy RMSE) pre-defined set tuning parameters correspond model recipe across one resamples data. initial number resamples evaluated, process eliminates tuning parameter combinations unlikely best results using statistical model. pairwise combinations tuning parameters, win/loss statistics calculated logistic regression model used measure likely combination win overall.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"","code":"tune_race_win_loss(object, ...) # S3 method for model_spec tune_race_win_loss( object, preprocessor, resamples, ..., param_info = NULL, grid = 10, metrics = NULL, eval_time = NULL, control = control_race() ) # S3 method for workflow tune_race_win_loss( object, resamples, ..., param_info = NULL, grid = 10, metrics = NULL, eval_time = NULL, control = control_race() )"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"object parsnip model specification workflows::workflow(). ... currently used. preprocessor traditional model formula recipe created using recipes::recipe(). required object workflow. resamples rset() object multiple resamples (.e., validation set). param_info dials::parameters() object NULL. none given, parameters set derived arguments. 
Passing argument can useful parameter ranges need customized. grid data frame tuning combinations positive integer. data frame columns parameter tuned rows tuning parameter candidates. integer denotes number candidate parameter sets created automatically. metrics yardstick::metric_set() NULL. eval_time numeric vector time points dynamic event time metrics computed (e.g. time-dependent ROC curve, etc). values must non-negative probably greater largest event time training set (See Details ). control object used modify tuning process. See control_race() details.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"object primary class tune_race standard format objects produced tune::tune_grid().","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"technical details method described Kuhn (2014). Racing methods efficient approaches grid search. Initially, function evaluates tuning parameters small initial set resamples. burn_in argument control_race() sets number initial resamples. performance statistics current set resamples converted win/loss/tie results. example, two parameters (j k) classification model resampled three times: third resample, parameter k 2:1 win/loss ratio versus j. Parameters equal results treated half-win setting. statistics determined pairwise combinations parameters Bradley-Terry model used model win/loss/tie statistics. model can compute ability parameter combination win overall. confidence interval winning ability computed settings whose interval includes zero retained future resamples (since statistically different form best results). 
The next resample is used with the remaining parameter combinations and the statistical analysis is updated. More candidate parameters may be excluded with each new resample that is processed. The control_race() function contains a parameter for the significance cutoff applied to the Bradley-Terry model results, as well as other relevant arguments.","code":"| area under the ROC curve | ----------------------------- resample | parameter j | parameter k | winner --------------------------------------------- 1 | 0.81 | 0.92 | k 2 | 0.95 | 0.94 | j 3 | 0.79 | 0.81 | k ---------------------------------------------"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"censored-regression-models","dir":"Reference","previous_headings":"","what":"Censored regression models","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"With dynamic performance metrics (e.g. Brier or ROC curves), performance is calculated for every value of eval_time, but the first evaluation time given by the user (e.g., eval_time[1]) is analyzed during racing. Also, values of eval_time should be less than the largest observed event time in the training data. For many non-parametric models, the results beyond the largest time corresponding to an event are constant (or NA).","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"Kuhn, M 2014.
\"Futility Analysis Cross-Validation Machine Learning Models.\" https://arxiv.org/abs/1405.6974.","code":""},{"path":[]},{"path":"https://finetune.tidymodels.org/dev/reference/tune_race_win_loss.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Efficient grid search via racing with win/loss statistics — tune_race_win_loss","text":"","code":"# \\donttest{ library(parsnip) library(rsample) library(dials) ## ----------------------------------------------------------------------------- if (rlang::is_installed(c(\"discrim\", \"modeldata\"))) { library(discrim) data(two_class_dat, package = \"modeldata\") set.seed(6376) rs <- bootstraps(two_class_dat, times = 10) ## ----------------------------------------------------------------------------- # optimize an regularized discriminant analysis model rda_spec <- discrim_regularized(frac_common_cov = tune(), frac_identity = tune()) %>% set_engine(\"klaR\") ## ----------------------------------------------------------------------------- ctrl <- control_race(verbose_elim = TRUE) set.seed(11) grid_wl <- rda_spec %>% tune_race_win_loss(Class ~ ., resamples = rs, grid = 10, control = ctrl) # Shows only the fully resampled parameters show_best(grid_wl, metric = \"roc_auc\") plot_race(grid_wl) } #> ℹ Racing will maximize the roc_auc metric. #> ℹ Resamples are analyzed in a random order. #> ℹ Bootstrap05: 1 eliminated; 9 candidates remain. #> ℹ Bootstrap07: 1 eliminated; 8 candidates remain. #> ℹ Bootstrap10: 1 eliminated; 7 candidates remain. #> ℹ Bootstrap01: 1 eliminated; 6 candidates remain. #> ℹ Bootstrap08: 1 eliminated; 5 candidates remain. #> ℹ Bootstrap03: 1 eliminated; 4 candidates remain. #> ℹ Bootstrap09: 1 eliminated; 3 candidates remain. 
# }"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":null,"dir":"Reference","previous_headings":"","what":"Optimization of model parameters via simulated annealing — tune_sim_anneal","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"tune_sim_anneal() uses iterative search procedure generate new candidate tuning parameter combinations based previous results. uses generalized simulated annealing method Bohachevsky, Johnson, Stein (1986).","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"","code":"tune_sim_anneal(object, ...) # S3 method for model_spec tune_sim_anneal( object, preprocessor, resamples, ..., iter = 10, param_info = NULL, metrics = NULL, eval_time = NULL, initial = 1, control = control_sim_anneal() ) # S3 method for workflow tune_sim_anneal( object, resamples, ..., iter = 10, param_info = NULL, metrics = NULL, eval_time = NULL, initial = 1, control = control_sim_anneal() )"},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"object parsnip model specification workflows::workflow(). ... currently used. preprocessor traditional model formula recipe created using recipes::recipe(). required object workflow. resamples rset() object. iter maximum number search iterations. param_info dials::parameters() object NULL. none given, parameter set derived arguments. Passing argument can useful parameter ranges need customized. metrics yardstick::metric_set() object containing information models evaluated performance. first metric metrics one optimized. 
eval_time A numeric vector of time points where dynamic event time metrics should be computed (e.g. the time-dependent ROC curve, etc). The values must be non-negative and should probably be no greater than the largest event time in the training set (See Details below). initial An initial set of results in a tidy format (as would result from tune_grid(), tune_bayes(), tune_race_win_loss(), or tune_race_anova()) or a positive integer. If the initial object was from a sequential search method, the simulated annealing iterations start after the last iteration of the initial results. control The results of control_sim_anneal().","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"A tibble of results that mirror those generated by tune_grid(). However, these results contain an .iter column and replicate the rset object multiple times over iterations (at limited additional memory costs).","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"Simulated annealing is a global optimization method. For model tuning, it can be used to iteratively search the parameter space for optimal tuning parameter combinations. At each iteration, a new parameter combination is created by perturbing the current parameters in a small way so that they are within a small neighborhood. This new parameter combination is used to fit a model and that model's performance is measured using resampling (or a simple validation set). If the new settings have better results than the current settings, they are accepted and the process continues. If the new settings have worse performance, a probability threshold is computed for accepting these sub-optimal values. The probability is a function of how sub-optimal the results are as well as how many iterations have elapsed. This is referred to as the \"cooling schedule\" for the algorithm. If the sub-optimal results are accepted, the next iteration's settings are based on these inferior results. Otherwise, new parameter values are generated from the previous iteration's settings.
The process continues for a pre-defined number of iterations and the overall best settings are recommended for use. The control_sim_anneal() function can specify the number of iterations without improvement for early stopping. Also, that function can be used to specify a restart threshold; if the globally best results have not been discovered within a certain number of iterations, the process can restart using the last known globally best settings.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"creating-new-settings","dir":"Reference","previous_headings":"","what":"Creating new settings","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"For each numeric parameter, the range of possible values is known, as well as any transformations. The current values are transformed and scaled to have values between zero and one (based on the possible range of values). A candidate set of values that are on a sphere with random radii between rmin and rmax are generated. Infeasible values are removed and one value is chosen at random. This value is back transformed to the original units and scale and is used as the new settings. The argument radius of control_sim_anneal() controls the range of neighborhood sizes. Categorical and integer parameters are each changed with a pre-defined probability. The flip argument of control_sim_anneal() can be used to specify this probability. For integer parameters, a nearby integer value is used. Simulated annealing search may not be the preferred method when many parameters are non-numeric or integers with few unique values. In these cases, it is likely that the same candidate set may be tested more than once.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"cooling-schedule","dir":"Reference","previous_headings":"","what":"Cooling schedule","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"To determine the probability of accepting a new value, the percent difference in performance is calculated. If the performance metric is to be maximized, this would be d = (new-old)/old*100. The probability is calculated as p = exp(d * coef * iter) where coef is a user-defined constant that can be used to increase or decrease the probabilities.
The cooling_coef of control_sim_anneal() can be used for this purpose.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"termination-criterion","dir":"Reference","previous_headings":"","what":"Termination criterion","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"The restart counter is reset when a new global best result is found. The termination counter resets when a new global best is located or when a suboptimal result is improved.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"parallelism","dir":"Reference","previous_headings":"","what":"Parallelism","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"The tune and finetune packages currently parallelize over resamples. Specifying a parallel back-end will improve the generation of the initial set of sub-models (if any). Each iteration of the search is also run in parallel if a parallel backend is registered.","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"censored-regression-models","dir":"Reference","previous_headings":"","what":"Censored regression models","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"With dynamic performance metrics (e.g. Brier or ROC curves), performance is calculated for every value of eval_time, but the first evaluation time given by the user (e.g., eval_time[1]) is used to guide the optimization. Also, values of eval_time should be less than the largest observed event time in the training data.
many non-parametric models, results beyond largest time corresponding event constant (NA).","code":""},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"Bohachevsky, Johnson, Stein (1986) \"Generalized Simulated Annealing Function Optimization\", Technometrics, 28:3, 209-217","code":""},{"path":[]},{"path":"https://finetune.tidymodels.org/dev/reference/tune_sim_anneal.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Optimization of model parameters via simulated annealing — tune_sim_anneal","text":"","code":"# \\donttest{ library(finetune) library(rpart) #> #> Attaching package: ‘rpart’ #> The following object is masked from ‘package:dials’: #> #> prune library(dplyr) #> #> Attaching package: ‘dplyr’ #> The following object is masked from ‘package:MASS’: #> #> select #> The following objects are masked from ‘package:stats’: #> #> filter, lag #> The following objects are masked from ‘package:base’: #> #> intersect, setdiff, setequal, union library(tune) library(rsample) library(parsnip) library(workflows) library(ggplot2) ## ----------------------------------------------------------------------------- if (rlang::is_installed(\"modeldata\")) { data(two_class_dat, package = \"modeldata\") set.seed(5046) bt <- bootstraps(two_class_dat, times = 5) ## ----------------------------------------------------------------------------- cart_mod <- decision_tree(cost_complexity = tune(), min_n = tune()) %>% set_engine(\"rpart\") %>% set_mode(\"classification\") ## ----------------------------------------------------------------------------- # For reproducibility, set the seed before running. 
set.seed(10) sa_search <- cart_mod %>% tune_sim_anneal(Class ~ ., resamples = bt, iter = 10) autoplot(sa_search, metric = \"roc_auc\", type = \"parameters\") + theme_bw() ## ----------------------------------------------------------------------------- # More iterations. `initial` can be any other tune_* object or an integer # (for new values). set.seed(11) more_search <- cart_mod %>% tune_sim_anneal(Class ~ ., resamples = bt, iter = 10, initial = sa_search) autoplot(more_search, metric = \"roc_auc\", type = \"performance\") + theme_bw() } #> Optimizing roc_auc #> Initial best: 0.81147 #> 1 ♥ new best roc_auc=0.82651 (+/-0.004242) #> 2 ♥ new best roc_auc=0.83439 (+/-0.009983) #> 3 ♥ new best roc_auc=0.83912 (+/-0.007893) #> 4 ♥ new best roc_auc=0.84267 (+/-0.006609) #> 5 ◯ accept suboptimal roc_auc=0.83949 (+/-0.008792) #> 6 ◯ accept suboptimal roc_auc=0.83751 (+/-0.01065) #> 7 + better suboptimal roc_auc=0.84225 (+/-0.009044) #> 8 ◯ accept suboptimal roc_auc=0.83736 (+/-0.01063) #> 9 + better suboptimal roc_auc=0.84225 (+/-0.009044) #> 10 ◯ accept suboptimal roc_auc=0.84225 (+/-0.009044) #> There were 10 previous iterations #> Optimizing roc_auc #> 10 ✔ initial roc_auc=0.84267 (+/-0.006609) #> 11 ◯ accept suboptimal roc_auc=0.83949 (+/-0.008792) #> 12 ◯ accept suboptimal roc_auc=0.83736 (+/-0.01063) #> 13 + better suboptimal roc_auc=0.84225 (+/-0.009044) #> 14 ◯ accept suboptimal roc_auc=0.84225 (+/-0.009044) #> 15 ◯ accept suboptimal roc_auc=0.83751 (+/-0.01065) #> 16 + better suboptimal roc_auc=0.84225 (+/-0.009044) #> 17 ◯ accept suboptimal roc_auc=0.83949 (+/-0.008792) #> 18 ✖ restart from best roc_auc=0.8421 (+/-0.007206) #> 19 ◯ accept suboptimal roc_auc=0.83928 (+/-0.007538) #> 20 ─ discard suboptimal roc_auc=0.82378 (+/-0.01017) # }"},{"path":[]},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-120","dir":"Changelog","previous_headings":"","what":"finetune 1.2.0","title":"finetune 1.2.0","text":"CRAN release: 
2024-03-21","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"new-features-1-2-0","dir":"Changelog","previous_headings":"","what":"New Features","title":"finetune 1.2.0","text":"finetune now fully supports models “censored regression” mode. models can fit, tuned, evaluated like regression classification modes. tidymodels.org information tutorials work survival analysis models. Improved error message tune_sim_anneal() values supplied param_info encompass values evaluated initial grid. often happens user mistakenly supplies different parameter sets function generated initial results tune_sim_anneal(). autoplot() methods racing objects now use integers x-axis breaks (#75). Enabling verbose_elim control option tune_race_anova() now additionally introduce message confirming function evaluating burn-resamples. Updates based new version tune, primarily survival analysis models (#104).","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"bug-fixes-1-2-0","dir":"Changelog","previous_headings":"","what":"Bug Fixes","title":"finetune 1.2.0","text":"Fixed bug tune_sim_anneal() fail supplied parameters needing finalization. function now finalize needed parameter ranges internally (#39). Fixed bug packages specified control_race(pkgs) actually loaded tune_race_anova() (#74).","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"breaking-change-1-2-0","dir":"Changelog","previous_headings":"","what":"Breaking Change","title":"finetune 1.2.0","text":"Ellipses (…) now used consistently package require optional arguments named. collect_predictions(), collect_metrics() show_best() methods previously ellipses end function signature moved follow last argument without default value. 
Optional arguments previously passed by position will now error informatively, prompting them to be named (#105).","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-110","dir":"Changelog","previous_headings":"","what":"finetune 1.1.0","title":"finetune 1.1.0","text":"CRAN release: 2023-04-19 Various minor changes to keep up with developments in the tune and dplyr packages (#60) (#62) (#67) (#68). Corrects the .config output when save_pred = TRUE in tune_sim_anneal(). The function previously outputted a constant Model1_Preprocessor1 in the .predictions slot, and now provides .config values that align with .metrics (#57). An eval_time attribute was added to tune objects produced by finetune.","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-101","dir":"Changelog","previous_headings":"","what":"finetune 1.0.1","title":"finetune 1.0.1","text":"CRAN release: 2022-10-12 For racing: collect_metrics() and collect_predictions() gained a 'complete' argument that only returns results for model configurations that were fully resampled. select_best() and show_best() now only show results for model configurations that were fully resampled. tune_race_anova(), tune_race_win_loss(), and tune_sim_anneal() no longer error if the control argument isn’t the corresponding control_*() object. They will work as long as the object passed to control includes the same elements as the required control_*() object. control_sim_anneal() got a new argument verbose_iter that is used to control the verbosity of the iterative calculations. This change means that the verbose argument is now passed to tune_grid() to control its verbosity.","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-100","dir":"Changelog","previous_headings":"","what":"finetune 1.0.0","title":"finetune 1.0.0","text":"CRAN release: 2022-09-05 A more informative error is given when there are not enough resamples for racing (#33).
tune_sim_anneal() now passes its arguments to tune_grid() (#40).","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-020","dir":"Changelog","previous_headings":"","what":"finetune 0.2.0","title":"finetune 0.2.0","text":"CRAN release: 2022-03-24 Maintenance release for CRAN requirements. Use extract_parameter_set_dials() instead of parameters() to get parameter sets. Removed pillar-related S3 methods that currently live in tune.","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-011","dir":"Changelog","previous_headings":"","what":"finetune 0.1.1","title":"finetune 0.1.1","text":"CRAN release: 2022-02-21 tune_sim_anneal() overwrites tuning parameter information only when it originally contains unknowns.","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-010","dir":"Changelog","previous_headings":"","what":"finetune 0.1.0","title":"finetune 0.1.0","text":"CRAN release: 2021-07-21 A check was added to make sure that lme4 and BradleyTerry2 are installed (#8) Added pillar methods for formatting tune objects in list columns. Fixed a bug in random_integer_neighbor_calc() to keep values inside the range (#10) tune_sim_anneal() now retains the finalized parameter set and replaces the existing parameter set if it was finalized (#14) A bug in win/loss racing was fixed for cases where one tuning parameter has results bad enough that they broke the Bradley-Terry model (#7)","code":""},{"path":"https://finetune.tidymodels.org/dev/news/index.html","id":"finetune-001","dir":"Changelog","previous_headings":"","what":"finetune 0.0.1","title":"finetune 0.0.1","text":"CRAN release: 2020-11-20 First CRAN release","code":""}]