According to issue #1409, we have to call model.fit() on the entire time series before cross validation so that the model's attributes are fixed across all CV runs. I understand that, but wouldn't it somewhat defeat the purpose of cross validation during hyperparameter tuning? I am comparing two models based on their CV results. One has a very high changepoint_prior_scale of around 0.95, the other a lower value of around 0.5; everything else is mostly the same. My guess is that, since both models are first fit on the entire data (including the CV windows) and the first one has a more flexible trend (potentially overfitted?), it will probably give better CV results than the other, even though that does not make it better on truly unseen data.
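To make the comparison concrete, here is a minimal sketch of the setup I am describing, assuming a dataframe with Prophet's usual ds/y columns; the file name and the CV window sizes (initial, period, horizon) are placeholders, not values from this issue:

```python
import pandas as pd
from prophet import Prophet
from prophet.diagnostics import cross_validation, performance_metrics

df = pd.read_csv("my_series.csv")  # hypothetical file with 'ds' and 'y' columns

for cps in (0.95, 0.5):
    m = Prophet(changepoint_prior_scale=cps)
    m.fit(df)  # fit on the entire series first, as issue #1409 suggests
    # simulated historical forecasts; window sizes here are illustrative
    df_cv = cross_validation(m, initial="730 days", period="180 days",
                             horizon="365 days")
    df_p = performance_metrics(df_cv)
    print(f"changepoint_prior_scale={cps}: mean RMSE={df_p['rmse'].mean():.3f}")
```

My worry is that the cps=0.95 model tends to win this comparison simply because its trend is flexible enough to chase the very data the CV windows are drawn from.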
I was trying to follow the recommended ranges from the documentation, which says that for changepoint_prior_scale a range of [0.001, 0.5] would likely be about right. Link:
https://facebook.github.io/prophet/docs/diagnostics.html#hyperparameter-tuning
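For reference, a small grid search over changepoint_prior_scale within that documented range, loosely following the docs' tuning example, might look like the sketch below; the specific grid points and CV windows are my own illustrative choices:

```python
import pandas as pd
from prophet import Prophet
from prophet.diagnostics import cross_validation, performance_metrics

df = pd.read_csv("my_series.csv")  # hypothetical file with 'ds' and 'y' columns

grid = [0.001, 0.01, 0.1, 0.5]  # within the documented [0.001, 0.5] range
rmses = {}
for cps in grid:
    m = Prophet(changepoint_prior_scale=cps).fit(df)
    df_cv = cross_validation(m, initial="730 days", period="180 days",
                             horizon="365 days")
    # rolling_window=1 aggregates the metrics into a single summary row
    rmses[cps] = performance_metrics(df_cv, rolling_window=1)["rmse"].values[0]
print(rmses)
```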