regularization parameters cause huge bump in the loss function #1557
Unanswered
akosmaroy asked this question in Q&A - get help using NeuralProphet
Hi, I have a question regarding the learning_rate parameter when creating a NeuralProphet instance. In my case, I get a 'bump' in my loss function at step 65 of the training process (total epochs == 100), irrespective of the learning rate parameter. This seems counterintuitive. Could it be that there are learning rate parameters in some internal parts of the NeuralProphet model?
My naive intuition would be that such a 'bump' comes from a learning rate that is too high and 'overshoots' the minimum. But if that were the case, using a lower learning rate should mitigate it at least somewhat, and the 'bump' would not appear at the very same epoch / step of the process.
Please see my loss curves at learning rates 1e-4, 1e-5 and 1e-6, all showing a visible 'bump' at step 65.
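For reference, here is a minimal sketch of the kind of setup described above, not the original code: the CSV file name, the 'ds'/'y' column layout, and the daily frequency are assumptions. It fits the same model at the three learning rates mentioned and plots the per-epoch loss returned by fit() so the bump around step 65 can be inspected.

```python
# Minimal sketch (assumptions: data file name, 'ds'/'y' columns, daily frequency).
import matplotlib.pyplot as plt
import pandas as pd
from neuralprophet import NeuralProphet

df = pd.read_csv("my_timeseries.csv")  # hypothetical file with 'ds' and 'y' columns

for lr in (1e-4, 1e-5, 1e-6):
    m = NeuralProphet(epochs=100, learning_rate=lr)
    metrics = m.fit(df, freq="D")  # returns a per-epoch metrics DataFrame
    # The loss column name varies between NeuralProphet versions, so select any
    # column containing "Loss" and plot it to look for the bump around epoch 65.
    metrics.filter(like="Loss").plot(title=f"learning_rate={lr}")

plt.show()
```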
Replies: 1 comment

- after doing some more investigation, it seems that the bump appears if one uses regularization parameters when calling model.add_lagged_regressor() or model.add_future_regressor(). Setting 'trend_reg' for the model also seems to produce this effect, to a lesser degree.
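For illustration, a minimal sketch of the configuration the comment above refers to; the regressor names ('temp', 'price'), the regularization values, n_lags, and the data frequency are assumptions, not details from the thread. Regularization is passed to add_lagged_regressor() and add_future_regressor(), and trend_reg is set on the model; per the comment, removing these regularization settings avoids the bump.

```python
# Minimal sketch of a regularized setup (regressor names, values and frequency are
# assumptions). Per the comment above, the loss bump appears when the
# 'regularization' arguments and/or 'trend_reg' below are non-zero.
import pandas as pd
from neuralprophet import NeuralProphet

df = pd.read_csv("my_timeseries.csv")  # hypothetical: 'ds', 'y', 'temp', 'price' columns

m = NeuralProphet(
    epochs=100,
    learning_rate=1e-4,
    n_lags=24,       # lagged regressors require auto-regression to be enabled
    trend_reg=1.0,   # reported to contribute to the bump to a lesser degree
)
m.add_lagged_regressor(names="temp", regularization=0.1)   # regularized lagged regressor
m.add_future_regressor(name="price", regularization=0.1)   # regularized future regressor

metrics = m.fit(df, freq="D")
```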