Bug description
Description
I'm training a model based on a number of iterations instead of a number of epochs. The same model trains on datasets of different sizes, so one epoch corresponds to a different number of iterations per dataset. Say I want to train a model for 900 iterations, which corresponds to 90 epochs on one of the datasets, and want a stepwise lr scheduler to fire at iterations 300 and 600. To my understanding this is not natively possible in the PyTorch Lightning environment.
I know that I can change the lr scheduler interval to "step" and then set the frequency, like so:
```python
'lr_scheduler': {"scheduler": sched, "interval": "step", "frequency": 300}
```
However, this only counts steps within one epoch. If the frequency is set larger than the number of iterations per epoch, no scheduler step is ever applied. I would expect scheduler.step() to be called every `frequency` iterations across epoch boundaries.
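A workaround that appears to sidestep the per-epoch frequency counter is to step the scheduler on every iteration (frequency 1) and let the scheduler itself track the milestones, e.g. with `MultiStepLR`. A minimal sketch of that setup (the model, optimizer, and gamma value here are placeholders, not the actual training code):

```python
import torch
import lightning.pytorch as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
        # Step the scheduler on every iteration; MultiStepLR counts its own
        # steps internally, so the decay lands at global iterations 300 and
        # 600 regardless of how many iterations one epoch contains.
        sched = torch.optim.lr_scheduler.MultiStepLR(
            optimizer, milestones=[300, 600], gamma=0.1
        )
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": sched, "interval": "step", "frequency": 1},
        }
```

This avoids the frequency counter entirely, but it still seems like `frequency` itself should work across epochs as described above.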
What version are you seeing the problem on?
v2.0
How to reproduce the bug
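A hypothetical minimal script along these lines should exhibit the behaviour (random placeholder data with 10 iterations per epoch, so a frequency of 300 is never reached within a single epoch):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning.pytorch as pl


class BoringModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        # The printed lr never changes: with only 10 iterations per epoch,
        # the per-epoch frequency counter never reaches 300.
        print(self.trainer.global_step, self.trainer.optimizers[0].param_groups[0]["lr"])
        return torch.nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
        sched = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.1)
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": sched, "interval": "step", "frequency": 300},
        }


dataset = TensorDataset(torch.randn(100, 32), torch.randint(0, 2, (100,)))
train_loader = DataLoader(dataset, batch_size=10)  # 10 iterations per epoch
trainer = pl.Trainer(max_steps=900, enable_progress_bar=False, logger=False)
trainer.fit(BoringModel(), train_loader)
```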