The Trainer freezes when I use the logger #16374
Unanswered
xugaoqi1993 asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 1 comment 9 replies
-
I'm having the same issue too. Did you solve it?
-
Hello:
I am training my model with PyTorch Lightning. If I set the logger to False, there are no issues. However, if I use the logger, training freezes and no error is reported. Here are the screenshots from when the problem occurs. GPU memory usage sits at about 1700 MB, compared with about 23000 MB during a normal training run. It seems the training process gets stuck while the data is being transferred to GPU memory.
Is there any solution to this problem?
Thank you
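A generic first step for a silent hang like this, independent of Lightning, is Python's standard-library faulthandler, which can dump every thread's stack trace after a timeout so you can see where the process is stuck (for example in logger I/O or a DataLoader worker). This is only a debugging sketch, not a fix; the 60-second timeout mentioned in the comments is an arbitrary choice.

```python
# Sketch: use faulthandler to find where a frozen process is stuck.
import faulthandler
import tempfile

# In a real training script, two lines near the top are enough:
#   import faulthandler
#   faulthandler.dump_traceback_later(60, exit=False)
# If the process makes no progress for 60 seconds, a traceback of
# every thread is printed to stderr. Below, dump_traceback() is
# called directly just to show the kind of output you get.
with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f, all_threads=True)
    f.seek(0)
    dump = f.read()

# The dump names each thread and lists its current frame stack,
# starting with the current thread.
print(dump.splitlines()[0])
```

Reading the dumped stacks usually narrows the freeze down to a specific call (e.g. a blocking write in the logger or a stalled worker process), which is far more actionable than a silent hang.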