When I fine-tune the pre-trained model weights on the SIIM segmentation task, the following error is reported:
```
| Name | Type | Params
------------------------------------
0 | model | Unet | 32.5 M
1 | loss | MixedLoss | 0
------------------------------------
32.5 M Trainable params
0 Non-trainable params
32.5 M Total params
Epoch 0: 0%| | 0/669 [00:01<?, ?it/s]
Traceback (most recent call last):
File "run.py", line 167, in <module>
main(cfg, args)
File "run.py", line 106, in main
trainer.fit(model, dm)
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 473, in fit
results = self.accelerator_backend.train()
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/accelerators/dp_accelerator.py", line 110, in train
results = self.train_or_test()
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 69, in train_or_test
results = self.trainer.train()
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 524, in train
self.train_loop.run_training_epoch()
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 572, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 730, in run_training_batch
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 513, in optimizer_step
using_lbfgs=is_lbfgs,
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 1261, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/core/optimizer.py", line 286, in step
self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/core/optimizer.py", line 140, in __optimizer_step
trainer.precision_connector.backend.optimizer_step(trainer, optimizer, closure)
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/plugins/native_amp.py", line 75, in optimizer_step
closure()
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 725, in train_step_and_backward_closure
self.trainer.hiddens
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 828, in training_step_and_backward
self.backward(result, optimizer, opt_idx)
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 851, in backward
result.closure_loss, optimizer, opt_idx, *args, **kwargs
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 99, in backward
closure_loss, optimizer, opt_idx, *args, **kwargs
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/plugins/native_amp.py", line 47, in backward
model.backward(closure_loss, optimizer, opt_idx)
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 1158, in backward
loss.backward(*args, **kwargs)
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/torch/autograd/__init__.py", line 126, in backward
grad_tensors_ = _make_grads(tensors, grad_tensors_)
File "/home/wentaochen/anaconda3/envs/gloria/lib/python3.7/site-packages/torch/autograd/__init__.py", line 50, in _make_grads
raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs
```
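From the traceback, `backward()` is being called through the DP accelerator (`dp_accelerator.py`), so my guess is that DataParallel gathers one loss value per GPU and the loss tensor is a vector rather than a scalar by the time autograd sees it. Below is a minimal sketch of the workaround I am considering; the class name `SegmentationFineTuner` is just a placeholder, and I am assuming `training_step` returns its loss in a dict under the `"loss"` key:

```python
import pytorch_lightning as pl

class SegmentationFineTuner(pl.LightningModule):  # placeholder name
    def training_step_end(self, outputs):
        # Under the "dp" accelerator, Lightning gathers the per-GPU
        # training_step outputs, so outputs["loss"] can have shape
        # [num_gpus]. Averaging reduces it to a scalar so that
        # loss.backward() can create gradients implicitly.
        outputs["loss"] = outputs["loss"].mean()
        return outputs
```

Alternatively, I suppose switching the Trainer from `accelerator='dp'` to `'ddp'` would avoid the gather entirely, since each process then computes its own scalar loss. Is either of these the intended fix?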
I hope you can give me some advice. Thank you very much!