Traceback (most recent call last):
  File "/mnt/workspace/LLaMA-Efficient-Tuning/src/train_sft.py", line 97, in <module>
    main()
  File "/mnt/workspace/LLaMA-Efficient-Tuning/src/train_sft.py", line 69, in main
    train_result = trainer.train()
  File "/home/pai/envs/llama_etuning/lib/python3.10/site-packages/transformers/trainer.py", line 1645, in train
    return inner_training_loop(
  File "/home/pai/envs/llama_etuning/lib/python3.10/site-packages/transformers/trainer.py", line 1987, in _inner_training_loop
    self.accelerator.clip_grad_norm_(
  File "/home/pai/envs/llama_etuning/lib/python3.10/site-packages/accelerate/accelerator.py", line 1893, in clip_grad_norm_
    self.unscale_gradients()
  File "/home/pai/envs/llama_etuning/lib/python3.10/site-packages/accelerate/accelerator.py", line 1856, in unscale_gradients
    self.scaler.unscale_(opt)
  File "/home/pai/envs/llama_etuning/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py", line 275, in unscale_
    raise RuntimeError("unscale_() has already been called on this optimizer since the last update().")
RuntimeError: unscale_() has already been called on this optimizer since the last update().
Checklist
I have provided all relevant and necessary information above.
I have chosen a suitable title for this issue.
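For context, torch.cuda.amp.GradScaler allows unscale_() to be called at most once per optimizer between two update() calls; a second call raises exactly the RuntimeError shown above. Below is a minimal sketch (assuming a CUDA device; the tiny linear model and SGD optimizer are purely illustrative and not from this repository) that reproduces the message outside of the Trainer:

import torch

# Minimal reproduction sketch: calling GradScaler.unscale_() twice on the
# same optimizer before scaler.update() raises the RuntimeError from the log.
model = torch.nn.Linear(8, 8).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

with torch.cuda.amp.autocast():
    loss = model(torch.randn(4, 8, device="cuda")).sum()

scaler.scale(loss).backward()
scaler.unscale_(optimizer)  # first call: gradients are unscaled in place
scaler.unscale_(optimizer)  # second call before update() -> RuntimeError:
                            # "unscale_() has already been called on this
                            #  optimizer since the last update()."

In the tracebacks here, the second call comes from Accelerate's clip_grad_norm_() -> unscale_gradients() -> scaler.unscale_(opt), which means something earlier in the same step had already unscaled that optimizer.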
Traceback (most recent call last):
  File "/mnt/workspace/LLaMA-Efficient-Tuning/src/train_pt.py", line 81, in <module>
    main()
  File "/mnt/workspace/LLaMA-Efficient-Tuning/src/train_pt.py", line 53, in main
    train_result = trainer.train()
  File "/root/anaconda3/envs/baichuan-lora/lib/python3.10/site-packages/transformers/trainer.py", line 1645, in train
    return inner_training_loop(
  File "/root/anaconda3/envs/baichuan-lora/lib/python3.10/site-packages/transformers/trainer.py", line 1987, in _inner_training_loop
    self.accelerator.clip_grad_norm_(
  File "/root/anaconda3/envs/baichuan-lora/lib/python3.10/site-packages/accelerate/accelerator.py", line 1893, in clip_grad_norm_
    self.unscale_gradients()
  File "/root/anaconda3/envs/baichuan-lora/lib/python3.10/site-packages/accelerate/accelerator.py", line 1856, in unscale_gradients
    self.scaler.unscale_(opt)
  File "/root/anaconda3/envs/baichuan-lora/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py", line 275, in unscale_
    raise RuntimeError("unscale_() has already been called on this optimizer since the last update().")
RuntimeError: unscale_() has already been called on this optimizer since the last update().
3%|███▎ | 1/30 [00:07<03:27, 7.17s/it]
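For reference, the standard AMP pattern clips gradients right after exactly one unscale_() per optimizer step, and scaler.step() then skips its own internal unscale. The sketch below reuses model, optimizer, scaler, and loss from the sketch above; it is a generic illustration of that contract, not the project's training loop. (A transformers/accelerate version mismatch is one possible, but unconfirmed, cause of the double call seen in these logs.)

# Generic AMP clipping pattern (sketch): unscale_ exactly once per step,
# clip the now-unscaled gradients, then step()/update().
scaler.scale(loss).backward()
scaler.unscale_(optimizer)                               # exactly once per step
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # clip unscaled grads
scaler.step(optimizer)                                   # does not unscale again
scaler.update()
optimizer.zero_grad(set_to_none=True)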