AssertionError: No inf checks were recorded for this optimizer. #4
Comments
Facing the same error, were you able to resolve the issue?
Same error here.
Same error, possible solutions?
Same error. Deep gratitude for any ideas.
I have the same problem.
Based on the example code given, I believe most of you would have seen the same log message showing that 0 params are trainable, which is what triggers the AssertionError.
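A quick way to confirm that diagnosis is to count the trainable parameters before starting training. This is a hypothetical helper, not code from the thread; it just illustrates the "0 trainable params" condition with a toy `torch.nn.Linear` standing in for the real model:

```python
import torch

# Hypothetical helper (not from the thread): count parameters that the
# optimizer can actually update. If this returns 0, no gradients are produced,
# the GradScaler records no inf checks, and the AssertionError follows.
def count_trainable(model: torch.nn.Module) -> int:
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# A fully frozen stand-in model reproduces the symptom.
frozen = torch.nn.Linear(4, 4)
for p in frozen.parameters():
    p.requires_grad = False

print(count_trainable(frozen))                 # 0 -> will hit the assertion
print(count_trainable(torch.nn.Linear(4, 4)))  # 20 (16 weights + 4 biases)
```

Running this check right after model setup makes the failure mode obvious long before the optimizer step fails.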
I managed to find another way to load and train from an existing adapter by using [...]. What you need to do is [...].
@erjieyong not working, it starts fine-tuning LLaMA instead of Alpaca. Any other ways?
@erjieyong but I have to point out, it does resolve the problem with 0 trainable params, just not in the right way.
Do you mean replacing the base model with Alpaca? Because this command fine-tunes LLaMA @erjieyong
Finally I solved it by initializing model = get_peft_model(model, config) after model = PeftModel.from_pretrained(model, LORA_WEIGHTS, torch_dtype=torch.float16) and config = LoraConfig(...). So don't comment out the config. Worked quite well for me.
@d4nielmeyer could you make a pull request? |
@d4nielmeyer Or could you please post the code here?
Finetune.py
@d4nielmeyer Thanks! Issue solved for me.
In the parameters inside LoraConfig, I think you are setting inference_mode=True; for training it should be False. config = LoraConfig(...)
First of all, a great thank you for posting the article and YouTube video, it was very insightful!
I've tried to run the code based on your article; however, I keep facing the same assertion error. Any advice?
Note that I have been trying to run your code on Colab with the free GPU.
python version = 3.9.16
cuda version = 11.8.89
I've also noted the bug that you faced and ran the same workaround (edited for Colab) as follows:
cp /usr/local/lib/python3.9/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so /usr/local/lib/python3.9/dist-packages/bitsandbytes/libbitsandbytes_cpu.so