Do we have to delete the PiSSA adapter after save_pissa_as_lora #1860
Comments
Okay, so if I understand correctly, you would like to skip the line that deletes the PiSSA adapter. Right now, the only way to really continue with this adapter is to reload the model, but I think we could make the change and, if users want, they can delete the adapter themselves. Pinging @tokenizer-decode since they had some thoughts around this topic as well.
Sure, a typical case is to evaluate the model after we save and convert the PiSSA adapter. Regarding the issue of maintaining two adapters in memory, should we delete the initial adapter instead?
I think the simplest approach that would work for you is just to comment out that line.
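The trade-off being discussed can be sketched with a toy model (this is not PEFT code; `ToyPeftModel`, `save_and_convert`, and the weight strings are all hypothetical): with the current behavior, saving with conversion deletes the adapter, so any later evaluation fails; with the proposed behavior, deletion is left to the user.

```python
# Toy sketch of the flow under discussion. Deleting the adapter on save
# (current behavior in the issue) breaks any subsequent use of the model;
# keeping it (the proposed fix) lets the user evaluate after saving.
class ToyPeftModel:
    def __init__(self):
        # "default" holds the fine-tuned PiSSA weights
        self.adapters = {"default": "pissa-weights"}

    def save_and_convert(self, delete_after_save=False):
        """Save the adapter converted to LoRA form; optionally delete it."""
        saved = f"lora({self.adapters['default']})"
        if delete_after_save:  # current behavior described in the issue
            del self.adapters["default"]
        return saved

    def evaluate(self):
        # Evaluation needs the adapter to still be loaded
        if "default" not in self.adapters:
            raise RuntimeError("adapter was deleted; reload the model first")
        return "eval-ok"

model = ToyPeftModel()
artifact = model.save_and_convert(delete_after_save=False)
print(artifact)          # lora(pissa-weights)
print(model.evaluate())  # eval-ok: saving no longer blocks further use
```

With `delete_after_save=True`, the `evaluate` call would raise instead, which is exactly the "reload the model" pain point the thread starts from.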
If you want to use all of your memory, sure, you can do that. I did not try that, to be honest. Regarding @BenjaminBossan's comment:
Sure. But the only way for the user to talk to it is … Also, if you are just experimenting, we'd be happy if you try out OLoRA. It's similar to PiSSA, and that's why you see a unified approach there. I don't know how it would work out for your case, by the way. I don't have a comprehensive comparison between these methods. @hiyouga
You could first evaluate and then save the model, but I would also prefer first saving and then evaluating, just in case the process crashes during evaluation.
True. Honestly, I think I'd prefer not deleting any adapter and leaving this up to the user to decide. It does require more memory if the adapter is not deleted, but it doesn't change peak memory.
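The peak-memory point above can be illustrated with a toy accounting sketch (the numbers and function are purely illustrative, not PEFT code): during conversion, the fine-tuned adapter and the freshly loaded initial adapter coexist anyway, so deleting one afterwards only lowers steady-state usage, never the peak.

```python
# Toy memory accounting, assuming both adapters must coexist during
# conversion. Sizes are arbitrary illustrative units, not real numbers.
FINETUNED, INITIAL = 100, 100

def run(delete_initial_after):
    usage, peak = 0, 0
    usage += FINETUNED; peak = max(peak, usage)   # adapter from fine-tuning
    usage += INITIAL;   peak = max(peak, usage)   # initial adapter loaded for conversion
    if delete_initial_after:
        usage -= INITIAL                          # freed after saving
    return usage, peak

kept = run(delete_initial_after=False)
freed = run(delete_initial_after=True)
print(kept, freed)  # (200, 200) (100, 200): same peak, different steady state
```

Both runs hit the same peak of 200 units; keeping the adapter only costs memory after the save completes, which matches the argument for leaving the decision to the user.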
Yes, I agree that we should not add more options there.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread.
Not stale, I hope that I can work on this soon.
Resolves huggingface#1860

As discussed in that issue, it's not user friendly to delete the default adapter of a PiSSA/OLoRA model after calling save_pretrained with weight conversion. Instead, it is much more intuitive to delete the initial adapter, since it is loaded inside the method and not by the user, so it's really an implementation detail. Apart from this, I made the following related changes:

- Put everything in a try ... finally block to ensure that the initial adapter does not hang around if there is an error (thus not hogging memory).
- Renamed initial_adapter to initial_adapter_name, to make it clear that this is the name and not the adapter itself.
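The try ... finally cleanup described above can be sketched as follows. This is a simplified stand-in, not the actual PEFT source: `save_with_conversion`, the adapter dict, and the string "weights" are all hypothetical, but the control flow mirrors the PR's guarantee that the temporary initial adapter is removed even if saving raises.

```python
def save_with_conversion(adapters, initial_weights, save_fn):
    """Load a temporary initial adapter, convert, save, and always clean up.

    `adapters` maps adapter_name -> weights. The temporary initial adapter
    is an implementation detail, so it is deleted in `finally`, mirroring
    the PR's cleanup change (and its rename to `initial_adapter_name`).
    """
    initial_adapter_name = "initial"
    adapters[initial_adapter_name] = initial_weights
    try:
        converted = f"convert({adapters['default']}, {adapters[initial_adapter_name]})"
        save_fn(converted)
        return converted
    finally:
        # Guaranteed cleanup: no stale adapter hogging memory on error.
        adapters.pop(initial_adapter_name, None)

store = {"default": "ft"}
out = save_with_conversion(store, "init", save_fn=lambda s: None)
print(out)                 # convert(ft, init)
print("initial" in store)  # False: temporary adapter removed
```

The user-visible "default" adapter is untouched, which is exactly the behavior change the PR makes: the implementation-detail adapter is the one that gets deleted.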
I created a PR to fix this: #1933. Hopefully I can make a release soon, and I thought it would be good to get this in before that, as technically, it's a breaking change.
System Info
peft v0.11.1
torch 2.3.0
linux
Who can help?
@BenjaminBossan
Information

Tasks

- examples folder

Reproduction
I want to use the pissa adapter for other tasks after saving the model, but I cannot continue using the fine-tuned model.
peft/src/peft/peft_model.py
Lines 253 to 276 in 076561b
Expected behavior
The PiSSA adapter can remain for further use.