Issue #1
This is a package version problem. Upgrade the packages:

```shell
pip install -q -U bitsandbytes
pip install -q -U git+https://github.com/huggingface/transformers.git
pip install -q -U git+https://github.com/huggingface/peft.git
pip install -q -U git+https://github.com/huggingface/accelerate.git
```

Verifying an issue previously reported against qlora ("lora weights are not saved correctly"): comment out the following code
```python
# if args.bits < 16:
#     old_state_dict = model.state_dict
#     model.state_dict = (
#         lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())
#     ).__get__(model, type(model))
```
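For context, the lines to be commented out monkey-patch `model.state_dict` so that saving returns only the PEFT (LoRA) weights; the `.__get__(model, type(model))` call binds a plain function to the instance as a method. A minimal stdlib sketch of that binding trick (the `Model` class and the key filter here are hypothetical stand-ins, not the real transformers/peft objects):

```python
class Model:
    def state_dict(self):
        # Full checkpoint: base weights plus LoRA weights.
        return {"base.weight": 1, "lora.weight": 2}

m = Model()
old_state_dict = m.state_dict  # keep a reference to the original bound method

# Rebind state_dict on the instance so it returns only the LoRA entries,
# mirroring what the get_peft_model_state_dict wrapper does.
m.state_dict = (
    lambda self, *_, **__: {k: v for k, v in old_state_dict().items() if "lora" in k}
).__get__(m, type(m))

print(m.state_dict())  # {'lora.weight': 2}
```

Because the patched `state_dict` no longer returns the base weights, whether it interacts correctly with the save path depends on library versions, which is why the fix above is to comment it out.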
```python
model = AutoModel.from_pretrained(args["model_dir"],
                                  trust_remote_code=True,
                                  load_in_4bit=True,
                                  device_map={"": 0})
model = PeftModel.from_pretrained(model, args["save_dir"], trust_remote_code=True)
# model.cuda().eval()  <- DO NOT ADD THIS
```
@wuguangshuo Did you manage to solve it?
Has anyone else run into this error: `ValueError: paged_adamw_32bit is not a valid OptimizerNames`?