Hello, I ran into the following problem while running this code. The error is shown in the screenshot above; the model-loading code is as follows:
```python
from transformers import AutoModel
from peft import PeftModel

# Load the base model in 8-bit and attach the LoRA adapter.
model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b",
    trust_remote_code=True,
    load_in_8bit=True,
    device_map="auto",
    revision="v0.1.0",
)
peft_model = PeftModel.from_pretrained(model, "Suffoquer/LuXun-lora")
```
I tried both the v0.1.0 and v1.1.0 revisions of THUDM/chatglm-6b and got the error either way. Where might this problem be coming from?
Answering my own question: the fix is to pin peft to an older release:

```
pip install peft==0.2.0
```
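A quick way to confirm the downgrade actually took effect in the environment you are running from (just a minimal check; `peft.__version__` is the package's standard version attribute):

```python
import peft

# Confirm the pinned release is the one actually imported.
print(peft.__version__)  # expected: 0.2.0
```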
Hi, I'd like to ask: is inference with this script very slow for you? Mine keeps getting stuck at the generate function. Surely inference on just 10 sentences shouldn't take this long?
It shouldn't be. I'd suggest first trying a single simple input in interactive mode; it may be an issue with the LoRA version.
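A rough sketch of that sanity check, assuming the `peft_model` from the snippet above loaded without error (the tokenizer load and the 32-token cap are illustrative choices here, not part of the original code; `peft_model.generate` and `.device` are forwarded to the underlying model by PEFT):

```python
import time
import torch
from transformers import AutoTokenizer

# The tokenizer must match the model revision.
tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/chatglm-6b", trust_remote_code=True, revision="v0.1.0"
)

# Time one short generation to see whether generate() itself is the bottleneck.
inputs = tokenizer("你好", return_tensors="pt").to(peft_model.device)
start = time.time()
with torch.no_grad():
    output_ids = peft_model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
print(f"elapsed: {time.time() - start:.1f}s")
```

If a single short input like this is fast, the slowdown on 10 sentences more likely comes from batching or generation settings than from the model itself.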