How to load local code for model with trust_remote_code=True?
#22260
Comments
Hi @LZY-the-boys, thanks for raising this issue. If I've understood correctly, the question is how to load a customized version of the model in the 'THUDM/glm-large-chinese' repo. When running `model = AutoModelForSeq2SeqLM.from_pretrained('THUDM/glm-large-chinese', trust_remote_code=True)`, the model being downloaded is the one defined in THUDM/glm-large-chinese. If you wish to load a local model, then this model should be saved out either to the Hub or locally, and the path to its location passed to `from_pretrained`.
There's more information about using models with custom code here.
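A minimal sketch of that workflow, assuming a recent huggingface_hub with `snapshot_download(..., local_dir=...)`; the local folder name is just a placeholder:

```python
from huggingface_hub import snapshot_download
from transformers import AutoModelForSeq2SeqLM

# Download the full repo (weights, config, and the custom modeling *.py files)
# into a local folder you control; "./glm-large-chinese" is a placeholder path.
local_dir = snapshot_download(
    "THUDM/glm-large-chinese", local_dir="./glm-large-chinese"
)

# Edit the modeling/configuration *.py files in that folder as needed, then
# load from the local path; the custom code is read from disk rather than
# being resolved from the Hub repo again.
model = AutoModelForSeq2SeqLM.from_pretrained(local_dir, trust_remote_code=True)
```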
OK, the …
@amyeroberts hi, thanks for your suggestions. I have …
@dragen1860 Could you detail the code you're running to reproduce this problem and the output indicating that downloading is happening? It's possible that some files are downloaded from the hub, e.g. the config, depending on what's being called. However, on a second call no downloads should happen, provided there are no upstream changes. Could you try running with …
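One way to confirm that nothing is being re-fetched on subsequent calls is to force offline loading; a minimal sketch, assuming the files are already in the local cache or a local directory:

```python
import os

# Force offline mode before importing transformers so any attempted
# download raises an error instead of silently fetching from the Hub.
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModelForSeq2SeqLM

# local_files_only=True likewise restricts loading to files already on disk.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "THUDM/glm-large-chinese",
    trust_remote_code=True,
    local_files_only=True,
)
```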
Feature request
When I use a model with trust_remote_code=True, I cannot directly change the remote code, because every time I load the model it resolves the code from the remote hub. How can I avoid that? Can I customize this code locally? Example:
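As an illustration of the kind of call being described (the repo name follows the thread above):

```python
from transformers import AutoModelForSeq2SeqLM

# Each call like this resolves the repo's custom modeling code from the Hub
# (or its cache); edits made to a separate local copy of that code are not used.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "THUDM/glm-large-chinese",
    trust_remote_code=True,
)
```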
Motivation
The remote code does not always fit user needs, so users should have a way to change it.
Your contribution
If there is no other way, I can submit a PR.