
use default HF HUB token when checking for base model info #428

Merged — 4 commits, Apr 19, 2024

Conversation

noyoshi (Collaborator) commented on Apr 19, 2024

  • This is important now that the models are gated: we need to use the token that started the model, since that token should have permission to read the base model repo.
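
The fallback described above can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code: `resolve_hub_token` is a hypothetical helper, and the per-request token plumbing is assumed. `HUGGING_FACE_HUB_TOKEN` is the standard environment variable read by `huggingface_hub`.

```python
import os
from typing import Optional


def resolve_hub_token(request_token: Optional[str] = None) -> Optional[str]:
    """Pick the token used to query base model info on the HF Hub.

    Prefer the token associated with the request that started the model
    (it should be able to read the gated base repo); otherwise fall back
    to the server's default Hub token from the environment.
    """
    # request_token: hypothetical per-request token; may be None.
    return request_token or os.environ.get("HUGGING_FACE_HUB_TOKEN")
```

The resolved token would then be passed to something like `huggingface_hub.HfApi().model_info(repo_id, token=...)` when checking the base model, so gated repos no longer fail with a credentials error when no per-request token is available.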

@tgaddair tgaddair merged commit 8060fe3 into main Apr 19, 2024
1 check passed
@tgaddair tgaddair deleted the fix-adapter-not-loading branch April 19, 2024 21:51
bi1101 commented on Apr 20, 2024

Hi, does this PR affect the Predibase API? I've been getting errors when running inference on fine-tuned models through the OpenAI-compatible API since roughly 12 hours ago:

{
    "error": "Request failed during generation: Server error: Unable to locate credentials",
    "error_type": "generation"
}
