Support a new LLM Provider #12
Comments
Would be great to support gpt4all, since they seem to have the best local model setup these days.
What about adding https://github.com/BerriAI/litellm?
What about allowing the OPENAI_BASE_URL environment variable to be used instead of api.openai.com? It seems like pull request continuedev/continue#691 would have addressed this; however, it was closed without explanation. Alternatively, support baseUrl when the provider is openai. Commit continuedev/continue@cb0c815 looks related, but it's unclear how to use it or when it will be released. In config.json, apiBase appears to be used, for example:

```json
{
  "models": [
    {
      "title": "Qwen2.5 Coder32B mlxQ4",
      "provider": "openai",
      "model": "mlx-community/Qwen2.5-Coder-14B-Instruct-4bit",
      "apiBase": "http://127.0.0.1:8080/v1",
      "apiKey": "none",
      "settings": {
        "temperature": 0.7
      }
    }
  ]
}
```
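For context, the base-URL override this comment asks for mirrors a pattern the official openai SDKs already support. Below is a minimal TypeScript sketch of that pattern using the openai npm package; the local server URL, the "none" API key, and the model name are taken from the config.json example above, and none of this is Continue's own implementation.

```typescript
// Sketch of the OPENAI_BASE_URL override pattern the comment describes,
// using the openai npm package (not Continue's internal code).
import OpenAI from "openai";

const client = new OpenAI({
  // Fall back to the public endpoint when OPENAI_BASE_URL is unset.
  baseURL: process.env.OPENAI_BASE_URL ?? "https://api.openai.com/v1",
  // Local OpenAI-compatible servers typically ignore the key; "none"
  // matches the config.json example above.
  apiKey: process.env.OPENAI_API_KEY ?? "none",
});

const completion = await client.chat.completions.create({
  model: "mlx-community/Qwen2.5-Coder-14B-Instruct-4bit",
  messages: [{ role: "user", content: "Hello" }],
});
console.log(completion.choices[0].message.content);
```

Running with `OPENAI_BASE_URL=http://127.0.0.1:8080/v1` would then route all requests to the local server instead of api.openai.com.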
Continue supports many different LLM providers by subclassing the BaseLLM class. If you know of an LLM provider that we don't support, adding it can be almost as simple as writing a single method; see CONTRIBUTING.md for a full walkthrough on adding a provider. Some providers we don't yet support:
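As a rough illustration of the subclassing approach described above, a new provider might look like the sketch below. The import paths, the providerName field, the _streamComplete hook, and the apiBase/apiKey fields are all assumptions inferred from this issue's description, not Continue's confirmed API; CONTRIBUTING.md is the authoritative walkthrough.

```typescript
// Hypothetical provider sketch -- every Continue-internal name here is
// an assumption; consult CONTRIBUTING.md for the real interface.
import { BaseLLM } from "../index"; // assumed location of BaseLLM
import { CompletionOptions } from "../../index"; // assumed option type

class ExampleProvider extends BaseLLM {
  static providerName = "example"; // assumed registration hook

  // The "single method" the issue body refers to: turn a prompt into a
  // stream of completion chunks from an OpenAI-compatible endpoint.
  protected async *_streamComplete(
    prompt: string,
    options: CompletionOptions,
  ): AsyncGenerator<string> {
    const resp = await fetch(`${this.apiBase}/completions`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({
        model: options.model,
        prompt,
        temperature: options.temperature,
        stream: false, // non-streaming fallback for brevity
      }),
    });
    const data = await resp.json();
    yield data.choices[0].text;
  }
}

export default ExampleProvider;
```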