
Support a new LLM Provider #12

Open
sestinj opened this issue Sep 4, 2023 · 3 comments
Labels
enhancement New feature or request good first issue Good for newcomers

Comments

@sestinj (Contributor)

sestinj commented Sep 4, 2023

Continue supports many different LLM providers by subclassing the BaseLLM class. If you know of an LLM provider that we don't support, adding it can be almost as simple as writing a single method. See CONTRIBUTING.md for a full walkthrough on adding a provider.
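As a rough illustration of the pattern the issue describes, here is a minimal sketch of subclassing a base LLM class and overriding a single method. The names `BaseLLM`, `stream_complete`, and `MyProviderLLM` are illustrative stand-ins, not Continue's actual API; see CONTRIBUTING.md for the real interface.

```python
# Hypothetical sketch only: class and method names are illustrative,
# not Continue's actual API surface.
from typing import Iterator


class BaseLLM:
    """Minimal stand-in for the base class described in the issue."""

    def __init__(self, model: str) -> None:
        self.model = model

    def stream_complete(self, prompt: str) -> Iterator[str]:
        raise NotImplementedError


class MyProviderLLM(BaseLLM):
    """Adding a provider can be roughly as simple as one override."""

    def stream_complete(self, prompt: str) -> Iterator[str]:
        # A real implementation would call the provider's HTTP API here
        # and yield tokens as they arrive.
        for token in ["Hello", ", ", "world"]:
            yield token


llm = MyProviderLLM(model="my-model")
print("".join(llm.stream_complete("Say hi")))  # → Hello, world
```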

Some providers we don't yet support:

@sestinj sestinj converted this from a draft issue Sep 4, 2023
@sestinj sestinj added the enhancement New feature or request label Sep 4, 2023
@sestinj sestinj moved this from Plugins to Good First Issues (Code) in Continue Contribution Ideas Sep 4, 2023
@KingMob

KingMob commented Nov 3, 2023

It would be great to support gpt4all, since it seems to have the best local model setup these days.

@TyDunn TyDunn added the good first issue Good for newcomers label Jan 3, 2024
@bjornjorgensen

What about adding https://github.com/BerriAI/litellm?

@yieme

yieme commented Nov 24, 2024

What about allowing the OPENAI_BASE_URL environment variable to be used instead of api.openai.com...?

It seems like pull request continuedev/continue#691 would have addressed this; however, it was closed without explanation.

Alternatively, support baseUrl when the provider is openai.

I see that commit continuedev/continue@cb0c815 is related, but I'm unsure how to use it or when it will be released.

In config.json, apiBase appears to be used, e.g.:

{
  "models": [
    {
      "title": "Qwen2.5 Coder32B mlxQ4",
      "provider": "openai",
      "model": "mlx-community/Qwen2.5-Coder-14B-Instruct-4bit",
      "apiBase": "http://127.0.0.1:8080/v1",
      "apiKey": "none",
      "settings": {
        "temperature": 0.7
      }
    }
  ]
}
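To make the role of `apiBase` concrete, here is a small sketch that parses a config like the one above and shows which URL an OpenAI-compatible client would hit. The `chat_url` helper is hypothetical; the `/chat/completions` path is the standard OpenAI-compatible chat endpoint.

```python
# Sketch: how an "apiBase" like the one above maps onto the
# OpenAI-compatible chat completions endpoint. chat_url is a
# hypothetical helper, not part of Continue.
import json

config = json.loads("""
{
  "models": [
    {
      "title": "Qwen2.5 Coder32B mlxQ4",
      "provider": "openai",
      "model": "mlx-community/Qwen2.5-Coder-14B-Instruct-4bit",
      "apiBase": "http://127.0.0.1:8080/v1",
      "apiKey": "none"
    }
  ]
}
""")


def chat_url(model_cfg: dict) -> str:
    """Join apiBase with the OpenAI-compatible chat completions path."""
    return model_cfg["apiBase"].rstrip("/") + "/chat/completions"


print(chat_url(config["models"][0]))
# → http://127.0.0.1:8080/v1/chat/completions
```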

Projects
Status: Good First Issues (Code)

5 participants