[.Net][Feature Request]: Ollama API support #2319
Oops! I just noticed this after submitting Pull Request #2512. I basically added support for Ollama via its OpenAI-compatible API.
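For context, here is a minimal sketch of what "Ollama via its OpenAI-compatible API" means in practice (illustrative, not the PR's actual code): Ollama exposes an OpenAI-style `/v1/chat/completions` endpoint, so a standard OpenAI chat payload sent over plain HTTP works against a local server. It assumes Ollama is running on its default port 11434 with a model such as `llama3` already pulled.

```csharp
// Sketch only: call Ollama's OpenAI-compatible chat endpoint with raw HTTP.
// Assumes a local Ollama server on the default port with "llama3" pulled.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class OllamaOpenAICompatDemo
{
    static async Task Main()
    {
        using var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

        // Standard OpenAI chat-completion body; only the model name is Ollama-specific.
        var body = """
        {
          "model": "llama3",
          "messages": [ { "role": "user", "content": "Hello!" } ]
        }
        """;

        var response = await http.PostAsync(
            "/v1/chat/completions",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // The reply follows the OpenAI shape: choices[0].message.content.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```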
@mikelor Thanks for the PR/contributing, it's very important to us. The ollama backend is not 100% identical to the openai chat completion scheme, so we still want a fully capable ollama client in `AutoGen.Ollama`. Maybe you can consider modifying your PR to add support for consuming a third-party openai backend instead?
@LittleLittleCloud, I think modifying the PR to support an OpenAIClientAgent is the best route for Ollama. The Mistral client is used for consuming the Mistral model, whereas Ollama is a local LLM server capable of serving multiple models (Llama, Phi, and others, including Mistral). Therefore, leveraging the OpenAIClientAgent seems to make more sense. That's what I was trying to do with the Pull Request, but given your suggestion, I'll revisit the current implementation. Thoughts?
@mikelor The OpenAI chat completion API support in ollama is experimental and not 100% compatible; please check the following link for more information: https://github.com/ollama/ollama/blob/main/docs/openai.md#openai-compatibility. And I feel it would be better for AutoGen.Ollama to have 100% support for the ollama backend, which can't be achieved by leveraging `OpenAIClientAgent` and essentially means we need an ollama client. What do you think?
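To make the scheme mismatch concrete, here is the same request against Ollama's native `/api/chat` endpoint (a sketch under the same local-server assumptions as above). The native API streams newline-delimited JSON by default (`"stream": false` disables that) and returns a top-level `message` object plus Ollama-specific fields such as `eval_count`, rather than OpenAI's `choices` array, which is why a dedicated client can cover it fully while an OpenAI-shaped client cannot.

```csharp
// Sketch only: Ollama's native chat endpoint, whose request/response shape
// differs from the OpenAI chat-completion scheme.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class OllamaNativeChatDemo
{
    static async Task Main()
    {
        using var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

        var body = """
        {
          "model": "llama3",
          "messages": [ { "role": "user", "content": "Hello!" } ],
          "stream": false
        }
        """;

        var response = await http.PostAsync(
            "/api/chat",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // Native response shape: { "model": ..., "message": { "role": "assistant",
        // "content": "..." }, "done": true, ... } with no "choices" array.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```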
I would agree. I'll take another look at your comments above; maybe I misunderstood, but that's easy to do given the medium of interaction. I tried to follow the pattern set forth in the LMStudio implementation because Ollama is closer to that (it supports hosting multiple models). Also, given the experimental nature of OpenAI support in Ollama, it was more important to get something working and then iterate as support grows. I'm going to take some time to review autogen's agent/plugin architecture before submitting any further changes. Given the time available, it will take a week or so. Maybe I'll ask some questions on the Discord as well, in the #dotnet channel I'm guessing.
Sounds good. Your PR raises a very good point, which is how to consume a third-party openai endpoint in AutoGen.

Update on 2024/05/10: for those who want to connect to an openai compatible API using …
Is your feature request related to a problem? Please describe.
No response
Describe the solution you'd like
Add an `AutoGen.Ollama` package for Ollama API support.

Tasks

- [ ] Add `AutoGen.Ollama` package
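As a rough illustration of what the dedicated client behind an `AutoGen.Ollama` package could look like, here is a minimal sketch. The type and member names (`OllamaClient`, `ChatAsync`, and the record types) are hypothetical, chosen for this example only; the wire format (Ollama's native `/api/chat` endpoint and its `message`/`done` fields) comes from the Ollama docs.

```csharp
// Hypothetical minimal Ollama client; names are illustrative, not the
// actual AutoGen.Ollama API.
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json.Serialization;
using System.Threading.Tasks;

public record OllamaMessage(
    [property: JsonPropertyName("role")] string Role,
    [property: JsonPropertyName("content")] string Content);

public record OllamaChatRequest(
    [property: JsonPropertyName("model")] string Model,
    [property: JsonPropertyName("messages")] OllamaMessage[] Messages,
    [property: JsonPropertyName("stream")] bool Stream = false);

public record OllamaChatResponse(
    [property: JsonPropertyName("message")] OllamaMessage Message,
    [property: JsonPropertyName("done")] bool Done);

public sealed class OllamaClient : IDisposable
{
    private readonly HttpClient _http;

    public OllamaClient(Uri baseAddress) =>
        _http = new HttpClient { BaseAddress = baseAddress };

    // Sends one non-streaming chat turn to Ollama's native /api/chat endpoint.
    public async Task<OllamaChatResponse?> ChatAsync(OllamaChatRequest request)
    {
        var response = await _http.PostAsJsonAsync("/api/chat", request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadFromJsonAsync<OllamaChatResponse>();
    }

    public void Dispose() => _http.Dispose();
}
```

Usage would then be along the lines of `new OllamaClient(new Uri("http://localhost:11434"))` followed by `ChatAsync` with a model name and message list; streaming support and the remaining Ollama endpoints (generate, embeddings, model management) would layer on top of the same pattern.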