LangChain integration #4
Comments
Hi, I have been planning to work on this and was wondering if there is a way to run just the API server. Additionally, stop sequences seem to be an issue on the API side.
Hey! Happy to hear you want to tackle this task. Currently you can't run just the API server. This used to be possible, but now that the API and the web server are behind nginx, starting nginx without the web server makes it fail the health check and refuse to start. Hopefully it shouldn't be too hard to fix; I'll have a look at it. In the meantime you can still access the API at http://localhost:8008/api/docs. Regarding the LangChain integration, I was thinking it would also be interesting to make a custom LLM that is a wrapper calling this project directly. That custom LLM would only work inside this repo. For interfacing with other projects, you will indeed need to run the API server and make a custom LLM for that.
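To make the API-wrapper idea concrete, here is a minimal sketch of a client that POSTs a prompt to the Serge API and trims stop sequences client-side (since stop sequences were reported as an issue on the API side). The `/generate` endpoint path, request payload, and response field are assumptions for illustration, not the project's confirmed API; a real LangChain integration would put this logic inside a subclass of LangChain's base LLM class.

```python
import json
import urllib.request
from typing import List, Optional


def truncate_at_stop(text: str, stop: Optional[List[str]]) -> str:
    """Cut `text` at the earliest occurrence of any stop sequence."""
    if not stop:
        return text
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]


class SergeAPIWrapper:
    """Hypothetical client sketch for the Serge HTTP API."""

    def __init__(self, base_url: str = "http://localhost:8008/api"):
        self.base_url = base_url

    def complete(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # Assumed request schema and endpoint path; adjust to the real API.
        payload = json.dumps({"prompt": prompt}).encode()
        req = urllib.request.Request(
            f"{self.base_url}/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            text = json.loads(resp.read())["text"]  # assumed response field
        # Apply stop sequences client-side until the API handles them.
        return truncate_at_stop(text, stop)
```

The `truncate_at_stop` helper is the part worth keeping regardless of the final endpoint shape: it scans for every stop sequence and cuts at the earliest match, which is the behavior LangChain agents expect.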
Did you look at this repository, @nsarrazin?
Pretty low-hanging fruit with the wrapper we have; it would be great to create a custom LangChain LLM wrapper for llama.cpp. Then we could use it in the API and do all sorts of cool things with Serge.