Invoke local.ai API via LangChain and/or OpenAI Python API? #67
-
I think the title covers the basic question. I've spent quite a bit of time experimenting with LangChain and OpenAI APIs via Python. After getting fed up with recurring OpenAI timeouts, I've downloaded local.ai and have a model running there. How do I tie that server back into my existing Python code?
Replies: 3 comments 8 replies
-
@Regular-Baf people NEED the wiki hahahah. We have a rolling issue open since last week: #30. @mstanley103 you can take a look at that issue for pointers on how to implement the backend. You can probably just use the OpenAI API and replace the host name/OpenAI endpoint with local.ai's localhost:8000/completions. I will check in with @Regular-Baf (he's the current assignee of this ticket atm); we'll crank it out by tomorrow. @mstanley103 if you would like to contribute to the wiki, let me know. We definitely need some docs on how to work with it using Python.
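To illustrate the suggestion above, here is a minimal sketch of calling an OpenAI-compatible `/completions` endpoint at `localhost:8000` using only the Python standard library. This assumes local.ai accepts and returns OpenAI-style JSON (`prompt` in, `choices[0].text` out); the function names and default parameters here are illustrative, not part of local.ai's documented API.

```python
import json
import urllib.request

# Assumed local.ai endpoint from the reply above -- adjust to your setup.
LOCAL_AI_URL = "http://localhost:8000/completions"


def build_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style completion payload (field names assumed)."""
    return {"prompt": prompt, "max_tokens": max_tokens}


def complete(prompt: str) -> str:
    """POST the prompt to the local.ai server and return the completion text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_AI_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible servers put the generated text under choices[0].text.
    return body["choices"][0]["text"]
```

If you are already using the `openai` Python package (pre-1.0 versions), you may instead be able to keep your existing code and just override `openai.api_base` to point at the local server, as the reply suggests; confirm the exact path local.ai expects before relying on it.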
-
Also see: https://github.com/louisgv/local.ai/wiki/Using-the-local.ai-API We should add your Python example there once it's confirmed working :-?
-
See the new wiki page at API-with-Python.