Currently, local development makes real requests to LLM APIs. The goal of this issue is to find a solution that avoids calling external LLM APIs during local development.
* Remove unused RequestLiteLLM.completion() method
* Add LocalLLM class
* Install ollama
* Add makefile commands to run llama3.2 locally
* Add test for local LLM connection
* Add environment variable for LOCAL_LLM connection
* Patch LOCAL_LLM settings variable
* Move LOCAL_LLM host to env variable
* Create test specific to check if local_llm class is redirecting properly
* Trigger run-tests
* Trigger run-tests
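The commits above mention Makefile commands to run llama3.2 locally but the targets themselves are not shown in this thread. A minimal sketch of what they might look like, assuming Ollama is the runtime (the model name comes from the commit messages; the target names and install step are hypothetical):

```make
# Hypothetical Makefile targets; `ollama` is the local runtime named in the commits.
.PHONY: install-ollama run-local-llm

install-ollama:
	curl -fsSL https://ollama.com/install.sh | sh

run-local-llm:
	ollama pull llama3.2
	ollama serve
```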
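A sketch of what the `LocalLLM` class described in the commits might look like: it redirects completion requests to a locally running Ollama server instead of a hosted LLM API, with the host taken from an environment variable as the commits describe. The `LOCAL_LLM_HOST` variable name and the method signatures are assumptions based on the commit messages, not the repository's actual code.

```python
import json
import os
import urllib.request


class LocalLLM:
    """Hypothetical local-LLM client that talks to an Ollama server."""

    def __init__(self, model="llama3.2", host=None):
        # Host is read from the LOCAL_LLM_HOST env var (per the commits),
        # falling back to Ollama's default address.
        self.model = model
        self.host = host or os.environ.get("LOCAL_LLM_HOST", "http://localhost:11434")

    @property
    def endpoint(self):
        # Ollama's chat endpoint.
        return f"{self.host}/api/chat"

    def completion(self, messages):
        # POST the chat messages to the local server; no external API call is made.
        payload = json.dumps({
            "model": self.model,
            "messages": messages,
            "stream": False,
        }).encode("utf-8")
        req = urllib.request.Request(
            self.endpoint,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["message"]["content"]


# Example usage (requires `ollama serve` running locally):
# llm = LocalLLM()
# print(llm.completion([{"role": "user", "content": "Hello"}]))
```

A test like the one mentioned in the commits can verify the redirection without a running server by checking which endpoint the class resolves from its configuration.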