fix: ollama local and llama local #521
Conversation
This is good and was working locally; however, there are a few issues:
Which external provider? OpenAI? Where did you define it, in the .env file or in the character? I tested the different variations with OLLAMA.
I tested with Anthropic and it still initializes OLLAMA; the env is within the character file.
I updated defaultCharacter.ts with ANTHROPIC and it didn't try to download the llamalocal model. It attempts to call the Anthropic API. Do you have anything else related to llama defined in your .env? Are you on the latest code?
In the logs it tells me it has initialized.
Ollama fix
fix: ollamaModel already defined
Ollama fix
Ollama local broken and local llama issues
Relates partially to issue #443
Risks
Low risk - affects only the Llama and Ollama providers in llama.ts
Background
Eliza is supposed to work with local Ollama, but support was broken in several ways: the wrong embeddings were being loaded, and the local Llama model was being downloaded even when local Ollama was configured.
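The fix amounts to guarding the model-download path on the selected provider. A minimal sketch of that idea (a hypothetical helper, not the literal code in llama.ts; the @ai16z/eliza import path is an assumption):

```ts
import { ModelProviderName } from "@ai16z/eliza"; // import path assumed

// Sketch: only the LLAMALOCAL provider should trigger a local model
// download; OLLAMA should talk to the already-running Ollama server.
function shouldDownloadLocalModel(provider: ModelProviderName): boolean {
    return provider === ModelProviderName.LLAMALOCAL;
}
```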
The model provider can be selected in the character file by setting modelProvider to ModelProviderName.LLAMALOCAL or ModelProviderName.OLLAMA
ex: modelProvider: ModelProviderName.LLAMALOCAL
The Ollama model to use can be set in the .env file via the OLLAMA_MODEL= environment variable (see the sketch below).
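For example, a character definition might look roughly like this (a sketch; the @ai16z/eliza import path is an assumption about this repo's package layout):

```ts
import { ModelProviderName, type Character } from "@ai16z/eliza"; // import path assumed

// Sketch: select the local Ollama provider in a character file.
// Only the relevant field is shown; a real character has many more fields.
export const character: Partial<Character> = {
    modelProvider: ModelProviderName.OLLAMA,
};
```

With OLLAMA_MODEL=llama3 in .env (the model name here is just an example), the Ollama provider will use that model by default.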
What does this PR do?
Fixes model selection for local Ollama and local Llama: loads the correct embeddings and stops downloading the local Llama model when Ollama is the configured provider.
What kind of change is this?
Bug fixes and improvements
Documentation changes needed?
No, but we may want to document VERBOSE=true, which enables more detailed logging for debugging.
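If we document it, the behavior is roughly this (a sketch, assuming the flag is read directly from process.env):

```ts
// Sketch: gate extra debug output on VERBOSE=true in the environment.
const verbose = process.env.VERBOSE === "true";

function debugLog(...args: unknown[]): void {
    if (verbose) {
        console.debug("[eliza]", ...args);
    }
}
```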
Testing
Tested with each of the providers: LLAMALOCAL and OLLAMA.
Where should a reviewer start?
The provider initialization and model selection logic in llama.ts.
Detailed testing steps
For local Ollama, you will need to install the Ollama software and pull the models you will be using.
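Before testing, you can confirm the Ollama server is running and the configured model has been pulled. Ollama exposes a local HTTP API for this; the sketch below assumes the default port 11434 and reads OLLAMA_MODEL, falling back to an example model name:

```ts
// Sketch: verify the local Ollama server is reachable and that the model
// named in OLLAMA_MODEL has been pulled, via Ollama's /api/tags endpoint.
async function checkOllama(model = process.env.OLLAMA_MODEL ?? "llama3") {
    const res = await fetch("http://localhost:11434/api/tags");
    if (!res.ok) throw new Error(`Ollama server not reachable: ${res.status}`);
    const { models } = (await res.json()) as { models: { name: string }[] };
    const found = models.some((m) => m.name.startsWith(model));
    console.log(found ? `Model ${model} is available.` : `Missing model; run: ollama pull ${model}`);
}

checkOllama().catch(console.error);
```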