
fix: ollama local and llama local #521

Merged (13 commits merged into elizaOS:main on Nov 23, 2024)

Conversation

@yodamaster726 (Contributor) commented Nov 22, 2024

Ollama local broken and local llama issues

Relates partially to issue #443

Risks

Low risk: affects only the Llama and Ollama providers (llama.ts).

Background

Eliza is supposed to work with a local Ollama instance, but this was broken in several ways: the wrong embeddings were being loaded, and the local llama model was being downloaded even when Ollama local was configured.

The model provider can be selected in the character file by setting modelProvider to ModelProviderName.LLAMALOCAL or ModelProviderName.OLLAMA,
e.g. modelProvider: ModelProviderName.LLAMALOCAL

The default Ollama model can be set in the .env file via the OLLAMA_MODEL= environment variable.
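
For illustration only (the import path and the surrounding fields are assumptions, not taken from this PR; only modelProvider and ModelProviderName come from the description above), the character file and .env might look like:

```ts
// Sketch of a character definition selecting the Ollama provider.
import { ModelProviderName } from "@ai16z/eliza"; // import path assumed

export const character = {
    name: "Eliza",
    modelProvider: ModelProviderName.OLLAMA, // or ModelProviderName.LLAMALOCAL
    // ...rest of the character definition
};
```

```
# .env (model name is an example; hermes3 is the LLM used in the testing steps below)
OLLAMA_MODEL=hermes3
```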

What does this PR do?

  • Fixes loading the correct embeddings file depending on which llama model provider is selected (see the sketch after this list)
  • Improved logging in embeddings and generation
  • Fixed a bug in the logger: it was looking for a lowercase environment variable instead of VERBOSE=true
  • Improved the download progress indicator for the local llama model and other files
  • Added a new logger method for a download progress % indicator
  • Model provider validation in AgentRuntime and additional error logging
  • Improved download and error logging in the Image provider
  • Improved the model provider handling in llama.ts
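
A minimal sketch of what provider-aware embedding selection can look like; the function name, environment variable, and file names below are illustrative assumptions rather than the exact code in this PR:

```ts
import { ModelProviderName } from "@ai16z/eliza"; // import path assumed

// Choose an embedding model based on the configured provider (illustrative sketch).
function resolveEmbeddingModel(provider: ModelProviderName): string {
    switch (provider) {
        case ModelProviderName.OLLAMA:
            // Ollama serves embeddings itself, so no local GGUF download is needed.
            return process.env.OLLAMA_EMBEDDING_MODEL ?? "mxbai-embed-large";
        case ModelProviderName.LLAMALOCAL:
            // Local llama path: a GGUF embedding model file is downloaded on demand.
            return "local-embedding-model.gguf"; // placeholder file name
        default:
            // External providers (OpenAI, Anthropic, ...) handle embeddings via their own APIs.
            throw new Error(`No local embedding model for provider ${provider}`);
    }
}
```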

What kind of change is this?

Bug fixes and improvements

Documentation changes needed?

No, but we might want to document VERBOSE=true, which enables more detailed logging for debugging purposes (example below).
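
For reference, enabling the detailed logging described above is just a matter of setting the variable in .env:

```
# .env
VERBOSE=true
```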

Testing

Tested with each of the providers: LLAMALOCAL and OLLAMA.

Where should a reviewer start?

Detailed testing steps

  • Update the model provider in defaultCharacter.ts to one of the providers above
  • pnpm build
  • pnpm start

For Ollama local, you will need to install the Ollama software and pull the models that you will be using (commands shown below):

  • mxbai-embed-large:latest for the embeddings
  • hermes3:latest for the LLM
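
With Ollama installed, these models can be pulled from the command line:

```sh
ollama pull mxbai-embed-large:latest   # embedding model
ollama pull hermes3:latest             # LLM
```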

@ponderingdemocritus changed the title from "fix ollama local and llama local" to "fix: ollama local and llama local" on Nov 22, 2024
@ponderingdemocritus (Contributor) commented:

This is good and was working locally, however there are a few issues:

  • LLAMA is still downloaded if an external provider is used. Can you fix this?

@yodamaster726 (Contributor, Author) replied:

> This is good and was working locally, however there are a few issues:

Which external provider? OpenAI? Where did you define it? In the .env file or the character?

I tested the different variations with OLLAMA.

@yodamaster726 mentioned this pull request on Nov 22, 2024
@ponderingdemocritus (Contributor) commented:

I tested with Anthropic and it still inits OLLAMA.

The env is within the character file.

@yodamaster726 (Contributor, Author) replied:

> I tested with anthropic and it still inits OLLAMA
>
> env is within character file

I updated defaultCharacter.ts with ANTHROPIC and it didn't try to download the llamalocal model. It attempts to call the Anthropic API. Do you have anything else in your .env related to llama defined? Are you on the latest code?

[screenshot attached]

@ponderingdemocritus (Contributor) commented:

In the logs it tells me it has initialised.

@ponderingdemocritus merged commit 527f649 into elizaOS:main on Nov 23, 2024