Make sure 100% works with local models #69
Comments
Created a fork that uses Ollama for llama.ts instead of node-llama-cpp. This lowers the technical debt of having to build llama-cpp and download a model. A PR might not be wanted if instead an ollama.ts should be added rather than removing the llama-cpp local option.
Can you please review the latest and add this as an additional provider option? If you search the code for 'ollama' you will see that there is already a comment.
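For context, a minimal sketch of what an Ollama-backed completion could look like, assuming the standard Ollama REST API at `/api/generate`; the function name, host fallback, and model are illustrative, not the fork's actual code:

```ts
// Minimal sketch of calling Ollama's /api/generate endpoint with the built-in fetch (Node 18+).
// OLLAMA_HOST fallback and the model name are illustrative assumptions, not the fork's code.
const OLLAMA_HOST = (process.env.OLLAMA_HOST ?? "http://localhost:11434").replace(/\/$/, "");

async function ollamaCompletion(prompt: string, model = "llama3"): Promise<string> {
    const res = await fetch(`${OLLAMA_HOST}/api/generate`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        // stream: false returns one JSON object instead of newline-delimited chunks
        body: JSON.stringify({ model, prompt, stream: false }),
    });
    if (!res.ok) {
        throw new Error(`Ollama request failed: ${res.status} ${res.statusText}`);
    }
    const data = (await res.json()) as { response: string };
    return data.response;
}
```

Using `stream: false` keeps the sketch to a single response; a real integration would likely stream tokens the way the existing llama-cpp path does.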
I'm trying the latest code and configuring it to use my local Ollama setup, but Eliza keeps wanting to download its own model. I don't want llama models getting downloaded into my src tree - bad practice - if it's going to do that, it should put them off the root under some /models directory or something like that. At the moment I'm trying to get this to work, and I'm willing to update the docs so others can benefit from this.

Here is what I've configured in my .env:

```
OLLAMA_HOST=http://localhost:11434/
```

Here is the output when I do pnpm run dev:

```
eliza>$ pnpm run dev
[nodemon] 3.1.7
```
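As a quick sanity check that the `OLLAMA_HOST` value is picked up and the local server is reachable (a standalone diagnostic, not part of Eliza), something like this lists the models already pulled locally via Ollama's `/api/tags` endpoint:

```ts
// Diagnostic sketch (not Eliza code): verify OLLAMA_HOST from .env is reachable and
// list the models already pulled locally via Ollama's /api/tags endpoint.
const host = (process.env.OLLAMA_HOST ?? "http://localhost:11434").replace(/\/$/, "");

async function listLocalModels(): Promise<void> {
    const res = await fetch(`${host}/api/tags`);
    if (!res.ok) {
        throw new Error(`Cannot reach Ollama at ${host}: ${res.status}`);
    }
    const data = (await res.json()) as { models: Array<{ name: string }> };
    console.log("Models available locally:", data.models.map((m) => m.name));
}

listLocalModels().catch(console.error);
```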
In the previous version I had changed llama.ts to use Ollama instead; updated for the latest version (outline of the service, bodies elided):

```ts
// Create debug logger
process.on('uncaughtException', (err) => { /* ... */ });
process.on('unhandledRejection', (reason, promise) => { /* ... */ });

interface QueuedMessage { /* ... */ }

class LlamaService {
    private constructor() { /* ... */ }
    public static getInstance(): LlamaService { /* ... */ }

    // Adding initializeModel method to satisfy ILlamaService interface
    async initializeModel() { /* ... */ }

    async queueMessageCompletion(/* ... */) { /* ... */ }
    async queueTextCompletion(/* ... */) { /* ... */ }
    private async processQueue() { /* ... */ }
    private async getCompletionResponse(/* ... */) { /* ... */ }
    async getEmbeddingResponse(input: string): Promise<number[] | undefined> { /* ... */ }
}

debug('LlamaService module loaded');
```
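For the embedding path specifically, a hedged sketch of how `getEmbeddingResponse` could delegate to Ollama's `/api/embeddings` endpoint (the embedding model name is an assumption for illustration, not the code above):

```ts
// Sketch of an Ollama-backed embedding call; "nomic-embed-text" is an illustrative
// assumption for the embedding model, not necessarily what the fork uses.
async function getOllamaEmbedding(input: string): Promise<number[] | undefined> {
    const host = (process.env.OLLAMA_HOST ?? "http://localhost:11434").replace(/\/$/, "");
    const res = await fetch(`${host}/api/embeddings`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: "nomic-embed-text", prompt: input }),
    });
    if (!res.ok) return undefined;
    const data = (await res.json()) as { embedding: number[] };
    return data.embedding;
}
```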
Need to add support through the new, updated model providers rather than just replacing llama-cpp.
@o-on-x I believe it was you who shared that code with me, and I got it running locally. But I need the .env settings that will connect to it; I've included what I used in the issue. There must be more than just setting the XAI_MODEL. I agree with @lalalune: are you looking to include this as an additional provider?
I added a new OLLAMA model provider. There is also a switch in llama.ts now: if you are using the local provider it uses Ollama, otherwise it defaults to llama-cpp. You can set the Ollama model provider to use a remote URL if hosting remotely, and select the models and embedding models. The env variables to set are included in the .env.example. (The image-posting handling is also in this code; just don't merge that part in discord messages.ts.) https://github.com/o-on-x/eliza_ollama
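As a rough illustration of the kind of env-driven switch described above, not the linked fork's actual code (only `OLLAMA_HOST` appears earlier in this thread; `OLLAMA_MODEL` and the fallback model are hypothetical names):

```ts
// Illustrative sketch of choosing a local backend from environment variables.
// Only OLLAMA_HOST appears elsewhere in this thread; the other names are hypothetical.
type LocalBackend = "ollama" | "llama-cpp";

function resolveLocalBackend(): { backend: LocalBackend; host?: string; model?: string } {
    const host = process.env.OLLAMA_HOST;
    if (host) {
        return {
            backend: "ollama",
            host: host.replace(/\/$/, ""),
            // Hypothetical variable for the chat model served by Ollama.
            model: process.env.OLLAMA_MODEL ?? "llama3",
        };
    }
    // Fall back to the existing node-llama-cpp path when no Ollama host is configured.
    return { backend: "llama-cpp" };
}
```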
We have a local llama setup but we haven't used it since all this hype started, so we need to go through and make sure that all local models are working correctly.