
update to ollama 0.4.0 #12370

Open
Matthww opened this issue Nov 8, 2024 · 14 comments

Comments

@Matthww

Matthww commented Nov 8, 2024

Hi,

Is it possible that this project could be updated to support ollama 0.4.0? I want to try the new Llama Vision models, but to run them you need at least version 0.4.0.

Thanks!

@user7z

user7z commented Nov 8, 2024

I wonder why it can't just be integrated with upstream ollama, like NVIDIA and AMD do?

@tristan-k

+1

@rnwang04
Contributor

rnwang04 commented Nov 11, 2024

Hi @Matthww, could you please let us know exactly which model you want to run?

@user7z

user7z commented Nov 11, 2024

@Matthww this one

@Matthww
Author

Matthww commented Nov 11, 2024

> Hi @Matthww, could you please let us know exactly which model you want to run?

Hi @rnwang04, as @user7z mentioned, I'm indeed talking about the Llama 3.2-Vision collection that can be found on ollama's model page: https://ollama.com/library/llama3.2-vision.

Ollama was updated to support these new vision models in version 0.4.0, which is available in the releases tab of their GitHub repository.
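For reference, once a build based on ollama >= 0.4.0 is available, calling the vision model should work the same way as with upstream ollama. A minimal sketch using the official `ollama` Python client; the model name matches the library page above, and the image path is just a placeholder:

```python
# Minimal sketch, assuming an ollama server >= 0.4.0 is running locally
# and the model has been pulled (`ollama pull llama3.2-vision`).
# "photo.jpg" is a placeholder path to any local image.
import ollama

response = ollama.chat(
    model="llama3.2-vision",
    messages=[{
        "role": "user",
        "content": "What is in this image?",
        "images": ["photo.jpg"],  # local image file passed to the vision model
    }],
)
print(response["message"]["content"])
```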

@user7z

user7z commented Nov 11, 2024

@Matthww but we're talking here about the build provided with ipex-llm GPU acceleration, which only provides ollama 0.3.6. I think you know that ipex-llm is not integrated with the official ollama, so the context here is the accelerated ollama provided with ipex-llm in this GitHub repo.

@rnwang04
Contributor

rnwang04 commented Nov 12, 2024

We are planning a new rebase; if there is any progress, we will update here to let you know.

@user7z

user7z commented Nov 20, 2024

@wbste @rnwang04 I'm not aware of the technical details, but I think the proprietary way that ipex-llm works wouldn't fit with ollama. I mean, ROCm is open and CUDA is mostly open, so that's one high wall between ipex-llm and ollama. And even within the Intel ecosystem, ipex-llm relies on the oneAPI Base Toolkit, and it takes the devs a lot of time to keep up with the latest. I think for this project to continue it needs just two things:
1) rebase the design
2) make it as open as possible
Without these two things we can't talk about integrating this into upstream ollama. It's sad that the devs here do double work, developing ipex-llm and tinkering with ollama and every single AI utility that shows up. The devs need to focus more on the core thing and eliminate the double work by integrating this stuff with the community workflow; that would be the needed boost!

@jasonwang178

> We are planning a new rebase; if there is any progress, we will update here to let you know.

I strongly recommend making it open source and merging the changes into the official Ollama project. This will attract more contributors and potential donations.

@qiuxin2012
Contributor

> Any updates? Or is it possible for the end user to manually just grab the latest zip (e.g. ollama-windows-amd64.zip for Windows) from ollama and replace the existing one? https://github.com/ollama/ollama/releases

No, you can't. We are still working on it.

@yizhangliu

Such a slow update, sad.
Intel, are you OK?

@user7z

user7z commented Dec 17, 2024

@yizhangliu if you're serious about AI, you should go for NVIDIA CUDA or AMD ROCm.

@jason-dai
Contributor

> Such a slow update, sad. Intel, are you OK?

Our current version is consistent with v0.4.6 of ollama. See https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md

@hkvision
Contributor

hkvision commented Jan 13, 2025

> Do the docs not auto-build? It seems the link above says something different from the official docs... maybe when the project name changed that was left running? Or do I have the wrong URL?

Hi, the link you are referring to is not actually the official docs; it is just a previous development version and is outdated. The official one is the link given in the previous comment, quoted below:

> Our current version is consistent with v0.4.6 of ollama. See https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md
