Support ollama [non-OCI] registry pulling #2395
What is the actual feature you are requesting? From a minimal experiment, https://registry.ollama.ai/v2/ returns 404. I don't know why we should eagerly add a hack instead of sending a patch, or trying to convince them, to modify the server to be a compliant registry, if that is what it is. (Or, if they explicitly don't want third-party clients, long-term they are going to win, so I'm not inclined to start fighting them.)
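For reference, a minimal sketch of that probe (my own illustration, assuming no authentication is involved): a compliant Docker Registry HTTP API v2 server answers GET /v2/ with 200, or 401 plus an auth challenge, so the 404 is what marks the endpoint as non-compliant.

```python
# Sketch of the /v2/ probe described above (assumes no auth).
# A compliant Docker Registry v2 answers GET /v2/ with 200 (or 401 plus
# an auth challenge); the experiment above reports 404 here.
import urllib.error
import urllib.request

try:
    with urllib.request.urlopen("https://registry.ollama.ai/v2/") as resp:
        print("GET /v2/ ->", resp.status)
except urllib.error.HTTPError as e:
    print("GET /v2/ ->", e.code)
```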
I don't think we will be able to define the ollama protocol, but we could try to support it. It's not very complex as of today: https://registry.ollama.ai/v2/library/llama3/manifests/latest does not return null. We don't have to stay compatible indefinitely if the protocol proves to change constantly, but as of today it's not incredibly complex.
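A minimal sketch of that manifest fetch, assuming the endpoint needs no authentication and returns Docker-v2-style JSON with a `layers` array (an illustration, not ramalama's or skopeo's code):

```python
# Sketch: fetch an Ollama manifest via the URL quoted above (assumes no
# auth and a Docker Registry v2-style JSON response with a layers array).
import json
import urllib.request

def fetch_manifest(name: str, tag: str = "latest") -> dict:
    url = f"https://registry.ollama.ai/v2/library/{name}/manifests/{tag}"
    req = urllib.request.Request(
        url,
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

for layer in fetch_manifest("llama3").get("layers", []):
    print(layer["mediaType"], layer["digest"], layer["size"])
```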
I just don't want to assume that Ollama will deliberately try to break third-party solutions. There are already at least two solutions that support it, LocalAI and ramalama, and the ramalama implementation is just curl. I think long-term, ollama support should be consolidated with the other OCI transports.
I mean, what intersection is there with this project? No credentials, no format conversion, no reason to use the same mirrors, no point in copying the data to other non-Ollama OCI registries, I think. Anyone can write a client using an HTTP client library, just as you did. That's fine. Not all of those should be inside the Skopeo binary.
This is where I don't agree:
I do think there is a point in this. Let's say a person in an enterprise is not allowed to reach out to external OCI registries like Ollama, quay.io, etc. They might pick some models to copy/cache into an internal enterprise OCI registry, as is done for helm charts, etc.
But yeah, it doesn't have to be part of skopeo. At the moment, take ramalama: it shows different progress bars (and uses different clients) for huggingface and ollama (and the upcoming oci: PR shows download progress in yet another way). huggingface is completely different, I think, but ollama and the other OCI registries could be consolidated.
And how is that going to be consumed? If the Ollama tools specifically required an Ollama server (and artifact formats), that doesn’t help. The way I think about it:
It amounts to the same thing for me either way.
Yeah, this is something that's under discussion; @tarilabs has some ideas in this space.
I am very confused about why Ollama doesn't use OCI standards to store its models, so I created an alternative to find more answers: https://github.com/gpustack/gguf-packer-go
We are doing very similar things in https://github.com/containers/ramalama. I am hoping skopeo can be compatible with Ollama, as skopeo is one of our primary OCI registry tools.
I think this went way too fast into an implementation suggestion without sufficient analysis. If no one knows why they are using a registry-like but registry-incompatible protocol, shouldn't the first step be to find out?
A friendly reminder that this issue had no activity for 30 days.
As you can see from the curl commands below, taken from the ramalama project (toggle x = True in the ramalama python script to print the curl commands), the "Accept: application/vnd.docker.distribution.manifest.v2+json" header shows this format is an OCI transport protocol of sorts.
We should support this in our stack; it's becoming a very popular OCI registry for pulling models.
https://github.com/containers/ramalama
$ ./ramalama pull llama3
curl -f -s --header "Accept: application/vnd.docker.distribution.manifest.v2+json" -o /var/lib/ramalama/repos/ollama/manifests/registry.ollama.ai/library/llama3/latest https://registry.ollama.ai/v2/library/llama3/manifests/latest
curl -f -L -C - --progress-bar --header "Accept: application/vnd.docker.distribution.manifest.v2+json" -o /var/lib/ramalama/repos/ollama/blobs/sha256:6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa https://registry.ollama.ai/v2/library/llama3/blobs/sha256:6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
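For comparison, here is the same pull flow as a short Python sketch (my own illustration, not ramalama's actual implementation; it assumes no authentication and that each blob's sha256 digest doubles as its filename, as in the -o paths above):

```python
# Sketch of the pull flow shown by the curl commands above (assumes no
# auth). Blobs are streamed to disk and verified against their digest.
import hashlib
import json
import urllib.request

ACCEPT = {"Accept": "application/vnd.docker.distribution.manifest.v2+json"}
BASE = "https://registry.ollama.ai/v2/library"

def pull(name: str, tag: str = "latest") -> None:
    req = urllib.request.Request(f"{BASE}/{name}/manifests/{tag}", headers=ACCEPT)
    with urllib.request.urlopen(req) as resp:
        manifest = json.load(resp)
    for layer in manifest.get("layers", []):
        digest = layer["digest"]  # e.g. "sha256:6a0746a1ec1a..."
        req = urllib.request.Request(f"{BASE}/{name}/blobs/{digest}", headers=ACCEPT)
        h = hashlib.sha256()
        # Stream the blob to a file named after its digest, hashing as we go.
        with urllib.request.urlopen(req) as resp, open(digest, "wb") as out:
            while chunk := resp.read(1 << 20):
                h.update(chunk)
                out.write(chunk)
        if f"sha256:{h.hexdigest()}" != digest:
            raise ValueError(f"digest mismatch for {digest}")

pull("llama3")
```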