Hi!
I am writing to inquire about plans to support the Granite 3B and 8B models in the llama-cpp-python library. While attempting to load the small GGUF builds of these Granite models with llama-cpp-python, I encountered the following error:
error loading model: done_getting_tensors: wrong number of tensors; expected 578, got 470
I suspect this issue occurs because the small Granite models (3B and 8B) are not yet supported by this library. Is there any information on plans to support these models in the future?
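For reference, the failing load can be reproduced with a few lines of llama-cpp-python. This is a minimal sketch: the GGUF filename below is a placeholder, not a real released artifact, and it assumes llama-cpp-python is installed.

```python
def try_load(model_path):
    """Attempt to load a GGUF model; return None on success, the exception on failure."""
    try:
        from llama_cpp import Llama  # pip install llama-cpp-python
        Llama(model_path=model_path)
        return None
    except Exception as exc:
        # For an unsupported architecture, llama.cpp reports an error like:
        # "done_getting_tensors: wrong number of tensors; expected 578, got 470"
        return exc

# Placeholder path to a Granite 3B GGUF file (hypothetical filename).
err = try_load("granite-3b-instruct.Q4_K_M.gguf")
if err is not None:
    print(f"load failed: {err}")
```

The tensor-count mismatch in the message suggests the loader does not recognize all of the tensors in the Granite checkpoint, which is consistent with the architecture not yet being supported.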
Thanks! :))