ImportError: cannot import name 'LlamaTokenizer' from 'transformers.models.llama' #17
The error can be fixed by installing Snowflake's fork of transformers: `pip install git+https://github.com/Snowflake-Labs/transformers.git@arctic`. The conditional import in the llama module is what raises this error.
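The "cannot import name" message comes from exactly this kind of guarded export: when the conditional in `transformers.models.llama` evaluates false, the name is simply never defined in the module, so importing it fails. A minimal, self-contained simulation (the fake module and flag names below are made up for illustration):

```python
import sys
import types

# Build a fake module whose guarded branch never runs, mimicking a conditional
# export like the one in transformers.models.llama (names are illustrative).
fake = types.ModuleType("fake_llama")
exec(
    "DEPENDENCY_AVAILABLE = False\n"
    "if DEPENDENCY_AVAILABLE:\n"
    "    class LlamaTokenizer:\n"
    "        pass\n",
    fake.__dict__,
)
sys.modules["fake_llama"] = fake

# Importing a name the guard never defined raises the same style of error.
try:
    from fake_llama import LlamaTokenizer  # noqa: F401
except ImportError as err:
    message = str(err)

print(message)  # cannot import name 'LlamaTokenizer' from 'fake_llama' ...
```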
@karthik-nexusflow It would be better to add this information here, because beginners like me may try to install the official Hugging Face transformers package instead of the fork, which leads to this issue.
@AllanOricil, are you running with `trust_remote_code=True`? When did you download the weights? If you are running in an offline mode and downloaded them more than 5 days ago, that could be the problem. The core issue is confusing me though: it's saying you can't import `LlamaTokenizer`.
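For reference, `trust_remote_code` is a keyword argument to `from_pretrained`, not a command-line flag. Since the real call needs the Snowflake fork and the model weights, the sketch below uses a hypothetical stub in place of `AutoModelForCausalLM.from_pretrained`, just to show where the flag goes:

```python
# Hypothetical stub standing in for AutoModelForCausalLM.from_pretrained;
# it only records the arguments it was called with.
def from_pretrained_stub(model_id, **kwargs):
    return {"model_id": model_id, **kwargs}

# trust_remote_code=True lets the custom modeling code shipped in the model
# repo run, which checkpoints like Arctic rely on.
call = from_pretrained_stub(
    "Snowflake/snowflake-arctic-instruct",
    trust_remote_code=True,
)
print(call["trust_remote_code"])  # True
```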
@jeffra I don't even know where to use that trust variable. Is it something I pass when I run the script? I just copied the simple example, created a virtual env, installed transformers 4.39.0 and deepspeed 0.14.2, then tried to run the script with Python 3, and it did not work. I got the same error that led me to open this issue. Then I went to the Hugging Face transformers repo to get the latest release of their package, updated my virtual env with it, tried to run the code again, and the same issue happened again. That's when I opened this issue. To get the list of dependencies I ran `pip freeze`. I have also not downloaded any weights. Isn't that supposed to happen automatically when I run the example code?
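One quick way to tell which transformers build is actually installed is to look at its `pip freeze` line. The output below is simulated (the package lines are examples, not from a real environment): the official PyPI release shows up as `transformers==<version>`, while the Snowflake fork shows up as a git requirement.

```shell
# Simulated `pip freeze` output (example lines, not from a real environment).
freeze_output='deepspeed==0.14.2
transformers @ git+https://github.com/Snowflake-Labs/transformers.git@arctic'

# Grep the transformers line; a "git+" URL means the fork is installed.
echo "$freeze_output" | grep -i 'transformers'
```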
Another question: can I run this on an M2 Max with 32 GB of RAM in AWS? That was my plan 😀
We had another user run into this same issue. @AllanOricil, regarding the M2 Max: it's not on our exact roadmap, but I believe this support was recently added in llama.cpp! :) ggerganov/llama.cpp#7020 Closing the issue for now, as I think the main issue is resolved.
I will give it another chance |
I tried the minimal example from https://huggingface.co/Snowflake/snowflake-arctic-instruct and it did not work. Can you help me fix it?
I'm using the latest transformers release commit.
snowflake-arctic-instruct.py
requirements.txt