Flag --quantize-nbits returns error #303
Getting an error when using the flag --quantize-nbits 6:

```
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
```

I tried downgrading Python, torch, and scikit-learn, but nothing changed. Does anyone have a tip on how to solve this?
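For context, the traceback bottoms out in scikit-learn's finite-value check, which suggests the k-means palettization behind --quantize-nbits is being handed NaN weights. A minimal sketch of the same failure, with made-up weights rather than the converter's actual code:

```python
# Illustrative only: reproduce the scikit-learn check that the converter
# trips over. The weights here are made up; the point is that k-means
# validates its input and rejects anything non-finite.
import numpy as np
from sklearn.cluster import KMeans

weights = np.array([[0.1], [0.2], [np.nan]])  # a NaN sneaking in from a bad conversion
KMeans(n_clusters=2).fit(weights)
# ValueError: Input contains NaN, infinity or a value too large for
# dtype('float64').  (exact wording varies by scikit-learn version)
```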
Comments

Which model are you using? Have you tried using vanilla SD 1.5 or 2.1 as a test?
Thanks for replying. I've tried many different models and I always get the same type of error. I've tried SD 1.5 and even just running the example code from https://huggingface.co/blog/fast-diffusers-coreml#converting-and-optimizing-custom-models, which I'd expect to just work.
This usually means that something is wrong with your environment. Could you post your macOS version, Mac model, and the Python environment you are using?
MBP M2 Max, 64 GB RAM, Sonoma 14.0, Python 3.10.10, scikit-learn 1.1.2.

Edit: I just updated to macOS 14.1.1 and gave it another try, but the results are the same. It creates the .mlpackage files, then logs:

```
INFO:__main__:Converted safety_checker
```

and returns the error shortly after that:

```
File "/Users/.../miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/sklearn/utils/validation.py", line 146, in _assert_all_finite
```
Downgrading from transformers 4.35.2 to 4.34.1 solved this problem.
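(For anyone else hitting this: assuming a standard pip-based environment, the downgrade is just `pip install transformers==4.34.1`.)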
I still get the same error. The target model is runwayml/stable-diffusion-v1-5. My environment is:
I also have the same environment and the same issue.
I found that I had forgotten to remove the previous *.mlpackage files; after removing them, transformers 4.34.1 works. So, same as citruslab-san's workaround, my current understanding is that transformers 4.35.0 or later does not work with ml-stable-diffusion when the --quantize-nbits option is used.
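Putting both workarounds together, a quick pre-flight check before re-running the conversion might look like this (the output directory is an example, not the tool's fixed path; use whatever directory you pass to the converter):

```python
# Pre-flight checks based on the two workarounds in this thread:
# transformers pinned below 4.35.0, and no stale *.mlpackage files left over.
import pathlib

import transformers
from packaging.version import Version

assert Version(transformers.__version__) < Version("4.35.0"), (
    f"transformers {transformers.__version__} reportedly breaks --quantize-nbits"
)

# "output" is an example path, not the tool's fixed location.
stale = list(pathlib.Path("output").glob("**/*.mlpackage"))
if stale:
    print("Remove these stale packages before re-converting:", stale)
```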
@y-ich, do you not get this error when downgrading?

```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
```