
Flag quantize-nbits returns error #303

Closed
citruslab opened this issue Nov 21, 2023 · 9 comments

@citruslab

I'm getting an error when using the flag --quantize-nbits 6:
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
I tried downgrading Python, torch, and scikit-learn, but nothing changes. Does anyone have a tip on how to solve this?
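
For reference, the invocation looks roughly like this (model version and output directory are representative, following the ml-stable-diffusion README, not my exact command):

python -m python_coreml_stable_diffusion.torch2coreml \
    --convert-unet --convert-text-encoder --convert-vae-decoder \
    --convert-safety-checker --quantize-nbits 6 \
    --model-version runwayml/stable-diffusion-v1-5 -o models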

@MenKuch commented Nov 24, 2023

Which model are you using? Have you tried using vanilla SD 1.5 or 2.1 as a test?

@citruslab (Author)

Thanks for replying. I've tried many different models and always get the same type of error. I've tried SD 1.5 and even just running the example code from https://huggingface.co/blog/fast-diffusers-coreml#converting-and-optimizing-custom-models, which I'd expect to just work...

@MenKuch commented Nov 25, 2023

This usually means that something is wrong with your environment. Could you post your macOS version, Mac model, and the Python environment you are using?

@citruslab (Author) commented Nov 25, 2023

MBP M2 Max, 64GB RAM, Sonoma v14.0, Python 3.10.10, scikit-learn 1.1.2.

Edit: I just updated to macOS 14.1.1 and gave it another try but the results are the same.

It creates the mlpackage files, then goes into:

INFO:__main__:Converted safety_checker
INFO:__main__:Quantizing weights to 6-bit precision
INFO:__main__:Quantizing text_encoder to 6-bit precision
INFO:__main__:Quantizing text_encoder
Running compression pass palettize_weights:...

And then it returns the error shortly after that:

File "/Users/.../miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/sklearn/utils/validation.py", line 146, in _assert_all_finite
raise ValueError(msg_err)
ValueError: Input X contains infinity or a value too large for dtype('float64').
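
In case it helps anyone narrow this down: the crash is sklearn's finiteness check running inside coremltools' palettize_weights pass, so a quick sanity check (a hypothetical sketch, not something from the repo) is to load the text encoder from the Hub and look for non-finite weights before converting at all:

import torch
from transformers import CLIPTextModel

# model id is an example; substitute whichever model fails for you
text_encoder = CLIPTextModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="text_encoder"
)
# report any parameter tensor containing NaN or infinity
for name, param in text_encoder.named_parameters():
    if not torch.isfinite(param).all():
        print(f"non-finite values in {name}")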

@citruslab (Author)

Downgrading from transformers 4.35.2 to 4.34.1 solved this problem.
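
For anyone else hitting this, the downgrade is just (assuming a pip-managed environment):

pip install "transformers==4.34.1"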

@y-ich commented Dec 21, 2023

I still have the same error.
Any transformers version 4.29.2 or newer fails.

The target model is runwayml/stable-diffusion-v1-5.

My environment is:
M2 MBA 16GB RAM
macOS Sonoma 14.2.1
Python 3.8.17
ml-stable-diffusion commit 7449ce4
coremltools 7.1
torch 2.1.0
transformers 4.29.2, 4.30.2, 4.31.0, 4.32.1, 4.33.3, 4.34.1, 4.35.2, 4.36.2
scipy 1.10.1
scikit-learn 1.1.3
pytest 7.4.3
invisible-watermark 0.2.0
safetensors 0.4.1
matplotlib 3.7.4

@hubin858130

I have the same environment as @y-ich and I'm seeing the same issue.

@y-ich commented Dec 21, 2023

I found that I had forgotten to remove the previous *.mlpackage files.

transformers 4.34.1 works.

So, same as citruslab-san's workaround, my current understanding is that transformers 4.35.0 or later does not work with ml-stable-diffusion when the --quantize-nbits option is used.
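
For anyone retrying: delete the stale packages first so they are regenerated from scratch, then rerun the conversion. Something like (output directory name is just an example):

rm -rf models/*.mlpackage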

@jaycoolslm

Do you not get this error when downgrading, @y-ich?

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
diffusers 0.29.0 requires huggingface-hub>=0.23.2, but you have huggingface-hub 0.17.3 which is incompatible.
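
I guess that warning is just pip flagging an inconsistent pin set (the newer diffusers expects a newer huggingface-hub), not the NaN error from this issue. Maybe pinning diffusers and huggingface-hub to releases from the same era as transformers 4.34.1 would avoid it; untested, and the version numbers here are illustrative guesses from late 2023:

pip install "transformers==4.34.1" "diffusers==0.21.4" "huggingface-hub==0.17.3"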
