-
I think it would not be appropriate to expect reproducible results across different floating point types. They eventually end up with different values during the calculations, and thus different pixels. For example:

```python
>>> import torch as th
>>> a = th.tensor(-2.3, dtype=th.float16)
>>> a ** 7
tensor(-341.2500, dtype=torch.float16)
>>> b = th.tensor(-2.3, dtype=th.float32)
>>> b ** 7
tensor(-340.4825)
```

Are you sure that you get the same picture when not passing the `generator`?
-
Ok, you can find my Jupyter notebook here: test_official_controlnet
-
And I don't expect float32 and float16 to have the same values, but they shouldn't be so different visually.
-
Try it without ControlNet: just do the most basic example you can.
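Something along these lines would be the most basic check (a sketch only; the model ID and prompt are illustrative, not from this thread):

```python
# Minimal precision check without ControlNet (sketch): same prompt and
# seed for both runs, only torch_dtype changes.
import torch
from diffusers import StableDiffusionPipeline

for dtype in (torch.float32, torch.float16):
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=dtype
    ).to("cuda")
    generator = torch.Generator(device="cuda").manual_seed(0)
    image = pipe("an astronaut riding a horse", generator=generator).images[0]
    image.save(f"basic_{str(dtype).split('.')[-1]}.png")  # basic_float32.png, ...
```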
-
I just use ControlNet to test float32 and float16 precision, and I get different results. Here is the code:
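Roughly like this (a sketch: the model IDs, prompt, and conditioning image path below are placeholders, not my exact setup):

```python
# Compare float32 vs. float16 ControlNet outputs with the same fixed seed.
# Sketch only: model IDs, prompt, and the edge-map path are placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

cond_image = load_image("canny_edges.png")  # hypothetical conditioning image

for dtype in (torch.float32, torch.float16):
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=dtype
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=dtype
    ).to("cuda")
    # Same seed for both precisions via an explicit generator.
    generator = torch.Generator(device="cuda").manual_seed(0)
    image = pipe(
        "a photo of a house", image=cond_image, generator=generator
    ).images[0]
    image.save(f"controlnet_{str(dtype).split('.')[-1]}.png")
```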
float32 result is: (image)
float16 result is: (image)
I found a solution to this: don't use the `generator` parameter in the pipeline call. The modified code is:
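(Again a sketch under the same placeholder assumptions; the only change is that no generator is passed.)

```python
# Modified call (sketch): identical to the above, but without the
# generator argument, so the pipeline draws its own initial latents.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

cond_image = load_image("canny_edges.png")  # hypothetical conditioning image

for dtype in (torch.float32, torch.float16):
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=dtype
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=dtype
    ).to("cuda")
    image = pipe("a photo of a house", image=cond_image).images[0]  # no generator
    image.save(f"controlnet_noseed_{str(dtype).split('.')[-1]}.png")
```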
With that setting, I got the same result in both precisions, like this: (image)
Btw, my system is: Windows 10 + torch 2.0.0 + CUDA 11.6