UNet2DConditionModel gives inconsistent output with same input data #7748
-
Hi everyone, I'm running into a puzzling issue with `UNet2DConditionModel`. When I run the same input through the model at different batch sizes, the outputs for the shared samples differ slightly, even after calling `.eval()`. The difference is small but noticeable. Does anyone know what is going on, and how to fix it?

```python
from diffusers import UNet2DConditionModel
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"  # cuda: ~1e-4, cpu: ~1e-7
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="unet", use_safetensors=True
).to(device)
unet.eval()

bsz = 4
N = 2  # gives different results for any N != bsz

latent = torch.rand(bsz, 4, 28, 28, device=device, dtype=torch.float32)
timestep = torch.zeros(size=(bsz,), device=device, dtype=torch.long)
hidden = torch.rand(bsz, 128, 1024, device=device, dtype=torch.float32)

# Run the first N samples alone, then as part of the full batch
N_batch = unet(latent[:N], timestep[:N], hidden[:N], return_dict=False)[0]
All_batch = unet(latent, timestep, hidden, return_dict=False)[0][:N]
print((N_batch - All_batch).abs().max())  # up to 0.0004
```
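For what it's worth, the effect does not seem specific to the UNet: a single conv layer can already show it, since backends may select different kernels (and hence different floating-point summation orders) depending on the batch dimension. A minimal sketch, with shapes chosen arbitrarily (the observed difference may be exactly zero on some backends and hardware):

```python
import torch

torch.manual_seed(0)
conv = torch.nn.Conv2d(4, 8, kernel_size=3, padding=1).eval()
x = torch.rand(4, 4, 28, 28)

with torch.no_grad():
    full = conv(x)[:2]   # first two samples, computed inside a batch of 4
    part = conv(x[:2])   # same two samples, computed as a batch of 2

# Tiny or zero, depending on which kernel each batch size selects
print((full - part).abs().max())
```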
Replies: 1 comment 1 reply
-
Hi Jee Seok,
Definitely, see this discussion.
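For anyone who needs tighter reproducibility, a hedged mitigation sketch: these flags make a run reproducible for a fixed batch size, but they do not force identical results across different batch sizes, since kernel selection can still depend on the input shape.

```python
import torch

torch.manual_seed(0)
# Prefer deterministic kernels; warn_only avoids hard errors for ops
# that have no deterministic implementation.
torch.use_deterministic_algorithms(True, warn_only=True)
if torch.cuda.is_available():
    torch.backends.cudnn.benchmark = False  # disable shape-dependent autotuning

print(torch.are_deterministic_algorithms_enabled())  # True
```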