Getting shape mismatch error when using a different input #31
Can you show the full error message? One problem seems to be that your size
of 32 is also being used for the number of channels. You would need to set
c_in on your diffusion model: model = UNet(c_in=32).to(device_val). But do
you really want that? It would mean your images have 32 channels.
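
To make this concrete: PyTorch convolutions read a 4D tensor as (batch, channels, height, width), so your (1, 32, 32, 32) input is one image with 32 channels. A minimal sketch of an input the default model (c_in=3) would accept, keeping the names from your snippet:

import torch

device_val = "cuda"
size = 32

# (batch=1, channels=3, height=32, width=32) matches the default UNet(c_in=3)
x_input = torch.rand(1, 3, size, size, device=device_val)

# sample_timesteps(n) normally draws one timestep per image in the batch,
# so n should match x_input.shape[0] (here 1) rather than 128:
# t = diffusion.sample_timesteps(x_input.shape[0]).to(device_val)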
Here is the full error:

Traceback (most recent call last): ...

One more thing: I changed the default parameter in the model so that it takes the input channel number, in this way:

class UNet(nn.Module): ...
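
For reference, a parameterized constructor might look roughly like this (a hypothetical sketch, not this repository's exact code; only the signature and first layer are shown):

import torch.nn as nn

class UNet(nn.Module):
    # c_in controls how many channels the first convolution consumes,
    # so UNet(c_in=32) would accept (batch, 32, H, W) inputs.
    def __init__(self, c_in=3, c_out=3, time_dim=256, device="cuda"):
        super().__init__()
        self.time_dim = time_dim
        self.device = device
        self.inc = nn.Conv2d(c_in, 64, kernel_size=3, padding=1)
        # ... remaining encoder/decoder blocks unchanged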
I am getting this error:

RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x128 and 256x128)

when I am using the code below, which passes in an array of size 32x32x32 and 128 timesteps:

import numpy as np
import torch
# UNet and Diffusion come from this repository (e.g. modules.py and ddpm.py)

size = 32
device_val = "cuda"

# shape (1, 32, 32, 32): one sample that PyTorch reads as 32 channels of 32x32
x_input = torch.tensor(np.random.rand(1, size, size, size)).to(device_val).type(torch.cuda.FloatTensor)
diffusion = Diffusion(img_size=size, device=device_val)
t = diffusion.sample_timesteps(128).to(device_val)
model = UNet().to(device_val)
xmodel = model(x_input, t)  # raises the RuntimeError above

In addition to that, can you please explain a bit more how the time embedding works, especially in terms of its dimensions?
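
On the second question: the time embedding in this kind of DDPM UNet is typically a sinusoidal encoding that maps each scalar timestep to a time_dim-dimensional vector (commonly 256, which matches the 256 in the error above). A minimal sketch, assuming that common implementation (the function name and details are illustrative, not this repository's exact code):

import torch

def pos_encoding(t, channels):
    # t: (batch,) integer timesteps -> (batch, channels) embedding.
    # Each timestep becomes channels/2 sine and channels/2 cosine values
    # at geometrically spaced frequencies.
    t = t.unsqueeze(-1).float()                              # (batch, 1)
    exponents = torch.arange(0, channels, 2, device=t.device).float() / channels
    inv_freq = 1.0 / (10000 ** exponents)                    # (channels/2,)
    return torch.cat([torch.sin(t * inv_freq),
                      torch.cos(t * inv_freq)], dim=-1)      # (batch, channels)

emb = pos_encoding(torch.arange(128), 256)
print(emb.shape)  # torch.Size([128, 256])

Each UNet block then projects this 256-dim vector down to its own channel count with a linear layer; a weight of that shape is likely where the 256x128 in the error message comes from. Note also that t is normally sampled with one timestep per image, so its length should match the batch size of x.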