export_weights.py works on 1024 and 512 res, but not on 256 #6
Comments
I am getting a similar error trying to use this with the new restyle-encoder: yuval-alaluf/restyle-encoder#1. I am getting the same error trying to train the pSp encoder.
I managed to fix this issue by changing the number of mapping layers in
You can also just clone my fork of stylegan2-pytorch.
This likely happens if you use the
My export weights code reports an error when dealing with ffhq-res1024-mirror-stylegan2-noaug.pkl: Hope to receive your kind help. Thanks!
@MissDores looks like your file wasn’t completely downloaded
I've tried this solution. However, it doesn't work in my case. I trained StyleGAN2-ADA at 256x256 in the conditional setting on a custom dataset, using the auto config. I'm confused by this issue: is the problem below caused by the auto config or by the conditional setting? What do you think? @dvschultz @mycodeiscat
@hozfidan93 because of the auto config. There's something wrong with it.
I did what you suggested: I trained on a custom 256x256 dataset with cfg=paper256, but generate.py still does not work properly.
Just wanted to make a note that a similar error can occur if the
I had the size mismatch issue and was able to solve it using `--channel_multiplier 1`.
Where does `--channel_multiplier 1` go, please?
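For context on why that flag matters: rosinality's stylegan2-pytorch sizes its feature maps from a per-resolution channel table scaled by `channel_multiplier`, while a StyleGAN2-ADA `auto`-config run at 256 often corresponds to a multiplier of 1. This is a minimal sketch, with the channel values assumed from that repo's model.py:

```python
# Sketch of the per-resolution channel table used by rosinality's
# stylegan2-pytorch Generator (values assumed from model.py).
def stylegan2_channels(channel_multiplier=2):
    return {
        4: 512, 8: 512, 16: 512, 32: 512,
        64: 256 * channel_multiplier,
        128: 128 * channel_multiplier,
        256: 64 * channel_multiplier,
        512: 32 * channel_multiplier,
        1024: 16 * channel_multiplier,
    }

default = stylegan2_channels(2)  # generate.py's default multiplier
small = stylegan2_channels(1)    # what an `auto`-config 256 export may use
# 128 vs 64 channels at 256x256: differently shaped tensors, so
# load_state_dict fails with a size mismatch.
print(default[256], small[256])
```

If your checkpoint came from an `auto`-config run, passing `--channel_multiplier 1` wherever the Generator is constructed (generate.py exposes the flag in rosinality's repo, as far as I can tell) should make the shapes line up.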
I have been able to reproduce this colab's SG2_ADA_PT_to_Rosinality.ipynb test, but it seems to only work with 1024 and 512 resolutions.
It fails for all the 256 resolution pkl found here: transfer-learning-source-nets
This is the error I get for lsundog-res256-paper256-kimg100000-noaug.pkl -
Full Traceback:

```
Traceback (most recent call last):
  File "generate.py", line 76, in <module>
    g_ema.load_state_dict(checkpoint["g_ema"])
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1224, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Generator:
	Missing key(s) in state_dict: "convs.12.conv.weight", "convs.12.conv.blur.kernel", "convs.12.conv.modulation.weight", "convs.12.conv.modulation.bias", "convs.12.noise.weight", "convs.12.activate.bias", "convs.13.conv.weight", "convs.13.conv.modulation.weight", "convs.13.conv.modulation.bias", "convs.13.noise.weight", "convs.13.activate.bias", "to_rgbs.6.bias", "to_rgbs.6.upsample.kernel", "to_rgbs.6.conv.weight", "to_rgbs.6.conv.modulation.weight", "to_rgbs.6.conv.modulation.bias", "noises.noise_13", "noises.noise_14"
```
Any idea how to fix this? I believe the model architecture is different for the smaller models.
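One hedged way to read those missing keys: the number of conv/ToRGB blocks in rosinality's Generator grows with log2 of the output size (assuming, as in that repo, two styled convs and one ToRGB per resolution doubling above 4x4). A quick sketch:

```python
import math

# Assumed layer-count rule for rosinality's stylegan2-pytorch Generator:
# each resolution doubling above 4x4 adds two styled convs and one ToRGB.
def layer_counts(size):
    log_size = int(math.log2(size))
    return {"convs": 2 * (log_size - 2), "to_rgbs": log_size - 2}

for size in (256, 512, 1024):
    print(size, layer_counts(size))
```

Under this rule a 256 checkpoint holds convs.0 through convs.11 and to_rgbs.0 through to_rgbs.5, while a Generator built for 512 also expects convs.12, convs.13, and to_rgbs.6, which are exactly the keys reported missing. That suggests the loading Generator is being sized for 512 rather than 256.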
On top of that issue, I noticed that `n_mapping=8` and `n_layers=7` for lsundog-res256-paper256-kimg100000-noaug.pkl, but my custom 256 model trained with the original repo has `n_mapping=2` and `n_layers=7`. It gets converted without error in export_weights.py but produces this error, along with the one above, when running `python generate.py --size 256` in stylegan2-pytorch:
Missing key(s) in state_dict: "style.3.weight", "style.3.bias", "style.4.weight", "style.4.bias", "style.5.weight", "style.5.bias", "style.6.weight", "style.6.bias", "style.7.weight", "style.7.bias", "style.8.weight", "style.8.bias".
Why is this different from the sample 256 models?
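Those `style.*` keys belong to the mapping network. As a sketch (assuming rosinality's style network is a parameter-free PixelNorm at index 0 followed by `n_mlp` EqualLinear layers), the keys expected for a given mapping depth are:

```python
# Hypothetical helper: enumerate the mapping-network keys a rosinality
# Generator would expect for a given number of mapping layers.
# Assumption: style.0 is PixelNorm (no parameters); EqualLinear layers
# occupy indices 1..n_mlp.
def style_keys(n_mlp):
    keys = []
    for i in range(1, n_mlp + 1):
        keys += [f"style.{i}.weight", f"style.{i}.bias"]
    return keys

# Keys an n_mlp=8 Generator wants that an n_mlp=2 checkpoint lacks
missing = sorted(set(style_keys(8)) - set(style_keys(2)))
print(missing)  # style.3 .. style.8 weights and biases
```

That difference matches the key list in the error above, which is consistent with a checkpoint exported with `n_mapping=2` (ADA's `auto` config) being loaded into a Generator built with the default depth of 8. Matching the Generator's mapping depth to the checkpoint (generate.py hard-codes `n_mlp = 8` in some versions) should make the keys line up.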