The size larger than 64x64 does not work for the nemo model #18

Open
PapaMadeleine2022 opened this issue Nov 4, 2019 · 3 comments

@PapaMadeleine2022

Hello, I got the pre-trained nemo model from https://yadi.sk/d/EX7N9fuIuE4FNg, but I ran into two problems:

  1. At an image size of 64x64, with test/213_deliberate_smile_1.png from the nemo dataset as the driving image and the first five frames of test/505_spontaneous_smile_4.png as the source image, the nemo model works very well. But when I try image sizes of 128x128, 256x256, or 512x512 (using resize) with the same driving and source images, the result.gif is bad.

  2. When I use the driving image test/213_deliberate_smile_1.png from the nemo dataset with a 64x64 test.gif made from an ordinary frontal face image as the source, the result.gif is also bad.

Can anyone give some advice on how to fix these problems? @AliaksandrSiarohin
Thank you very much~

@AliaksandrSiarohin
Owner

First of all, the test/****.png files are actually videos: the frames are stacked together for simpler I/O.
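For illustration, each such file can be unstacked like this (a minimal sketch, not the repo's exact loader; the path and frame_shape below are just examples, assuming frames are concatenated along the image width):

    import numpy as np
    from skimage import io

    def read_stacked_video(path, frame_shape=(64, 64, 3)):
        # One wide image of shape (H, W * num_frames, 3).
        image = io.imread(path)
        # Put the wide axis first, cut it into frames, then restore (H, W) order.
        video = np.moveaxis(image, 1, 0)
        video = video.reshape((-1,) + frame_shape)
        video = np.moveaxis(video, 1, 2)
        return video  # (num_frames, H, W, 3)

    frames = read_stacked_video('test/213_deliberate_smile_1.png')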

  1. What do you mean by resize? Do you resize some of your own images to 64x64, or resize test/***.png to 128x128? If you want to use the model at a higher resolution, it should be trained on a higher-resolution dataset. For example, 256x256 models trained on the nemo dataset can be found here, and 256x256 models trained on VoxCeleb here.

  2. What do you mean by a common face image? Please post your images and your results.

@PapaMadeleine2022
Author

@AliaksandrSiarohin Thanks for your reply.

  1. I do use resize. For example, for the driving image test/213_deliberate_smile_1.png, I modified some code in frames_dataset.py to:

        # Unstack the wide PNG into frames of image_shape, e.g. (64, 64, 3).
        video_array = np.moveaxis(image, 1, 0)
        video_array = video_array.reshape((-1,) + image_shape)
        video_array = np.moveaxis(video_array, 1, 2)
        # Upsample every frame; resize is skimage.transform.resize and
        # returns float frames in [0, 1].
        video_array = np.array([resize(frame, (256, 256)) for frame in video_array])

Of course, I apply the same 256x256 resize to the source image (the first five frames of test/505_spontaneous_smile_4.png). The resulting images are blurred for both the 128x128 and 256x256 resize operations.

Thanks for your 256x256 pre-trained model, but how should I modify the configuration in config/nemo.yaml? I get this error (a key-inspection sketch follows at the end of this comment):

Traceback (most recent call last):
 ...
RuntimeError: Error(s) in loading state_dict for MotionTransferGenerator:
	Unexpected key(s) in state_dict: "appearance_encoder.down_blocks.5.conv.weight", "appearance_encoder.down_blocks.5.conv.bias", "appearance_encoder.down_blocks.5.norm.weight", "appearance_encoder.down_blocks.5.norm.bias", "appearance_encoder.down_blocks.5.norm.running_mean", "appearance_encoder.down_blocks.5.norm.running_var", "appearance_encoder.down_blocks.5.norm.num_batches_tracked", "appearance_encoder.down_blocks.6.conv.weight", "appearance_encoder.down_blocks.6.conv.bias", "appearance_encoder.down_blocks.6.norm.weight", "appearance_encoder.down_blocks.6.norm.bias", "appearance_encoder.down_blocks.6.norm.running_mean", "appearance_encoder.down_blocks.6.norm.running_var", "appearance_encoder.down_blocks.6.norm.num_batches_tracked", "video_decoder.up_blocks.5.conv.weight", "video_decoder.up_blocks.5.conv.bias", "video_decoder.up_blocks.5.norm.weight", "video_decoder.up_blocks.5.norm.bias", "video_decoder.up_blocks.5.norm.running_mean", "video_decoder.up_blocks.5.norm.running_var", "video_decoder.up_blocks.5.norm.num_batches_tracked", "video_decoder.up_blocks.6.conv.weight", "video_decoder.up_blocks.6.conv.bias", "video_decoder.up_blocks.6.norm.weight", "video_decoder.up_blocks.6.norm.bias", "video_decoder.up_blocks.6.norm.running_mean", "video_decoder.up_blocks.6.norm.running_var", "video_decoder.up_blocks.6.norm.num_batches_tracked".
	size mismatch for appearance_encoder.down_blocks.4.conv.weight: copying a param with shape torch.Size([1024, 512, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 1, 3, 3]).
	size mismatch for appearance_encoder.down_blocks.4.conv.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
	size mismatch for appearance_encoder.down_blocks.4.norm.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
...
  2. I uploaded 4 test images here; each xxx_result.gif corresponds to the xxx.gif used as the source image with the nemo-ckp.pth.tar model, with test/213_deliberate_smile_1.png as the driving image.
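For reference, mismatches like the one above can be located before calling load_state_dict by diffing the checkpoint against the freshly built model (a minimal sketch; the 'generator' sub-dict key and the checkpoint layout are assumptions, adjust them to the actual file):

    import torch

    def diff_state_dicts(model, ckpt_path):
        # Compare a model's state_dict with a checkpoint to find mismatched keys.
        checkpoint = torch.load(ckpt_path, map_location='cpu')
        # Checkpoints often bundle several sub-models; fall back to the raw dict.
        state = checkpoint.get('generator', checkpoint)
        own = model.state_dict()
        print('unexpected in checkpoint:', sorted(set(state) - set(own)))
        print('missing from checkpoint:', sorted(set(own) - set(state)))
        for key in set(state) & set(own):
            if state[key].shape != own[key].shape:
                print('shape mismatch:', key, tuple(state[key].shape), '!=', tuple(own[key].shape))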

@AliaksandrSiarohin
Owner

  1. Yes, the keypoints are learned at a resolution of 64x64, and I doubt they generalize to higher resolutions. You need another model, or a model trained on different resolutions.
    The model params should be the same as in vox.yaml (see the channel-size sketch at the end of this comment for why your current config fails to load the checkpoint):
model_params:
  common_params:
    num_kp: 10
    kp_variance: 'matrix'
    num_channels: 3
  kp_detector_params:
    temperature: 0.1
    block_expansion: 32
    max_features: 1024
    scale_factor: 0.25
    num_blocks: 5
    clip_variance: 0.001
  generator_params:
    interpolation_mode: 'trilinear'
    block_expansion: 32
    max_features: 1024
    num_blocks: 7
    num_refinement_blocks: 4
    dense_motion_params:
      block_expansion: 32
      max_features: 1024
      num_blocks: 5
      use_mask: True
      use_correction: True
      scale_factor: 0.25
      mask_embedding_params:
        use_heatmap: True
        use_deformed_source_image: True
        heatmap_type: 'difference'
        norm_const: 100
      num_group_blocks: 2
    kp_embedding_params:
      scale_factor: 0.25
      use_heatmap: True
      norm_const: 100
      heatmap_type: 'difference'
  discriminator_params:
    kp_embedding_params:
      norm_const: 100
    block_expansion: 32
    max_features: 256
    num_blocks: 4
  2. Most likely the nemo dataset is too small to generalize to arbitrary faces. Try the model trained on VoxCeleb.
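To explain the load error above: the encoder/decoder widths are derived from block_expansion, max_features, and num_blocks, so a checkpoint trained with the vox.yaml params cannot be loaded into a model built from the old nemo.yaml. A minimal sketch of that relation (the doubling-and-capping pattern below is an assumption; the exact rule is in the repo's block construction code):

    # Assumed per-block output channels: width doubles each block, capped at max_features.
    def encoder_channels(block_expansion, max_features, num_blocks):
        return [min(max_features, block_expansion * 2 ** (i + 1)) for i in range(num_blocks)]

    # Model built from the old config (consistent with the error: stops at down_blocks.4 with 512):
    print(encoder_channels(32, 512, 5))    # [64, 128, 256, 512, 512]
    # vox.yaml params (consistent with the checkpoint: down_blocks.5/.6 exist, block 4 has 1024):
    print(encoder_channels(32, 1024, 7))   # [64, 128, 256, 512, 1024, 1024, 1024]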
