Hello @justinpinkney,

When I load the fine-tuned model `sd-clip-vit-l14-img-embed_ema_only.ckpt`, it raises `KeyError: 'model.diffusion_model.input_blocks.0.0.weight'`. How can I solve this problem? Thanks!

My YAML is:
```yaml
model:
  base_learning_rate: 1.0e-05
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: image
    cond_stage_key: rgb_image
    image_size: 64
    channels: 4
    cond_stage_trainable: False # Note: different from the one we trained before
    # unet_trainable: attn
    # unet_trainable: "attn"
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 2
    num_workers: 8
    num_val_workers: 0 # Avoid a weird val dataloader issue
    train:
      target: ldm.data.simple.FangHuaData
      # image_key: image
      params:
        root_dir: /home/share/movie_dataset/fanghua/png
        ext: jpg
        image_transforms:
          - target: torchvision.transforms.Resize
            params:
              size: 256
              interpolation: 3
          - target: torchvision.transforms.RandomCrop
            params:
              size: 256

lightning:
  find_unused_parameters: false
  modelcheckpoint:
    params:
      every_n_train_steps: 500
  callbacks:
    image_logger:
      target: main.ImageLogger
      params:
        batch_frequency: 100
        max_images: 8
        increase_log_steps: False
        log_first_step: True
        log_images_kwargs:
          use_ema_scope: False
          inpaint: False
          plot_progressive_rows: False
          plot_diffusion_rows: False
          N: 8
          unconditional_guidance_scale: 3.0
          unconditional_guidance_label: [""]
  trainer:
    benchmark: True
    # val_check_interval: 5000000 # really sorry
    num_sanity_val_steps: 0
    accumulate_grad_batches: 2
```
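A `KeyError` like this usually means the checkpoint's state dict does not contain the key the model expects. A quick way to diagnose it is to open the checkpoint and check which keys are actually present. Below is a minimal diagnostic sketch, assuming a standard PyTorch checkpoint that may nest its weights under a `"state_dict"` entry (as Stable Diffusion checkpoints typically do); `find_missing_keys` is a hypothetical helper, not part of the repo:

```python
import torch

def find_missing_keys(ckpt_path, expected_keys):
    """Load a checkpoint on CPU and report which expected keys are absent.

    Stable Diffusion checkpoints usually nest their weights under a
    "state_dict" entry; fall back to the top-level dict otherwise.
    """
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)
    return [k for k in expected_keys if k not in state_dict]
```

For example, `find_missing_keys("sd-clip-vit-l14-img-embed_ema_only.ckpt", ["model.diffusion_model.input_blocks.0.0.weight"])` would tell you whether the failing key is really absent from the file, or whether it exists under a different prefix (e.g. an EMA-only checkpoint storing `model_ema.*` keys instead).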