
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu #133

Open
TremendaCarucha opened this issue Jan 9, 2025 · 1 comment


This happens every time after I re-run the workflow and the sampler runs again. I imagine something in there is moving data from VRAM to RAM? I chose CUDA for FaceAnalysis and tried CPU too; same problem. I also don't have the VAE set to run on the CPU. Is there anything I can do to solve this? Thanks.
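
For reference, the failure below is the generic PyTorch device-mismatch error: torch.nn.functional.layer_norm receives an input tensor on cuda:0 while the LayerNorm module's weight still lives on the CPU. A minimal, standalone repro (plain PyTorch, nothing ComfyUI-specific; the tensor shapes are arbitrary):

import torch
import torch.nn as nn

norm = nn.LayerNorm(2048)                     # module created on the CPU, so weight/bias are CPU tensors
x = torch.randn(1, 32, 2048, device="cuda")   # input on the GPU

try:
    norm(x)                                   # raises the same RuntimeError as in the log below
except RuntimeError as e:
    print(e)

norm.to(x.device)                             # moving the module to the input's device fixes the repro
print(norm(x).shape)                          # torch.Size([1, 32, 2048])

The full log from the failing run: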

!!! Exception during processing !!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)
Traceback (most recent call last):
  File "/home/user/pinokio/api/comfy/app/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/home/user/pinokio/api/comfy/app/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/home/user/pinokio/api/comfy/app/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/home/user/pinokio/api/comfy/app/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/home/user/pinokio/api/comfy/app/nodes.py", line 1533, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "/home/user/pinokio/api/comfy/app/nodes.py", line 1500, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "/home/user/pinokio/api/comfy/app/comfy/sample.py", line 45, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "/home/user/pinokio/api/comfy/app/comfy/samplers.py", line 1031, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "/home/user/pinokio/api/comfy/app/comfy/samplers.py", line 921, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/home/user/pinokio/api/comfy/app/comfy/samplers.py", line 907, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/home/user/pinokio/api/comfy/app/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
  File "/home/user/pinokio/api/comfy/app/comfy/samplers.py", line 876, in outer_sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/home/user/pinokio/api/comfy/app/comfy/samplers.py", line 860, in inner_sample
    samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "/home/user/pinokio/api/comfy/app/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
  File "/home/user/pinokio/api/comfy/app/comfy/samplers.py", line 715, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "/home/user/pinokio/api/comfy/app/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/pinokio/api/comfy/app/comfy/k_diffusion/sampling.py", line 161, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "/home/user/pinokio/api/comfy/app/comfy/samplers.py", line 380, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "/home/user/pinokio/api/comfy/app/comfy/samplers.py", line 840, in __call__
    return self.predict_noise(*args, **kwargs)
  File "/home/user/pinokio/api/comfy/app/comfy/samplers.py", line 843, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "/home/user/pinokio/api/comfy/app/comfy/samplers.py", line 360, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "/home/user/pinokio/api/comfy/app/comfy/samplers.py", line 196, in calc_cond_batch
    return executor.execute(model, conds, x_in, timestep, model_options)
  File "/home/user/pinokio/api/comfy/app/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
  File "/home/user/pinokio/api/comfy/app/comfy/samplers.py", line 309, in _calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "/home/user/pinokio/api/comfy/app/comfy/model_base.py", line 130, in apply_model
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
  File "/home/user/pinokio/api/comfy/app/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
  File "/home/user/pinokio/api/comfy/app/comfy/model_base.py", line 159, in _apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "/home/user/pinokio/api/comfy/app/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/pinokio/api/comfy/app/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/pinokio/api/comfy/app/comfy/ldm/flux/model.py", line 204, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options, attn_mask=kwargs.get("attention_mask", None))
  File "/home/user/pinokio/api/comfy/app/custom_nodes/ComfyUI-PuLID-Flux-Enhanced/pulidflux.py", line 148, in forward_orig
    img = img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], img)
  File "/home/user/pinokio/api/comfy/app/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/pinokio/api/comfy/app/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/pinokio/api/comfy/app/custom_nodes/ComfyUI-PuLID-Flux-Enhanced/encoders_flux.py", line 52, in forward
    x = self.norm1(x)
  File "/home/user/pinokio/api/comfy/app/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/pinokio/api/comfy/app/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/pinokio/api/comfy/app/env/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 217, in forward
    return F.layer_norm(
  File "/home/user/pinokio/api/comfy/app/env/lib/python3.10/site-packages/torch/nn/functional.py", line 2900, in layer_norm
    return torch.layer_norm(
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)
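
From the last frames of the traceback, the mismatch happens inside ComfyUI-PuLID-Flux-Enhanced: img is on cuda:0, but the norm1 weights of self.pulid_ca[ca_idx] are on the CPU. That suggests the PuLID cross-attention blocks get offloaded to system RAM after the first run and are not moved back before the next sampling pass. As a purely hypothetical workaround (not a maintainer fix, and assuming self.pulid_ca is an ordinary nn.ModuleList), the call site in pulidflux.py could re-align the block with the input device before using it:

# pulidflux.py, forward_orig(), at the failing call -- workaround sketch only
ca_block = self.pulid_ca[ca_idx]
if next(ca_block.parameters()).device != img.device:
    ca_block.to(img.device)        # move offloaded weights back to the device the latents are on
img = img + node_data['weight'] * ca_block(node_data['embedding'], img)

This only hides the symptom; if ComfyUI's model management is intentionally offloading those modules, keeping them on the GPU will use more VRAM.
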
@freezelion

Same problem here; it shows up after running several generations.
