On Windows I am getting this error whenever I try to edit any photo: RuntimeError: "addmm_impl_cpu_" not implemented for 'Half' #84
D:\AI\MagicQuill\MagicQuill\pidi.py:334: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at ..\torch\csrc\tensor\python_tensor.cpp:85.)
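This warning is a separate deprecation notice, not the crash itself; the replacement it suggests looks roughly like this (a generic PyTorch one-liner, not MagicQuill code):

```python
import torch

# Instead of the deprecated torch.cuda.FloatTensor(data) style constructors:
t = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float32, device="cuda")
```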
Same error for me.
I think it happens when the CPU tries to perform a matrix multiplication (addmm is a fused addition plus matrix multiplication) on tensors that are in float16 (half-precision) format. This suggests that some parts of the model or data are being processed in float16 on the CPU, which is not well supported.
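A minimal sketch that reproduces the failure and shows the two usual workarounds; this is plain PyTorch, not MagicQuill code, and on recent PyTorch builds the CPU half path may already succeed:

```python
import torch

# A float16 Linear layer on the CPU: the forward pass runs addmm on Half tensors.
layer = torch.nn.Linear(4, 4).half()            # weights in float16, on CPU
x = torch.randn(1, 4, dtype=torch.float16)      # input in float16, on CPU

try:
    layer(x)  # on affected builds: RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
except RuntimeError as e:
    print(e)

# Workarounds: run the half-precision math on the GPU, or fall back to float32 on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if device.type == "cuda":
    out = layer.to(device)(x.to(device))        # fp16 matmul is supported on CUDA
else:
    out = layer.float()(x.float())              # keep everything in float32 on CPU
print(out.dtype, out.device)
```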
It is finally working for me. No more RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'. Summary of changes: here is a breakdown of the modifications, categorized by file and the issues they address:
- Global `device` variable: defined `device` globally to ensure consistent device usage throughout the code: `device = torch.device("cuda" if torch.cuda.is_available() else "cpu")`.
- `ScribbleColorEditModel` initialization: passed the device to the constructor when creating the `ScribbleColorEditModel` instance: `scribbleColorEditModel = ScribbleColorEditModel(device)`.
- `prepare_images_and_masks`: converted the base64 inputs to tensors and moved them to the device, e.g. `total_mask = create_alpha_mask(total_mask).to(device)` and `add_edge_mask = create_alpha_mask(add_edge_image).to(device) if add_edge_image else torch.zeros_like(total_mask).to(device)`, then returned `add_color_image_tensor, original_image_tensor, total_mask, add_edge_mask, remove_edge_mask`.
- `generate_image_handler`: removed the lines that moved `ms_data['total_mask']` and `ms_data['original_image']` to the device, because that is already handled in `prepare_images_and_masks`.

A sketch of these entry-point changes follows this list.
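This is a condensed sketch only, assuming the helper name quoted above (`create_alpha_mask`) plus a hypothetical `load_and_preprocess_image` helper; MagicQuill's real argument lists may differ:

```python
import torch
# create_alpha_mask / load_and_preprocess_image / ScribbleColorEditModel are
# MagicQuill helpers; their exact module paths and signatures are assumed here.

# Global device, used everywhere instead of hard-coded .cuda()/.cpu() calls.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# The model receives the device at construction time.
scribbleColorEditModel = ScribbleColorEditModel(device)

def prepare_images_and_masks(total_mask, original_image, add_color_image,
                             add_edge_image, remove_edge_image):
    # Decode the base64 inputs to tensors and pin every tensor to `device`
    # so nothing silently stays on the CPU in float16.
    add_color_image_tensor = load_and_preprocess_image(add_color_image).to(device)
    original_image_tensor = load_and_preprocess_image(original_image).to(device)
    total_mask = create_alpha_mask(total_mask).to(device)
    add_edge_mask = (create_alpha_mask(add_edge_image).to(device)
                     if add_edge_image else torch.zeros_like(total_mask).to(device))
    remove_edge_mask = (create_alpha_mask(remove_edge_image).to(device)
                        if remove_edge_image else torch.zeros_like(total_mask).to(device))
    return (add_color_image_tensor, original_image_tensor,
            total_mask, add_edge_mask, remove_edge_mask)
```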
- `ScribbleColorEditModel.__init__`: modified the constructor to take the device as an argument. Moved the model inside the `ModelPatcher` to the device: `self.model.model = self.model.model.to(self.device)`. Initialized the `CLIPTextEncode` object without passing the clip model initially: `self.clip_text_encoder = CLIPTextEncode()`. Set the `clip` attribute of the `clip_text_encoder` after loading the CLIP model: `self.clip_text_encoder.clip = self.clip`.
- `ScribbleColorEditModel.load_models`: removed the `dtype` argument from the function definition. Correctly extracted the brushnet model from the dictionary returned by `self.brushnet_loader.brushnet_loading()`. Moved `edge_controlnet`, `color_controlnet`, and `brushnet` to the device.
- `ScribbleColorEditModel.process`: moved the newly loaded models to the device when `ckpt_name` changes. Ensured that `self.clip_text_encoder.clip` is set to the new `self.clip` when a new checkpoint is loaded. Correctly called the `encode` method of `CLIPTextEncode`: `positive = self.clip_text_encoder.encode(positive_prompt, self.device)[0]`. Moved the input tensors (`image`, `colored_image`, `mask`, `add_mask`, `remove_mask`) to the device with `dtype=torch.float32` before using them.

A sketch of how these pieces fit together is shown after this list.
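The following is a heavily condensed, hypothetical sketch; attribute names like `checkpoint_loader` and the exact `process` signature are assumptions rather than MagicQuill's actual code:

```python
import torch

class ScribbleColorEditModel:
    def __init__(self, device):
        # Device is injected instead of assumed; the CLIP encoder is created
        # empty and gets its clip attribute attached after the checkpoint loads.
        self.device = device
        self.clip_text_encoder = CLIPTextEncode()   # class sketched further below
        self.ckpt_name = None

    def load_models(self, ckpt_name):
        # No dtype argument any more; everything ends up on self.device.
        self.model, self.clip, self.vae = self.checkpoint_loader.load_checkpoint(ckpt_name)
        self.model.model = self.model.model.to(self.device)   # model inside the ModelPatcher
        self.clip_text_encoder.clip = self.clip                # attach clip after loading
        brushnet_dict = self.brushnet_loader.brushnet_loading()[0]
        self.brushnet = brushnet_dict["brushnet"].to(self.device)

    def process(self, ckpt_name, positive_prompt, image, colored_image,
                mask, add_mask, remove_mask):
        if ckpt_name != self.ckpt_name:   # reload and re-place models on checkpoint change
            self.load_models(ckpt_name)
            self.ckpt_name = ckpt_name
        positive = self.clip_text_encoder.encode(positive_prompt, self.device)[0]
        # Cast every input tensor to float32 on the target device up front,
        # so no half-precision tensor can reach a CPU matmul.
        image, colored_image, mask, add_mask, remove_mask = (
            t.to(self.device, dtype=torch.float32)
            for t in (image, colored_image, mask, add_mask, remove_mask))
        ...
```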
- `CLIPTextEncode.encode`: removed the `clip` argument from the method definition. Used `tokens = self.clip.tokenize(text)` to tokenize the text, relying on the `clip` attribute set by `ScribbleColorEditModel`. Correctly converted the tokenized output (which is a dictionary) into tensors and moved them to the device (see the sketch below).
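Sketched here assuming the CLIP object follows ComfyUI's API (`tokenize` returning a token dictionary and `encode_from_tokens` producing the conditioning); if MagicQuill's bundled copy differs, only those two calls change:

```python
class CLIPTextEncode:
    def __init__(self):
        self.clip = None   # set by ScribbleColorEditModel after the checkpoint is loaded

    def encode(self, text, device):
        # No `clip` parameter any more: the attribute attached by the caller is used.
        tokens = self.clip.tokenize(text)   # dict of token lists, one entry per text encoder
        cond, pooled = self.clip.encode_from_tokens(tokens, return_pooled=True)
        # The conditioning tensors are moved to the target device before being
        # returned in the usual ([[cond, extras]],) tuple shape.
        return ([[cond.to(device), {"pooled_output": pooled.to(device)}]], )
```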
- `BrushNetLoader.brushnet_loading`: removed the `dtype` argument from the method definition. Enforced `torch_dtype = torch.float32`. Moved the loaded `brushnet_model` to the device after `brushnet_model = load_checkpoint_and_dispatch(...)`. Returned the dictionary containing the model info in a tuple: `({"brushnet": brushnet_model, ...}, )` (sketched below).
- `BrushNet.model_update`: passed `is_SDXL` and `is_PP` to the `check_compatibility` function.
- `check_compatibility`: modified to take `is_SDXL` and `is_PP` as arguments instead of a dictionary, so the SDXL and PP values are accessed directly through those variables.
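A sketch of the loader change, assuming accelerate's `load_checkpoint_and_dispatch` is what places the weights (the snippet above is truncated at that call) and using a hypothetical `_build_brushnet_model` helper to stand in for however the empty BrushNet module is constructed:

```python
import torch
from accelerate import load_checkpoint_and_dispatch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class BrushNetLoader:
    def brushnet_loading(self, brushnet_path):
        # dtype argument removed: float32 is enforced so the weights never come
        # back as half tensors that a CPU matmul would reject.
        torch_dtype = torch.float32
        brushnet_model = self._build_brushnet_model(brushnet_path)   # hypothetical helper
        brushnet_model = load_checkpoint_and_dispatch(
            brushnet_model,
            brushnet_path,
            dtype=torch_dtype,
        )
        brushnet_model = brushnet_model.to(device)   # pin the loaded model to the global device
        # Model info goes into a dict, wrapped in a single-element tuple
        # (the ComfyUI node return convention).
        return ({"brushnet": brushnet_model, "SDXL": False, "PP": False, "dtype": torch_dtype}, )
```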
- `set_up_textual_embeddings`: added a check `isinstance(y, torch.Tensor)` to ensure that `y` is a tensor before accessing its shape, a check `y.numel() > 0` to ensure that `y` is not an empty tensor, and a check `len(y.shape) > 0` to ensure that `y` has at least one dimension. Added an `else` branch to the shape check that logs a more informative warning when `y` is not a valid tensor or has an unexpected shape. Added a check `if not tokens_temp:` to handle cases where the input prompt produces an empty token list; in that case a padding token is appended so the list is never empty (see the sketch below).
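A hypothetical, condensed sketch of the guarded loop; the real `set_up_textual_embeddings` handles embedding rows more elaborately, so this only illustrates the added validity checks:

```python
import torch

def set_up_textual_embeddings(tokens, current_embeds, pad_token_id=0):
    out_tokens = []
    for batch in tokens:
        tokens_temp = []
        for y in batch:
            if isinstance(y, (int, float)):
                tokens_temp.append(int(y))            # plain token id, kept as-is
            elif isinstance(y, torch.Tensor) and y.numel() > 0 and len(y.shape) > 0:
                # valid embedding tensor: handled as before (appended as an
                # extra embedding row); details omitted in this sketch
                tokens_temp.append(y)
            else:
                # more informative warning instead of crashing on bad input
                print(f"warning: skipping invalid token/embedding entry of type {type(y)}")
        if not tokens_temp:
            # an empty prompt would otherwise yield an empty token list;
            # append a padding token so downstream code always has input
            tokens_temp.append(pad_token_id)
        out_tokens.append(tokens_temp)
    return out_tokens
```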