
SAMModelLoader Failure on Apple Silicon due to CUDA Deserialization Error #60

Open
icebergov opened this issue Apr 7, 2024 · 2 comments

icebergov commented Apr 7, 2024

Issue Description

When running SAMModelLoader for the segment-anything functionality on an Apple Silicon Mac, loading fails with an error about deserializing an object on a CUDA device, even though torch.cuda.is_available() returns False.

Environment

  • Operating System: macOS Sonoma 14.4.1 (M1 Max, Apple Silicon)
  • Python Version: 3.11
  • PyTorch Version: torch==2.1.2 ; torchvision==0.16.2

Error Message

Error occurred when executing SAMModelLoader (segment anything):

Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

Expected Behavior

The model loader should detect the absence of CUDA and fall back to CPU for model deserialization and execution, so the functionality proceeds without error, or, ideally, use the Apple Silicon GPU via PyTorch's MPS backend.
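The fallback described above can be sketched as a small device-selection helper (pick_device is a hypothetical function for illustration, not part of the comfyui_segment_anything code):

```python
# Minimal sketch of backend-aware device selection (hypothetical helper).
# In practice the availability flags would come from
# torch.cuda.is_available() and torch.backends.mps.is_available().
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Prefer CUDA, then Apple's MPS backend, then plain CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# The loader could then deserialize with, e.g.:
#   state_dict = torch.load(f, map_location=torch.device(pick_device(...)))
```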

Actual Behavior

The process fails with an error message indicating an attempt to deserialize a CUDA object on a system where CUDA is unavailable.

Screenshot

(screenshot of the error attached)

couleurs commented

Running into the same issue. Were you ever able to resolve it? Thanks!


nianxi commented Jul 24, 2024

Same issue on a MacBook Pro M2... MPS is available, but how do I resolve it?

Latest:
I solved it like this:

  1. The file with the error:
    custom_nodes/comfyui_segment_anything/sam_hq/build_sam_hq.py

  2. Change the code:

  • old: state_dict = torch.load(f)
  • new: state_dict = torch.load(f, map_location=torch.device("mps"))

Remember to install the PyTorch nightly build. Now it works!
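For machines without MPS, a slightly more portable variant of the same patch (a sketch, assuming a PyTorch build with torch.backends.mps, i.e. 1.12 or newer) picks the map_location at runtime instead of hard-coding "mps":

```python
import torch

def load_state_dict_portable(f):
    """Deserialize a checkpoint onto whatever backend this machine has.

    Sketch of a replacement for the torch.load(f) call in build_sam_hq.py:
    prefers CUDA, falls back to MPS on Apple Silicon, then plain CPU.
    """
    if torch.cuda.is_available():
        map_location = torch.device("cuda")
    elif torch.backends.mps.is_available():
        map_location = torch.device("mps")
    else:
        map_location = torch.device("cpu")
    return torch.load(f, map_location=map_location)
```

This keeps the custom node working on CUDA boxes, Apple Silicon, and CPU-only machines with a single code path.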
