
Regarding the issue of CUDA tensors #64

Open
MikeAiJF opened this issue Nov 28, 2024 · 11 comments

@MikeAiJF

PLEASE READ BEFORE SUBMITTING AN ISSUE

MagicQuill is not a commercial software but a research project. While we strive to improve and maintain it, support is provided on a best-effort basis. Please be patient and respectful in your communications.
To help us respond faster and better, please ensure the following:

  1. Search Existing Resources: Have you looked through the documentation (e.g., hardware requirements and setup steps), and searched online for potential solutions?
  2. Avoid Duplication: Check if a similar issue already exists.

If the issue persists, fill out the details below.


Checklist

  • I have searched the documentation and FAQs.
  • I have searched for similar issues but couldn’t find a solution.
  • I have provided clear and detailed information about the issue.

Issue/Feature Request Description

Type of Issue:

  • Bug
  • Feature Request
  • Question

Summary:


Steps to Reproduce (For Bugs Only)

Expected Behavior:

Actual Behavior:


Additional Context/Details


Environment

  • OS:
  • Version:
  • Any Relevant Dependencies:

Feature Request Specifics (If Applicable)

  • What problem does this solve?:
  • How will this feature improve the project?:
    Traceback (most recent call last):
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
    result = await app( # type: ignore[func-returns-value]
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in call
    return await self.app(scope, receive, send)
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in call
    await super().call(scope, receive, send)
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/applications.py", line 113, in call
    await self.middleware_stack(scope, receive, send)
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in call
    raise exc
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in call
    await self.app(scope, receive, _send)
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in call
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/routing.py", line 715, in call
    await self.middleware_stack(scope, receive, send)
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
    await self.app(scope, receive, send)
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/routing.py", line 76, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/routing.py", line 73, in app
    response = await f(request)
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/fastapi/routing.py", line 301, in app
    raw_response = await run_endpoint_function(
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
    return await dependant.call(**values)
    File "/project_03/MagicQuill/gradio_run.py", line 320, in guess_prompt
    res = guess_prompt_handler(data['original_image'], data['add_color_image'], data['add_edge_image'])
    File "/project_03/MagicQuill/gradio_run.py", line 109, in guess_prompt_handler
    res = guess(original_image_tensor, add_color_image_tensor, add_edge_mask)
    File "/project_03/MagicQuill/gradio_run.py", line 90, in guess
    description, ans1, ans2 = llavaModel.process(original_image_tensor, add_color_image_tensor, add_edge_mask)
    File "/project_03/MagicQuill/MagicQuill/llava_new.py", line 95, in process
    mean_brightness = image_with_sketch[bool_add_mask].mean()
    RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
    INFO: 127.0.0.1:54310 - "POST /gradio_api/queue/join HTTP/1.1" 200 OK
    Requested to load SD1ClipModel
    Loading 1 new model
    INFO: 127.0.0.1:54310 - "GET /gradio_api/queue/data?session_hash=i8htzq0dpzk HTTP/1.1" 200 OK
    Traceback (most recent call last):
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/gradio/queueing.py", line 624, in process_events
    response = await route_utils.call_process_api(
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/gradio/blocks.py", line 2018, in process_api
    result = await self.call_function(
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/gradio/blocks.py", line 1567, in call_function
    prediction = await anyio.to_thread.run_sync( # type: ignore
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
    result = context.run(func, *args)
    File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
    File "/project_03/MagicQuill/gradio_run.py", line 152, in generate_image_handler
    res = generate(
    File "/project_03/MagicQuill/gradio_run.py", line 120, in generate
    latent_samples, final_image, lineart_output, color_output = scribbleColorEditModel.process(
    File "/project_03/MagicQuill/MagicQuill/scribble_color_edit.py", line 56, in process
    mask = self.mask_processor.expand_mask(mask, expand=grow_size, tapered_corners=True)[0]
    File "/project_03/MagicQuill/MagicQuill/comfyui_utils.py", line 388, in expand_mask
    output = m.numpy()
    TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
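Both tracebacks point at the same underlying problem on a multi-GPU machine: tensors end up on different devices. Below is a minimal, self-contained sketch of the two fixes. The variable names mirror those in the tracebacks (`image_with_sketch`, `bool_add_mask`, `m`), but this is an illustrative repro, not the project's actual patch:

```python
import torch

# Use CUDA when available; on a CPU-only machine this sketch still runs.
device = "cuda" if torch.cuda.is_available() else "cpu"

image_with_sketch = torch.rand(4, 4, device=device)   # stand-in for the image tensor
bool_add_mask = torch.zeros(4, 4, dtype=torch.bool)   # boolean mask created on the CPU
bool_add_mask[0, 0] = True

# Fix for the first error ("indices should be ... on the same device"):
# move the mask to the same device as the tensor it indexes.
bool_add_mask = bool_add_mask.to(image_with_sketch.device)
mean_brightness = image_with_sketch[bool_add_mask].mean()

# Fix for the second error ("can't convert cuda:0 device type tensor to numpy"):
# copy the tensor to host memory before converting it to a NumPy array.
m = image_with_sketch
output = m.cpu().numpy()
```

Both errors disappear when every tensor involved in an operation lives on the same device, which is also why restricting the process to a single GPU resolves the issue.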
@MikeAiJF
Author

What's the reason? I didn't change the code.

@zliucz
Member

zliucz commented Nov 28, 2024

I haven't seen this error before. Could you please show me your environment list from pip list? Let's see if there's any mismatch.

@MikeAiJF
Author

accelerate 0.33.0
aiofiles 23.2.1
annotated-types 0.7.0
anyio 4.6.2.post1
bitsandbytes 0.44.1
certifi 2024.8.30
charset-normalizer 3.4.0
click 8.1.7
diffusers 0.31.0
einops 0.6.1
einops-exts 0.0.4
exceptiongroup 1.2.2
fastapi 0.115.5
ffmpy 0.4.0
filelock 3.16.1
fsspec 2024.10.0
gradio 5.4.0
gradio_client 1.4.2
gradio_magicquill 0.0.1
h11 0.14.0
httpcore 0.17.3
httpx 0.24.1
huggingface-hub 0.26.2
idna 3.10
importlib_metadata 8.5.0
Jinja2 3.1.4
joblib 1.4.2
latex2mathml 3.77.0
llava 1.2.2.post1 /project_03/MagicQuill/MagicQuill/LLaVA
markdown-it-py 3.0.0
markdown2 2.5.1
MarkupSafe 2.1.5
mdurl 0.1.2
mpmath 1.3.0
networkx 3.4.2
numpy 1.26.4
opencv-python 4.10.0.84
orjson 3.10.12
packaging 24.2
pandas 2.2.3
peft 0.13.2
pillow 11.0.0
pip 24.3.1
protobuf 4.25.4
psutil 6.1.0
pydantic 2.10.2
pydantic_core 2.27.1
pydub 0.25.1
Pygments 2.18.0
python-dateutil 2.9.0.post0
python-multipart 0.0.12
pytz 2024.2
PyYAML 6.0.2
regex 2024.11.6
requests 2.32.3
rich 13.9.4
ruff 0.8.0
safehttpx 0.1.1
safetensors 0.4.5
scikit-learn 1.2.2
scipy 1.14.1
semantic-version 2.10.0
sentencepiece 0.2.0
setuptools 75.6.0
shellingham 1.5.4
shortuuid 1.0.13
six 1.16.0
sniffio 1.3.1
starlette 0.41.3
svgwrite 1.4.3
sympy 1.13.3
threadpoolctl 3.5.0
timm 0.6.13
tokenizers 0.15.1
tomlkit 0.12.0
torch 2.1.2+cu118
torchaudio 2.1.2+cu118
torchsde 0.2.6
torchvision 0.16.2+cu118
tqdm 4.67.1
trampoline 0.1.2
transformers 4.37.2
triton 2.1.0
typer 0.13.1
typing_extensions 4.12.2
tzdata 2024.2
urllib3 2.2.3
uvicorn 0.32.1
wavedrom 2.0.3.post3
webcolors 1.13
websockets 12.0
wheel 0.45.1
zipp 3.21.0

@MikeAiJF
Author

[image attachment]

@MikeAiJF
Author

MikeAiJF commented Nov 28, 2024

Is it because of my graphics card?

@zliucz
Member

zliucz commented Nov 28, 2024

@MikeAiJF Oh. Indeed. Try using one and only one GPU by running this command: CUDA_VISIBLE_DEVICES=0 python gradio_run.py. Let me know if it works.

@MikeAiJF
Author

MikeAiJF commented Nov 28, 2024

> @MikeAiJF Oh. Indeed. Try using one and only one GPU by running this command: CUDA_VISIBLE_DEVICES=0 python gradio_run.py. Let me know if it works.

I can use it now, thank you very much.

@MikeAiJF
Author

Hello, am I limited to the SD 1.5 model, or can I use the Flux model for generation?

@zliucz
Member

zliucz commented Nov 28, 2024

Hi, currently we only support SD v1.5. But we have released our canvas UI, so you may modify the code to support Flux on your own.

@MikeAiJF
Author

> Hi, currently we only support SD v1.5. But we have released our canvas UI, so you may modify the code to support Flux on your own.

Thanks.

@aiikendoit

aiikendoit commented Jan 16, 2025

I'm using a Radeon 7600 XT.

RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
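For context on this error: the stock CUDA build of PyTorch only detects NVIDIA GPUs, so on an AMD card such as the Radeon 7600 XT the CUDA backend reports no device unless a ROCm build of PyTorch is installed (ROCm, on Linux, also surfaces through torch.cuda). A quick diagnostic, not project-specific code:

```python
import torch

# On the CUDA build of PyTorch an AMD GPU is invisible, so this returns False;
# with a ROCm build the same call can return True for supported AMD cards.
has_gpu = torch.cuda.is_available()
print("GPU visible to torch.cuda:", has_gpu)
```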
