Regarding the issue of CUDA tensors #64
What's the reason? I didn't change the code.
I haven't seen this error before. Could you share your environment by running `pip list`? Let's see if there's a version mismatch.
accelerate 0.33.0
Is it because of my graphics card?
@MikeAiJF Oh, indeed. Try using one and only one GPU by running this command:
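The command itself was not shown in the thread. A likely sketch, assuming the maintainer meant the standard CUDA environment variable that hides all but the first GPU from PyTorch:

```shell
# Assumption: restrict PyTorch to a single GPU via CUDA_VISIBLE_DEVICES;
# the exact command the maintainer intended is not in the thread.
export CUDA_VISIBLE_DEVICES=0
# then launch MagicQuill as usual, e.g. python gradio_run.py
```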
I can use it now, thank you very much!
Hello, am I limited to the SD v1.5 model, or can I use a Flux model for generation?
Hi, currently we only support SD v1.5. But we have released our canvas UI, so you may modify the code to support Flux on your own.
thanks |
I'm using a Radeon 7600 XT and get: RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
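That error means a CUDA build of PyTorch found no NVIDIA driver; an AMD card would need a ROCm build of PyTorch instead (whether MagicQuill works there is untested, as far as this thread shows). A safe preflight check is to ask PyTorch which device it can actually see:

```python
import torch

# On a CUDA build without an NVIDIA driver, is_available() returns False
# instead of raising, so this check never crashes.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")
```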
PLEASE READ BEFORE SUBMITTING AN ISSUE
MagicQuill is not a commercial software but a research project. While we strive to improve and maintain it, support is provided on a best-effort basis. Please be patient and respectful in your communications.
To help us respond faster and better, please ensure the following:
If the issue persists, fill out the details below.
Traceback (most recent call last):
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/applications.py", line 113, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
File "/project_03/MagicQuill/gradio_run.py", line 320, in guess_prompt
res = guess_prompt_handler(data['original_image'], data['add_color_image'], data['add_edge_image'])
File "/project_03/MagicQuill/gradio_run.py", line 109, in guess_prompt_handler
res = guess(original_image_tensor, add_color_image_tensor, add_edge_mask)
File "/project_03/MagicQuill/gradio_run.py", line 90, in guess
description, ans1, ans2 = llavaModel.process(original_image_tensor, add_color_image_tensor, add_edge_mask)
File "/project_03/MagicQuill/MagicQuill/llava_new.py", line 95, in process
mean_brightness = image_with_sketch[bool_add_mask].mean()
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
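This first error comes from indexing a CUDA tensor with a boolean mask that lives on the CPU. A minimal sketch of the rule and the fix, with variable names copied from the traceback rather than from MagicQuill's actual source:

```python
import torch

# PyTorch requires a boolean index to live on the same device as the
# tensor it indexes. Stand-in tensors; the real ones come from MagicQuill.
image_with_sketch = torch.arange(16.0).reshape(4, 4)  # stand-in image tensor
bool_add_mask = image_with_sketch > 7.0               # boolean mask

# The fix: move the mask to the image tensor's device before indexing.
# (.to() is a no-op when both are already on the same device.)
bool_add_mask = bool_add_mask.to(image_with_sketch.device)
mean_brightness = image_with_sketch[bool_add_mask].mean()
print(mean_brightness.item())  # mean of 8.0 .. 15.0 → 11.5
```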
INFO: 127.0.0.1:54310 - "POST /gradio_api/queue/join HTTP/1.1" 200 OK
Requested to load SD1ClipModel
Loading 1 new model
INFO: 127.0.0.1:54310 - "GET /gradio_api/queue/data?session_hash=i8htzq0dpzk HTTP/1.1" 200 OK
Traceback (most recent call last):
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/gradio/queueing.py", line 624, in process_events
response = await route_utils.call_process_api(
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/gradio/blocks.py", line 2018, in process_api
result = await self.call_function(
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/gradio/blocks.py", line 1567, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
return await future
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
result = context.run(func, *args)
File "/home/kmks-server-02/miniconda3/envs/MagicQuill/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
response = f(*args, **kwargs)
File "/project_03/MagicQuill/gradio_run.py", line 152, in generate_image_handler
res = generate(
File "/project_03/MagicQuill/gradio_run.py", line 120, in generate
latent_samples, final_image, lineart_output, color_output = scribbleColorEditModel.process(
File "/project_03/MagicQuill/MagicQuill/scribble_color_edit.py", line 56, in process
mask = self.mask_processor.expand_mask(mask, expand=grow_size, tapered_corners=True)[0]
File "/project_03/MagicQuill/MagicQuill/comfyui_utils.py", line 388, in expand_mask
output = m.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
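This second error is the complementary case: `Tensor.numpy()` only works on CPU tensors, so a tensor on `cuda:0` must be copied to host memory first. A hedged sketch of the fix for the `expand_mask` line, assuming `m` may be on the GPU:

```python
import torch

# Tensor.numpy() refuses CUDA tensors; detach and copy to host first.
m = torch.ones(2, 2)               # stand-in for the mask tensor in expand_mask
output = m.detach().cpu().numpy()  # .cpu() is a no-op if m is already on CPU
```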