This repository has been archived by the owner on Oct 8, 2024. It is now read-only.
specified device interface or feature level is not supported on the system #113
lilknockstar started this conversation in General
Replies: 2 comments
-
Is this a laptop? It seems that the 7400 is an integrated GPU, and quite an old one; I don't think you will be able to run this on that hardware.
-
It is a PC. I can use my GPU to render things in Blender, so I just figured that was my graphics card, maybe 🤦♂️
-
I followed the tutorial but can't seem to get anything to render to an output folder. It also doesn't help that I don't know where the output folder is, but anyway, I was just hoping somebody could assist me with what to do next. I asked ChatGPT and it told me: "This error message indicates that the specified device interface or feature level is not supported on the system where the code is running. Specifically, the error seems to be related to the DML (DirectML) provider for ONNX Runtime.
Possible solutions include:
Updating the version of the DML provider to a version that is supported on the system.
Switching to a different provider for ONNX Runtime, such as the CPU or CUDA provider, depending on the available hardware.
Checking that the system meets the requirements for running the DML provider, which include having Windows 10 version 2004 or later, a GPU with DirectX 12 Ultimate support, and the latest GPU drivers."
I checked these: I have DirectX 12 and Windows 10, my GPU drivers are updated, and my GPU is an AMD 7400 series.
Traceback (most recent call last):
File "C:\Users\linkp\OneDrive\Documents\stable_diff\OnnxDiffusersUI-main\virtualenv\lib\site-packages\gradio\routes.py", line 384, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\linkp\OneDrive\Documents\stable_diff\OnnxDiffusersUI-main\virtualenv\lib\site-packages\gradio\blocks.py", line 1024, in process_api
result = await self.call_function(
File "C:\Users\linkp\OneDrive\Documents\stable_diff\OnnxDiffusersUI-main\virtualenv\lib\site-packages\gradio\blocks.py", line 836, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\linkp\OneDrive\Documents\stable_diff\OnnxDiffusersUI-main\virtualenv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\linkp\OneDrive\Documents\stable_diff\OnnxDiffusersUI-main\virtualenv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\linkp\OneDrive\Documents\stable_diff\OnnxDiffusersUI-main\virtualenv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\linkp\OneDrive\Documents\stable_diff\OnnxDiffusersUI-main\onnxUI.py", line 528, in generate_click
pipe = OnnxStableDiffusionImg2ImgPipeline.from_pretrained(
File "C:\Users\linkp\OneDrive\Documents\stable_diff\OnnxDiffusersUI-main\virtualenv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 865, in from_pretrained
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "C:\Users\linkp\OneDrive\Documents\stable_diff\OnnxDiffusersUI-main\virtualenv\lib\site-packages\diffusers\pipelines\onnx_utils.py", line 205, in from_pretrained
return cls._from_pretrained(
File "C:\Users\linkp\OneDrive\Documents\stable_diff\OnnxDiffusersUI-main\virtualenv\lib\site-packages\diffusers\pipelines\onnx_utils.py", line 172, in _from_pretrained
model = OnnxRuntimeModel.load_model(
File "C:\Users\linkp\OneDrive\Documents\stable_diff\OnnxDiffusersUI-main\virtualenv\lib\site-packages\diffusers\pipelines\onnx_utils.py", line 77, in load_model
return ort.InferenceSession(path, providers=[provider], sess_options=sess_options)
File "C:\Users\linkp\OneDrive\Documents\stable_diff\OnnxDiffusersUI-main\virtualenv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 360, in init
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "C:\Users\linkp\OneDrive\Documents\stable_diff\OnnxDiffusersUI-main\virtualenv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 408, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\core\providers\dml\dml_provider_factory.cc(136)\onnxruntime_pybind11_state.pyd!00007FFF9E9A1583: (caller: 00007FFF9E9A12F1) Exception(1) tid(33fc) 887A0004 The specified device interface or feature level is not supported on this system.
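In case it helps, here is a minimal sketch of the "switch to a different provider" suggestion quoted above, assuming an already-converted ONNX model folder (the model_dir path below is a placeholder, not a file that ships with this repository). It prints the execution providers the installed onnxruntime build exposes, tries DirectML first, and falls back to the much slower CPU provider if the DirectML session cannot be created.

import onnxruntime as ort
from diffusers import OnnxStableDiffusionImg2ImgPipeline

# Show which execution providers are available,
# e.g. ['DmlExecutionProvider', 'CPUExecutionProvider'].
print(ort.get_available_providers())

model_dir = "./stable_diffusion_onnx"  # hypothetical converted model folder

try:
    # DirectML needs a DirectX 12 capable GPU; on unsupported hardware this
    # raises the "specified device interface or feature level is not
    # supported" error shown in the traceback above.
    pipe = OnnxStableDiffusionImg2ImgPipeline.from_pretrained(
        model_dir, provider="DmlExecutionProvider"
    )
except RuntimeError:
    # CPU execution works on any machine, just much more slowly.
    pipe = OnnxStableDiffusionImg2ImgPipeline.from_pretrained(
        model_dir, provider="CPUExecutionProvider"
    )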