Non-zero status code returned #11548
Comments
It may be a bug. Try the latest ORT release, or share your model with us to investigate.
I've tried that, but it doesn't work.
model file https://github.com/pythondever/_onnx_demo_error
@yuslepukhin Is there any update?
I did not get a chance to look at it yet.
I have the same problem as well. I have tried everything I could, but it still doesn't work. Could you give me some suggestions? Many thanks.
Please follow the issue reporting template and submit a program that reproduces the behavior; that would speed things up a lot. Please attach the input data and your model if you are able to. Then we will be able to determine what the issue is and the best course of action.
@yuslepukhin Thanks for the reply!
I have seen the link; it shows up as an empty folder.
Really sorry, I copied the address again. It's ready to go now.
I got a similar error:
Hi, I am really looking forward to your answer. Many thanks. I have the same problem as pythondever. I have tried everything I could for this problem, but it still remains.
@yuslepukhin My model can be obtained from the following link: Any help will be gratefully appreciated!
This is a very different issue, and the problem is in the model: the two shapes cannot be broadcast. Please report this separately.
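For illustration, broadcast compatibility can be checked ahead of time with NumPy, which follows the same trailing-dimension alignment rules that ONNX broadcasting uses. The shapes below are hypothetical, since the failing model's actual shapes are not shown in this comment:

```python
import numpy as np

# Hypothetical shapes for illustration. Broadcasting aligns trailing
# dimensions, so (1, 96, 128, 256) and (96,) are incompatible
# (256 vs. 96 in the last axis) and NumPy raises a ValueError.
try:
    np.broadcast_shapes((1, 96, 128, 256), (96,))
except ValueError as e:
    print("not broadcastable:", e)

# A compatible pair: a (96,) vector broadcasts against (1, 128, 256, 96)
# because the trailing dimensions match.
print(np.broadcast_shapes((1, 128, 256, 96), (96,)))  # (1, 128, 256, 96)
```

Running such a check on the two operand shapes of the failing node makes it easy to confirm that the mismatch is in the model itself rather than in the runtime.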
This still does not open.
Got this; it would be nice if you shared a script to drive this with inputs. Does this work on CPU?
Many thanks for your immediate reply!
I have been able to reproduce your issue. As an immediate workaround, enable only basic optimizations for CUDA runs. This will likely result in some performance loss while you are waiting for a fix.
Yes, it works now, thanks!
The problem was solved, thanks a lot. Looking forward to the fixed version!
Temp skip hrnet-w18 ort_gpu microsoft/onnxruntime#11548
Has this been fixed? I'm getting the following error on mine:
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Conv node. Name:'/stages.0/stages.0.1/dwconv/Conv' Status Message: X num_dims does not match W num_dims. X: {1,96,128,256} W: {96}
Describe the bug
I exported an ONNX model that runs with CPUExecutionProvider, but when I use CUDAExecutionProvider I get an error:
2022-05-17 17:30:35.309693323 [E:onnxruntime:Default, cuda_call.cc:118 CudaCall] CUDNN failure 3: CUDNN_STATUS_BAD_PARAM ; GPU=0 ; hostname=pc-Z390-GAMING-X ; expr=cudnnAddTensor(Base::CudnnHandle(), &alpha, Base::s_.z_tensor, Base::s_.z_data, &alpha, Base::s_.y_tensor, Base::s_.y_data);
2022-05-17 17:30:35.309720084 [E:onnxruntime:, sequential_executor.cc:364 Execute] Non-zero status code returned while running FusedConv node. Name:'Conv_34' Status Message: CUDNN error executing cudnnAddTensor(Base::CudnnHandle(), &alpha, Base::s_.z_tensor, Base::s_.z_data, &alpha, Base::s_.y_tensor, Base::s_.y_data)
Traceback (most recent call last):
File "pred_onnx.py", line 97, in
pred_onnx(model, im_path)
File "pred_onnx.py", line 80, in pred_onnx
pred = session.run([output_name], {input_name: image})[0]
File "/home/pc/anaconda3/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 192, in run
return self.sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running FusedConv node. Name:'Conv_34' Status Message: CUDNN error executing cudnnAddTensor(Base::CudnnHandle(), &alpha, Base::s_.z_tensor, Base::s_.z_data, &alpha, Base::s_.y_tensor, Base::s_.y_data)
How can I fix this error?
System information