
INVALID_ARGUMENT : unsupported conv activation mode "LeakyRelu" #22947

Open
baishouwujianfei opened this issue Nov 26, 2024 · 4 comments
Labels
ep:CUDA issues related to the CUDA execution provider

Comments

@baishouwujianfei

Describe the issue

Hello, I encountered the following error while using onnxruntime-gpu to start a model service with the CUDA execution provider:

onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a\_work\1\s\onnxruntime\contrib_ops\cuda\fused_conv.cc:19 onnxruntime::contrib::cuda::FusedConv<float>::FusedConv [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : unsupported conv activation mode "LeakyRelu"

The model runs normally when switched to the CPU with all other parameters unchanged. How can I resolve this issue?

To reproduce

none

Urgency

general

Platform

Windows

OS Version

win 11

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

onnxruntime-gpu 1.19.2

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA12.2

@skottmckay skottmckay added the ep:CUDA issues related to the CUDA execution provider label Nov 27, 2024
@skottmckay
Contributor

Seems like a bug in the Conv fusion optimizer. If the CUDA EP doesn't support LeakyRelu, the optimizer shouldn't fuse the Conv and LeakyRelu when they're assigned to the CUDA EP.

@baishouwujianfei
Author

> Seems like a bug in the Conv fusion optimizer. If the CUDA EP doesn't support LeakyRelu the optimizer shouldn't fuse the Conv and LeakyRelu if they're assigned to the CUDA EP.

Thank you for your reply. How can I avoid this bug, or should I wait for a fix?

@skottmckay
Contributor

It's actually set up to ignore the nodes and not fuse if it's assigned to CUDA (technically it could fuse with Relu, but right now it's ignoring all activations):

```cpp
if (node_ep == kCudaExecutionProvider) {
  return std::nullopt;
```

How was the model created? Wondering if it was saved with CPU EP optimizations applied, and now when it's loaded with CUDA enabled it's invalid. i.e. the optimizer ran previously when the Conv + Activation were assigned to the CPU EP so they were fused.

@baishouwujianfei
Author

> It's actually set up to ignore the nodes and not fuse if it's assigned to CUDA (technically it could fuse with Relu but right now it's ignoring all activations).
>
> onnxruntime/onnxruntime/core/optimizer/conv_activation_fusion.cc, lines 118 to 119 in a24723d:
>
> ```cpp
> if (node_ep == kCudaExecutionProvider) {
>   return std::nullopt;
> ```
>
> How was the model created? Wondering if it was saved with CPU EP optimizations applied, and now when it's loaded with CUDA enabled it's invalid. i.e. the optimizer ran previously when the Conv + Activation were assigned to the CPU EP so they were fused.

Sorry, this is someone else's model that I downloaded from Hugging Face, so I'm not sure how it was created.
Link: https://huggingface.co/seasonstudio/openvoice_tone_clone_onnx/blob/main/tone_clone_model.onnx
