Describe the issue
When running in ONNX Runtime with default optimizations, the output o2 is missing: the run returns None instead of an actual tensor. Disabling all ORT optimizations with ort.GraphOptimizationLevel.ORT_DISABLE_ALL makes the model execute correctly, so this looks like a mis-/over-optimization in one of the graph passes.
To reproduce
The model "test.onnx" (a simplified model exported from PyTorch) is attached: test.zip
```python
import onnxruntime as ort
import onnx
import numpy as np

model = onnx.load("test.onnx")
onnx.checker.check_model(model, full_check=True)

x = np.array([[[[5.69269, 3.9831784], [5.8404264, 5.823447]]]], dtype=np.float32)

sess = ort.InferenceSession(
    model.SerializeToString(),
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
results = sess.run(["o0", "o1", "o2"], {"i0": x})
print(f"{results[0] =}")
print(f"{results[1] =}")
print(f"{results[2] =}")  # Result is incorrectly optimized out as `None`
assert results[2] is None

options = ort.SessionOptions()
options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL
sess = ort.InferenceSession(
    model.SerializeToString(),
    sess_options=options,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
results = sess.run(["o0", "o1", "o2"], {"i0": x})
print(f"{results[0] =}")
print(f"{results[1] =}")
print(f"{results[2] =}")  # It works without optimization
assert results[2] is not None
```
Urgency
None, but it is very common to cache all intermediate outputs when using ONNX Runtime as a reference backend in testing (e.g. against PyTorch and other ONNX-supporting engines).
Platform
Linux
OS Version
Ubuntu 22.04
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.14.0 (ort-nightly)
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response