What happened?

Found this when lowering the onnx opt-125M-awq model (nod-ai/SHARK-TestSuite#258):

```
opt-125M-awq.default.onnx.linalg.elide.mlir:1078:22: error: operand #0 does not dominate this use
%extracted_365 = tensor.extract %282[] : tensor<i64>
                 ^
opt-125M-awq.default.onnx.linalg.elide.mlir:23:3: note: called from
func.func @main_graph(%arg0: tensor<?x?xi64>, %arg1: tensor<?x?xi64>, %arg2: tensor<?x12x?x64xf32>, %arg3: tensor<?x12x?x64xf32>, %arg4: tensor<?x12x?x64xf32>, %arg5: tensor<?x12x?x64xf32>, %arg6: tensor<?x12x?x64xf32>, %arg7: tensor<?x12x?x64xf32>, %arg8: tensor<?x12x?x64xf32>, %arg9: tensor<?x12x?x64xf32>, %arg10: tensor<?x12x?x64xf32>, %arg11: tensor<?x12x?x64xf32>, %arg12: tensor<?x12x?x64xf32>, %arg13: tensor<?x12x?x64xf32>, %arg14: tensor<?x12x?x64xf32>, %arg15: tensor<?x12x?x64xf32>, %arg16: tensor<?x12x?x64xf32>, %arg17: tensor<?x12x?x64xf32>, %arg18: tensor<?x12x?x64xf32>, %arg19: tensor<?x12x?x64xf32>, %arg20: tensor<?x12x?x64xf32>, %arg21: tensor<?x12x?x64xf32>, %arg22: tensor<?x12x?x64xf32>, %arg23: tensor<?x12x?x64xf32>, %arg24: tensor<?x12x?x64xf32>, %arg25: tensor<?x12x?x64xf32>) -> (tensor<?x?x50272xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>, tensor<?x12x?x64xf32>) {
^
opt-125M-awq.default.onnx.linalg.elide.mlir:1078:22: note: see current operation: %425 = "tensor.extract"(%426#1) : (tensor<i32>) -> i32
%extracted_365 = tensor.extract %282[] : tensor<i64>
                 ^
opt-125M-awq.default.onnx.linalg.elide.mlir:1087:12: note: operand defined here (op in the same block)
%289 = linalg.
```

Steps to reproduce your issue

```
iree-compile --iree-input-demote-i64-to-i32 --iree-hal-target-backends=llvm-cpu opt-125M-awq.default.onnx.linalg.elide.mlir > opt-125M-awq.default.vmfb
```

opt-125M-awq.default.onnx.linalg.elide.mlir

What component(s) does this issue relate to?

No response

Version information

candidate-20240630.940

Additional context

No response
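For context on the diagnostic: MLIR's verifier requires that every SSA value be defined at a point that dominates all of its uses; within a single block, that simply means the defining op must appear before the use. A minimal, hypothetical IR snippet (not taken from this model) that triggers the same class of verifier error:

```mlir
func.func @use_before_def() -> i64 {
  // %t is used here, but its defining op appears later in the same
  // block, so the verifier rejects this with
  // "error: operand #0 does not dominate this use".
  %v = tensor.extract %t[] : tensor<i64>
  %t = arith.constant dense<0> : tensor<i64>
  return %v : i64
}
```

Given the reproduction flags, a plausible reading is that the `--iree-input-demote-i64-to-i32` rewrite inserted or reordered an op so that the `tensor.extract` now reads a value defined later in the block; this is an assumption from the diagnostic, not a confirmed root cause.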
Redundant; already filed as #17759.