For the given IR:
```mlir
module {
  func.func @main_graph(%arg0: !torch.vtensor<[?,?,?,?],f32>, %arg1: !torch.vtensor<[11,1,1,384],f32>, %arg2: !torch.vtensor<[?,?,?,?],f32>, %arg3: !torch.vtensor<[11,1,100,384],f32>, %arg4: !torch.vtensor<[?,?,?],f32>) -> !torch.vtensor<[11,1,?,384],f32> attributes {torch.onnx_meta.ir_version = 7 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "2.6.0"} {
    %136 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<11x1x1x384xf32>} : () -> !torch.vtensor<[11,1,1,384],f32>
    %137 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<11x1x100x384xf32>} : () -> !torch.vtensor<[11,1,100,384],f32>
    %138 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<11x384x32x54xf32>} : () -> !torch.vtensor<[11,384,32,54],f32>
    %139 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1> : tensor<2xsi64>} : () -> !torch.vtensor<[2],si64>
    %none = torch.constant.none
    %219 = torch.operator "onnx.Shape"(%arg0) : (!torch.vtensor<[?,?,?,?],f32>) -> !torch.vtensor<[4],si64>
    %220 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__1> : tensor<si64>} : () -> !torch.vtensor<[],si64>
    %221 = torch.operator "onnx.Gather"(%219, %220) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[4],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64>
    %223 = torch.operator "onnx.Shape"(%arg0) : (!torch.vtensor<[?,?,?,?],f32>) -> !torch.vtensor<[4],si64>
    %224 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__3> : tensor<si64>} : () -> !torch.vtensor<[],si64>
    %225 = torch.operator "onnx.Gather"(%223, %224) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[4],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64>
    %270 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__23> : tensor<si64>} : () -> !torch.vtensor<[],si64>
    %271 = torch.operator "onnx.Div"(%221, %270) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64>
    %274 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__24> : tensor<si64>} : () -> !torch.vtensor<[],si64>
    %275 = torch.operator "onnx.Div"(%225, %274) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64>
    %283 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__27> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %285 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__28> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %287 = torch.operator "onnx.Concat"(%283, %285) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[2],si64>
    %302 = torch.operator "onnx.Cast"(%287) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[2],si64>) -> !torch.vtensor<[2],si64>
    %303 = torch.operator "onnx.Concat"(%139, %302) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[2],si64>, !torch.vtensor<[2],si64>) -> !torch.vtensor<[4],si64>
    %304 = torch.operator "onnx.Resize"(%138, %none, %none, %303) {torch.onnx.coordinate_transformation_mode = "half_pixel", torch.onnx.cubic_coeff_a = -7.500000e-01 : f32, torch.onnx.mode = "cubic", torch.onnx.nearest_mode = "floor"} : (!torch.vtensor<[11,384,32,54],f32>, !torch.none, !torch.none, !torch.vtensor<[4],si64>) -> !torch.vtensor<[?,?,?,?],f32>
    %305 = torch.operator "onnx.Shape"(%304) : (!torch.vtensor<[?,?,?,?],f32>) -> !torch.vtensor<[4],si64>
    %306 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__33> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %307 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__34> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %308 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__35> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %309 = torch.operator "onnx.Slice"(%305, %307, %308, %306) : (!torch.vtensor<[4],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[2],si64>
    %310 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__36> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %311 = torch.operator "onnx.Concat"(%309, %310) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[2],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[3],si64>
    %312 = torch.operator "onnx.Reshape"(%304, %311) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[?,?,?,?],f32>, !torch.vtensor<[3],si64>) -> !torch.vtensor<[?,?,?],f32>
    %313 = torch.operator "onnx.Transpose"(%312) {torch.onnx.perm = [0 : si64, 2 : si64, 1 : si64]} : (!torch.vtensor<[?,?,?],f32>) -> !torch.vtensor<[?,?,?],f32>
    %314 = torch.operator "onnx.Mul"(%271, %275) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64>
    %315 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__37> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %316 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__38> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %317 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__39> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %318 = torch.operator "onnx.Unsqueeze"(%314, %317) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64>
    %319 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__40> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %320 = torch.operator "onnx.Concat"(%315, %316, %318, %319) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[4],si64>
    %321 = torch.operator "onnx.Reshape"(%313, %320) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[?,?,?],f32>, !torch.vtensor<[4],si64>) -> !torch.vtensor<[?,?,?,?],f32>
    %322 = torch.operator "onnx.Concat"(%136, %321, %137) {torch.onnx.axis = 2 : si64} : (!torch.vtensor<[11,1,1,384],f32>, !torch.vtensor<[?,?,?,?],f32>, !torch.vtensor<[11,1,100,384],f32>) -> !torch.vtensor<[11,1,?,384],f32>
    return %322 : !torch.vtensor<[11,1,?,384],f32>
  }
}

{-#
  dialect_resources: {
    builtin: {
      __1: "0x080000000200000000000000",
      __3: "0x080000000300000000000000",
      __23: "0x080000001000000000000000",
      __24: "0x080000001000000000000000",
      __27: "0x080000000000000000000000",
      __28: "0x080000000000000000000000",
      __33: "0x080000000000000000000000",
      __34: "0x080000000000000000000000",
      __35: "0x080000000200000000000000",
      __36: "0x08000000FFFFFFFFFFFFFFFF",
      __37: "0x080000000B00000000000000",
      __38: "0x080000000100000000000000",
      __39: "0x080000000000000000000000",
      __40: "0x080000008001000000000000"
    }
  }
#-}
```
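To make the constants above easier to triage, the `dialect_resources` blobs can be decoded by hand. My reading of the MLIR blob hex encoding (an assumption worth double-checking) is a 4-byte alignment header followed by the little-endian payload; a minimal sketch:

```python
import struct

def decode_i64_blob(hex_blob):
    """Decode a single-element si64 dense_resource hex blob."""
    raw = bytes.fromhex(hex_blob[2:])        # drop the "0x" prefix
    # first 4 bytes appear to be the alignment header; payload follows
    (value,) = struct.unpack("<q", raw[4:12])  # signed little-endian i64
    return value

# __23 / __24: divisors applied to the spatial dims of %arg0
print(decode_i64_blob("0x080000001000000000000000"))  # 16
# __36: the -1 "infer this dim" marker fed into onnx.Reshape (%311)
print(decode_i64_blob("0x08000000FFFFFFFFFFFFFFFF"))  # -1
# __37 / __40: leading and trailing dims of the %320 reshape target
print(decode_i64_blob("0x080000000B00000000000000"))  # 11
print(decode_i64_blob("0x080000008001000000000000"))  # 384
```

So the reshape target for %321 is effectively [11, 1, dynamic, 384], which matches the concat operand types below.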
I get the following error:

```
error: expected sizes to be non-negative, but got -1
  %322 = torch.operator "onnx.Concat"(%136, %321, %137) {torch.onnx.axis = 2 : si64} : (!torch.vtensor<[11,1,1,384],f32>, !torch.vtensor<[?,?,?,?],f32>, !torch.vtensor<[11,1,100,384],f32>) -> !torch.vtensor<[11,1,?,384],f32>
```
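For context, my understanding of why dim 2 of the result must stay dynamic: %321 has shape [11, 1, (H/16)*(W/16), 384] with H and W taken from the dynamic %arg0, so the concat along axis 2 is 1 + dynamic + 100. A hedged sketch of the expected shape propagation (helper name is mine, None stands for a `?` dim):

```python
# Hypothetical shape propagation for the failing onnx.Concat (axis = 2).
def concat_result_shape(shapes, axis):
    out = list(shapes[0])
    dims = [s[axis] for s in shapes]
    # a single dynamic input dim makes the result dim dynamic,
    # never a literal negative size
    out[axis] = None if any(d is None for d in dims) else sum(dims)
    return out

# %136: [11,1,1,384], %321: [11,1,?,384], %137: [11,1,100,384]
result = concat_result_shape(
    [(11, 1, 1, 384), (11, 1, None, 384), (11, 1, 100, 384)], axis=2
)
print(result)  # [11, 1, None, 384], i.e. [11,1,?,384]
```

The error suggests the canonicalization is instead materializing the dynamic marker as a literal -1 size, though I have not confirmed where in the pass that happens.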
The failure occurs during iree-flow-canonicalization, after CSE.
Command:

```shell
iree-compile tt.mlir --iree-hal-target-backends=llvm-cpu --iree-llvmcpu-target-cpu=host -o abc.vmfb
```
IREE version: IREE compiler version 3.1.0rc20241217 @ 362b554
Model: from the HF top-1000 most-downloaded models (hf_yolos-small-finetuned-license-plate-detection)
IR dump obtained with `--mlir-print-ir-after-all --mlir-print-ir-before-all --mlir-disable-threading --mlir-elide-elementsattrs-if-larger=4`:
dump.log
Component: Compiler