
[compile][cpu]: type of return operand 0 ('!torch.vtensor<[?,384],f32>') doesn't match function result type ('!torch.vtensor<[1,384],f32>') in function #18269

Closed
pdhirajkumarprasad opened this issue Aug 19, 2024 · 2 comments
Labels: bug 🐞 Something isn't working, integrations/onnx ONNX integration work

pdhirajkumarprasad commented Aug 19, 2024

What happened?

For the given IR:

module {
  func.func @"torch-jit-export"(%arg0: !torch.vtensor<[2],si64>, %arg3: !torch.vtensor<[?,384,2],f32>) -> (!torch.vtensor<[1,384],f32>, !torch.vtensor<[1,384],f32>) attributes {torch.onnx_meta.ir_version = 6 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "1.8"} {
    %3217:2 = torch.operator "onnx.Split"(%arg3, %arg0) {torch.onnx.axis = -1 : si64} : (!torch.vtensor<[?,384,2],f32>, !torch.vtensor<[2],si64>) -> (!torch.vtensor<[?,384,1],f32>, !torch.vtensor<[?,384,1],f32>) 
    %3218 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<2> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %3219 = torch.operator "onnx.Squeeze"(%3217#0, %3218) : (!torch.vtensor<[?,384,1],f32>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1,384],f32> 
    %3220 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<2> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %3221 = torch.operator "onnx.Squeeze"(%3217#1, %3220) : (!torch.vtensor<[?,384,1],f32>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1,384],f32> 
    return %3219, %3221 : !torch.vtensor<[1,384],f32>, !torch.vtensor<[1,384],f32>
  }
}

I am getting the following error:

t1.mlir:8:5: error: type of return operand 0 ('!torch.vtensor<[?,384],f32>') doesn't match function result type ('!torch.vtensor<[1,384],f32>') in function @torch-jit-export
    return %3219, %3221 : !torch.vtensor<[1,384],f32>, !torch.vtensor<[1,384],f32>
    ^
// -----// IR Dump After DecomposeComplexOps Failed (torch-decompose-complex-ops) //----- //
"func.func"() <{function_type = (!torch.vtensor<[2],si64>, !torch.vtensor<[?,384,2],f32>) -> (!torch.vtensor<[1,384],f32>, !torch.vtensor<[1,384],f32>), sym_name = "torch-jit-export"}> ({
^bb0(%arg0: !torch.vtensor<[2],si64> loc("t1.mlir":2:33), %arg1: !torch.vtensor<[?,384,2],f32> loc("t1.mlir":2:66)):
  %0 = "torch.constant.bool"() <{value = true}> : () -> !torch.bool loc(unknown)
  %1 = "torch.constant.int"() <{value = 2 : i64}> : () -> !torch.int loc(unknown)
  %2 = "torch.constant.int"() <{value = 0 : i64}> : () -> !torch.int loc(unknown)
  %3 = "torch.constant.int"() <{value = 1 : i64}> : () -> !torch.int loc(unknown)
  %4 = "torch.aten.slice.Tensor"(%arg0, %2, %2, %3, %3) : (!torch.vtensor<[2],si64>, !torch.int, !torch.int, !torch.int, !torch.int) -> !torch.vtensor<[1],si64> loc("t1.mlir":3:15)
  %5 = "torch.aten.item"(%4) : (!torch.vtensor<[1],si64>) -> !torch.int loc("t1.mlir":3:15)
  %6 = "torch.aten.slice.Tensor"(%arg0, %2, %3, %1, %3) : (!torch.vtensor<[2],si64>, !torch.int, !torch.int, !torch.int, !torch.int) -> !torch.vtensor<[1],si64> loc("t1.mlir":3:15)
  %7 = "torch.aten.item"(%6) : (!torch.vtensor<[1],si64>) -> !torch.int loc("t1.mlir":3:15)
  %8 = "torch.aten.add.int"(%2, %5) : (!torch.int, !torch.int) -> !torch.int loc("t1.mlir":3:15)
  %9 = "torch.aten.slice.Tensor"(%arg1, %1, %2, %8, %3) : (!torch.vtensor<[?,384,2],f32>, !torch.int, !torch.int, !torch.int, !torch.int) -> !torch.vtensor<[?,384,1],f32> loc("t1.mlir":3:15)
  %10 = "torch.aten.add.int"(%8, %7) : (!torch.int, !torch.int) -> !torch.int loc("t1.mlir":3:15)
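For reference, the shape behavior the verifier is enforcing can be sketched in NumPy (a hypothetical instantiation, assuming the dynamic leading dimension happens to be 1 at runtime and split sizes of [1, 1]):

```python
import numpy as np

# Hypothetical input matching !torch.vtensor<[?,384,2],f32>, with the
# dynamic dim instantiated as 1, and split sizes [1, 1] along axis -1.
x = np.zeros((1, 384, 2), dtype=np.float32)
split_sizes = np.array([1, 1], dtype=np.int64)

# onnx.Split along the last axis yields two [?,384,1] tensors.
a, b = np.split(x, np.cumsum(split_sizes)[:-1], axis=-1)
assert a.shape == (1, 384, 1) and b.shape == (1, 384, 1)

# onnx.Squeeze with axes=[2] removes only the size-1 last dimension.
out0 = np.squeeze(a, axis=2)
out1 = np.squeeze(b, axis=2)

# The squeeze constrains only the last dim; the leading dim stays
# dynamic ([?,384] in the IR), which is why asserting a static
# [1,384] result type on the function fails verification.
assert out0.shape == (1, 384)
```

This is only an illustration of the shape semantics, not the compiler's actual shape-inference code: statically, the squeeze of a `[?,384,1]` tensor along axis 2 is `[?,384]`, and nothing proves the `?` equals 1.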

IREE Version:

IREE compiler version 20240819.990 @ aeda149
LLVM version 20.0.0git

Steps to reproduce your issue

Command to reproduce the issue:

iree-compile --mlir-print-debuginfo --mlir-print-op-on-diagnostic=false --iree-hal-target-backends=llvm-cpu model.torch_onnx.mlir

Also, in the above IR, if I replace the uses of the %3218 and %3220 constant nodes with an input argument, as below:

module {
  func.func @"torch-jit-export"(%arg0: !torch.vtensor<[2],si64>, %arg3: !torch.vtensor<[?,384,2],f32>, %arg4: !torch.vtensor<[1],si64>) -> (!torch.vtensor<[1,384],f32>, !torch.vtensor<[1,384],f32>) attributes {torch.onnx_meta.ir_version = 6 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "1.8"} {
    %3217:2 = torch.operator "onnx.Split"(%arg3, %arg0) {torch.onnx.axis = -1 : si64} : (!torch.vtensor<[?,384,2],f32>, !torch.vtensor<[2],si64>) -> (!torch.vtensor<[?,384,1],f32>, !torch.vtensor<[?,384,1],f32>) 
    %3218 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<2> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %3219 = torch.operator "onnx.Squeeze"(%3217#0, %arg4) : (!torch.vtensor<[?,384,1],f32>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1,384],f32> 
    %3220 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<2> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %3221 = torch.operator "onnx.Squeeze"(%3217#1, %arg4) : (!torch.vtensor<[?,384,1],f32>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1,384],f32> 
    return %3219, %3221 : !torch.vtensor<[1,384],f32>, !torch.vtensor<[1,384],f32>
  }
}

then I get the following error:

t1.mlir:5:13: error: failed to legalize operation 'torch.prim.ListConstruct'
    %3219 = torch.operator "onnx.Squeeze"(%3217#0, %arg4) : (!torch.vtensor<[?,384,1],f32>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1,384],f32> 
            ^
// -----// IR Dump After AutoInputConversionPipelinePass Failed (iree-auto-input-conversion) //----- //
module {
  util.func public @"torch-jit-export$async"(%arg0: !hal.buffer_view, %arg1: !hal.buffer_view, %arg2: !hal.buffer_view, %arg3: !hal.fence, %arg4: !hal.fence) -> (!hal.buffer_view, !hal.buffer_view) attributes {inlining_policy = #util.inline.never, iree.abi.model = "coarse-fences", iree.abi.stub} {
    %c2_i64 = arith.constant 2 : i64 loc(#loc2)
    %c0_i64 = arith.constant 0 : i64 loc(#loc2)
    %c0 = arith.constant 0 : index loc(#loc2)
    %0 = hal.tensor.import wait(%arg3) => %arg0 : !hal.buffer_view -> tensor<2xi64> loc(#loc3)
    %1 = torch_c.from_builtin_tensor %0 : tensor<2xi64> -> !torch.vtensor<[2],si64> loc(#loc3)
    %2 = hal.buffer_view.dim<%arg1 : !hal.buffer_view>[0] : index loc(#loc4)
    %3 = hal.tensor.import wait(%arg3) => %arg1 : !hal.buffer_view -> tensor<?x384x2xf32>{%2} loc(#loc4)
    %4 = torch_c.from_builtin_tensor %3 : tensor<?x384x2xf32> -> !torch.vtensor<[?,384,2],f32> loc(#loc4)
    %5 = hal.tensor.import wait(%arg3) => %arg2 : !hal.buffer_view -> tensor<1xi64> loc(#loc5)
    %6 = torch_c.from_builtin_tensor %5 : tensor<1xi64> -> !torch.vtensor<[1],si64> loc(#loc5)
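The semantics behind this failure can be sketched in Python (a rough model of the op, not the actual lowering): when the squeeze axes arrive as a runtime tensor (%arg4) instead of a constant, their values must be read out of the tensor to build the axis list, which the torch-level IR models with `torch.aten.item` feeding a `torch.prim.ListConstruct`; a backend expecting compile-time-constant axes cannot legalize that list.

```python
import numpy as np

def squeeze_with_runtime_axes(x, axes_tensor):
    # Mirrors onnx.Squeeze when axes arrive as a runtime tensor: the
    # axis values are extracted element-by-element (the analogue of
    # torch.aten.item) and collected into a list (the analogue of
    # torch.prim.ListConstruct) before the squeeze can be performed.
    axes = tuple(int(a) for a in axes_tensor)
    return np.squeeze(x, axis=axes)

x = np.zeros((1, 384, 1), dtype=np.float32)
axes = np.array([2], dtype=np.int64)
assert squeeze_with_runtime_axes(x, axes).shape == (1, 384)
```

In an eager runtime the extraction is trivial, but a compiler that needs the output rank at compile time has no static value for the axes, which matches zjgarvey's comment below that squeeze dims are almost never graph inputs.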

What component(s) does this issue relate to?

Compiler

Version information

No response

Additional context

No response

zjgarvey (Contributor) commented Sep 20, 2024

@pdhirajkumarprasad Can you indicate which model this comes from? I'd like to take a look at where arg3 is coming from.

In these reproducers, I wouldn't recommend converting constants of the IR into inputs. For example, squeeze dims are almost never graph inputs, and I wouldn't expect the second example to compile.

@zjgarvey zjgarvey self-assigned this Sep 20, 2024
pdhirajkumarprasad (Author) commented:

Closing this issue as it is no longer seen in the nightly build.
