
failed to legalize operation 'hal.interface.constant.load' #18487

Open
pdhirajkumarprasad opened this issue Sep 11, 2024 · 4 comments

Labels: bug 🐞 Something isn't working

@pdhirajkumarprasad
What happened?

For the given IR:

```mlir
module {
  func.func @main_graph(%arg0: !torch.vtensor<[1,128],si64>, %arg1: !torch.vtensor<[1,128],f32>, %arg2: !torch.vtensor<[?,?,?,?],f32>, %arg3: !torch.vtensor<[127,127],f32>, %arg4:!torch.vtensor<[2,2],si64>, %arg5: !torch.vtensor<[1,64,12,64],f32>, %arg6: !torch.vtensor<[1,128,12,64],f32>, %arg7: !torch.vtensor<[1],si64>, %arg8: !torch.vtensor<[2],si64> ) -> !torch.vtensor<[?,?,?,?],f32>  attributes {torch.onnx_meta.ir_version = 8 : si64, torch.onnx_meta.opset_version = 17 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "2.1.0"} {
    %none = torch.constant.none
    %416 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__72> : tensor<7x7xf32>} : () -> !torch.vtensor<[7,7],f32> 
    %418 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__74> : tensor<4xsi64>} : () -> !torch.vtensor<[4],si64> 
    %419 = torch.operator "onnx.ConstantOfShape"(%arg7) {torch.onnx.value = dense_resource<__75> : tensor<1xsi64>} : (!torch.vtensor<[1],si64>) -> !torch.vtensor<[0],si64> 
    %420 = torch.operator "onnx.Concat"(%418, %419) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[4],si64>, !torch.vtensor<[0],si64>) -> !torch.vtensor<[4],si64> 
    %422 = torch.operator "onnx.Reshape"(%420, %arg8) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[4],si64>, !torch.vtensor<[2],si64>) -> !torch.vtensor<[2,2],si64> 
    %423 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__77> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %427 = torch.operator "onnx.Slice"(%422, %423, %423, %423, %423) : (!torch.vtensor<[2,2],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[2,2],si64> 
    %428 = torch.operator "onnx.Transpose"(%427) {torch.onnx.perm = [1 : si64, 0 : si64]} : (!torch.vtensor<[2,2],si64>) -> !torch.vtensor<[2,2],si64> 
    %429 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__81> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %430 = torch.operator "onnx.Reshape"(%428, %429) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[2,2],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[4],si64> 
    %431 = torch.operator "onnx.Cast"(%430) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[4],si64>) -> !torch.vtensor<[4],si64> 
    %432 = torch.operator "onnx.Pad"(%416, %431, %none) {torch.onnx.mode = "constant"} : (!torch.vtensor<[7,7],f32>, !torch.vtensor<[4],si64>, !torch.none) -> !torch.vtensor<[?,?],f32> 
    %957 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__273> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %959 = torch.operator "onnx.Slice"(%432, %957, %957, %957, %957) : (!torch.vtensor<[?,?],f32>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[?,?],f32> 
    %960 = torch.operator "onnx.Concat"(%959, %432) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[?,?],f32>, !torch.vtensor<[?,?],f32>) -> !torch.vtensor<[?,?],f32> 
    %963 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__277> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %965 = torch.operator "onnx.Slice"(%960, %963, %963, %963, %963) : (!torch.vtensor<[?,?],f32>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[?,?],f32> 
    %995 = torch.operator "onnx.Einsum"(%arg5, %arg6) {torch.onnx.equation = "bind,bjnd->bnij"} : (!torch.vtensor<[1,64,12,64],f32>, !torch.vtensor<[1,128,12,64],f32>) -> !torch.vtensor<[?,?,?,?],f32> 
    %1043 = torch.operator "onnx.Mul"(%arg2, %965) : (!torch.vtensor<[?,?,?,?],f32>, !torch.vtensor<[?,?],f32>) -> !torch.vtensor<[?,?,?,?],f32> 
    %1076 = torch.operator "onnx.Add"(%995, %1043) : (!torch.vtensor<[?,?,?,?],f32>, !torch.vtensor<[?,?,?,?],f32>) -> !torch.vtensor<[?,?,?,?],f32> 
    return %1076: !torch.vtensor<[?,?,?,?],f32>
  }
}

{-#
  dialect_resources: {
    builtin: {
      __72: "0x080000000000803F0000803F0000803F0000803F000080",
      __74: "0x080000000100000000000000000000000000000001000000000000000000000000000000",
      __75: "0x080000000000000000000000",
      __77: "0x080000000000000000000000",
      __81: "0x08000000FFFFFFFFFFFFFFFF",
      __273: "0x080000000100000000000000",
      __277: "0x08000000FFFFFFFFFFFFFFFF"
    }
  }
#-}
```

Getting the following error:

```
model.mlir:22:13: error: failed to legalize operation 'hal.interface.constant.load'
    %1043 = torch.operator "onnx.Mul"(%arg2, %965) : (!torch.vtensor<[?,?,?,?],f32>, !torch.vtensor<[?,?],f32>) -> !torch.vtensor<[?,?,?,?],f32> 
            ^
model.mlir:22:13: note: see current operation: %84 = "hal.interface.constant.load"() {layout = #hal.pipeline.layout<constants = 14, bindings = [#hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, Indirect>], flags = Indirect>, ordinal = 2 : index} : () -> i32
```

IR dump produced with the flags `--mlir-print-ir-after-all --mlir-print-ir-before-all --mlir-disable-threading --mlir-elide-elementsattrs-if-larger=4` (due to the file size restriction, the initial part has been removed from the log):
dump.log

Steps to reproduce your issue

Command:

```
iree-compile --iree-hal-target-backends=llvm-cpu model.mlir
```

What component(s) does this issue relate to?

Compiler

Version information

No response

Additional context

No response

@nirvedhmeshram (Contributor)

@pdhirajkumarprasad I am not able to reproduce the error mentioned in the issue; for me this is failing in convert-torch-onnx-to-torch, the code for which lives in torch-mlir. Here is the crash dump

@pdhirajkumarprasad (Author) commented Sep 27, 2024

@nirvedhmeshram we are seeing multiple crashes; the original issue is masked by nod-ai/SHARK-ModelDev#852

@vinayakdsci (Contributor)

> @pdhirajkumarprasad I am not able to reproduce the error mentioned in the issue; for me this is failing in convert-torch-onnx-to-torch, the code for which lives in torch-mlir. Here is the crash dump

I have the IR failing with the same crash, @nirvedhmeshram. This is happening because the constant `__72` does not have the right buffer size to back 7x7 elements of bitwidth 32.

@pdhirajkumarprasad is there a specific model that produces this IR? We could be looking at a possible import issue.
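The size mismatch is easy to confirm by hand. A minimal sketch (assuming, per MLIR's textual dense-resource encoding, that the first 4 bytes of the blob are an alignment header rather than element data):

```python
# Check whether the __72 dense_resource blob can back a tensor<7x7xf32>.
# Assumption: the leading 4 bytes ("08000000") are an alignment header,
# so only the remaining bytes are element data.
blob_hex = "080000000000803F0000803F0000803F0000803F000080"  # __72, "0x" stripped
raw = bytes.fromhex(blob_hex)
data = raw[4:]              # drop the presumed 4-byte alignment header
needed = 7 * 7 * 4          # 49 f32 elements, 4 bytes each = 196 bytes
print(len(data), needed)    # prints "19 196" -- the blob is far too small
```

Under that assumption the blob holds only 19 bytes of data (not even five complete f32 values, `0000803F` being 1.0 in little-endian) against the 196 bytes a 7x7 f32 tensor requires, which supports the import-issue theory.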

@vinayakdsci (Contributor)

@pdhirajkumarprasad Unable to reproduce the issue with any of the models mentioned in nod-ai/SHARK-ModelDev#812 on the respective tracker, with the latest build of IREE.
