[CI][XFAIL] test_shape_clip_start #18182

Open
PhaneeshB opened this issue Aug 9, 2024 · 0 comments
Labels
bug 🐞 Something isn't working integrations/onnx ONNX integration work

Comments

@PhaneeshB
Contributor

What happened?

Seen with the torch-mlir bump in IREE #18169.
Runtime failure: test_shape_clip_start produces an all-zeros output on CPU (llvm-cpu), AMD GPU (Vulkan), and Nvidia GPU (CUDA and Vulkan).

_ IREE compile and run: test_shape_clip_start::model.mlir::model.mlir::cpu_llvm_sync _
[gw1] linux -- Python 3.11.9 /home/runner/work/iree/iree/venv/bin/python
Error invoking iree-run-module
Error code: 1
Stderr diagnostics:

Stdout diagnostics:
EXEC @test_shape_clip_start
[FAILED] result[0]: element at index 0 (0) does not match the expected (3); expected that the view is equal to contents of a view of 3xi64
  expected:
3xi64=3 4 5
  actual:
3xi64=0 0 0

Compiled with:
  cd /home/runner/work/iree/iree/SHARK-TestSuite/iree_tests/onnx/node/generated/test_shape_clip_start && iree-compile model.mlir --iree-hal-target-backends=llvm-cpu --iree-input-demote-f64-to-f32=false -o model_cpu_llvm_sync_.vmfb

Run with:
  cd /home/runner/work/iree/iree/SHARK-TestSuite/iree_tests/onnx/node/generated/test_shape_clip_start && iree-run-module --module=model_cpu_llvm_sync_.vmfb --device=local-sync --flagfile=test_data_flags.txt
module {
  func.func @test_shape_clip_start(%arg0: !torch.vtensor<[3,4,5],f32>) -> !torch.vtensor<[3],si64> attributes {torch.onnx_meta.ir_version = 10 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.producer_name = "backend-test", torch.onnx_meta.producer_version = ""} {
    %none = torch.constant.none
    %0 = torch.operator "onnx.Shape"(%arg0) {torch.onnx.start = -10 : si64} : (!torch.vtensor<[3,4,5],f32>) -> !torch.vtensor<[3],si64>
    return %0 : !torch.vtensor<[3],si64>
  }
}
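For context, the ONNX Shape operator (opset 15 and later) accepts optional start/end attributes that slice the returned shape with Python-style slice semantics: negative indices count from the back and out-of-range indices are clamped, so start = -10 on a rank-3 input behaves like start = 0 and the expected result is the full shape [3, 4, 5]. A minimal sketch of that semantics in NumPy (illustrative only, not IREE or torch-mlir code; the function name is hypothetical):

    # Illustrative reference for onnx.Shape with the optional `start`/`end`
    # attributes (ONNX opset >= 15). Assumes the shape tuple is sliced with
    # Python slice semantics, which clamp out-of-range indices.
    import numpy as np

    def onnx_shape(x: np.ndarray, start: int = 0, end: int | None = None) -> np.ndarray:
        # start = -10 on a rank-3 tensor is clamped to 0, so the full
        # shape is returned.
        return np.asarray(x.shape[start:end], dtype=np.int64)

    print(onnx_shape(np.zeros((3, 4, 5)), start=-10))  # expected: [3 4 5]

The zero-filled actual output above suggests the lowering of onnx.Shape with an out-of-range negative start is not producing the clamped slice.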

Steps to reproduce your issue

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

What component(s) does this issue relate to?

No response

Version information

No response

Additional context

No response

@PhaneeshB PhaneeshB added the bug 🐞 Something isn't working label Aug 9, 2024
@ScottTodd ScottTodd added the integrations/onnx ONNX integration work label Aug 19, 2024