
[onnx] Enable f64 backends for onnx test suite #18056

Closed
wants to merge 1 commit into from

Conversation

rsuderman
Contributor

For ROCm, CPU, and CUDA we can support f64 types for the ONNX backend. It makes more sense to support testing with these types on backends where they are available.

- Old: "--iree-input-demote-f64-to-f32"
- New: "--iree-input-demote-f64-to-f32=false"
Member


false is the default right? You can just leave the flag off then.

Contributor Author


It is not. I have to set it to false to disable the demotion.

Member

@ScottTodd ScottTodd Jul 30, 2024


// Gate various type based demotion passes that run before anything else.
bool demoteI64ToI32 = false;
bool demoteF32ToF16 = false;
bool demoteF64ToF32 = true;
bool promoteF16ToF32 = false;
bool promoteBF16ToF32 = false;
ah. Weird that we have it explicitly set to true in so many places then.

@rsuderman rsuderman requested a review from ScottTodd July 30, 2024 22:18
@ScottTodd
Member

Test results appear mixed on https://github.com/iree-org/iree/actions/runs/10170532539/job/28130212435?pr=18056. Some are newly passing, some newly failing to compile, and some newly crashing while compiling :P.

I think we should just test with default flags, whatever those may be. The flags that change types at the input stage are generally sketchy though, so it isn't too surprising if enabling/disabling them changes behavior in unexpected ways.

@rsuderman
Contributor Author

Once llvm/torch-mlir#3579 merges and we sync (along with an LLVM bump), we should see an increase in testing coverage.

It's worth noting that some of our failures are actually passing; the rtol is just too small for some tests (e.g. onnx.Pow).

@ScottTodd
Member

It's worth noting that some of our failures are actually passing; the rtol is just too small for some tests (e.g. onnx.Pow).

We had a discussion a while ago about updating the test suite(s) to support changing tolerances for individual tests / backends. We never followed up on that; I don't remember who was looking at it.
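To illustrate the tolerance issue: a result computed in f32 can be as accurate as that precision allows and still fail a comparison whose rtol assumes f64 accuracy. This is a minimal sketch; the f32-rounding helper and the specific tolerances are illustrative, not the test suite's actual comparison logic.

```python
import math
import struct

def to_f32(x: float) -> float:
    # Round a Python float (f64) through a 4-byte f32, mimicking the best
    # result a backend computing in single precision could return.
    return struct.unpack("f", struct.pack("f", x))[0]

expected = math.pow(2.0, 0.5)   # f64 reference, ~1.4142135623730951
actual = to_f32(expected)       # f32 rounding introduces ~1.7e-8 relative error

# An rtol sized for f64 rejects the result even though it is correct at
# f32 precision (note 1e-9 is also math.isclose's default rel_tol).
too_tight = math.isclose(actual, expected, rel_tol=1e-9)   # False
# An rtol that accounts for f32 precision accepts it.
f32_sized = math.isclose(actual, expected, rel_tol=1e-6)   # True
```

The same effect explains tests like onnx.Pow "failing" with a good answer: the fix is widening rtol for the f32 path, not changing the computation.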
