
[SYCL] half math is using float precision for multiply/add/etc operators. #9809

Closed
JackAKirk opened this issue Jun 9, 2023 · 3 comments
Labels: enhancement (New feature or request)

Comments

@JackAKirk (Contributor):

sycl::half operators like

*this = operator float() + static_cast<float>(rhs);

are converting both operands to float and doing the arithmetic in fp32.

We can specialize the CUDA backend to call the appropriate native fp16 instructions for these operators instead, but I opened this issue to raise awareness for other backends and to explore a general solution that lowers to backend-specific instructions.
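
For illustration, here is a minimal self-contained sketch of the pattern being described; the Half type below is hypothetical (the real sycl::half stores raw 16-bit data), but the operator shows the same fp32 round-trip as the line quoted above:

```cpp
// Hypothetical Half wrapper, not the real sycl::half: the storage is a
// float stand-in, but operator+= mirrors the quoted pattern -- widen to
// float, add in fp32, narrow back.
struct Half {
  float Data = 0.0f; // stand-in for the real 16-bit fp16 storage

  Half() = default;
  explicit Half(float f) : Data(f) {} // real code rounds f to fp16 here
  operator float() const { return Data; }

  Half &operator+=(const Half &rhs) {
    // fp32 add followed by a narrowing construction; no native fp16 add.
    *this = Half(operator float() + static_cast<float>(rhs));
    return *this;
  }
};

int main() {
  Half a(1.5f), b(2.25f);
  a += b; // computed in float precision, then stored back
  return static_cast<float>(a) == 3.75f ? 0 : 1;
}
```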

@0x12CC (Contributor) commented Jul 16, 2024:

Data += rhs.Data;

@JackAKirk, the conversions aren't there anymore. Is this issue still relevant, or can we close it?

@AlexeySachkov (Contributor):

@JackAKirk, the code you are pointing to is the host implementation of half. It does indeed fall back to float math, because standard C++ has no half type.

However, on the device side we use _Float16 for math operations, and that has been the case for more than 4 years now; see #1089
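
As a rough sketch of that split (simplified, not the actual DPC++ header; to_float/from_float are hypothetical stand-ins for the host-side bit conversions, and __SYCL_DEVICE_ONLY__ is the macro DPC++ defines during device compilation):

```cpp
#include <cstdint>

// Hypothetical helpers standing in for the host-side conversions between
// raw fp16 bits and float; bodies elided.
float to_float(std::uint16_t bits);
std::uint16_t from_float(float f);

class half {
public:
  half &operator+=(const half &rhs) {
#ifdef __SYCL_DEVICE_ONLY__
    Data += rhs.Data; // native _Float16 add, lowered to an fp16 instruction
#else
    // Host fallback: standard C++ has no half type, so the math runs in
    // fp32 and the result is narrowed back to fp16 bits.
    Data = from_float(to_float(Data) + to_float(rhs.Data));
#endif
    return *this;
  }

private:
#ifdef __SYCL_DEVICE_ONLY__
  _Float16 Data; // native fp16 type, available in device code
#else
  std::uint16_t Data; // raw fp16 bits on host
#endif
};
```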

@JackAKirk (Contributor, Author):

Thanks for the clarification; my mistake.
