
Comment update suggestion #12

Open
etasnadi opened this issue Aug 30, 2024 · 0 comments
Comments


etasnadi commented Aug 30, 2024

In the function runCublasTF32, the comment is misleading/incomplete. According to the cuBLAS docs, the effect of CUBLAS_COMPUTE_32F_FAST_TF32 is that the GEMM is performed with reduced-precision TF32 math on tensor cores, which is faster. According to the documentation of the WMMA ops in the CUDA programming guide, the inputs are converted with __float_to_tf32 to floats of numerically reduced TF32 precision.
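For context, a minimal sketch of what such a call can look like (an illustration only, not the repo's exact code; the function name, argument order, and column-major layout are assumptions here): the operands stay fp32 in memory, and only the compute type tells cuBLAS that it may round them to TF32 internally before using the tensor cores.

#include <cublas_v2.h>

// Sketch only: column-major, no-transpose GEMM C = alpha*A*B + beta*C with
// fp32 operands. CUBLAS_COMPUTE_32F_FAST_TF32 permits cuBLAS to perform the
// multiply in reduced TF32 precision on tensor cores, accumulating in fp32.
void runCublasTF32Sketch(cublasHandle_t handle, int M, int N, int K,
                         float alpha, float *A, float *B, float beta,
                         float *C) {
  cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, M, N, K, &alpha,
               A, CUDA_R_32F, M,   // lda = M (column-major, no transpose)
               B, CUDA_R_32F, K,   // ldb = K
               &beta,
               C, CUDA_R_32F, M,   // ldc = M
               CUBLAS_COMPUTE_32F_FAST_TF32, CUBLAS_GEMM_DEFAULT_TENSOR_OP);
}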

As tensor cores in recent architectures support fp64 natively, I am curious what the performance benefit of using them is over plain fp64 CUDA computation.

// This runs cuBLAS with mixed precision (performing the mul with operands
