LLVM ERROR: mma16816 data type not supported #4922
Comments
Also, out of curiosity, can you post the ttgir?
Thank you @lezcano! Here's the ttgir for the …
Just to make sure I understand: the repro in the OP still breaks with #5044 patched in? That's rather weird. What's the crash you see? Could you also run the script with …?
I cannot reproduce it. Maybe the author of the PR is not using the correct Triton version.
@Jokeren I just tried it on an A100, and it does throw that error. I am using 3.1.0 since the nightly builds are broken. Let me build Triton from source and re-check.
Note that you should not build master, but the commit linked above, as it hasn't landed yet! Master will probably break.
I can confirm that the build from https://github.com/triton-lang/triton/tree/keren/dot-mma-1 solves the issue. |
The latest Triton build (3.1.0) throws `LLVM ERROR: mma16816 data type not supported` when using bitpacked data inside a loop with `tl.dot`; with a build from source, I get a different error. This happens on Ampere and Hopper, but not on older GPUs like the Titan RTX / 2080 Ti.
The bitpacked data is read with indices of the form `offs_k[:, None] // num_elements`, something like `[0,0,0...1,1,1...64,64,64]`.
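That index computation can be sketched in NumPy; the sizes below (`num_elements = 8`, 32 offsets) are illustrative assumptions, not taken from the actual kernel:

```python
import numpy as np

# Assumed illustrative sizes: 8 elements packed per stored word.
num_elements = 8
offs_k = np.arange(32)  # element offsets along the K dimension

# Each group of num_elements consecutive offsets maps to the same
# packed-word index, producing the [0,0,0...1,1,1...] pattern.
packed_idx = offs_k[:, None] // num_elements  # shape (32, 1)
print(packed_idx.ravel().tolist()[:10])  # -> [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
```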
I have faced this error in a previous build and found that replacing
`for k in range(0, total_blocks_k, 1):` with `for k in tl.range(0, total_blocks_k, 1, num_stages=1):` solved the issue, but this trick no longer works with 3.1.0.

Here's a full script to reproduce it:
https://gist.github.com/mobicham/f9eba3c07f7e497ae622194a9c5e4822
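For readers unfamiliar with the bitpacking referenced above, here is a minimal NumPy sketch of how low-bit values might be packed into 32-bit words and read back. The 4-bit scheme and all names (`bits`, `vals`, `packed`) are assumptions for illustration, not the author's actual layout; the point is that reading element `j` reuses the same `j // num_elements` word-index pattern as the kernel:

```python
import numpy as np

# Assumed illustrative scheme: 4-bit values packed into uint32 words.
bits = 4
num_elements = 32 // bits                      # 8 values per word
vals = np.arange(16, dtype=np.uint32) % (1 << bits)

# Pack: each group of num_elements values shares one 32-bit word.
packed = np.zeros(len(vals) // num_elements, dtype=np.uint32)
for i, v in enumerate(vals):
    packed[i // num_elements] |= np.uint32(int(v) << (bits * (i % num_elements)))

# Unpack element j: j // num_elements selects the word,
# j % num_elements selects the bit slot within it.
j = 10
word = packed[j // num_elements]
recovered = (int(word) >> (bits * (j % num_elements))) & ((1 << bits) - 1)
print(recovered == vals[j])  # -> True
```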