[Codegen][GPU] Let integer range optimization narrow GPU computations to i32 #19473
Note: This PR is stacked on top of #19372, and so looks bigger than it is. The relevant changes are in the last commit.
Add an option to -iree-util-optimize-int-arithmetic to have it perform computations in i32 where possible; this option is enabled when optimizing arithmetic for GPU codegen. This allows LLVM to correctly conclude that various computations don't need to be done at full 64-bit precision, saving registers and instructions. (LLVM has some rewrites for this, but they are gated on, for example, the potentially-truncated value having only one use, which means that shared math stays in an over-wide data type.)
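
As a rough sketch (not taken from this PR's tests), the kind of rewrite this enables looks like the following: an index-derived value whose inferred range fits in 32 bits gets its arithmetic narrowed to i32, with a single widening cast for any i64 users. The exact ops and SSA names here are illustrative assumptions, not the pass's literal output.

```mlir
// Before: a thread-derived offset computed in i64 even though its range fits in 32 bits.
%tid    = gpu.thread_id x
%tid64  = arith.index_castui %tid : index to i64
%c4     = arith.constant 4 : i64
%off64  = arith.muli %tid64, %c4 : i64

// After narrowing (illustrative): the multiply happens in i32, and only the
// final result is widened back to i64 for downstream users.
%tid32  = arith.index_castui %tid : index to i32
%c4_32  = arith.constant 4 : i32
%off32  = arith.muli %tid32, %c4_32 : i32
%off64n = arith.extui %off32 : i32 to i64
```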