Adding shape information for Torch MLIR LTC Backend #727
Comments
Ah that's good to hear -- I'll investigate what we can do upstream
I opened an issue on the PyTorch side here: pytorch/pytorch#75217. Through some testing it seems like on the […]
Closing this issue now that we have a PR open: #742 |
* Reorganize main function.
* Follow review comments.
* Emit constants as globals in Krnl and LLVM dialects.
* Add inference for Range operation.
* Add lowering to Krnl for f32.
* Add range lowering.
* Fix comment.
* Add Range op lowering for all supported types.
* Add backend tests.
* Fix memref access.
* Fix inference test.
After #725 lands, the next goal is to add shape information to the output MLIR.

`lazy::Node` contains tensor shape information (retrieved by calling `shapes()`); however, this information is not retained when it is converted to a `jit::Node`. (This shows that the only thing extracted from the `lazy::Node` during lowering is its symbol.) It also looks like there is nothing on the MLIR generation side for adding shape information (as far as I could see).

I don't think there is anything on the JIT side that enables storing tensor shape information, which might make this more difficult. My immediate thought is to try to map `lazy::Node -> MlirOperation` so we can insert shape information, but that doesn't feel like the most proper way to do this.

@silvasean, do you have any thoughts or ideas on how to proceed with this?
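For illustration only, here is a minimal Python sketch of the side-table idea: record which operation each lazy node was lowered to, then use that mapping to attach the node's shapes as an attribute on the operation afterwards. All names here (`LazyNode`, `MlirOp`, `lower_with_shapes`, the `"shapes"` attribute key) are hypothetical stand-ins, not actual torch-mlir or PyTorch LTC APIs.

```python
# Hypothetical sketch of mapping lazy nodes to generated ops so shape
# information can be attached after lowering. Names are illustrative
# only and do not correspond to real torch-mlir / PyTorch APIs.
from dataclasses import dataclass, field


@dataclass
class LazyNode:
    """Stand-in for torch::lazy::Node: an op symbol plus tensor shapes."""
    symbol: str
    shapes: list  # e.g. [(2, 3)]


@dataclass
class MlirOp:
    """Stand-in for an MLIR operation with a name->value attribute dict."""
    name: str
    attributes: dict = field(default_factory=dict)


def lower_with_shapes(nodes):
    """Lower each lazy node to an op, keeping a node->op side table so
    the shape information (which lowering itself drops) can be re-attached."""
    shape_map = {}  # id(node) -> MlirOp
    ops = []
    for node in nodes:
        op = MlirOp(name=node.symbol)   # lowering keeps only the symbol...
        shape_map[id(node)] = op        # ...so remember which op it became
        ops.append(op)
    # Second pass: use the side table to attach the retained shapes.
    for node in nodes:
        shape_map[id(node)].attributes["shapes"] = node.shapes
    return ops
```

The design cost this sketch makes visible is the same one raised above: the shape information lives in an external map rather than flowing through the `jit::Node` graph itself, so every consumer of the lowered IR has to know about the side table.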
cc: @antoniojkim @ke1337