Generate MLIR with shape information via LTC frontend #742
Conversation
Thanks! Looks good. Can you also see if [...]
Hm, I took a look at the code, and it looks like the only place where [...]. Given that this is how this is set up, what's the expected way to generate value-semantic tensors? I also looked through some of the Lazy IR code (including a file similar to the one you listed) and didn't see anything in there explicitly about value semantics. If I understand correctly, though, operations with a trailing underscore are done in place, and therefore shouldn't have value semantics?
I think we would basically need to thread a bool through the relevant code. We could create an options struct to make things a bit more organized vs. a loose bool.
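To make the suggestion concrete, here is a minimal sketch of what such an options struct could look like. The struct and function names are assumptions for illustration, not the actual torch-mlir API:

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch: group importer flags into one struct instead of
// threading a loose bool through every function signature.
struct ImportOptions {
  // When true, emit value-semantic tensor types (!torch.vtensor)
  // instead of generic !torch.tensor in function signatures.
  bool assumeTensorsHaveValueSemantics = false;
};

// A function that previously would have taken a loose bool now takes
// the whole options struct, so future flags can be added without
// changing every signature along the call chain.
std::string tensorTypeForSignature(const ImportOptions &options) {
  return options.assumeTensorsHaveValueSemantics ? "!torch.vtensor"
                                                 : "!torch.tensor";
}
```

This also keeps the default behavior (generic `!torch.tensor`) intact unless a caller opts in.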
I think this is a question for the LTC devs about the semantics of the lazy IR. They might prefer the term "purely functional" over "value semantics", but it means the same thing in this context. There are a few other cases besides the trailing-underscore variants, such as batch_norm updating the running_mean/var in place.
Does this mean you want us to assume tensors do have value semantics (i.e. default to them) when generating MLIR, unless we run into a case where we know they shouldn't?
Is there an authoritative list to look up what should and should not have value semantics? I can see in the tablegen that certain ops are listed with `HasValueSemantics`. edit: Hm, I see that [...]
In general, the alias annotations (which determine `HasValueSemantics`) are the source of truth. There are a few exceptions: pytorch/pytorch#73050 (comment)
@silvasean Just to clarify before I proceed with implementing these changes: do you want all the tensors to be of type `!torch.vtensor`?
I would phrase my request as "where it is trivial to know by construction that the tensor has value semantics, use !torch.vtensor". Practically speaking, I think that means: if the Lazy IR provides a guarantee that all tensors have value semantics, then use !torch.vtensor; otherwise, just don't worry about it and MaximizeValueSemantics will do the best it can later.
@silvasean Please have a look at the latest commit. I took a look at the discussion you had with @antoniojkim on Discord; it looks like LTC makes all inplace operations functional, so I've added a flag to use `!torch.vtensor` for MLIR generated via LTC. I previously misunderstood what you meant in your original comment, but it makes sense now :)
python/torch_mlir/dialects/torch/importer/jit_ir/csrc/import_options.h
python/torch_mlir/dialects/torch/importer/jit_ir/csrc/torch_to_mlir_utils.h
Previously, the intermediate JIT graph was generated without shape information, which caused generic !torch.tensor types to be scattered throughout the final MLIR. This PR improves the lowering by injecting each jit::Value of tensor type with its corresponding shape from the origin lazy::Node. Additionally, the MLIR parameter and return values are also generated with their respective shapes.
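To illustrate the effect of attaching shapes, here is a small sketch of how a shape could be rendered as a textual value-semantic tensor type. The helper name and signature are assumptions for illustration, not the PR's actual code; only the `!torch.vtensor<[...],dtype>` syntax is taken from torch-mlir:

```cpp
#include <cassert>
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical helper: render a tensor shape and dtype as the textual
// MLIR value-semantic tensor type, e.g. !torch.vtensor<[1,784],f32>.
// With shape information available, this replaces the generic
// !torch.tensor that the importer previously had to fall back to.
std::string makeVTensorType(const std::vector<int64_t> &shape,
                            const std::string &dtype) {
  std::ostringstream os;
  os << "!torch.vtensor<[";
  for (size_t i = 0; i < shape.size(); ++i) {
    if (i > 0)
      os << ",";
    os << shape[i];
  }
  os << "]," << dtype << ">";
  return os.str();
}
```

For example, a flattened MNIST batch of one image would render as `!torch.vtensor<[1,784],f32>`.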
// calling-convention-impacting decisions, this flag should be interpreted as
// a requirement to use a value-semantic tensor type (!torch.vtensor) in
// signatures.
bool assumeTensorHaveValueSemantics = false;
nit: assumeTensorsHaveValueSemantics
Any inplace ops should be made functional by LTC, so it should be safe to use value semantic tensors for everything.
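As a hypothetical illustration of the rule being relied on here: in PyTorch, a trailing underscore marks an in-place op, and LTC rewrites such ops into functional counterparts that return a fresh value instead of mutating an operand. The real rewrite operates on IR nodes, not name strings; this sketch only mirrors the naming convention:

```cpp
#include <cassert>
#include <string>

// Illustrative only: map an in-place op name (trailing underscore) to
// the name of its functional counterpart, e.g. "add_" -> "add".
// Functional ops have value semantics, which is why !torch.vtensor is
// safe for everything after LTC's functionalization.
std::string functionalOpName(const std::string &opName) {
  if (!opName.empty() && opName.back() == '_')
    return opName.substr(0, opName.size() - 1);
  return opName;
}
```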
* Generate MLIR with shape information via LTC frontend

  Previously, the intermediate JIT graph was generated without shape information, which caused generic `!torch.tensor` types to be scattered throughout the final MLIR. This PR improves the lowering by injecting each `jit::Value` of tensor type with its corresponding shape from the origin `lazy::Node`. Additionally, the MLIR parameter and return values are also generated with their respective shapes.

* Use `!torch.vtensor` for MLIR from LTC

  Any inplace ops should be made functional by LTC, so it should be safe to use value semantic tensors for everything.
* change shape inference pass

  Signed-off-by: Tong Chen <[email protected]>

* fix function name

  Signed-off-by: Tong Chen <[email protected]>
Previously, the intermediate JIT graph was generated without shape information, which caused generic `!torch.tensor` types to be scattered throughout the final MLIR. This PR improves the lowering by injecting each `jit::Value` of tensor type with its corresponding shape from the origin `lazy::Node`. Additionally, the MLIR parameter and return values are also generated with their respective shapes.

Example output for an FC MNIST model:
Note: This PR is based on #725, but GitHub does not allow for setting the base to a fork while keeping this PR in the upstream repo, so there are several shared commits. Only the last one is specifically part of this PR.
Marked as draft while waiting for a dependency PR to land: #725
Resolves: #727
cc: @antoniojkim @ke1337