Add dtype functions for floating point ops #1813
Conversation
Force-pushed from ecf5486 to be7ad84
Force-pushed from be7ad84 to 1926ff8
Thanks! LGTM
```python
        Invocation(ZeroDTensorWithDtype(torch.bool), *args),
    ]


def _get_invocations_for_fp_only_op_with_tensor_arg_followed_by(*args):
```
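Since the helper's body is truncated in the excerpt above, here is a hypothetical sketch of what such an invocation generator could look like, assuming the test-framework utilities `Invocation` and `ZeroDTensorWithDtype` visible in the diff; the dtype list is illustrative, not the actual one:

```python
import torch

# Hypothetical sketch only; the real body is elided in the diff above.
# The idea: emit one Invocation per candidate input dtype so the dtype
# function of an fp-only op is checked against a spread of valid and
# invalid inputs.
def _get_invocations_for_fp_only_op_with_tensor_arg_followed_by(*args):
    return [
        Invocation(ZeroDTensorWithDtype(dtype), *args)
        for dtype in (torch.float32, torch.float64, torch.float16,
                      torch.bfloat16, torch.int64, torch.bool)
    ]
```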
I will be making a PR soon that improves the testing helper functions quite a bit, so if you're working on another set of ops, I would wait until that lands
Awesome! I haven't started working on the next task. Will wait for your PR before further development.
```diff
@@ -277,7 +277,7 @@ function test_in_tree() {
   python -m e2e_testing.main --config=lazy_tensor_core -v

   echo ":::: Run TorchDynamo e2e integration tests"
-  python -m e2e_testing.main --config=torchdynamo -v
+  python -m e2e_testing.main --config=torchdynamo -v --crashing_tests_to_not_attempt_to_run_and_a_bug_is_filed RandnDtypeDeviceModule_basic
```
This is fine, thanks!
commit bafe339
Author: Ramiro Leal-Cavazos <[email protected]>
Date: Mon May 8 21:26:56 2023 +0000

    Add dtype functions for aten.atan and prims.squeeze

commit bebf695
Author: Ramiro Leal-Cavazos <[email protected]>
Date: Mon May 8 21:26:10 2023 +0000

    Remove duplicate code from merge with main

commit 0d11895
Author: Ramiro Leal-Cavazos <[email protected]>
Date: Fri May 5 21:39:02 2023 +0000

    Update LLVM tag

commit 73d5c07
Merge: 899d8bc eaaaeb6
Author: Ramiro Leal-Cavazos <[email protected]>
Date: Fri May 5 21:30:09 2023 +0000

    Merge remote-tracking branch 'upstream/main' into merge-main

commit 899d8bc
Author: Ramiro Leal-Cavazos <[email protected]>
Date: Mon Mar 13 21:39:14 2023 +0000

    Add dtype functions for `aten.ge.Tensor` and `aten.le.Tensor`

commit f58f9c2
Merge: ce7abf4 4912c39
Author: Ramiro Leal-Cavazos <[email protected]>
Date: Mon Mar 13 21:32:00 2023 +0000

    Merge branch 'main' into merge-main

commit ce7abf4
Author: Jiahao Li <[email protected]>
Date: Wed Feb 22 06:54:41 2023 +0800

    Add dtype functions for ops that take dtype from 2nd operand (llvm#1891)

commit 63945a2
Author: Ramiro Leal-Cavazos <[email protected]>
Date: Mon Feb 13 17:56:09 2023 -0800

    Change dtype functions interface to take ints tuple for each tensor (llvm#1865)

    The original design for the dtype functions outlined in llvm#1462 was unable to properly handle ops that take optional tensors as an input when the optional tensor has a value of None. By the time the op gets imported into torch-mlir, if an optional value is None, all information about the original type is lost from the op type signature, preventing torch-mlir from knowing if a value of None was from an optional tensor or not. This was crucial in the original design, since each tensor argument had to be turned into two separate arguments for the dtype function.

    This commit changes the interface to dtype functions such that each tensor turns into a tuple of two ints, the first representing the rank of the tensor and the second the dtype of the tensor. Since there is now a one-to-one correspondence between the operands of an op and the operands of its dtype function, there is no ambiguity about which operand of the op corresponds with which operand of the dtype function.

    To test the implementation, this commit defines dtype functions for the convolution ops, all of which take one optional tensor as an argument.

commit 981ac88
Author: Ramiro Leal-Cavazos <[email protected]>
Date: Wed Feb 1 22:30:27 2023 +0000

    Add dtype functions for two tensor promotion ops (llvm#1831)

    This commit adds dtype functions for ops in RefineTypes under the category of "Promote the two dtypes". The only ops not added here are the convolution ops, since they take an optional tensor argument, and the dtype pipeline currently does not handle that case correctly. I will add a follow-up patch fixing this.

    This commit also adds two helper functions that perform very thorough testing of dtype functions. The helper function `_check_two_tensor_op` is able to independently test invalid input dtypes and invalid output dtypes. Lastly, this commit also XFAILs "MobilenetV3Module_basic".

commit 83d4e89
Author: Jiahao Li <[email protected]>
Date: Sat Jan 21 02:39:41 2023 +0800

    Add dtype functions for floating point ops (llvm#1813)

commit 8cae5ba
Author: Ramiro Leal-Cavazos <[email protected]>
Date: Mon Jan 16 14:32:23 2023 -0800

    Add dtype functions for comparison ops (llvm#1806)

    This commit adds dtype functions for comparison ops that always return a tensor of dtype `i1`.

commit 5b77c15
Author: Ramiro Leal-Cavazos <[email protected]>
Date: Mon Jan 16 20:27:49 2023 +0000

    Add CI to `dtype-functions-staging` branch

commit ac94ba2
Author: Ramiro Leal-Cavazos <[email protected]>
Date: Thu Jan 12 22:41:04 2023 +0000

    Move dtype functions into their own section in lib gen file

    In order to easily keep track of the dtype functions that have been moved to `abstract_interp_lib_gen.py`, and to make it easier to add new ones, this commit groups all the dtype functions together rather than having them interspersed between the shape functions.
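The interface change described in the llvm#1865 commit message above can be summarized with a short sketch; the function and parameter names here are illustrative, not the exact torch-mlir definitions:

```python
from typing import Optional, Tuple

# Illustrative only: under the llvm#1865 convention, each tensor operand of
# an op maps one-to-one onto a (rank, dtype) tuple argument of its dtype
# function, where dtype is an integer dtype code. An optional tensor whose
# value is None simply arrives here as None, so no type information is lost.
def example_convolution_dtype(
    input_rank_dtype: Tuple[int, int],
    weight_rank_dtype: Tuple[int, int],
    bias_rank_dtype: Optional[Tuple[int, int]],
) -> int:
    input_rank, input_dtype = input_rank_dtype
    # The result dtype follows the input tensor's dtype.
    return input_dtype
```

The one-to-one mapping is the key design point: the dtype function no longer has to guess whether a None argument originated from an optional tensor.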
This PR is part of #1807. It adds dtype functions for floating point ops that always return a tensor of dtype float32, except when the input dtype is float64, float16, or bfloat16, in which case the output keeps the input dtype.
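As a minimal sketch of that rule (illustrative helper, not the actual torch-mlir dtype-function interface):

```python
import torch

# Illustrative sketch of the rule described above: fp-only ops produce
# float32 unless the input tensor is already float64, float16, or bfloat16,
# in which case that dtype is kept.
_PRESERVED_FLOAT_DTYPES = {torch.float64, torch.float16, torch.bfloat16}

def fp_only_op_result_dtype(input_dtype: torch.dtype) -> torch.dtype:
    if input_dtype in _PRESERVED_FLOAT_DTYPES:
        return input_dtype
    return torch.float32

# For example, an integer input promotes to float32:
assert fp_only_op_result_dtype(torch.int64) == torch.float32
assert fp_only_op_result_dtype(torch.bfloat16) == torch.bfloat16
```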
@ramiro050 would you review this PR?
Update: the CI is crashing at `RandnDtypeDeviceModule_basic` in the torchdynamo config, probably caused by the latest PyTorch, as mentioned in #1792. I guess we should rebase `dtype-functions-staging` onto the main branch to make the CI pass.

Update: rebasing on master still doesn't work. I have temporarily disabled this test in torchdynamo via `--crashing_tests_to_not_attempt_to_run_and_a_bug_is_filed RandnDtypeDeviceModule_basic`. This test case decomposes into a complex series of ops, as shown below. Dtype functions for some of those ops live in `RefineTypes` while others live in `abstract_interp_lib`; the two interleave and cause the compilation to fail. Once the migration is finished, I will re-enable the test.