SymIntify torchrec variable batch size path #2394
Conversation
This pull request was exported from Phabricator. Differential Revision: D54554735
Summary: Variable Batch parameters are SymInt under Dynamo tracing. SymInt does not support bit shifts, so the `adjust_info_B_num_bits` logic is skipped in the Dynamo case (when SymInts arrive at the kernel) and the values are defaulted. In fbcode/deeplearning/fbgemm/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops_training.py, the list comprehension with Python sum is replaced by torch.sum, int() conversions of SymInts are removed, and torch._check() is added for the VB parameters. Differential Revision: D54554735
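The torch.sum and torch._check() parts of the change are easiest to see in a small sketch. The helpers below are illustrative only; the function and argument names are assumptions, not the actual FBGEMM identifiers. The idea is to keep variable-batch sizes in tensor/SymInt form rather than collapsing them to Python ints, so Dynamo can trace them symbolically.

```python
import torch

# Illustrative sketch only; names are assumptions, not the actual FBGEMM code.
def total_variable_batch_size(batch_size_per_feature: torch.Tensor) -> torch.Tensor:
    # Before (roughly): total_B = sum([int(b) for b in batch_size_per_feature])
    # The int() conversions force SymInts/tensor elements to concrete Python
    # values, which breaks Dynamo tracing.
    return torch.sum(batch_size_per_feature)

def check_vb_params(max_B, total_B) -> None:
    # torch._check() records the constraint on SymInts instead of raising a
    # data-dependent guard error at trace time.
    torch._check(max_B > 0, lambda: f"max_B must be positive, got {max_B}")
    torch._check(total_B >= max_B, lambda: "total_B must be at least max_B")
```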
Summary: Variable Batch parameters are SymInt under Dynamo tracing:

1. SymInt does not support bit shifts, so the `adjust_info_B_num_bits` logic is skipped in the Dynamo case (when SymInts arrive at the kernel) and default values are used. The purpose of `adjust_info_B_num_bits` is to fit the batch size (B) and the number of tables (T) into 32 bits. The defaults are B_bits == 26 and T_bits == 6; at runtime `adjust_info_B_num_bits` changes those bit counts to fit the given T and B, using control flow and bit shifts to check the fit. We have flows with T ~ 600, so the Dynamo defaults are T_bits == 10 and B_bits == 22, with a runtime guard that fails if the given T and B do not fit into those bit counts. For Dynamo we use constant values for now; a sketch is shown below.
2. In fbcode/deeplearning/fbgemm/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops_training.py, replace the list comprehension with Python sum by torch.sum.
3. Remove int() conversions of SymInts.
4. Add torch._check() for the VB parameters.

Reviewed By: ezyang
Differential Revision: D54554735
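A minimal sketch of that fallback, assuming hypothetical function and constant names (only the 26/6 eager and 22/10 Dynamo bit splits come from the summary above): when B or T is a SymInt, the bit-shift adjustment is skipped and the constant Dynamo defaults are guarded with torch._check().

```python
import torch

DEFAULT_INFO_B_NUM_BITS = 26  # eager defaults from the summary above
DEFAULT_INFO_T_NUM_BITS = 6
DYNAMO_INFO_B_NUM_BITS = 22   # constant fallback when B or T is a SymInt
DYNAMO_INFO_T_NUM_BITS = 10   # leaves room for flows with T ~ 600

def get_info_B_T_num_bits(B, T):
    """Return (B_bits, T_bits) so a (table, batch) pair packs into 32 bits."""
    if isinstance(B, torch.SymInt) or isinstance(T, torch.SymInt):
        # SymInt does not support the bit-shift adjustment below, so fall back
        # to constants and fail at runtime if B or T does not fit.
        torch._check(
            T < (1 << DYNAMO_INFO_T_NUM_BITS),
            lambda: f"T={T} does not fit into {DYNAMO_INFO_T_NUM_BITS} bits",
        )
        torch._check(
            B < (1 << DYNAMO_INFO_B_NUM_BITS),
            lambda: f"B={B} does not fit into {DYNAMO_INFO_B_NUM_BITS} bits",
        )
        return DYNAMO_INFO_B_NUM_BITS, DYNAMO_INFO_T_NUM_BITS
    # Eager path (illustrative): move bits from B to T until T fits.
    B_bits, T_bits = DEFAULT_INFO_B_NUM_BITS, DEFAULT_INFO_T_NUM_BITS
    while T >= (1 << T_bits):
        T_bits += 1
        B_bits -= 1
    assert B_bits > 0 and B < (1 << B_bits), "B and T do not fit into 32 bits"
    return B_bits, T_bits
```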
This pull request has been merged in 9751a61.