
SymIntify torchrec variable batch size path #2394

Closed
wants to merge 1 commit

Conversation

IvanKobzarev

Summary:
Variable Batch parameters are SymInts in Dynamo tracing.

SymInt does not support bit shifts, so the `adjust_info_B_num_bits` logic is skipped in the Dynamo case (when SymInts arrive at the kernel) and default values are used instead.

fbcode/deeplearning/fbgemm/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops_training.py: replacing the list comprehension summed with Python `sum` by `torch.sum`.

Removing `int()` conversions for SymInt.

Adding `torch._check()` calls for VB parameters.

Differential Revision: D54554735
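
A minimal sketch of the kind of changes described above; the function and parameter names below are illustrative, not the actual fbgemm_gpu API:

```python
import torch


def total_batch_size_eager(batch_size_per_feature):
    # Original style: Python-level sum over a list comprehension, with int()
    # conversions that specialize SymInts during Dynamo tracing.
    return sum(int(b) for b in batch_size_per_feature)


def total_batch_size_traceable(batch_size_per_feature: torch.Tensor) -> torch.Tensor:
    # Traceable style: keep the reduction as a torch op and drop int(), so
    # symbolic sizes flow through unchanged.
    return torch.sum(batch_size_per_feature)


def check_vb_params(max_B: int, T: int) -> None:
    # torch._check() records runtime assertions on (Sym)Int parameters
    # instead of data-dependent Python branches that Dynamo cannot trace.
    torch._check(max_B > 0)
    torch._check(T > 0)
```

The point is to keep size arithmetic inside torch ops and to express constraints with `torch._check()` rather than Python control flow on symbolic values.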


netlify bot commented Mar 5, 2024

Deploy Preview for pytorch-fbgemm-docs ready!

Name Link
🔨 Latest commit 6dd5241
🔍 Latest deploy log https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/65ea119dd79f3b0008b53dff
😎 Deploy Preview https://deploy-preview-2394--pytorch-fbgemm-docs.netlify.app

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D54554735


IvanKobzarev pushed a commit to IvanKobzarev/FBGEMM that referenced this pull request Mar 7, 2024
Summary:

Variable Batch parameters are SymInts in Dynamo tracing:

1/ SymInt does not support bit shifts, so the `adjust_info_B_num_bits` logic is skipped in the Dynamo case (when SymInts arrive at the kernel) and default values are used instead (see the sketch after this list).
The main idea of `adjust_info_B_num_bits` is to fit the batch size (B) and the number of tables (T) into 32 bits.
The defaults are B_bits == 26 and T_bits == 6; at runtime `adjust_info_B_num_bits` changes those bit counts, using control flow and bit shifts, so that the specified T and B fit.

We have flows with T ~ 600, so for Dynamo we pick default values of T_bits == 10 and B_bits == 22 and guard them so that we fail at runtime if the specified T and B do not fit in those bit counts.

For Dynamo we will use these constant values for now.

2/ fbcode/deeplearning/fbgemm/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops_training.py: replacing the list comprehension summed with Python `sum` by `torch.sum`.

3/ Removing `int()` conversions for SymInt.

4/ Adding `torch._check()` calls for VB parameters.
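
For illustration, a hedged sketch of the defaulting described in 1/; the constant and function names are hypothetical, not the exact FBGEMM identifiers:

```python
import torch

# Fixed split so that B and T still pack into a 32-bit "info" field:
# 10 + 22 == 32, i.e. up to 2**10 = 1024 tables and 2**22 batch rows,
# matching the T_bits == 10 and B_bits == 22 values mentioned above.
DYNAMO_INFO_T_NUM_BITS = 10
DYNAMO_INFO_B_NUM_BITS = 22


def info_b_num_bits_for_dynamo(B, T):
    # Under Dynamo, B and T are SymInts, so instead of bit-shifting at
    # runtime to find the tightest split we take fixed bit counts and
    # guard that the actual values fit; the guards fail at runtime otherwise.
    # The shifts below are on Python constants, not on SymInts.
    torch._check(T < (1 << DYNAMO_INFO_T_NUM_BITS))
    torch._check(B < (1 << DYNAMO_INFO_B_NUM_BITS))
    return DYNAMO_INFO_B_NUM_BITS, DYNAMO_INFO_T_NUM_BITS
```

In the non-Dynamo path the original `adjust_info_B_num_bits` logic keeps adjusting the split dynamically.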

Reviewed By: ezyang

Differential Revision: D54554735

@facebook-github-bot
Contributor

This pull request has been merged in 9751a61.
