SymIntify torchrec variable batch size path #2394

Closed
wants to merge 1 commit

Commits on Mar 7, 2024

  1. SymIntify torchrec variable batch size path (pytorch#2394)

    Summary:
    
    Variable Batch parameters are SymInts in dynamo tracing:
    
    1/ SymInt does not support bit shifts => skipping the `adjust_info_B_num_bits` logic in the dynamo case (when SymInts arrive at the kernel) and defaulting the values.
    The main idea of `adjust_info_B_num_bits` is to fit the batch size (B) and the number of tables (T) into 32 bits.
    The default values are B_bits == 26 and T_bits == 6. At runtime, `adjust_info_B_num_bits` adjusts those bit widths to fit the specified T and B, using control flow and bit shifts to check that T and B fit.
    
    We have flows with T ~ 600, so for dynamo we pick default values of T_bits == 10 and B_bits == 22, guarding to fail at runtime if those bit widths do not fit the specified T and B (a sketch of this logic follows the list below).
    
    For Dynamo we will use constant values for now.
    
    2/ fbcode/deeplearning/fbgemm/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops_training.py - replacing a list comprehension summed with Python's sum with torch.sum (see the second sketch below).
    
    3/ Removing int() conversions for SymInt.
    
    4/ Adding torch._check() for VB parameters (see the last sketch below).
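
    A minimal sketch of the bit-budget logic described in 1/ above. This is an illustration of the idea only, not the actual fbgemm_gpu implementation; the function name `adjust_info_B_num_bits`, the 32-bit budget, and the default/dynamo bit widths come from this summary, while everything else (signature, variable names) is assumed.

    ```python
    import torch

    INFO_NUM_BITS = 32            # total bit budget for packing (B, T)
    DEFAULT_INFO_B_NUM_BITS = 26  # default split: 26 bits for B, 6 bits for T
    DYNAMO_INFO_B_NUM_BITS = 22   # dynamo defaults: 22 bits for B, 10 bits for T


    def adjust_info_B_num_bits(B, T):
        """Return (info_B_num_bits, info_B_mask) such that B and T fit into 32 bits."""
        if isinstance(B, torch.SymInt) or isinstance(T, torch.SymInt):
            # Dynamo path: SymInt does not support bit shifts, so skip the
            # adjustment, default the bit widths, and guard at runtime.
            info_B_num_bits = DYNAMO_INFO_B_NUM_BITS
            torch._check(
                T < (1 << (INFO_NUM_BITS - info_B_num_bits)),
                lambda: "T does not fit into 10 bits",
            )
            torch._check(
                B < (1 << info_B_num_bits),
                lambda: "B does not fit into 22 bits",
            )
            return info_B_num_bits, (1 << info_B_num_bits) - 1

        # Eager path: shrink B's bit budget until T fits, then check B still fits.
        info_B_num_bits = DEFAULT_INFO_B_NUM_BITS
        while T >= (1 << (INFO_NUM_BITS - info_B_num_bits)) and info_B_num_bits > 1:
            info_B_num_bits -= 1
        assert B < (1 << info_B_num_bits), "B does not fit into the remaining bits"
        return info_B_num_bits, (1 << info_B_num_bits) - 1
    ```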
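
    A before/after sketch for 2/. The tensor and variable names here are illustrative, not the actual ones in split_table_batched_embeddings_ops_training.py; the point is only to keep the reduction as a torch op so SymInt-shaped inputs trace cleanly.

    ```python
    import torch

    # Per-feature batch sizes in the variable-batch path (illustrative values).
    Bs = torch.tensor([4, 8, 16, 2])

    # Before (illustrative): a Python-level reduction over a list comprehension,
    # which pulls each element out of the graph during tracing.
    # total_B = sum([int(b) for b in Bs])

    # After (illustrative): the same reduction as a tensor op.
    total_B = torch.sum(Bs)
    ```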
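
    A minimal sketch for 3/ and 4/ together: instead of forcing SymInts to concrete ints with int(), the invariants on the variable-batch (VB) parameters are recorded with torch._check. The parameter names and the specific checks below are hypothetical; only the use of torch._check comes from this summary.

    ```python
    import torch

    def check_vb_params(max_B, total_B):
        # Hypothetical VB parameters; `max_B` / `total_B` are illustrative names.
        # Avoid `max_B = int(max_B)`: converting a SymInt to int forces
        # specialization during dynamo tracing.
        torch._check(max_B > 0, lambda: "max_B must be positive")
        torch._check(total_B >= max_B, lambda: "total_B must be >= max_B")
    ```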
    
    Reviewed By: ezyang
    
    Differential Revision: D54554735
    Ivan Kobzarev authored and facebook-github-bot committed Mar 7, 2024
    Commit 6dd5241