Enable int4 to int4 CPU STBE in fbgemm_gpu TBE API #2994

Closed

wants to merge 3 commits

Commits on Aug 22, 2024

  1. Add a CPU nbit to float dequantization op that supports torch.quintMxN type and QuantizedCPU backend
    
    Differential Revision: D61305979
    wsu authored and facebook-github-bot committed Aug 22, 2024
    Commit: 0432ecd
  2. Add int4 to int4 CPU Sequence TBE kernel

    Differential Revision: D61305980
    wsu authored and facebook-github-bot committed Aug 22, 2024
    Commit: ea074b4
  3. Enable int4 to int4 CPU STBE in fbgemm_gpu TBE API (pytorch#2994)

    Summary:
    Pull Request resolved: pytorch#2994
    
    X-link: facebookresearch/FBGEMM#89
    
    Enable int4 to int4 sequential CPU TBE in the codegen template so that fbgemm_gpu's `IntNBitTableBatchedEmbeddingBagsCodegen` can support it (see the usage sketches after the commit list).
    
    Reviewed By: sryap
    
    Differential Revision: D61305978
    excelle08 authored and facebook-github-bot committed Aug 22, 2024
    Commit: e971c91
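
For context on the int4 row format these CPU kernels consume and produce (commit 1 adds a dequantization op on the QuantizedCPU backend; its exact operator name is not given on this page, so it is not shown), the sketch below uses fbgemm_gpu's existing fused n-bit rowwise quantize/dequantize operators to illustrate how a float table maps to packed int4 rows with per-row fp16 scale and bias. This is an illustration of the related pre-existing ops, not of the new op added by the commit.

```python
import torch
import fbgemm_gpu  # noqa: F401  # importing registers the torch.ops.fbgemm.* operators

# A small float embedding table: 8 rows of dimension 64.
rows = torch.randn(8, 64, dtype=torch.float32)

# Rowwise 4-bit quantization: each row becomes packed int4 nibbles
# followed by an fp16 scale and fp16 bias (hence "SBHalf").
q = torch.ops.fbgemm.FloatToFusedNBitRowwiseQuantizedSBHalf(rows, 4)

# Dequantize back to float32, e.g. to validate the round trip.
deq = torch.ops.fbgemm.FusedNBitRowwiseQuantizedSBHalfToFloat(q, 4)

print(q.shape, q.dtype)      # (8, 36) uint8: 32 bytes of packed int4 + 4 bytes scale/bias
print(deq.shape, deq.dtype)  # (8, 64) float32
```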
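
A minimal sketch of what this PR enables at the Python API level: a sequence (unpooled) CPU TBE with int4 weights that also emits int4 output, via `IntNBitTableBatchedEmbeddingBagsCodegen`. Import paths, constructor defaults, and index-dtype requirements vary across fbgemm_gpu versions and are not taken from this PR, so treat the details below as assumptions.

```python
import torch

from fbgemm_gpu.split_embedding_configs import SparseType
from fbgemm_gpu.split_table_batched_embeddings_ops_common import (
    EmbeddingLocation,
    PoolingMode,
)
from fbgemm_gpu.split_table_batched_embeddings_ops_inference import (
    IntNBitTableBatchedEmbeddingBagsCodegen,
)

E, D = 100, 128  # rows per table; D chosen to satisfy int4 packing/alignment constraints

# Sequence (unpooled) CPU TBE: INT4 weights in host memory, INT4 output dtype.
tbe = IntNBitTableBatchedEmbeddingBagsCodegen(
    embedding_specs=[
        ("table_0", E, D, SparseType.INT4, EmbeddingLocation.HOST),
        ("table_1", E, D, SparseType.INT4, EmbeddingLocation.HOST),
    ],
    pooling_mode=PoolingMode.NONE,  # sequence TBE (no pooling)
    output_dtype=SparseType.INT4,   # int4 in, int4 out: the path this PR enables
    device="cpu",
)
tbe.fill_random_weights()

# Two features, batch size 2, one index per sample; lengths are encoded via offsets.
indices = torch.tensor([3, 7, 11, 42], dtype=torch.int32)
offsets = torch.tensor([0, 1, 2, 3, 4], dtype=torch.int32)

# With output_dtype=INT4 the result is a byte tensor holding rowwise-quantized
# int4 embeddings (packed values plus per-row scale/bias), not dequantized floats.
out = tbe(indices, offsets)
print(out.shape, out.dtype)
```

Keeping the lookup result in the same packed int4 format avoids a dequantize/requantize round trip when the consumer of the embeddings also operates on quantized rows.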