
Refactor LXU cache logic in TBE fwd training #1295

Closed

Commits on Sep 10, 2022

  1. Refactor LXU cache logic in TBE fwd training (pytorch#1295)

    Summary:
    Pull Request resolved: pytorch#1295
    
    The LXU cache logic is on the critical path of the forward TBE kernel.
    Even when the LXU cache is not used, the kernel still checks at runtime
    whether each row should be fetched from the cache or from HBM.  This
    branching logic is harmless in the memory-(subsystem-)bound case, but
    it can add significant overhead when TBE is conditional bound.  (We
    have observed that the FP16 weight type is generally compute or
    conditional bound, while the FP32 weight type is memory bound.)
    
    This diff adds a static (compile-time) conditional to the forward TBE
    kernel to enable or disable the LXU cache code path at compile time.
    At runtime, the host selects the kernel variant with or without cache
    support based on whether the LXU cache is present.
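The idea above can be sketched in host-side C++ (the names below are illustrative, not FBGEMM's actual kernel API): a `template <bool>` parameter makes the cache check a compile-time constant, so the compiler removes the cache branch entirely from the no-cache specialization, and the host picks the specialization once based on whether a cache exists.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical row-sum standing in for the TBE forward lookup.
// kUseCache is resolved at compile time; when false, the cache
// branch is dead code and is stripped from the generated kernel.
template <bool kUseCache>
float lookup_row_sum(const std::vector<float>& weights,
                     const std::vector<float>& cache,
                     const std::vector<int32_t>& cache_locations,
                     int32_t row, int32_t D) {
  const float* src = weights.data() + row * D;
  if (kUseCache) {
    // Only compiled into the <true> specialization.
    int32_t loc = cache_locations[row];
    if (loc >= 0) {
      src = cache.data() + loc * D;  // row is resident in the cache
    }
  }
  float sum = 0.0f;
  for (int32_t d = 0; d < D; ++d) {
    sum += src[d];
  }
  return sum;
}

// Host-side dispatch: choose the specialization once, based on
// whether an LXU cache is present at all.
float dispatch_lookup(const std::vector<float>& weights,
                      const std::vector<float>& cache,
                      const std::vector<int32_t>& cache_locations,
                      int32_t row, int32_t D) {
  if (!cache.empty()) {
    return lookup_row_sum<true>(weights, cache, cache_locations, row, D);
  }
  return lookup_row_sum<false>(weights, cache, cache_locations, row, D);
}
```

In the real kernel the same pattern applies per CUDA kernel instantiation rather than per host call: two specializations are compiled, and the launcher picks one, so the no-cache path pays nothing for the cache logic.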
    
    This diff also hoists the conditional out of the D loop, which should
    yield a small additional benefit for large-D cases when the cache is
    used.
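The hoisting can be illustrated with the same hypothetical row-sum (again, names are illustrative): before, the cache-vs-HBM decision is re-evaluated for every element of the D loop; after, the source pointer is resolved once, leaving a branch-free loop body. The saving grows with D.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Before: the cache check sits inside the per-element D loop,
// so the branch is evaluated D times per row.
float sum_branch_in_loop(const std::vector<float>& weights,
                         const std::vector<float>& cache,
                         int32_t cache_loc, int32_t row, int32_t D) {
  float sum = 0.0f;
  for (int32_t d = 0; d < D; ++d) {
    const float* src = (cache_loc >= 0) ? cache.data() + cache_loc * D
                                        : weights.data() + row * D;
    sum += src[d];
  }
  return sum;
}

// After: the branch is hoisted out of the loop and evaluated once,
// so the loop body only streams through contiguous memory.
float sum_branch_hoisted(const std::vector<float>& weights,
                         const std::vector<float>& cache,
                         int32_t cache_loc, int32_t row, int32_t D) {
  const float* src = (cache_loc >= 0) ? cache.data() + cache_loc * D
                                      : weights.data() + row * D;
  float sum = 0.0f;
  for (int32_t d = 0; d < D; ++d) {
    sum += src[d];
  }
  return sum;
}
```

Both versions compute the same result; only the placement of the conditional differs.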
    
    Reviewed By: jspark1105
    
    Differential Revision: D39353035
    
    fbshipit-source-id: bfd3d842971091e954e49c6c8fad034db1fcbc9b
    sryap committed Sep 10, 2022
    Commit 9aff411