Support int32_t indices/offsets for cache handling logic (pytorch#811)

Summary:
Pull Request resolved: pytorch#811

In training, we assume the indices/offsets are int64_t for the embedding (TBE) operators, but in inference, we assume the indices/offsets are int32_t. This Diff enables support for both int32_t and int64_t in the cache handling logic, so that we can reuse the same functions for training and inference while avoiding the extra overhead of converting the indices/offsets from int to long or vice versa.

Differential Revision: D33045589

fbshipit-source-id: 42ebcd899bb5dc6735eaf67cad48ac3b168d60ca
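Below is a minimal, self-contained sketch of the idea described above, not the actual FBGEMM/TBE API: the names `lookup_cache_locations`, `CacheMap`, and the miss sentinel `-1` are hypothetical and chosen only to illustrate how templating the cache-lookup path over the index type lets one definition serve both the int64_t indices used in training and the int32_t indices used in inference, with no int-to-long conversion pass over the inputs. In real PyTorch/FBGEMM kernels, this kind of dual support would more likely be expressed through an index-type dispatch macro such as `AT_DISPATCH_INDEX_TYPES` rather than a plain function template.

```cpp
// Sketch only: hypothetical names, not the FBGEMM implementation.
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical cache state: maps an embedding row index to its slot in the
// cache. A location of -1 marks a cache miss.
using CacheMap = std::unordered_map<int64_t, int32_t>;

// Templated over index_t so one definition covers int32_t and int64_t inputs.
template <typename index_t>
std::vector<int32_t> lookup_cache_locations(
    const std::vector<index_t>& indices,
    const CacheMap& cache) {
  std::vector<int32_t> locations(indices.size(), -1);
  for (size_t i = 0; i < indices.size(); ++i) {
    // Widening to int64_t for the map key is lossless for either index type.
    auto it = cache.find(static_cast<int64_t>(indices[i]));
    if (it != cache.end()) {
      locations[i] = it->second;
    }
  }
  return locations;
}

int main() {
  CacheMap cache{{7, 0}, {42, 1}};
  // Training-style int64_t indices and inference-style int32_t indices reuse
  // the same function; neither input needs to be converted first.
  std::vector<int64_t> train_indices{7, 8, 42};
  std::vector<int32_t> infer_indices{42, 7, 9};
  auto train_loc = lookup_cache_locations(train_indices, cache);  // {0, -1, 1}
  auto infer_loc = lookup_cache_locations(infer_indices, cache);  // {1, 0, -1}
  return 0;
}
```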