Support int32_t indices/offsets for caching handling logics #811
Conversation
This pull request was exported from Phabricator. Differential Revision: D33045589
Summary:
In training, the indices/offsets for table-batched embedding (TBE) are assumed to be int64_t, but in inference they are assumed to be int32_t.
This diff adds support for both int32_t and int64_t in the caching logic, so the same functions can be reused for training and inference while avoiding the overhead of converting the indices/offsets between int and long.
Reviewed By: jspark1105
Differential Revision: D33045589
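The idea described in the summary can be sketched as templating the caching logic on the index type, so one implementation serves both the int64_t (training) and int32_t (inference) paths without a conversion pass. The function name and the linear-scan cache table below are hypothetical illustrations, not the actual FBGEMM implementation:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: a cache-location lookup parameterized on index_t.
// Instantiating it with int32_t or int64_t reuses the same logic for
// inference and training index types, with no int <-> long conversion.
template <typename index_t>
std::vector<index_t> lookup_cache_locations(
    const std::vector<index_t>& indices,
    const std::vector<index_t>& cache_index_table) {
  std::vector<index_t> locations;
  locations.reserve(indices.size());
  for (index_t idx : indices) {
    // Return the cache slot holding this embedding row, or -1 if uncached.
    index_t found = static_cast<index_t>(-1);
    for (std::size_t slot = 0; slot < cache_index_table.size(); ++slot) {
      if (cache_index_table[slot] == idx) {
        found = static_cast<index_t>(slot);
        break;
      }
    }
    locations.push_back(found);
  }
  return locations;
}
```

The real kernels dispatch on the runtime dtype of the index tensors and then call a template instantiation like the one above, which is what lets training and inference share one code path.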