
Increase code reuse between FP32, FP16, INT8, INT4 embedding types for infer TBE #833

Closed
jianyuh wants to merge 1 commit

Conversation

@jianyuh (Member) commented Dec 29, 2021

Summary: We merge the implementations for {FP32, FP16, INT8, INT4} weights in inference TBE into one unified template, increasing code reuse across these implementations. This paves the way for future enhancements, since a new feature no longer has to be added to all four implementations separately.

Differential Revision: D33343450
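
To make the intended structure concrete, here is a minimal C++17 sketch of the idea. It is a hypothetical illustration, not FBGEMM's actual TBE code (which is code-generated and far more involved): one templated row-accumulation routine dispatches on the weight type with `if constexpr`, so FP32, FP16, INT8, and INT4 share a single body instead of four near-identical copies. The `SparseType` enum, the `accumulate_row` name, and the way scale/bias are passed are illustrative assumptions; in the real kernels the quantization scale and bias are fused into each row.

```cpp
// Hypothetical sketch of a unified TBE row-accumulation template.
// Not FBGEMM code; names and layouts are illustrative only.
#include <cstdint>
#include <cstring>
#include <iostream>

enum class SparseType { FP32, FP16, INT8, INT4 };

// Simplified IEEE half -> float conversion (normal values and zero only;
// subnormals, infinities and NaNs are not handled in this sketch).
inline float half_to_float(uint16_t h) {
  uint32_t sign = static_cast<uint32_t>(h & 0x8000u) << 16;
  uint32_t exp = (h >> 10) & 0x1Fu;
  uint32_t mant = h & 0x3FFu;
  uint32_t bits =
      (exp == 0 && mant == 0) ? sign
                              : sign | ((exp + 112u) << 23) | (mant << 13);
  float out;
  std::memcpy(&out, &bits, sizeof(out));
  return out;
}

// One templated routine instead of four near-identical copies: dequantize one
// embedding row of width D and add it into a float accumulator. scale/bias
// matter only for the quantized types and are passed separately here for
// brevity (a real layout would fuse them into each row).
template <SparseType WeightType>
void accumulate_row(float* acc, const uint8_t* row, int D, float scale = 1.f,
                    float bias = 0.f) {
  if constexpr (WeightType == SparseType::FP32) {
    const float* w = reinterpret_cast<const float*>(row);
    for (int d = 0; d < D; ++d) acc[d] += w[d];
  } else if constexpr (WeightType == SparseType::FP16) {
    const uint16_t* w = reinterpret_cast<const uint16_t*>(row);
    for (int d = 0; d < D; ++d) acc[d] += half_to_float(w[d]);
  } else if constexpr (WeightType == SparseType::INT8) {
    for (int d = 0; d < D; ++d) acc[d] += scale * row[d] + bias;
  } else {  // INT4: two 4-bit values per byte, low nibble holds the even index
    for (int d = 0; d < D; ++d) {
      uint8_t q = (d % 2 == 0) ? (row[d / 2] & 0x0Fu) : (row[d / 2] >> 4);
      acc[d] += scale * q + bias;
    }
  }
}

int main() {
  constexpr int D = 4;
  float acc[D] = {0.f, 0.f, 0.f, 0.f};

  // Pool an INT8 row {0, 1, 2, 3} with scale 0.5 and bias -1.0 ...
  const uint8_t int8_row[D] = {0, 1, 2, 3};
  accumulate_row<SparseType::INT8>(acc, int8_row, D, 0.5f, -1.0f);

  // ... and an FP32 row {1, 1, 1, 1} into the same accumulator.
  const float fp32_row[D] = {1.f, 1.f, 1.f, 1.f};
  accumulate_row<SparseType::FP32>(
      acc, reinterpret_cast<const uint8_t*>(fp32_row), D);

  for (int d = 0; d < D; ++d) std::cout << acc[d] << ' ';
  std::cout << '\n';  // prints: 0 0.5 1 1.5
}
```

With this shape, a new feature only has to touch one template body instead of four separate implementations, which is the point of the refactor.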

@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D33343450

jianyuh added a commit to jianyuh/FBGEMM that referenced this pull request Dec 30, 2021
Increase code reuse between FP32, FP16, INT8, INT4 embedding types for infer TBE (pytorch#833)

Summary:
Pull Request resolved: pytorch#833

We merge the implementations for {FP32, FP16, INT8, INT4} weights in inference TBE into one unified template, increasing code reuse across these implementations. This paves the way for future enhancements, since a new feature no longer has to be added to all four implementations separately.

Reviewed By: rweyrauch

Differential Revision: D33343450

fbshipit-source-id: 92ae814798b82a47cf6e301932f7949a334ab864