
Fix bf16 support issues #2238

Closed
wants to merge 2 commits

Conversation

jianyuh
Member

@jianyuh jianyuh commented Dec 28, 2023

Summary:
For BF16-related CUDA code, we use the following macro to distinguish V100 from A100 (pre-A100 NVIDIA GPUs don't support BF16):

```
#if !(                                                  \
    ((defined(CUDA_VERSION) && CUDA_VERSION < 11000) || \
     (defined(__CUDA_ARCH__) && (__CUDA_ARCH__ < 800))))
```

For AMD GPUs (ROCm), this macro always evaluates to false. However, the MI250 / MI300 GPUs we have in house do support BF16, so this change re-enables BF16 for ROCm builds.

Reviewed By: jiawenliu64

Differential Revision: D52438898


netlify bot commented Dec 28, 2023

Deploy Preview for pytorch-fbgemm-docs ready!

Latest commit: 0f82766
Latest deploy log: https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/658d0fc67f7fed0008c3d444
Deploy Preview: https://deploy-preview-2238--pytorch-fbgemm-docs.netlify.app

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D52438898

Member

@houseroad houseroad left a comment


Re-export?

Summary:

- Switch to HIP-related TARGETS (with the `_hip` suffix) when the AMD GPU build is used.
- Add `supports_python_dlopen = True,` to support dlopen on the related deps.
- Add missing deps such as `"//deeplearning/fbgemm/fbgemm_gpu:split_table_batched_embeddings_hip",`
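A minimal sketch of what the described TARGETS changes might look like (the target name and constraint key here are illustrative assumptions, not the actual fbcode definitions):

```
cpp_binary(
    name = "example_target",
    # Allow Python-linked deps to be dlopen'd.
    supports_python_dlopen = True,
    deps = select({
        "DEFAULT": [
            "//deeplearning/fbgemm/fbgemm_gpu:split_table_batched_embeddings",
        ],
        # Pick the _hip variants under the AMD GPU (ROCm) build.
        "ovr_config//gpu:rocm": [
            "//deeplearning/fbgemm/fbgemm_gpu:split_table_batched_embeddings_hip",
        ],
    }),
)
```

The `select` keeps a single target usable for both builds while routing the AMD configuration to the `_hip` dependency graph.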

Reviewed By: q10, zoranzhao

Differential Revision: D52435932
jianyuh added a commit to jianyuh/FBGEMM that referenced this pull request Dec 28, 2023
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D52438898

jianyuh added a commit to jianyuh/FBGEMM that referenced this pull request Dec 28, 2023
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D52438898

@facebook-github-bot
Contributor

This pull request has been merged in 9cd944a.
