
[build] add compute_86 capability #577

Merged · 2 commits · Dec 7, 2020
Conversation

@stas00 (Collaborator) commented Dec 5, 2020

RTX-30 series are compute_86

```
python -c "import torch; print(torch.cuda.get_device_capability())"
(8, 6)
```

This PR adds support for this compute capability.

I verified that it works.

Reference: https://developer.nvidia.com/cuda-gpus
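
For readers unfamiliar with what "adding a compute capability" means for a CUDA build, the sketch below shows one common pattern for expanding capability strings such as "8.6" into nvcc gencode flags. It is not DeepSpeed's actual build code; the list and helper name are hypothetical and only illustrate where a new capability like 8.6 would be appended.

```
# Hypothetical sketch: how a CUDA-extension build script might map supported
# compute capabilities to nvcc flags. Not DeepSpeed's actual build code; the
# variable and function names are illustrative only.
SUPPORTED_COMPUTE_CAPABILITIES = ["6.0", "6.1", "7.0", "7.5", "8.0", "8.6"]  # "8.6" covers the RTX-30 series

def gencode_flags(capabilities):
    """Expand '8.6' into '-gencode=arch=compute_86,code=sm_86', and so on."""
    flags = []
    for cap in capabilities:
        num = cap.replace(".", "")
        flags.append(f"-gencode=arch=compute_{num},code=sm_{num}")
    return flags

print("\n".join(gencode_flags(SUPPORTED_COMPUTE_CAPABILITIES)))
```

Without the 8.6 entry, kernels are not compiled for Ampere consumer GPUs, which is why this one-line addition is needed for RTX-30 cards.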

@jeffra jeffra merged commit e8b126d into microsoft:master Dec 7, 2020
@stas00 stas00 deleted the patch-1 branch December 7, 2020 21:33